AIDA’s regulation of AI in Canada: questions, criticisms and recommendations


1. Background and Overview

Canada is planning to enact Bill C‑27, the Digital Charter Implementation Act, 2022 (“DCIA”), which would enact the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act (PIDTA), and the Artificial Intelligence and Data Act (AIDA). In prior blog posts, I addressed some of the more important problems with the CPPA. This post focuses on questions, criticisms, and recommendations about AIDA, the government’s draft law to regulate AI systems in Canada.

AI systems are the new electricity. They may be, as Brad Smith, President of Microsoft, recently emphasized in an article, “the most consequential technology advance of our lifetime”. They will revolutionize existing applications and systems, adapting or replacing them, and fundamentally affecting every product, service, and organization. Their impact will be tectonic and pervasive. AI systems will positively affect the lives of individuals in countless ways. But AI systems also have the potential to create new health and safety challenges, and they raise fears about unaccountable algorithms producing systemic discrimination and using personal information for undesirable purposes.

There is a growing international consensus that AI systems pose unique threats and should be subject to some type of regulation, although the regulatory frameworks are still the subject of considerable debate. AIDA is the federal government’s attempt to regulate certain AI systems that could cause harm or result in bias. But there are significant questions and concerns about AIDA.

AIDA was subject to only very limited public consultations prior to its release, and there are many questions about it. These questions include: whether AIDA is detailed enough for Parliamentarians to give it proper consideration in its present form; the appropriateness of the substantial delegation of policy and enforcement choices to the executive; whether AIDA is the right framework to regulate AI systems, and in particular whether a cross‑sectoral regulatory approach similar to what the United Kingdom and Israel are doing is a preferable structure; whether this is the right time to enact an AI‑specific law; whether AIDA could impose impractical responsibilities on the ecosystem of persons that design and develop AI systems, put AI systems into production, or make data available for use with AI systems; whether, on balance, AIDA will promote trust and confidence in AI without substantially inhibiting innovation in a critical technology that will power the fourth industrial revolution; whether AIDA fails to protect the public by exempting public sector uses of AI systems from regulation; and whether AIDA’s disproportionate and overlapping penalty regime is appropriate.

The following roadmap provides an overview of the extensive discussion that follows in this blog post. First, it summarizes AIDA. It then describes legislative initiatives to regulate AI systems in the European Union (EU), the United Kingdom, Israel, and the United States. It then summarizes existing, and potentially overlapping, regimes in Canada that also apply to regulating health and safety and bias in AI systems. This is followed by a discussion addressing these specific questions about AIDA:

    • Does AIDA lack Parliamentary oversight?
    • Is AIDA’s scope too narrow?
    • Should AI systems be regulated by ISED?
    • Does AIDA take the wrong approach to regulating AI?
    • Does AIDA impose responsibilities on AI actors that are impossible to meet?
    • Is it premature to regulate AI now via AIDA?
    • Will AIDA impede innovation by imposing new restrictions on uses of anonymized data, with its duplicative regulatory regimes and harsh and disproportionate penalties?

This blog concludes with recommendations for officials at ISED, the Minister, and Parliament to consider when debating and amending AIDA.


2. How AIDA will regulate AI systems[1]

(a)   The purposes of AIDA

The purposes of AIDA, as set out in the DCIA Summary,[2] the Preamble to AIDA,[3] and Section 4 of AIDA[4] are:

  • To establish common requirements across Canada for the design, development and use of artificial intelligence systems, consistent with international standards, and to uphold Canadian norms and values in line with the principles of international human rights law.
  • To require that certain persons adopt measures to mitigate risks of harm and biased output related to high‑impact artificial intelligence systems.
  • To prohibit the making available for use of an artificial intelligence system if its use causes serious harm to individuals or harm to their interests.
  • To prohibit the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use artificial intelligence systems.
  • To establish an agile regulatory framework to regulate AI to promote innovation.

AIDA has two parts. Part 1 regulates AI systems in the private sector. Part 1 is simply a framework for the regulation of certain AI systems with everything of substance that pertains to its scope left to regulations to be established in the future.

AIDA is, in effect, a box of morphable puzzle pieces with no picture of the puzzle on the box.

Part 2 establishes criminal offenses in relation to AI systems.

(b)   AIDA’s obligations on persons who are responsible for AI systems

AIDA’s regulation of AI systems in the private sector comprises various provisions that require a “person who is responsible for an artificial intelligence system” to do a number of things, including:

  • in accordance with the regulations, assess whether it is a “high‑impact system”;[5]
  • if an AI system is a “high-impact system”, in accordance with the regulations, establish measures to identify, assess and mitigate the “risks of harm” or “biased output” that could result from the use of the system;[6]
  • if an AI system is a “high-impact system”, in accordance with the regulations, establish measures to monitor compliance with the mitigation measures and the effectiveness of those mitigation measures;[7] and
  • if an AI system is a “high-impact system”, “in accordance with the regulations and as soon as feasible, notify the Minister if the use of the system results or is likely to result in material harm.”[8]

As is apparent, all of the obligations with respect to AI systems are to be left to future regulations.

(c)   AIDA’s obligations on persons who carry out a regulated activity

AIDA also imposes additional obligations on a person who carries out a “regulated activity”.

Under AIDA, a person that carries out a regulated activity must, “in accordance with the regulations, keep records describing in general terms, as the case may be, (a) the measures they are required to establish under sections 6, 8 and 9; and (b) the reasons supporting their assessment” of whether the AI system is a high‑impact system, along with any other records prescribed by regulation.[9]

If the person “processes or makes available for use anonymized data in the course of a regulated activity” the person must, “in accordance with the regulations, establish measures with respect to (a) the manner in which data is anonymized; and (b) the use or management of anonymized data.”[10] These obligations are in addition to, and possibly inconsistent with, those that would apply under the CPPA.

A person “who makes available for use” or “who manages the operation”, of a high‑impact system, must also “in the time and manner that may be prescribed by regulation, publish on a publicly available website a plain‑language description of the system that includes an explanation of (a) how the system is intended to be used; (b) the types of content that it is intended to generate and the decisions, recommendations or predictions that it is intended to make; (c) the mitigation measures established under section 8 in respect of it; and (d) any other information that may be prescribed by regulation.”[11]

(d)   What AI systems are regulated by AIDA

To understand the potential scope of AIDA, it is important to understand some of the key definitions that apply to high‑impact systems. As can be seen, the scope of what can be regulated is left to be prescribed by regulation.

AIDA defines the term high‑impact system to mean “an artificial intelligence system that meets the criteria for a high‑impact system that are established in regulations”. In other words, all of the AI systems that could be regulated by AIDA are to be defined at a later time.

AIDA defines the term artificial intelligence system (referred here in short as an “AI system”) as follows:

artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.

The term harm is defined to mean

  • physical or psychological harm to an individual;

  • damage to an individual’s property; or

  • economic loss to an individual.

As can be seen, the definition of harm does not require any degree of significance. Any scintilla of harm can meet the definition.

AIDA defines the term biased output as follows:

biased output means content that is generated, or a decision, recommendation or prediction that is made, by an artificial intelligence system and that adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination set out in section 3 of the Canadian Human Rights Act, or on a combination of such prohibited grounds. It does not include content, or a decision, recommendation or prediction, the purpose and effect of which are to prevent disadvantages that are likely to be suffered by, or to eliminate or reduce disadvantages that are suffered by, any group of individuals when those disadvantages would be based on or related to the prohibited grounds.

The assessment, risk mitigation, and monitoring obligations apply to persons that are responsible for artificial intelligence systems. Under AIDA,

“a person is responsible for an artificial intelligence system, including a high‑impact system, if, in the course of international or interprovincial trade and commerce, they design, develop or make available for use the artificial intelligence system or manage its operation.”

AIDA defines the term regulated activity, which covers a broader set of activities than those of a person “responsible for an artificial intelligence system”, as follows:

regulated activity means any of the following activities carried out in the course of international or interprovincial trade and commerce:

  • processing or making available for use any data relating to human activities for the purpose of designing, developing or using an artificial intelligence system;

  • designing, developing or making available for use an artificial intelligence system or managing its operations.

As can be seen, all of the obligations that can be imposed by AIDA on AI actors to assess whether an AI system is a high-impact system, to take steps to mitigate the risks of harm or bias, and to monitor compliance with the risk mitigation measures, could apply to the entire ecosystem of persons that are engaged in one or more of the regulated activities.

(e)   ISED enforcement rights under AIDA

Under AIDA, the Minister of Innovation, Science and Economic Development (ISED) (the “Minister”) has extensive enforcement rights (in addition to the extensive rights to define which systems are regulated as high‑impact systems). In summary, the Minister’s enforcement rights include the following:

  • The right to order any person that carries out a regulated activity to provide the Minister with any of the records that must be maintained.[12]
  • If the Minister has reasonable grounds to believe that the use of a high‑impact system could result in harm or biased output, the Minister may by order require a person that carries out a regulated activity to provide the Minister with any of the records that must be kept that relate to that system.[13]
  • If the Minister has reasonable grounds to believe that a person has contravened any of sections 6 to 12 or an order made under section 13 or 14 (which together are all of the sections summarized above), the Minister may, by order, require that the person conduct an audit, or engage an independent auditor to conduct one, at the person’s own cost, and provide the Minister with the audit report.[14]
  • Order any person that has been audited to implement any measure to address anything referred to in the audit report.[15]
  • Order any person who is responsible for a high‑impact system to cease using it or making it available for use, if the Minister has reasonable grounds to believe that the use of the system gives rise to a serious risk of imminent harm. This order‑making power is silent with respect to bias risks, and AIDA does not provide for any rights of appeal.[16]
  • Disclose information obtained to third persons including the Office of the Privacy Commissioner and the Human Rights Commission, but not provincial human rights commissions. While confidential business information may not be disclosed in some cases, the Minister has the right to disclose such information to certain other regulatory authorities. The Minister has no obligation to inform the person of such disclosures.[17]

(f)    Sanctions for breaching AIDA

In addition to the order-making powers of the Minister, AIDA is intended to have tough sanctions for breaches.

  • First, AIDA contemplates establishing administrative monetary penalties (AMPs) via regulations.[18]
  • Second, it will be an offense to violate sections 6 to 12 of AIDA, or to obstruct or provide false or misleading information to the Minister. The offense does not require the person to be acting “knowingly”, although there is a due diligence defense. The fines can reach the greater of $10,000,000 and 3% of the person’s gross global revenues.[19] AIDA does not provide any express right of appeal.

AIDA is to be administered entirely by the ISED Minister. However, the Minister can designate a senior official of the department over which the Minister presides, to be called the Artificial Intelligence and Data Commissioner, whose role is to assist the Minister in the administration and enforcement of the law; the supporting regulations are to be made by the Governor in Council.[20]

(g)   Offenses under AIDA

Part 2 of AIDA also establishes general offenses related to AI systems.

The first offense relates to the possession or use of personal information. Under section 38:

Every person commits an offence if, for the purpose of designing, developing, using or making available for use an artificial intelligence system, the person possesses — within the meaning of subsection 4(3) of the Criminal Code — or uses personal information, knowing or believing that the information is obtained or derived, directly or indirectly, as a result of:

  • the commission, in Canada, of an offence under an Act of Parliament or a provincial legislature; or

  • an act or omission anywhere that, if it had occurred in Canada, would have constituted such an offence.

The second offense relates to making a dangerous AI system available for use. Section 39 states:

Every person commits an offence if the person

  • without lawful excuse and knowing that or being reckless as to whether the use of an artificial intelligence system is likely to cause serious physical or psychological harm to an individual or substantial damage to an individual’s property, makes the artificial intelligence system available for use and the use of the system causes such harm or damage; or

  • with intent to defraud the public and to cause substantial economic loss to an individual, makes an artificial intelligence system available for use and its use causes that loss.

The punishment on conviction of one of these offenses can be a fine of up to the greater of $25,000,000 and 5% of the person’s gross global revenues in the financial year before the one in which the person is sentenced, in the case of a person who is not an individual, and imprisonment of up to five years in the case of an individual.[21]

To assess AIDA’s approach to regulating AI systems, it is important to understand how the proposed framework compares to developments internationally as well as the approach Canada has taken to regulate other products, systems or practices that potentially cause harm or biased decisions. I discuss next the international developments to specifically regulate AI systems.

3. How AI systems are regulated internationally

Many countries internationally have been developing policies and frameworks to support innovation in AI. These include investing in AI research and development, fostering a digital ecosystem for AI, shaping an enabling policy environment for AI, building human capacity and preparing for labour market transformation, and international cooperation for trustworthy AI.[22] The Canadian government supports AI innovation in multiple ways including via its Digital and Scale AI superclusters.

There is a growing international consensus that AI systems pose unique threats and should be subject to some type of regulation, although the regulatory frameworks are still the subject of considerable debate.

One thing appears clear: countries seem much more reluctant to take the “hands off” approach they took to the regulation of the Internet, which was based on the assumption – not borne out in fact – that any harms resulting from the lack of regulation were outweighed by the benefits of a laissez‑faire approach. It took decades of experience with online harms for governments to begin to regulate them, such as the EU has done with its Digital Services Act and the U.K. with its draft Online Safety Bill, and for international organizations such as UNESCO to start developing a multi‑stakeholder approach to the regulation of online platforms. Many countries, including Canada, have still not moved to tackle these real and present threats through national laws.

Despite governments’ neglect to address online harms when they were foreseen or started to emerge, they seem more willing to act proactively to address potential harms and bias with AI systems. Perhaps they are more worried about AI harms, or perhaps they have learned lessons from leaving the Internet unregulated in many important respects, or perhaps they believe that trustworthy AI is more important than a trustworthy Internet, or perhaps they understand the public distrust of AI systems, particularly those that can dramatically affect human rights and human autonomy. Whatever the reason, regulatory approaches to AI seem bound to surpass regulation of online harms in many countries.

There is also a growing international consensus that AI technologies should be shaped by democratic values, and should protect and respect human rights. There has been a significant convergence over the value-based principles associated with the use of AI including the ethical AI principles of human-centred values (AI systems should respect human rights and values, diversity, and the autonomy of individuals), fairness and non-discrimination, the need for robust security and safety, human control of technology, the protection of privacy, transparency and explainability, contestability, and accountability (those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled).[23]

These ethical values and principles have been variously embodied in non‑legally binding “soft law” instruments such as the 2019 OECD Recommendation of the Council on Artificial Intelligence and the OECD Principles on AI, which were produced by the AI Group of Experts on behalf of the OECD Committee on Digital Economy Policy; the G7 and G20 AI principles to achieve inclusive and sustainable growth and promote a human rights-centred AI; and UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the first global standard‑setting instrument on the ethics of artificial intelligence in the form of a Recommendation.[24]

The Council of Europe is also working on establishing a new AI Convention to develop an

“appropriate legal instrument on the development, design, and application of artificial intelligence systems based on the Council of Europe’s standards on human rights, democracy and the rule of law, and conducive to innovation, in accordance with the relevant decisions of the Committee of Ministers”.

The Council of Europe is an international organization comprising 46 members, including the 27 EU member states, the United Kingdom, Turkey and Ukraine. The United States, Canada, Mexico and Israel are not members of the COE; they are observer countries that are not bound by the body’s instruments but could sign onto the AI Convention.[25]

There have similarly been significant efforts to develop AI ethical governance frameworks, including the widely known Singapore Model AI Governance Framework. The Model Framework focuses primarily on four broad areas: internal governance structures and measures, human involvement in AI‑augmented decision‑making, operations management, and stakeholder interaction and communication.[26] There are also AI governance testing frameworks and toolkits, such as Singapore’s A.I. Verify, which enables industries to be more transparent about their deployment of AI through technical tests and process checks. There is also considerable literature on the subject of trustworthy AI, including the recent book by Beena Ammanath, Trustworthy AI.

There has also been significant development of standards for trustworthy AI through associations like the IEEE and ISO, and national agencies like NIST in the United States and CEN, CENELEC, AFNOR, Agoria and Dansk Standard in Europe.[27] Recent noteworthy standards are ISO/IEC DIS 42001 Information technology — Artificial intelligence — Management system and the NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0), along with the NIST AI RMF Playbook. There are also international initiatives towards the development of standards for risk management and trustworthy AI, such as between the EU and the United States via the EU‑US Trade and Technology Council (TTC).

Many countries have also passed legislation that in one way or another deals with the use of AI in the private sector. To date, however, there has been no international consensus on how AI should be specifically regulated or on the values that should support such regulation, nor has there been a groundswell of new AI‑specific national laws. There is not even an international consensus on how to define AI for regulatory purposes,[28] including whether to adopt a singular definition or to have definitions apply contextually to the sector being regulated. Concerns also exist about the capability of politicians to understand and make complex decisions about how to regulate AI technology.[29]

There have, however, been numerous initiatives, and local or state laws, to regulate various aspects of AI, and in particular to address the potential for biased decision‑making affecting individuals.

Below, I provide a high‑level summary of important developments and proposals to regulate AI systems. The intent of this section is to position AIDA within the context of international initiatives.[30]

(a)   How AI systems are regulated in the European Union

The EU is in the process of enacting two important instruments to regulate AI. In April 2021, the EU released a draft Artificial Intelligence Regulation. Following much debate and multiple amendments, in December 2022 the Council of the EU approved an updated draft of the proposed regulation (referred to here as the “AI Act”). Much still has to be done before the draft regulation becomes law and is implemented in the national laws of member states. The EU has also proposed a directive to harmonize the liability rules for certain AI systems (the “AI Liability Directive”).[31]

(i)    The EU AI Act risk-based approach to regulating AI

The AI Act lays down risk-based harmonized rules for commercializing the use of AI systems. It divides AI systems into four categories:

  • those that are prohibited because of unacceptable risks
  • high‑risk AI systems for which there are obligations for operators of such systems
  • more limited risk AI systems for which there are harmonized transparency rules
  • those with minimal risks which will be unregulated

(ii)   What are the prohibited AI practices under the EU AI Act?

The updated draft of the AI Act prohibits a limited category of AI practices, namely AI systems that:

  • deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behaviour in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
  • exploit any of the vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation that materially distort the behaviour of a person pertaining to that group in a manner that causes or is reasonably likely to cause that person or another person physical or psychological harm;
  • evaluate or classify natural persons based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to certain types of detrimental or unfavourable treatment; and
  • use ‘real‑time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, except in specified situations.

(iii)  How high-risk AI systems are regulated under the EU AI Act

A substantial portion of the AI Act addresses regulatory requirements for “high‑risk AI systems”. To fall within the high risk regime, the system must be both an “artificial intelligence system” as defined in the AI Act and must fall within the identified categories of systems deemed to be high risk.

The AI Act defines the term AI systems as follows:

artificial intelligence system (“AI system”) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human‑provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic‑ and knowledge-based approaches, and produces system‑generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.

The recitals to the AI Act clarify that the notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition is intended to be based on key functional characteristics of artificial intelligence such as its learning, reasoning or modelling capabilities, distinguishing it from simpler software systems and programming approaches. In particular, AI systems are intended to have the ability, on the basis of machine and/or human‑based data and inputs, to infer the way to achieve a set of final objectives given to them by humans, using machine learning[32] and/or logic‑ and knowledge-based approaches[33] and to produce outputs such as content for generative AI systems (e.g. text, video or images), predictions, recommendations or decisions, influencing the environment with which the system interacts, be it in a physical or digital dimension. The recitals confirm that a system that uses rules defined solely by natural persons to automatically execute operations should not be considered an AI system.

An AI system is considered to be “high risk” if it falls into one of the specific categories listed in the Annexes to the regulation, or if it meets identified criteria and is added later.

The first high‑risk category comprises AI systems that are products, or safety components of products, falling within other EU harmonization legislation, if they are already required to undergo a third‑party conformity assessment before being commercialized in the EU. This is a closed set of AI systems.

The second category comprises AI systems referred to in Annex III, unless the output of the system is purely accessory in respect of the relevant action or decision to be taken and is therefore not likely to lead to a significant risk to the health, safety or fundamental rights of the public. These systems fall into the following categories (each of which is defined in more detail in the Annex or in definitions):

  • Remote biometric identification systems
  • Critical infrastructure
  • Education and vocational training
  • Employment, workers management and access to self‑employment
  • Access to and enjoyment of essential private services and essential public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes

The third category comprises AI systems that are added by the Commission later. This list, too, is not open‑ended. The Commission is empowered to amend the list in Annex III by adding high‑risk AI systems, but only where both of the following conditions are fulfilled: (a) the AI systems are intended to be used in any of the areas listed in Annex III; and (b) the AI systems pose a risk of harm to health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high‑risk AI systems already referred to in the Annex. The AI Act also provides specific criteria that must be assessed when considering whether the conditions in (b) are met.[34]

The Commission also has the power to remove an AI system from the high risk category where, applying listed criteria, the high‑risk AI system no longer poses any significant risks to fundamental rights, health or safety, and the deletion does not decrease the overall level of protection of health, safety and fundamental rights under EU law.

There are numerous obligations associated with high-risk AI systems. These include the following which are set out in detail in Chapter 2 of the AI Act:

  • A risk management system must be established, implemented, documented and maintained in relation to high‑risk AI systems.
  • High‑risk AI systems must be tested and assessed in order to ensure that high‑risk AI systems perform in a manner that is consistent with their intended purpose.
  • Data and data governance obligations require that high‑risk AI systems that make use of techniques involving the training of models with data be developed on the basis of training, validation and testing data sets that meet specified quality criteria.
  • Technical documentation must be drawn up before the high‑risk AI system is placed on the market or put into service and must be kept up to date.
  • High-risk AI systems must technically allow for the automatic recording of events (‘logs’) over the duration of the life cycle of the system.
  • High‑risk AI systems must be designed and developed to ensure that their operation is sufficiently transparent and to enable users to understand and use the system appropriately.
  • High‑risk AI systems must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which the AI system is in use.
  • High-risk AI systems must be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.

(iv)  Who is regulated under the EU AI Act?

The AI Act specifies which AI actors it applies to. In general, it applies to providers that place AI systems on the market or put the AI system into service. These providers include manufacturers, importers, and distributors, regardless of their physical presence within the EU. The term provider is defined to mean “a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed and places that system on the market or puts it into service under its own name or trademark, whether for payment or free of charge.”

Providers of high-risk AI systems must ensure that the high‑risk AI systems are compliant with the requirements set out in Chapter 2 of the AI Act. They have other obligations, which include upholding marking obligations, having a quality management system in place, keeping specified documentation and logs (under their control), ensuring that the AI systems undergo conformity assessments, taking corrective action when a system falls out of conformity, and various notification obligations. Importers, distributors and users must also comply with specific obligations.

However, to avoid overlapping or disproportionate regulation, the AI Act carves out certain obligations or limits obligations to certain AI actors. For example:

  • For AI systems that are listed in Annex II and that are already subject to regulation under one or more EU regulations, only certain obligations apply.[35]
  • The AI Act does not apply to AI systems specifically developed and put into service for the sole purpose of scientific research and development, to any research and development activity regarding AI systems, or generally to users who are natural persons using AI systems in the course of a purely personal non‑professional activity.
  • Providers of general purpose AI systems[36] that are used as high-risk AI systems or as components of high-risk AI systems are subject to the obligations of providers of high-risk AI systems, but these requirements are to be specified and adapted “in the light of their characteristics, technical feasibility, specificities of the AI value chain and of market and technological developments, taking into account the state of the art.” These providers are also required to cooperate with and provide necessary information to other providers who put high-risk systems or components thereof into service. This obligation does not apply to providers of general-purpose AI systems that have expressly excluded high-risk uses of their systems, subject to certain exceptions.
  • Financial institutions can fulfill obligations pursuant to applicable EU financial services laws.
  • Some of the obligations and enforcement remedies are more limited for SMEs in order to promote and protect innovation.

There are significant differences between the AI Act and AIDA including:

    • The scope of the AI Act will be set by an elected body rather than left to regulation by a Minister or Ministry.
    • The AI Act takes a risk-based approach to regulating AI systems with a clearly defined scope of what AI systems will be regulated, whereas AIDA could take any approach to regulating AI systems with no defined boundary for private sector regulation.
    • The AI Act recognizes the importance of regulation of AI systems for public sector decisions, but this is out of scope for AIDA.
    • The AI Act has express measures to ameliorate disproportionate and undesirable effects such as effects on SMEs, R&D activities, AI algorithm providers, and duplicative regimes, whereas AIDA leaves all of this to regulation, which may or may not include these features.
    • The AI Act targets persons that put AI systems on the market. AIDA can target a much broader ecosystem of AI actors, but the obligations will not be known until the regulations are established.

(v)   What the EU AI Liability Directive does

The draft AI Liability Directive is intended to facilitate victims’ pursuit of fault-based claims for damages suffered from the use of high-risk AI systems. The purpose is summarized in the draft directive as follows:

Current national liability rules, in particular based on fault, are not suited to handling liability claims for damage caused by AI‑enabled products and services. Under such rules, victims need to prove a wrongful action or omission by a person who caused the damage. The specific characteristics of AI, including complexity, autonomy and opacity (the so‑called “black box” effect), may make it difficult or prohibitively expensive for victims to identify the liable person and prove the requirements for a successful liability claim. In particular, when claiming compensation, victims could incur very high up‑front costs and face significantly longer legal proceedings, compared to cases not involving AI. Victims may therefore be deterred from claiming compensation altogether.

The AI Liability Directive lays down several common rules.

  • First, it addresses the disclosure of evidence on high‑risk AI systems to enable a claimant to substantiate a non‑contractual fault‑based civil law claim for damages. It provides that a court may order the disclosure of relevant evidence about specific high‑risk AI systems that are suspected of having caused damage.
  • Second, it addresses the burden of proof in the case of non‑contractual fault‑based civil law claims brought before national courts for damages caused by an AI system. It does so by creating a targeted rebuttable presumption of causality.[37]

(b)   How AI systems are regulated in the United Kingdom

The U.K. is planning to take a much different route than the EU to regulate AI systems. Its national AI strategy, set out in the Department for Digital, Culture, Media and Sport, Establishing a pro‑innovation approach to regulating AI: An overview of the UK’s emerging approach, rejects the approach of adopting a universally applicable definition of AI and a horizontal and centralized regulatory regime to regulate AI systems.[38]

The government’s preferred approach is to set out the core characteristics of AI to inform the scope of AI regulatory frameworks and allow regulators to set out and evolve more detailed definitions of AI according to their specific domains or sectors. This is in line with the U.K. government’s view that it should regulate the use of AI rather than the technology itself, and a detailed, universally applicable definition is therefore not needed.

The U.K. government national AI strategy also sets out an actively pro‑innovation approach.

In the government’s words, “AI is a rapidly evolving technology with scope of application and depth of capability expanding at pace.” The U.K. government therefore does not think it should establish rigid, inflexible requirements right now. Instead, the proposed framework “will ensure that regulators are responsive in protecting the public, by focusing on the specific context in which AI is being used, and taking a proportionate, risk‑based response. We will engage with regulators to ensure that they proactively embed considerations of innovation, competition and proportionality through their implementation and any subsequent enforcement of the framework.”

The U.K. government’s approach is to develop a set of cross‑sectoral principles that regulators will develop into sector- or domain‑specific measures.

At this stage, the U.K. government is considering implementing the cross‑sectorial principles on a non‑statutory basis. This implementation of principles could be supplemented by clear guidance from the government, which would be kept under review. It does not rule out the need for legislation as part of the delivery and implementation of the principles, for example, in order to enhance regulatory powers, ensure regulatory coordination, or to create new institutional architecture.

(c)   How AI systems are regulated in Israel

The Israel Innovation Authority, in a recent draft policy for the regulation of ethics in AI, also ruled out a universal or horizontal legislation to regulate AI. In its view, this approach to regulation would not adequately address the unique needs of each AI technology. Instead, the policy proposes a multi‑layered approach, which it believes is more suitable for the diverse range of AI applications. The policy suggests that the regulation of AI should be tailored to the specific use cases of the technology, taking into account the potential risks and opportunities associated with each. The policy also encourages collaboration between different stakeholders, such as the government, industry, and academia, in order to develop the most effective and comprehensive regulation of AI systems.[39]

The U.K. and Israel have ruled out the kind of universal or horizontal AI legislation that appears to underlie AIDA. Their strategy is to promote innovation and trustworthy AI using a sectorial approach that addresses the unique needs of each AI technology. This regulatory structure fosters a systems approach to regulation that supports long-term innovation.

(d)   How AI systems are regulated in the United States

So far, there are no comprehensive federal laws in the U.S. that specifically regulate AI systems. The U.S. approach to AI regulation is characterized by the idea that companies, in general, must remain in control of industrial development and governance‑related criteria. This has led, so far, to the U.S. federal government opting for a relatively hands‑off approach to governing AI to create an environment free of burdensome regulation. The U.S. government has repeatedly stated that burdensome rules and state regulations are often considered barriers to innovation.[40] To a large degree, the U.S. has gone the route of voluntary guidelines, such as the White House Blueprint For An AI Bill of Rights.

(i)    U.S. federal initiatives to regulate AI

The only federal legislative initiatives for AI-specific laws were the draft Algorithmic Accountability Act of 2022 and the specific provisions in the proposed American Data Privacy and Protection Act (ADPPA) that would have provided some regulation of the uses of algorithms to protect members of the U.S. public from discrimination.

In addition to its work on standards and management frameworks, the U.S. is relying on – or ramping up to rely on – existing laws or regulatory tools. Using this sectorial approach, some agencies, such as the Food and Drug Administration and the Department of Transportation, have been incorporating AI considerations into their regulatory regimes. The Federal Trade Commission (FTC) is working on a rulemaking process making it clear that the agency considers issues of AI discrimination, fraud, and related data misuse to be within its purview. The U.S. Equal Employment Opportunity Commission is also turning its enforcement attention to artificial intelligence tools used by employers to hire workers, which can introduce discriminatory decision‑making.[41]

There are also a variety of state initiatives. For example, Madison, Wisconsin banned the use of facial recognition and associated computer vision AI algorithms. Three states (Illinois, Texas, and Washington) have enacted laws pertaining to data and privacy in connection with facial recognition. Illinois’s Biometric Information Privacy Act remains one of the country’s strictest sets of AI‑associated privacy regulations.[42] Other jurisdictions have enacted laws such as the New York City Automated Employment Decision Tools law, which requires bias audits for automated employment decision tools.[43]

The focus of the U.S. legislative initiatives to date has been on preventing and protecting the public from discrimination arising from the use of AI systems. Some of the key initiatives are summarized below.

Algorithmic Accountability Act of 2022

In February 2022, the U.S. Congress introduced the Algorithmic Accountability Act of 2022. That law would have required covered entities (very large companies) to conduct impact assessments to study, evaluate and take other steps with respect to automated decision systems or augmented critical decision processes and their impact on consumers.

The term automated decision system was defined to cover “any system, software, or process (including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques and excluding passive computing infrastructure) that uses computation, the result of which serves as a basis for a decision or judgment.” The term augmented critical decision process was defined to cover “a process, procedure, or other activity that employs an automated decision system to make a critical decision.”

The term critical decision was defined to mean “a decision or judgment that has any legal, material, or similarly significant effect on a consumer’s life relating to access to or the cost, terms, or availability of:

(A) education and vocational training, including assessment, accreditation, or certification;

(B) employment, workers management, or self‑employment;

(C) essential utilities, such as electricity, heat, water, internet or telecommunications access, or transportation;

(D) family planning, including adoption services or reproductive services;

(E) financial services, including any financial service provided by a mortgage company, mortgage broker, or creditor;

(F) healthcare, including mental healthcare, dental, or vision;

(G) housing or lodging, including any rental or short‑term housing or lodging;

(H) legal services, including private arbitration or mediation; or

(I) any other service, program, or opportunity decisions about which have a comparable legal, material, or similarly significant effect on a consumer’s life as determined by the Commission through rulemaking.”

American Data Privacy and Protection Act (ADPPA)

In July 2022, the U.S. House Energy and Commerce Committee approved a new omnibus privacy bill entitled the American Data Privacy and Protection Act (ADPPA). The bill would have created federal privacy rules and introduced protections against discriminatory harms arising from the use of algorithms.[44]

The ADPPA would have provided that “covered entities” and “service providers” may not collect, process, or transfer covered data “in a manner that discriminates in or otherwise makes unavailable the equal enjoyment of goods or services on the basis of race, color, religion, national origin, sex, or disability”, subject to certain exceptions.

It would have required a “large data holder” that uses an algorithm[45] that may cause potential harm to an individual, and that uses such algorithm, solely or in part, to collect, process, or transfer covered data, to conduct an impact assessment of such algorithm in accordance with specified requirements.[46] The ADPPA would have required a covered entity or service provider that knowingly develops an algorithm, solely or in part, to collect, process, or transfer covered data or publicly available information, prior to deploying the algorithm in interstate commerce, to evaluate the design, structure, and inputs of the algorithm, including any training data used to develop the algorithm, to reduce the risk of the potential harms. The ADPPA did not have a comprehensive definition of harms, but appears to at least have contemplated harms to minors and discriminatory treatment.[47] To the extent possible, the impact assessments and evaluations would be performed by an external, independent auditor or researcher.

(ii)   The Blueprint For An AI Bill of Rights

In October 2022, the White House released a Blueprint For An AI Bill of Rights (“U.S. AI Bill of Rights“ or “Blueprint”). The Blueprint is a set of five principles and associated practices to help guide the design, use, and deployment of automated systems in the age of AI, aligned with democratic values and intended to protect the civil rights, civil liberties, and privacy of the American public. It is a White Paper; it does not have the force of law and does not constitute U.S. government policy.

The framework uses a two‑part test to determine what systems are in scope. It applies to (1) automated systems[48] that (2) have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.[49]

The five principles in the Blueprint are:

  • Safe and effective systems – “You should be protected from unsafe or ineffective systems”

  • Algorithmic Discrimination Protections – “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”[50]

  • Data Privacy – “You should be protected from abusive data practices via built‑in protections and you should have agency over how data about you is used.”

  • Notice and Explanation – “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”

  • Human Alternatives, Consideration, and Fallback – “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

(iii)  Presidential Executive Order Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government

In December 2020, the President of the United States signed an Executive Order establishing principles for U.S. agencies to follow when considering the design, development, acquisition, and use of AI in the Federal Government (for purposes other than national security and defense). The principles are designed to foster public trust and confidence in the use of AI, protect the Nation’s values, and ensure that the use of AI remains consistent with all applicable laws, including those related to privacy, civil rights, and civil liberties. The Order also establishes a process for implementing these Principles through common policy guidance across agencies.

Section 3 of the Order sets out the Principles.

Principles for Use of AI in Government. When designing, developing, acquiring, and using AI in the Federal Government, agencies shall adhere to the following Principles:

  • Lawful and respectful of our Nation’s values. Agencies shall design, develop, acquire, and use AI in a manner that exhibits due respect for our Nation’s values and is consistent with the Constitution and all other applicable laws and policies, including those addressing privacy, civil rights, and civil liberties.

  • Purposeful and performance‑driven. Agencies shall seek opportunities for designing, developing, acquiring, and using AI, where the benefits of doing so significantly outweigh the risks, and the risks can be assessed and managed.

  • Accurate, reliable, and effective. Agencies shall ensure that their application of AI is consistent with the use cases for which that AI was trained, and such use is accurate, reliable, and effective.

  • Safe, secure, and resilient. Agencies shall ensure the safety, security, and resiliency of their AI applications, including resilience when confronted with systematic vulnerabilities, adversarial manipulation, and other malicious exploitation.

  • Understandable. Agencies shall ensure that the operations and outcomes of their AI applications are sufficiently understandable by subject matter experts, users, and others, as appropriate.

  • Responsible and traceable. Agencies shall ensure that human roles and responsibilities are clearly defined, understood, and appropriately assigned for the design, development, acquisition, and use of AI. Agencies shall ensure that AI is used in a manner consistent with these Principles and the purposes for which each use of AI is intended. The design, development, acquisition, and use of AI, as well as relevant inputs and outputs of particular AI applications, should be well documented and traceable, as appropriate and to the extent practicable.

  • Regularly monitored. Agencies shall ensure that their AI applications are regularly tested against these Principles. Mechanisms should be maintained to supersede, disengage, or deactivate existing applications of AI that demonstrate performance or outcomes that are inconsistent with their intended use or this Order.

  • Transparent. Agencies shall be transparent in disclosing relevant information regarding their use of AI to appropriate stakeholders, including the Congress and the public, to the extent practicable and in accordance with applicable laws and policies, including with respect to the protection of privacy and of sensitive law enforcement, national security, and other protected information.

  • Accountable. Agencies shall be accountable for implementing and enforcing appropriate safeguards for the proper use and functioning of their applications of AI, and shall monitor, audit, and document compliance with those safeguards. Agencies shall provide appropriate training to all agency personnel responsible for the design, development, acquisition, and use of AI.

(iv)  U.S. State initiatives to regulate AI systems

New York City Automated Employment Decision Tools, Local Law 144 of 2021 (codified at N.Y.C. Admin. Code § 20‑870 et seq.)

New York City recently amended its laws to make it unlawful in the city for an employer or an employment agency to use an automated employment decision tool to screen a candidate or employee for an employment decision unless: (i) the tool has been the subject of a bias audit conducted no more than one year prior to its use; and (ii) a summary of the results of the most recent bias audit of the tool, as well as the distribution date of the tool to which the audit applies, has been made publicly available on the website of the employer or employment agency prior to the use of the tool.

This New York law also has transparency requirements. Any employer or employment agency that uses an automated employment decision tool to screen an employee or a candidate for an employment decision must notify each such employee or candidate who resides in the city that an automated employment decision tool will be used in connection with their assessment or evaluation, and must notify them of the job qualifications and characteristics that the tool will use in that assessment. Information about the type of data collected for the automated employment decision tool, the source of such data, and the employer or employment agency’s data retention policy must also be made available upon written request by a candidate or employee.
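To make the bias-audit obligation concrete, the sketch below illustrates the kind of “impact ratio” calculation contemplated by the city’s implementing rules, which compare selection rates across demographic categories. The function names, category labels, and numbers here are illustrative assumptions, not the regulatory methodology itself:

```python
# Hypothetical sketch of an "impact ratio" computation for a bias audit of an
# automated employment decision tool. Names and numbers are invented for
# illustration; they are not taken from the NYC rules themselves.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a category whom the tool selected."""
    return selected / applicants

def impact_ratios(data: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Impact ratio = a category's selection rate divided by the
    highest selection rate among all categories."""
    rates = {cat: selection_rate(sel, total) for cat, (sel, total) in data.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Example: (selected, total applicants) per demographic category.
audit = impact_ratios({"category_a": (40, 100), "category_b": (24, 100)})
# category_a has the highest selection rate (0.40), so its ratio is 1.0;
# category_b's ratio is 0.24 / 0.40, i.e. about 0.6 -- a gap an auditor
# would flag for further scrutiny.
```

A large gap between a category’s ratio and 1.0 is the kind of disparity a bias audit is meant to surface; what threshold triggers concern is a matter for the auditor and the applicable rules, not this sketch.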

Several U.S. states (e.g., Illinois and Maryland) and some other U.S. cities have enacted or are considering legislation that could impact the use of AI in hiring and other employment decisions. Further, the U.S. federal government has also focused on the use of AI in employment decisions. The Equal Employment Opportunity Commission (EEOC) issued guidance in May 2022 outlining how certain employment‑related uses of AI potentially could violate the Americans with Disabilities Act (ADA).[51] The Massachusetts Gaming Commission is now planning to establish new automated decision-making regulations.

The California Fair Employment and Housing Council (FEHC) published Draft Modifications to Employment Regulations Regarding Automated‑Decision Systems, intended to address the demonstrated potential of artificial intelligence, including algorithms, to unlawfully discriminate in the housing and employment contexts.[52] The modifications would make it unlawful “to use qualification standards, employment tests, automated decision systems, or other selection criteria that screen out or tend to screen out an applicant or employee or a class of applicants or employees on the basis of a characteristic protected by this Act, unless the standards, tests, or other selection criteria, as used by the covered entity, are shown to be job‑related for the position in question and are consistent with business necessity.”[53]

The U.S. approach to regulating AI systems relies mostly on the enforcement or modification of existing laws, as well as guidance from non-binding documents such as the AI Bill of Rights and the development of AI standards and management practices. AIDA would introduce regulatory practices not required by Canada’s largest trading partner.

4. Regulation of AI systems under Current Canadian law

There are currently no laws in Canada that expressly regulate private sector artificial intelligence systems. However, once the CPPA is passed, federal and provincial laws will require organizations to provide certain levels of transparency and explanations of decisions, predictions and recommendations made or assisted by AI systems.

Despite the lack of specific legislation regulating AI systems, many existing laws and regulations of general application already protect the public from the health and safety risks and the bias or discrimination risks associated with the use of AI systems. The following are just a few examples.

(a)   Product health and safety laws in Canada that could regulate AI systems

The Canada Consumer Product Safety Act (“CCPSA”) is federal legislation intended to protect the public by addressing or preventing dangers to human health or safety that are posed by consumer products in Canada, including those that circulate within Canada and those that are imported. The CCPSA applies to all consumer products with the exception of those listed in a Schedule (for which other regulatory regimes apply).

The CCPSA defines consumer product in a broad and technologically neutral way that is apt to cover products or components of products that use AI systems. The term is defined as “a product, including its components, parts or accessories that may reasonably be expected to be obtained by an individual to be used for non‑commercial purposes, including for domestic, recreational and sports purposes, and includes its packaging.”

The CCPSA calibrates the level of risk covered by defining the term “danger to health or safety” to mean

“any unreasonable hazard — existing or potential — that is posed by a consumer product during or as a result of its normal or foreseeable use and that may reasonably be expected to cause the death of an individual exposed to it or have an adverse effect on that individual’s health — including an injury — whether or not the death or adverse effect occurs immediately after the exposure to the hazard, and includes any exposure to a consumer product that may reasonably be expected to have a chronic adverse effect on human health”.

The CCPSA’s regulatory regime includes the following features:

  • It prohibits certain products entirely and prohibits persons from manufacturing, importing, advertising, or selling a consumer product that does not meet regulatory requirements.
  • It prohibits manufacturers and importers from manufacturing, importing, advertising or selling a consumer product that is a danger to human health or safety or is the subject of a recall order because the product is a danger to human health or safety.
  • It has prohibitions against persons advertising or selling a consumer product that they know is a danger to human health or safety or is the subject of a recall order or making false, misleading, or deceptive claims that the product is not a danger to health or safety.
  • It gives the Minister rights to require manufacturers and importers to conduct tests or studies on the product in order to obtain the information that the Minister considers necessary to verify compliance or prevent non‑compliance with the Act or the regulations; compile any information that the Minister considers necessary to verify compliance or prevent non‑compliance with this Act or the regulations; and provide him or her with the documents that contain that information and the results of the tests or studies in the time and manner that the Minister specifies.
  • It requires certain persons to maintain certain documentation including documentation that can be prescribed by regulation, to report incidents, and gives inspectors appointed by the Minister broad inspection rights including rights to examine and test anything or to have analysts do so.
  • If the Minister believes on reasonable grounds that a consumer product is a danger to human health or safety, he or she may order a person who manufactures, imports or sells the product for commercial purposes to recall it. There is some procedural protection in the form of a right of review of any recall order.
  • It provides for offenses including for any person who contravenes the CCPSA, with fines of up to $5,000,000 or to a term of imprisonment.
  • There is also an Administrative Monetary Penalty regime that sets penalties by assignment of gravity factors. It is enacted by regulation, so it can be amended as needed to address risks having more severe health and safety consequences for the public.

The CCPSA also gives the Minister very broad powers to make regulations that include the following:

  • exempting a consumer product or class of consumer products or persons or class of persons from the law;
  • prescribing the preparation and maintenance of documents to be prepared and maintained;
  • prohibiting the manufacturing, importation, packaging, storing, sale, advertising, labelling, testing or transportation of a consumer product or class of consumer products;
  • respecting the communication of warnings or other health or safety information to the public;
  • respecting the recall of a consumer product or class of consumer products; and
  • incorporating by reference documents produced by an organization established for the purpose of writing standards, including an organization accredited by the Standards Council of Canada; an industrial or trade organization; or a government.

As of the writing of this post, there are over 35 regulations in force covering a diverse array of products, many of which are subject to existing standards, including CSA standards.

Given its breadth, the CCPSA would already regulate many consumer products that incorporate AI systems. Where it could apply, the CCPSA has significant scope to prescribe specific regulatory requirements, and to require documentation to be produced to verify health and safety. There is also an existing enforcement regime. AIDA does not exclude the CCPSA where both laws could apply, thus potentially creating duplicative regulatory regimes. The CCPSA would not, however, cover all AI systems, including potentially AI that is built into online platforms or services or items that are not sold as tangible consumer products.

Schedule 1 of the CCPSA also lists many products that are exempt because they are regulated under another law. Many of these exempted products also contain or will in the future potentially include AI systems. Examples of products regulated under other federal legislation include medical devices, food and drugs, vehicles (which are regulated under the Motor Vehicle Safety Act), vessels, aeronautical products, pest control products, and other hazardous products (which are regulated under the Hazardous Products Act).[54] The Competition Act also provides protections against persons promoting the supply or use of a product – which could include a product that uses AI – knowingly or recklessly making a representation to the public that is false or misleading in a material respect.

The federal legislation is complemented by an array of other provincial laws. These include, in Ontario, laws that regulate electrical product safety and the Sale of Goods Act, which provides implied conditions of fitness for purpose and merchantability for goods (and which has been construed to include systems comprised of hardware and software). These implied conditions cannot be disclaimed under the Consumer Protection Act.

In common law jurisdictions, tort claims can also be brought in a variety of circumstances including claims based on design or manufacturing defects, or for failures to warn users of dangers which the maker knew or ought to have known.[55] Product liability claims can often be difficult to prove, especially without detailed technical information that explains how the product operates. This can pose problems for complicated AI systems.

(b)   Human rights legislation that could regulate AI systems in Canada

There are statutory measures in place federally and provincially that already provide remedies for prohibited grounds of discrimination. These do not specifically address bias or discrimination resulting from the use of AI systems. But they are generally applicable and would thus encompass prohibited grounds of discrimination arising from the use of an AI system.

Federally, the main statute that addresses discrimination is the Canadian Human Rights Act (the “CHRA”). The prohibited grounds of discrimination are “race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity or expression, marital status, family status, genetic characteristics, disability and conviction for an offence for which a pardon has been granted or in respect of which a record suspension has been ordered.”

The CHRA sets out multiple discriminatory practices based on prohibited grounds of discrimination. These include:

  • to deny, or to deny access to, any good, service, facility or accommodation to any individual, or to differentiate adversely in relation to any individual;
  • in the provision of commercial premises or residential accommodation, to deny occupancy of such premises or accommodation to any individual, or to differentiate adversely in relation to any individual;
  • to refuse to employ or continue to employ any individual, or in the course of employment, to differentiate adversely in relation to an employee;
  • for an employer to establish or maintain differences in wages between male and female employees employed in the same establishment who are performing work of equal value; and
  • to harass an individual on a prohibited ground of discrimination, in the provision of goods, services, facilities or accommodation customarily available to the general public, in the provision of commercial premises or residential accommodation, or in matters related to employment.

Under the CHRA, individuals can initiate complaints, as can the Commission where the Commission has reasonable grounds for believing that a person is engaging or has engaged in a discriminatory practice.

The CHRA has detailed provisions that allow for investigators to investigate complaints. The Governor in Council may also make regulations prescribing procedures to be followed by investigators, and for authorizing the manner in which complaints are to be investigated.

The CHRA provides for a hearing and gives the member or panel the right to make an order against the person found to be engaging or to have engaged in the discriminatory practice. The order can also include various remedial terms to redress the discriminatory practice, such as requiring that the practice cease, that the person make available to the victim of the discriminatory practice, the rights, opportunities or privileges that are being or were denied the victim as a result of the practice, and certain forms of compensation and special compensation.

The CHRA is given a large and liberal interpretation by the courts to advance and fulfil its purpose. The order-making power is also intended to be broad enough to meet the problem of systemic discrimination “to prevent the same or a similar [discriminatory] practice occurring in the future.” It thus potentially provides tools to address systemic algorithmic discrimination.[56]

The CHRA applies only to matters coming within the legislative authority of Parliament. This includes people who work for or receive benefits from the federal government, First Nations, and federally regulated private companies such as airlines and banks.

To fill the gap, there are similar human rights laws provincially. For example, Ontario’s Human Rights Code seeks to provide individuals a right to equal treatment with respect to services, goods and facilities, without discrimination because of race, ancestry, place of origin, colour, ethnic origin, citizenship, creed, sex, sexual orientation, gender identity, gender expression, age, marital status, family status or disability. British Columbia also has a Human Rights Code.

In addition to remedies provided under federal and provincial human rights laws, aggrieved individuals sometimes also have the right to institute claims in class actions in the courts for breach of their statutory rights or for claims that leverage these rights in the contexts of other civil claims.[57]  As shown by the recent case in Quebec certifying a class action against Meta, these claims could involve the use of algorithms to violate the rights of individuals under the Quebec Charter of Human Rights and Freedoms.[58]

The CHRA suffers from a number of limitations. It applies only within federal jurisdiction, so the regulation of bias and discrimination across the country is fragmented. It also does not have an administrative monetary penalty regime that can be used to impose significant penalties on offenders. There are powers under the CHRA to establish regulations to facilitate investigations, but the CHRA does not provide all of the information gathering and self-reporting tools that will be available under AIDA.

AIDA could strengthen protections against the risks associated with AI systems that produce biased results. However, AIDA establishes a regulatory regime parallel to the human rights regulatory authorities that already exist federally and provincially. AIDA’s approach to dealing with AI systems that cause discrimination does not strengthen the current jurisdiction of regulatory authorities. Instead, it further fragments the regulatory authority over certain AI systems by creating a potentially overlapping and duplicative new regulatory regime.

5. Questions and comments about AIDA

(a)   Does AIDA lack Parliamentary oversight?

As summarized above, AIDA leaves all of the important regulatory structure to regulations. This includes, most importantly, what AI systems will be considered “high impact” and subject to regulation. John Beardwood noted in an article on AIDA that, “this Canadian legislative draft is so thin on content that this ‘birth’ is really premature at best.”[59]  This raises important questions about Parliament’s oversight of how AI systems will be regulated in Canada.

The framework as to what could be subject to regulation in the private sector could, as AI technology evolves, become practically unlimited. Take, for example, the breadth of the definition of AI system. It will cover future autonomous systems, but will also include systems that are only partly autonomous. While the attempt may be to capture systems with little human input, it could also capture many systems, including existing and future systems where the level of automation is small and human input is substantial.[60] This issue can be illustrated by considering the levels of vehicle automation. AIDA’s definition of AI system would capture basic cruise control, a “technique” that has been in vehicles since the 1900s.

The regulatory scope is even broader when considered in relation to the definition of “harm”. This definition includes physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual. But the definition of “harm” does not have any materiality threshold associated with the harm. Any scintilla of potential harm with any autonomous or semi‑autonomous system would be enough to trigger regulation of the AI system.

AIDA is broad enough to regulate everything from dangerous robots to autonomous robot vacuum cleaners that could knock over and break a flower pot.

What’s more, the government has not provided any guidance on what AIDA is intended to cover including in Parliament where Bill C‑27 has been debated. AIDA may be intended to cover some of the AI systems that would be prohibited under the EU AI Act or which are identified as high risk under the EU AI Act, such as those that deploy subliminal techniques to materially distort a person’s behaviour, exploit vulnerable groups, or remote biometric identification systems.

But, what is extremely troubling is the total lack of guidance in AIDA as to what can be regulated. In this regard, AIDA lacks the standards of harm set out in analogous Canadian laws such as the CCPSA, which contains a definition of “danger to human health or safety” that guides the scope of the products covered and the regulations that can be established.

AIDA also lacks criteria like those in the AI Act that govern what AI systems can be added to that regulatory regime. AIDA provides no guidance on whether the systems to be regulated under AIDA will also be subject to other overlapping technologically neutral regimes, such as the products already regulated under the CCPSA or other federal or provincial regimes.

AIDA may be used to regulate AI systems that are already part of products that have safety concerns, those that make errors that can have dramatic adverse consequences to individuals (the Australian Robodebt scandal being a good example), or those that have the potential to discriminate against individuals based on identified grounds of discrimination in the CHRA. But, the scope potentially also covers much more since practically all AI systems can cause some type of “harm”.

For example, there has recently been a buzz over the availability and widespread use of OpenAI’s ChatGPT generative artificial intelligence system. While the technology has practically boundless uses – and will likely transform how people search for information and answers on the web – it has been reported, including here, here, and here, that it is already being used in areas of disinformation and cybercrime and is becoming a tool for hackers (such as to create malware, or software that is specifically designed to damage, disrupt or gain unauthorized access to a computer system), for phishing schemes, and for sextortion. AIDA could regulate certain aspects of such generative AI systems that cause certain types of harms to individuals, but not other harmful uses. But, regulation of AI systems such as ChatGPT will likely divide transatlantic regulators.

While it may be useful for a general AI law to be able to address any and all possible harms caused by AI to individuals, there are also considerable challenges in regulating AI systems, many of which entail balancing risks of harm against other values including freedom of speech values. It is debatable whether these decisions should be left to non‑democratically elected officials.

An example of regulation of AI that raises freedom of speech issues is the regulation of online harms that are covered by the EU Digital Services Act. The DSA establishes horizontal rules covering illegal content such as terrorist content, child sexual abuse material, and illegal hate speech. Platforms are required to mitigate risks such as disinformation or election manipulation, cyber violence against women, and harms to minors online. These well-known harms could be propagated by AI systems. Therefore, AIDA could be used to combat these threats by the Minister designating by regulation such AI systems as high-impact systems. This may be a good thing and, in fact, give the government tools to address these harms in lieu of passing a specific law to address online harms. That would be consistent with the government policy of putting in place a transparent and accountable regulatory framework for online safety in Canada. However, these measures must be carefully balanced against restrictions of freedom of expression, and the government has had difficulties getting a consensus on the appropriate regulatory approaches. In view of these difficult policy challenges, should these kinds of delicate balancing choices be within the purview of anyone but Parliament?

Similarly, generative AI systems, which can produce content of all types, including music and art, are causing significant harm to creators and have recently spawned several lawsuits over whether the harvesting of copyright materials for training AI algorithms constitutes copyright infringement. These systems could theoretically be regulated under AIDA even though there are important policy questions about the uses of copyright content in generative AI systems that may be best left for Parliament.

Moreover, a regulatory process provides very little opportunity for members of the public to provide input and effect change. The regulatory process could give members of the public a short time period, such as 60 days, to provide comments on draft regulations. But, this is far from the much more meaningful input the public could have when a Bill is debated in Parliament and reviewed in a Parliamentary committee.

There will also undoubtedly be questions of what proportional steps should be taken to lessen the impacts of horizontal legislation such as those in the EU AI Act (such as to mitigate some regulatory burdens on SMEs and to impose lesser standards for AI actors that make available generic AI algorithms). Yet, Parliament will have no say over these important policy questions.

Decisions about how to regulate AI raise important questions about how lines should be drawn. These questions, as noted recently by Microsoft’s President Brad Smith, will require that “[C]ountries and communities will need to use democratic law-making processes to engage in whole-of-society conversations about where the lines should be drawn to ensure that people have protection under the law”.

In short, AIDA gives the Minister unfettered discretion to establish what systems will be subject to regulation, what harms or degrees of risk will be regulated, which AI actors will have responsibilities and what those will be, how to balance sensitive fundamental rights such as the right to freedom of expression with other concerns, and the penalties, including administrative monetary penalties (AMPs), for non‑compliance. The Minister will also have unfettered discretion to impose significant penalties and make prohibition orders for non‑compliance. Is this massive delegation of authority consistent with Parliamentary sovereignty?

Professor Scassa in a blog post Oversight and Enforcement Under Canada’s Proposed AI and Data Act also questioned the scope of AIDA’s missing pieces, focusing in particular on its oversight and enforcement mechanisms.

This lack of important detail makes it hard not to think of the oversight and enforcement scheme in the AIDA as a rough draft sketched out on a cocktail napkin after an animated after‑hours discussion of what enforcement under the AIDA should look like. Clearly, the goal is to be ‘agile’, but ‘agile’ should not be confused with slapdash. Parliament is being asked to enact a law that leaves many essential components undefined. With so much left to regulations, one wonders whether all the missing pieces can (or will) be put in place within this decade….

While AIDA’s structure may be intended to have maximum, even unlimited, flexibility, this flexibility is inconsistent with the principle of parliamentary sovereignty.

The concern is not that this broad scope of Parliamentary delegation is necessarily unconstitutional. The Supreme Court has acknowledged the broad powers of Parliament to delegate authority.[61] However, as Justice Côté explained in her dissent in References re Greenhouse Gas Pollution Pricing Act, there can be deleterious consequences of excessive delegation. She noted:

Legislatures are high‑profile bodies where law and policy making on contentious issues can occur with a degree of public awareness, scrutiny, and input. Courts and executive bodies, on the other hand, while themselves institutionally distinct, both lack the open and broadly‑deliberative character that gives legislatures their unique position in a democratic society…

She also noted, quoting from the Privy Council decision in Hodge v. The Queen,[62]  that delegated authority without “varying details and machinery to carry them out might become oppressive, or absolutely fail”. She also stated that

The rule of law does not require that official or judicial decision‑makers should be deprived of all discretion, but it does require that no discretion should be unconstrained so as to be potentially arbitrary. No discretion may be legally unfettered.

These principles are reflected in leading jurisprudence of the United States Supreme Court. Under U.S. Supreme Court precedents, a delegation of authority should at least set out an “intelligible principle” to guide the delegee’s exercise of authority, or at least make clear to the delegee “the general policy” that must be pursued and the “boundaries of [his] authority”.[63] The origins of this non-delegation doctrine, as interpreted in the U.S., can be traced back to at least 1690, in the seminal writings of the political philosopher John Locke.[64]

Another concern, which is difficult to assess because of AIDA’s vagueness, is whether AIDA impermissibly intrudes into Provincial constitutional jurisdiction. AIDA tries to limit its scope to activities carried out in the course of international or interprovincial trade and commerce, a federal head of jurisdiction. There may be other bases for its jurisdiction as well.[65] Yet, the reality is that AI systems will be included in products and services that will invariably and almost universally cross provincial and national borders or be offered or managed from public clouds that are accessible throughout Canada. Parliament has never tried to regulate a specific and ubiquitous technology (such as electricity or the microchip), thus raising real questions as to how the trade and commerce power would be interpreted by the Supreme Court if AIDA is challenged on constitutional grounds.

The extreme vagueness of AIDA also raises questions as to how Parliament is even able to meaningfully debate AIDA. The fact is, AIDA is only a framework that could be used to regulate almost anything that uses AI. It has not even a scintilla of detail about how it will work or why it could be counted on to build appropriate risk management and trustworthy frameworks that interoperate with other existing regulatory regimes in a manner that promotes, and does not unduly hinder, innovation and adoption of AI in Canada.

AIDA lacks Parliamentary control over the regulation of AI systems. AIDA is like an algorithmic black box. It lacks transparency as to what is covered. It lacks explainability, as there is no way of knowing how AI systems will be regulated. It lacks details, which calls into question its robustness. There is no mechanism for assessing its effectiveness against its impacts on innovation, which calls into question its safety as a regulatory vehicle. It fractures the regulation of consumer products and discrimination, potentially dissipating regulatory authority and accountability. It lacks human oversight by Parliament. Should Parliament delegate away regulatory authority over AI systems under a regulatory model that would not satisfy ethical principles for the AI systems that will be regulated?

(b)   Is AIDA’s scope too narrow?

While AIDA’s scope is potentially extremely broad, it is much more limited than the EU AI Act, which does not limit its scope to private sector harm to individuals. For example, the AI Act expressly identifies the following as high-risk systems: critical infrastructure, access to and enjoyment of essential private services and essential public services and benefits, law enforcement, migration, asylum and border control management, and administration of justice and democratic processes. It would also ban social scoring AI systems and the use of ‘real‑time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities, except in specified situations. All of these are out of AIDA’s scope.

AIDA also has no mechanism to outright ban AI systems, or ban specific uses of AI systems, or even require that AI systems be used in an ethical manner where they are fundamentally inconsistent with respect for human rights and dignity. For example, there is no outright authority for any regulation to ban the use of biometric identification systems, facial recognition technologies for certain purposes, or to prohibit certain developments in neurotechnology (which has the ability to record and directly stimulate neural activity, with the potential to create increasingly effective brain-computer interfaces (BCIs)). To be clear, I am not advocating that AIDA contain a right to ban any technology. Any such decision would involve sensitive balancing that is more appropriate for Parliament.

While AIDA would cover discrimination by private sector organizations contrary to the CHRA, it leaves uncovered other well-recognized forms of discrimination caused by the use of AI systems that have a direct impact on equality of access to fundamental rights, including access to justice, the right to a fair trial, and access to public services and welfare. It would not encompass the use of problematic AI systems in the public sector. The Federal Directive on Automated Decision-Making may provide some protection to individuals, but its scope and avenues for redress are limited.

In short, AIDA also does not address what may be the most impactful aspects of AI, namely its uses in the public sector and ensuring respect for democracy, human rights and the rule of law. Some of these challenges were summarized in a report of the Parliamentary Assembly of the Council of Europe on the need for democratic governance of artificial intelligence as follows:

AI‑based technologies have an impact on the functioning of democratic institutions and processes, as well as on social and political behaviour of citizens. Its use may produce both beneficial and damaging impact on democracy. Indeed, the rapid integration of AI technologies into modern communication tools and social media platforms provides unique opportunities for targeted, personalized and often unnoticed influence on individuals and social groups, which different political actors may be tempted to use to their own benefit…

However, AI can be – and reportedly is – used to disrupt democracy through interference in electoral processes, personalized political targeting, shaping voters’ behaviours, and manipulating public opinion. Furthermore, AI has seemingly been used to amplify the spread of misinformation, “echo chambers”, propaganda and hate speech, eroding critical thinking, contributing to rising populism and the polarization of democratic societies.

Moreover, the broad use by States and private actors of AI‑based technologies to control individuals such as automated filtering of information amounting to censorship, mass surveillance using smartphones, gathering of personal data and tracking one’s activity on‑ and offline may lead to the erosion of citizens’ psychological integrity, civil rights and political freedoms and the emergence of digital authoritarianism – a new social order competing with democracy.

Another drawback to AIDA’s approach is that it can be used to regulate only certain harmful uses of a technology but not others. To use generative AI systems like ChatGPT as an example again, certain uses might be regulated under AIDA, but other uses, such as to promote disinformation, might be out of scope. Should a general AI law that targets harm be so limited?

Professor Scassa in her blog post, The unduly narrow scope for “harm” and “biased output” under the AIDA, also draws attention to two limitations in AIDA. First, harm is defined only in terms of impacts to individuals, which may overlook the collective dimension of significant harms to groups affected by AI. She sums up this criticism stating:

With its excessive focus on individuals, the AIDA is simply tone deaf to the growing global understanding of collective harm caused by the use of human‑derived data in AI systems…

… biased AI will tend to replicate systemic discrimination. Although it will affect individuals, it is the collective impact that is most significant – and this should be recognized in the law. The somewhat obsessive focus on individual harm in the AIDA may unwittingly help perpetuate denials of systemic discrimination.

Professor Scassa also notes that AIDA provides less protection in some instances to risks of biased decisions than it does to risks of harm. This is because the term “harm” as used in AIDA does not include biased output. Accordingly, enforcement rights that target risks of harm may not extend to harm caused by biased decisions. On this criticism she states:

It is also important to note that the definition of “harm” does not include “biased output”, and while the terms are used in conjunction in some cases (for example, in section 8’s requirement to “identify, assess and mitigate the risks of harm or biased output”), other obligations relate only to “harm”. Since the two are used conjunctively in some parts of the statute, but not others, a judge interpreting the statute might presume that when only one of the terms is used, then it is only that term that is intended. Section 17 of the AIDA allows the Minister to order a person responsible for a high‑impact system to cease using it or making it available if there is a “serious risk of imminent harm.” Section 28 permits the Minister to order the publication of information related to an AI system where there are reasonable grounds to believe that the use of the system gives rise to “a serious risk of imminent harm.” In both cases, the defined term ‘harm’ is used, but not ‘biased output.’

The goals of the AIDA to protect against harmful AI are both necessary and important, but in articulating the harm that it is meant to address, the Bill underperforms.

Professor Scassa’s criticisms may be features rather than bugs. Other federal legislation, such as the CCPSA, also targets harms to individuals. The government may intend to keep their scopes relatively aligned. As for the criticism related to the lack of certain remedies for bias, human rights commissions already have jurisdiction to make orders to address grounds of discrimination. AIDA may intend that only human rights commissions or tribunals have this order-making power. But, as AIDA lacks transparency as to how it will interact with existing human rights laws, there is no way to know whether the problems Professor Scassa refers to are features or bugs.

(c)   Should AI systems be regulated by ISED?

AIDA concentrates enormous powers in the executive. This includes everything from what, how, and whom AIDA regulates to the quantum of AMPs and when they will apply, as well as all enforcement powers. Thus, the law, policy, administration and enforcement will all fall within a single Ministry, ISED.

This regulatory structure is somewhat similar to how food and drugs, hazardous products and consumer safety are regulated federally, except that what is regulated is much better defined in the law, rather than only in regulations, and AI will be much more ubiquitous. But, this structure is much different from how privacy under PIPEDA and human rights under the CHRA are regulated.

Mardi Witzel notes in her article published by the think tank CIGI, A Few Questions about Canada’s Artificial Intelligence and Data Act,

The most curious aspect of the proposed law is also the most foundational thing about it: the overarching governance arrangement. A single ministry, ISED, is proposed as the de facto regulator for AI in terms of law and policy making and administration and enforcement.

Mardi Witzel notes that best‑in‑class governance “stresses the importance of independent regulatory decision-making, conducted at arm’s length from the political process in instances where perception of impartiality drives public confidence and where the decisions of the regulator could have a significant impact on particular interests.” AIDA’s structure does not meet these criteria. She concludes her article stating:

The AIDA is still at first reading and we can expect it will undergo revision as it moves through the legislative process. That’s a good thing. In addition to the search for answers to the inevitable questions around the rules and requirements for AI, there should be a healthy and transparent discussion of the institutional arrangements by which the legislation and ensuing policy are to be governed.

There are also unanswered questions as to whether ISED has the expertise and capacity to regulate AI systems. AI research and adoption is a very fast‑moving area – is it possible or practical for ISED to stay on top of everything? There are also a lot of nuances, so direct regulation by a single body like ISED may have huge downsides and unintended consequences.

(d)   Does AIDA take the wrong approach to regulating AI?

The lack of any meaningful debate in Canada on the appropriate approach to regulating AI systems is concerning. There is, as yet, no international consensus on the approach to be taken for any such regulation. Had there been any meaningful debates, AIDA’s structure may have been much different.

The alternative approach of the U.K. government, for example, of implementing cross‑sectoral guidelines is a sensible common‑sense approach that could have been adopted. The U.K. approach eschews regulating any particular technologies or systems in favour of delegating authority over the regulation of AI systems to existing regulatory authorities that already regulate particular products or systems. This permits these regimes to be adapted as needed to the particular contexts of the specific sectorial regime.

AIDA is also not consistent with the approach being taken in Israel which has also rejected the horizontal approach to regulating AI systems.

AIDA also appears to reject the approach taken in the AI Act of delegating much of the regulatory work to regulatory bodies that are already engaged in the regulation of product safety throughout the EU.

AIDA is also inconsistent with the regulatory approach of Canada’s largest trading partner, the U.S.

In the Canadian context, as noted above, there are already numerous laws and specific sectorial regulatory regimes that could be used or adapted to regulate AI systems that create serious potential health or safety concerns or concerns about systemic bias or discrimination. Had the U.K. approach been taken, AIDA could have been replaced with cross‑sectorial guidance to regulators about approaches to take in their regulatory domains. These guidelines could be developed with federal and provincial collaboration to achieve a pan-Canadian approach that respects the existing constitutional division of powers.

There is a challenge of ensuring that Canadian laws keep up with the potential for systemic algorithmic bias or discrimination. To be sure, discrimination or bias was not created by the shift to algorithmic decision making. While algorithmic processing of data could make such discrimination more systemic, as Professor Orly Lobel has explained in her book The Equality Machine,[66] biased algorithmic decision-making can actually be more easily detected and corrected than biased decisions made by human beings. But, had there been a debate on the best way of preventing or correcting algorithmic decision-making, the government may have instead considered strengthening the CHRA rather than creating a new law that stands on its own.

The CHRA could be modernized to address algorithmic bias. A current difficulty is that the CHRA only applies, in the private sector context, to federally regulated organizations such as banks and airlines and some other entities under federal jurisdiction. AIDA takes a much broader jurisdiction because it regulates AI systems that produce biased output in activities carried out in the course of international or interprovincial trade and commerce. It thus extends federal jurisdiction for discriminatory practices involving AI, without also expressly extending the jurisdiction of the Commission to address AI system-based complaints.

Another approach to addressing AI bias would be to extend the jurisdiction of the federal Human Rights Commission and to amend the CHRA to give the Minister of Justice (the Minister responsible for the CHRA) and/or the Commission new powers to obtain information from organizations, or to establish new regulations giving the Commission stronger processes for investigating possible violations of the CHRA. That way, the Commission could use its existing expertise and authority to investigate and enforce the existing laws. The government could also significantly increase the penalties associated with violations of the CHRA, as the current penalties are small compared, for example, to what is proposed in the CPPA. The government could also collaborate with the provinces, which could adopt similar changes to their own human rights laws.

The U.K. government approach to regulating AI is also instructive here. Under AIDA there is a single, compromise definition of AI system that potentially over-regulates products or services, but under-regulates for concerns about bias and discrimination by algorithms. As the U.K. government points out, a contextual approach to regulation is likely far more effective. For example, when it comes to algorithmic discrimination, it should not matter what form of AI is used. All prohibited forms of discrimination are problematic whether arising from the use of autonomous, semi‑autonomous, or other techniques. On the other hand, if a new and onerous regime is established to regulate every possible product and service that uses AI, then a more restricted definition is warranted.

AIDA’s approach to regulating AI systems also doesn’t take into account how the law will need to evolve to deal with AI applications. AIDA’s model is designed primarily as a regulatory point solution. As explained by Professor Agrawal in his book Power and Prediction: The Disruptive Economics of Artificial Intelligence,[67] point solutions are applications of AI that are dropped into an existing system without changing the overall system. An example is the approach to regulating autonomous vehicles. The initial regulations will focus, for example, on the safety of such vehicles, which are being designed to operate under the current "rules of the road." But the massive efficiencies and benefits of AI will only be realized when regulatory systems are also adapted to take into account the potential for system-level deployments of AI. Continuing with the example of autonomous vehicles, a system-level regulatory framework could rethink all of the "rules of the road" that apply to autonomous vehicles, many of which will eventually become unnecessary and hold innovations in AI back. As Professor Agrawal points out, fostering AI system changes requires redesigning older regulatory models.

AIDA takes a "point solution" rather than a "system solution" approach to regulating AI systems. Delegating regulatory authority over AI systems to the regulators of existing regimes may provide a better long-term path to realizing the benefits of AI. AIDA may deal with mitigating risks of harm in AI applications and AI systems, but it falls far short of providing a regulatory framework that is adaptable to the legal system changes that are required to maximize the innovative potential of AI systems.

(e)   Will AIDA impose responsibilities on AI actors that are impossible to meet?

The class of persons responsible for an AI system under AIDA is staggeringly broad and appears to be inconsistent with other regulatory regimes. The persons responsible for an AI system are not limited to organizations that make an AI system available or manage its operation. The obligations imposed by AIDA potentially apply to the entire ecosystem of AI actors involved in the design, development and exploitation of an AI system. This is far broader than the AI actors to which the AI Act applies or the persons that are regulated under analogous hazardous products regulatory regimes in Canada such as the CCPSA.

AIDA’s model focuses on the lifecycle of AI systems and responsible AI actors. It appears to be premised on, or at least inspired by, the OECD Principles on Artificial Intelligence and the extensive literature that has developed describing the attributes of AI ecosystems.[68] These OECD ethical principles, which the government appears ready to enshrine in AIDA, define the terms AI system lifecycle and AI Actors as follows:

AI system lifecycle: AI system lifecycle phases involve: i) ‘design, data and models’, which is a context‑dependent sequence encompassing planning and design, data collection and processing, as well as model-building; ii) ‘verification and validation’; iii) ‘deployment’; and iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.

AI Actors: AI Actors are those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI.

In theory there may be significant collaboration and exchange of information among AI actors in all aspects of the design, development, implementation, and operation or management of a particular AI system. However, AIDA’s model appears to assume that this level of collaboration, knowledge sharing, and shared responsibility will exist, or is even feasible, for every AI system, whether or not the system is ever deployed. It is questionable, however, whether these assumptions are correct.

Many AI projects are developed and implemented by diverse inter‑disciplinary teams comprising, among others, consultants, developers, AI architects, ML engineers, DevOps engineers, designers, data scientists and analysts, and workflow automation solution providers. The projects also utilize or adapt and customize a range of tools and technologies, including machine learning platforms and existing state-of-the-art AI algorithms, or build on and integrate a range of services such as machine learning as a service (MLaaS), an umbrella term for various cloud‑based platforms, such as those offered by Google AI Platform, Microsoft Azure AI, Amazon Machine Learning on AWS, and IBM Watson, that allow for fast model training and deployment. As Meredith Hammond pointed out in a recent blog post, these hyperscalers use tools such as AI toolkits, cloud-based services, and open source licensing and APIs to enable AI developers to leverage AI models and tools to launch new applications. Many developers start with building blocks from these providers such as speech to text, chatbots, and translation services, to name just a few.

It will be immediately apparent that not all of the entities involved in an AI project, and especially a complex one, will have the complete knowledge or wherewithal to comply with the assessment, risk mitigation, and monitoring obligations. By way of example only, MLaaS providers do not have visibility into how all of their AI systems are used by others. They are akin to neutral intermediaries who typically are not accountable for the acts of users, at least until they acquire knowledge and fail to act or otherwise lose their neutrality. An entity that licenses another entity to use and customize one of its algorithms is in a similar position. An entity that sells or licenses an AI system tailored to a vertical segment, e.g., an AI application used by retailers to help customers locate products they may want to buy, or an advanced CRM solution for businesses, could be required under AIDA to contract to receive reams of data, including potentially personal data, in order to comply with these government-proposed rules.

There are also issues with placing regulatory burdens on open source developers. As Paul Sawers noted in a recent article, much of AI has been built on an open source foundation. But the open source community is not a community of entities. It’s a community of people, including hobbyists, scientists, academics, doctors, professors and university students, who don’t usually stand to profit from their contributions and don’t have “their own compliance department”.

If enacted, AIDA could require an unprecedented level of contracting and indemnification obligations that would make partnering on AI development projects cumbersome, if indeed such terms could be negotiated at all. Some foreign entities may decide not to establish or continue to use Canadian developers for systems that will be deployed internationally, as these regulatory burdens will not apply to developers outside of Canada. This could impact the Canadian AI ecosystem.

The government may say all of this could be handled through the regulations. However, while the regulations provide flexibility to define high-impact systems and the responsibilities of persons responsible for AI systems and regulated entities, the definitions of the terms "regulated entities" and "persons responsible for an AI system" are hard-coded. This raises questions about whether regulations under AIDA can distinguish between the responsibilities of these different AI actors if doing so would be inconsistent with the text of AIDA's definitions, which make no such distinctions. This is something that could be clarified in AIDA before it is passed. But the bigger question for Parliament should be whether to leave to the executive the important policy questions of which AI actors will be regulated and how.

(f)    Is it premature to regulate AI now via AIDA?

Legislation in areas dominated by advanced and impactful technologies like AI-based systems can have the effect of creating trust and acceptance. Legal regulation can set minimum standards for the technology to increase the probability of engendering trust in it.[69] Building trust and confidence in technology has been the express backbone of the regulation of privacy and e‑commerce in Canada, including the policy basis for PIPEDA, Canada’s anti‑spam law (CASL), and the legal recognition of electronic documents via federal and provincial laws.

Whether new laws to regulate AI systems will foster trust and confidence will depend on many factors. Besides what AIDA will actually regulate and how it will do it – which is unknown – these factors include whether there has been robust debates and consultation that help provide legitimacy for the law. As for AIDA, as the Cybersecure Policy Exchange noted, it was proposed with very limited public consultation.[70] As a result, there has been no meaningful public debate regarding central issues that concern the regulation of AI systems, including considerations of whether AIDA is the appropriate mechanism for regulating AI systems and whether now is the most opportune time to regulate them.

There has been no meaningful dialogue or debate in Canada about whether Canada should regulate AI systems now, especially given that no country has yet enacted national AI-specific laws to regulate the health and safety of AI systems, including Canada’s major trading partner, the United States. The government may believe that Canada should lead in this regard, or at least have a structure in place that could adapt Canada’s laws in keeping with what the EU may do.

This approach may not, however, adequately account for potential adverse impacts of regulation or potential new regulatory burdens on AI start‑ups in Canada. A recent survey of AI start‑ups and VC firms in the EU regarding the EU AI Act revealed that it is seen as a significant challenge in terms of technical and organizational complexity and compliance cost. The survey indicated that 50% of start‑ups surveyed are concerned that the AI Act will slow down AI innovation in Europe. Most of the VCs expect the AI Act to reduce the competitiveness of European start‑ups in AI.[71]

While regulation of AI may increase trust and confidence in these technologies, regulation can also impede innovation, particularly in the short term, by increasing the cost of entry into markets and distorting competition. Unnecessary and overly burdensome regulations can create barriers to entry and limit the ability of firms to innovate and capture the social benefits of AI. It may be that more upstream governance will translate to less downstream innovation.[72]

The approach of being a “first mover” in AI regulation may also not adequately take into account the significant repercussions of slowing down the development of AI in Canada. As Professor Agrawal explains in his book Power and Prediction: The Disruptive Economics of Artificial Intelligence,[73] massive investments are being made now in new AI systems in every domain. A key reason is the tremendous first mover advantage associated with early entry into an area. Early mover advantages are driven by the benefits of early accumulation of training data and the feedback loops that further develop trained algorithms. New entrants can have real trouble ever catching up. Thus, AIDA could have damaging effects on the development or deployment of AI systems from which our economy may never recover.

The risk to innovation in AI is compounded because AIDA provides no guidance as to what it will actually regulate. This uncertainty will affect investments in AI developments in Canada as capital decisions take into account many considerations including what may be considered to be an unfriendly or more onerous investment environment.

But there are even bigger problems. If enacting AIDA now inhibits the adoption of AI in Canada, it will also impede the development of new applications and systems that rely on AI. This would reduce opportunities for Canadian organizations to innovate their products and services and to lower costs for consumers, and would harm our global competitiveness compared to countries with more open and friendly environments for AI adoption.

Moreover, Canadians could lose the benefits that AI brings to make predictions and decisions better. While attention is usually given to decisions that, on occasion, may be worse than decisions made by people, the reason AI is often deployed in existing applications and in new systems is that it is often better than human decision-makers, even when operating autonomously or semi‑autonomously. This has been shown time after time, including in mastering games like chess and Go, reading x‑rays and diagnosing certain diseases, determining what crops to plant and when to plant them, designing vessels, boats and airplanes, factory designs, the ecosystem of smart city products and design tools, recommendation systems, search engines, pipeline monitoring and control systems, and designing machines and other products, to identify a few. Inhibiting the adoption of AI could lead in many cases to poorer recommendations, decisions, products and services. These impacts would not only be economic. For example, retarding the development or implementation of an emergency medicine diagnostic system would likely result not only in higher health care costs, but also in poorer patient outcomes including, in some cases, the death of patients.

Stalling the deployment of AI systems in Canada could have an even greater negative impact on the Canadian economy. As economist and former Bank of Canada Governor Stephen Poloz pointed out in his book The Next Age of Uncertainty, AI has the potential to be a tectonic general purpose technology. It could, like the widespread adoption of the microchip during the third industrial revolution, deliver untold benefits to society in terms of quality of life, productivity, and a boost to national income. The loss of these benefits would be felt throughout our economy.

This issue was touched on by Mardi Witzel in an article published by CIGI, The Greatest AI Risk Could Be Opportunities Missed, where she noted that “The greatest risk is that companies will be too cautious, not take chances with it” and that “On a practical level, that may pose a greater risk than the admittedly legitimate concerns around AI ethics and impacts that are making headlines today.”

The government may also believe that enacting AIDA now will help position Canadian businesses to succeed in EU markets. But unless those businesses can first scale in Canada and the U.S., fewer start‑ups may ever be in a position to bring their innovations to market in the EU.

There are obviously some benefits the government hopes to achieve with AIDA. But, has the government done a full benefit/cost analysis of enacting AIDA now? While there are existing regulatory frameworks that could address some of the impacts of AI systems, there will likely be gaps. (And to be clear, I am not arguing that the existing frameworks such as the CCPSA and CHRA are sufficient in their current forms to regulate AI or at least certain aspects of AI.) But, has the government done a complete macro and micro analysis of the benefits of being a “first mover” to regulate AI compared to the adverse societal and economic risks of being first mover? These are not trivial concerns and the government should be asked to make its analysis, if any, publicly available.

(g)   Will AIDA impede innovation by imposing new restrictions on uses of anonymized data, with its duplicative regulatory regimes, and disproportionate penalties?

Many AI systems, and particularly those that rely on machine learning, require access to reams of data to train, test and recalibrate their algorithms. But, as the panel on AI and the future of health explained at the recent Osgoode Hall Law School conference Bracing for Impact: The Future of AI for Society, access to data is a significant impediment to the adoption of AI in Canada. Therefore, the regulation of the use of personal information, and in particular, the anonymization and use of anonymized information, must reflect an appropriate balance between the protection of the public and the benefits of the adoption of AI systems to society at large, including members of the public.

Compared to the generally applicable obligations with respect to the anonymization of personal information, AIDA would create conflicting and even more onerous obligations, and double jeopardy, for AI actors that seek to use anonymized information for AI system purposes.

The CPPA will establish general standards for the anonymization of personal information in Canada. While the current proposals are flawed as they set the standard in a problematic way, as I explained in my blog post CPPA: problems and criticisms – anonymization and pseudonymization of personal information, when enacted the CPPA will establish the national standards for anonymization of personal information.

As noted above, under AIDA a person who processes or makes available for use anonymized data in the course of a regulated activity will now also have to comply with new and possibly conflicting regulations with respect to the manner in which data is anonymized, and the use or management of anonymized data.[74] There will also be a new offence under section 38 of AIDA that will criminalize using personal information knowing or believing that the information is obtained or derived, directly or indirectly, as a result of the commission of an offence under an Act of Parliament or of a provincial legislature, regardless of whether the act took place in Canada. This new criminal offence would cover, for example, the use of personal information that was not de‑identified properly under section 74 of the CPPA or was re‑identified contrary to section 75 of the CPPA.

The government has not explained why it needs two separate and possibly conflicting regimes for regulating the use of de‑identified or anonymized personal information by a person who processes or makes anonymized data available for use in the design, development or deployment of AI systems. The government has clear policy goals of promoting the development and adoption of AI systems in Canada. This suggests caution before restricting the use of personal information beyond what is reasonably required to ensure that organizations use personal information, in accordance with the general laws of the land, in a responsible and trustworthy way.

The AIDA provisions related to uses of personal information will also create double jeopardy risks under the CPPA and AIDA regimes. It is quite possible that both the CPPA's and AIDA's provisions related to anonymization could be triggered by the same activities. Under the combination of the two criminal offence provisions, the liability of an organization could theoretically be $25,000,000 and 5% of the organization's gross global revenue under the CPPA and $10,000,000 and 3% of the person's gross global revenues under AIDA. This is on top of the possible AMPs and class action liability of a person that designs, develops or makes an AI system available in Canada in violation of AIDA and the CPPA's privacy provisions.
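By way of arithmetic illustration only, the scale of this combined exposure can be sketched in a few lines of Python. This sketch assumes the "greater of a fixed amount and a percentage of gross global revenue" formulation used in the bills, and uses a purely hypothetical revenue figure; it is not a statement of how a court would actually assess fines.

```python
# Hypothetical illustration of maximum combined fine exposure where each
# offence provision caps the fine at the greater of a fixed amount and a
# percentage of gross global revenue.

def max_fine(fixed_cap: float, revenue_pct: float, gross_global_revenue: float) -> float:
    """Return the statutory maximum: the greater of the fixed cap and
    the stated percentage of gross global revenue."""
    return max(fixed_cap, revenue_pct * gross_global_revenue)

# Purely hypothetical organization with $1B in gross global revenue.
revenue = 1_000_000_000

cppa_max = max_fine(25_000_000, 0.05, revenue)  # CPPA: $25M / 5% of revenue
aida_max = max_fine(10_000_000, 0.03, revenue)  # AIDA: $10M / 3% of revenue

print(f"CPPA maximum:     ${cppa_max:,.0f}")            # $50,000,000
print(f"AIDA maximum:     ${aida_max:,.0f}")            # $30,000,000
print(f"Combined maximum: ${cppa_max + aida_max:,.0f}")  # $80,000,000
```

Note that for a smaller organization, say one with $100 million in revenue, the fixed caps of $25,000,000 and $10,000,000 dominate instead of the percentages, which is part of why these penalties also loom large for Canadian start-ups and SMEs.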

The government should also explain why offences under AIDA are treated so much more harshly than offences under analogous legislation such as the CCPSA, the Food and Drugs Act, and other hazardous products laws, or than sanctions for violating the CHRA. A comparison of the maximum fines for offences prosecuted by indictment against organizations is set out in the table below.

Artificial Intelligence and Data Act (AIDA)

· Possession or use of personal information (s. 38), or making dangerous AI systems available for use (s. 39). Up to a $25,000,000 fine and 5% of gross global revenue (s. 40(a)(i)).

· Obstructing or providing false or misleading information to the Minister, or contravening any of sections 6-12 (s. 30(1) and (2)). Up to a $10,000,000 fine and 3% of gross global revenue (s. 30(3)(a)(i)).

Canada Consumer Product Safety Act

· Contravention of a provision of the Act (excluding s. 8, 10, 11 or 20) (s. 41(1)). Up to a $5,000,000 fine.

Hazardous Products Act

· Contravention of a provision of the Act, the regulations or an order made under the Act (s. 28(1)). Up to a $5,000,000 fine.

Food and Drugs Act

· Contravention of a provision of the Act or the regulations (s. 31). Up to a $5,000 fine.

· Offences that relate to food (s. 31.1(1)). Up to a $250,000 fine.

· Offences that relate to therapeutic products (s. 31.2(1)). Up to a $5,000,000 fine.

· Contravention of s. 21.6 (making a false or misleading statement to the Minister regarding therapeutic products), or causing serious risk of injury to human health (s. 31.4(1)). A fine at the discretion of the court.


While there may be fears of the risks associated with AI, this penal treatment of AI systems is not technologically neutral or proportionate to risks associated with products or drugs that can also cause significant health or safety risks, and even death.

The proposed liability regime also leaves open parallel regulatory obligations and sanctions for AI systems under other existing regulatory regimes, such as for consumer products that could be regulated under both AIDA and other federal or provincial laws like the CCPSA. The interrelation between AIDA and other existing laws is something Parliament should focus on when considering the appropriateness of AIDA.

While the current realpolitik may be for the government to show it is serious about mitigating risks associated with AI systems, at some point one would expect a fair balance to be applied in the sanctions for violating AIDA, especially if the government really wants to incentivize the development and deployment of AI systems in Canada. The government may want stratospherically high fines to be able to threaten large foreign multinational technology companies. But these penalties also threaten Canadian start-ups and other SMEs and can influence where key decisions to develop and commercialize AI systems are made.

Parliament needs to focus on the combination of questions raised above about AIDA. This includes its extreme vagueness, its potential to chill innovation and AI adoption in Canada, the benefits and costs of being at the “forefront” of AI regulation, the imposition of impractical responsibilities on the ecosystem of persons that design, develop or put AI systems into production, and the dissuasive penalty regime. This combination of factors was commented on by Canadian author Stephen Marche in an Op‑Ed in the Globe and Mail entitled Canada’s new artificial intelligence laws in Bill C‑27 are not very intelligent.

But the problem with AIDA is the combination of extreme vagueness of terms combined with the severity of its punishments…

There is no question that this stuff is incredibly powerful and needs regulation, and a clear regulatory framework would be a national advantage. But any meaningful regulation that won’t just strangle the industry will have to focus on outcomes rather than processes.

Let’s also be clear about how much power the Canadian regulators have to affect the future of AI. AIDA won’t stop its development by half a beat. It might alter the geography, shifting the process of innovation outside the country, but it won’t stall the innovation itself.

The danger is that the nascent AI industries in Canada, which are already being pulled away from Toronto and Montreal to Silicon Valley and London by the forces of money and power, will stop holding on and let themselves be pulled. Nobody in San Francisco or London will propose massive fines for unspecific activities determined by whether or not they’re “high impact.” They’ll just want the tech.

To lose the potential of AI would amount to a national catastrophe, the wasted opportunity of a century…

6. Recommendations for improving AIDA’s regulation of AI systems in Canada

Based on the foregoing, I recommend the following with respect to AIDA:

Recommend: Parliamentarians should give serious consideration to questions about AIDA. These questions include whether AIDA is sufficiently detailed for Parliamentarians to give it proper consideration in its present form; the appropriateness of the substantial delegation of policy and enforcement choices to the executive and to ISED; whether AIDA is the appropriate framework for addressing harms and bias in AI systems and whether a cross‑sectorial regulatory approach similar to what the United Kingdom and Israel are doing is a preferable structure; whether AIDA could impose impractical responsibilities on the ecosystem of persons that design and develop AI systems, put AI systems into production, or make data available for use with AI systems; whether, on balance, AIDA will promote trust and confidence in AI without substantially inhibiting innovation in a critical technology that will power the 4th generation industrial revolution; whether AIDA fails to protect the public by exempting public sector uses of AI systems from regulation; whether this is the time to enact an AI-specific law; and whether AIDA’s disproportionate and overlapping penalty regime is appropriate.

Recommend: Should Parliament decide to move forward with AIDA, the following are suggested changes that can be made to AIDA:

1. Amend AIDA to include key definitions and to provide at least intelligible guidance for assessing and scoping proposed regulations, to enable the public, including civil society, Canadian entrepreneurs, and individual members of the public, to understand the government’s regulatory intentions, to understand potential overlaps with other existing regulatory regimes, to understand how the regulations will apply to AI actors throughout the AI ecosystem, and to determine whether AIDA will promote trust and confidence in AI, promote innovation, and achieve an appropriate balance between policy objectives.
2. AIDA should split regulatory authority between ISED, which would have authority over health and safety issues, and the Minister of Justice (the Minister responsible for the CHRA) and the Canadian Human Rights Commission, which should have sole authority over issues related to bias and discrimination. The CHRA should also be amended to include the regulatory tools proposed in AIDA.
3. Amend the provisions governing which AI actors AIDA applies to, and align them with existing regimes such as the CCPSA and the EU AI Act. In the alternative, provide guidance in the law as to the approaches to be taken with respect to particular AI actors. Further, AIDA should clarify that the regulations may impose different obligations on persons responsible for AI systems and on other regulated entities.
4. Amend the provisions to avoid overlapping or disproportionate regulation of the AI systems such as by including exceptions reflected in the AI Act, or at least provide a requirement for the regulations to achieve these objectives.
5. Remove the provisions that will regulate the standards for anonymization, as these are adequately dealt with under the CPPA, would create conflicting regulatory requirements, and could impede access to use of data, which is essential to AI adoption and would not materially undermine the already robust provisions of the CPPA.
6. AIDA should provide a mechanism to establish exemptions for SMEs from certain obligations, similar to proposals in the EU and U.S., or at least provide a requirement for the regulations to achieve these objectives.
7. Provide that an AI system can be removed from the high-impact category by regulation, as under the AI Act.
8. Consider whether AIDA should apply to harms to organizations and to critical infrastructure, and whether it should apply more broadly to the public sector, where human rights can be severely impacted by AI systems.
9. Where information or confidential information is disclosed by ISED to another regulatory authority, notice of the disclosure should be given to the organization potentially affected by it.
10. An enforcement order made by the Minister should be subject to a right of appeal on questions of law or mixed questions of fact and law.
11. Align the offense penalties to accord with the fines under the CCPSA and remove the double jeopardy under AIDA, the CPPA and the CCPSA. Any AMPs should be assessed by an independent tribunal. All criminal offenses should require that the offending act be done knowingly.

** I would like to thank the McCarthy Tetrault library staff for their research assistance. I would also like to thank Novalee Davy, a McCarthy Tetrault articling student, for her help in proofreading this blog and for her assistance in preparing the chart comparing AIDA’s fines with those in other Canadian statutes.

** As this blog is somewhat of a work in progress, it may be updated from time to time.

[1]     For a good summary of AIDA, see Teresa Scassa, “Oversight and Enforcement Under Canada’s Proposed AI and Data Act” (29 August 2022), online (blog): Teresa Scassa.

[2]     The DCIA summarizes AIDA as follows: “Part 3 enacts the Artificial Intelligence and Data Act to regulate international and interprovincial trade and commerce in artificial intelligence systems by requiring that certain persons adopt measures to mitigate risks of harm and biased output related to high‑impact artificial intelligence systems. That Act provides for public reporting and authorizes the Minister to order the production of records related to artificial intelligence systems. That Act also establishes prohibitions related to the possession or use of illegally obtained personal information for the purpose of designing, developing, using or making available for use an artificial intelligence system and to the making available for use of an artificial intelligence system if its use causes serious harm to individuals.”

[3]     The objectives to be achieved by AIDA are referred to in the Bill’s Preamble, which states, in part:

Whereas there is a need to modernize Canada’s legislative framework so that it is suited to the digital age;

Whereas the design, development and deployment of artificial intelligence systems across provincial and international borders should be consistent with national and international standards to protect individuals from potential harm;

Whereas organizations of all sizes operate in the digital and data‑driven economy and an agile regulatory framework is necessary to facilitate compliance with rules by, and promote innovation within, those organizations;

Whereas Parliament recognizes that artificial intelligence systems and other emerging technologies should uphold Canadian norms and values in line with the principles of international human rights law;

And whereas this Act aims to support the Government of Canada’s efforts to foster an environment in which Canadians can seize the benefits of the digital and data‑driven economy and to establish a regulatory framework that supports and protects Canadian norms and values, including the right to privacy;

[4]     The objectives to be achieved by AIDA are also set out in section 4, which states that the purposes are:

(a) to regulate international and interprovincial trade and commerce in artificial intelligence systems by establishing common requirements, applicable across Canada, for the design, development and use of those systems; and

(b) to prohibit certain conduct in relation to artificial intelligence systems that may result in serious harm to individuals or harm to their interests.

[5]     AIDA, s.7

[6]     AIDA, s.8

[7]     AIDA, s.9

[8]     AIDA, s.12

[9]     AIDA, s.10

[10]   AIDA, s.6

[11]   AIDA, s.11

[12]   AIDA, s.13

[13]   AIDA, s.14

[14]   In either case, the Minister can prescribe the qualifications of the person to conduct the audit (s.15(2)). If the audit is conducted by an independent auditor, the person who is audited must give all assistance that is reasonably required to enable the auditor to conduct the audit, including by providing any records or other information specified by the auditor (s.15(3)).

[15]   AIDA, s.16

[16]   AIDA, s.17

[17]   AIDA, ss.18‑28

[18]   AIDA, s.29

[19]   AIDA, s.30

[20]   AIDA, s.33

[21]   AIDA, s.40

[22]   Elizabeth Tydd and Samantha Gavel, “Scan of the Artificial Intelligence Regulatory Landscape – Information Access & Privacy” (October 2022), online: Information and Privacy Commission NSW.

[23]   See, European Commission, “TTC Joint Roadmap for Trustworthy AI and Risk Management” (1 December 2022), online: European Commission; Tydd, supra note 22; Nyman Gibson Miralis, “Australia’s Artificial Intelligence Ethics Framework: Making Australia a global leader in responsible and inclusive AI” (8 December 2022), online: Lexology; Vatican News, “Multi‑religious signature of the Rome Call for AI Ethics” (10 January 2023), online (video): YouTube; RenAIssance Foundation, “Call for AI Ethics, signed by the Pontifical Academy for Life (and others)”, online: RenAIssance Foundation; Berkman Klein Center, “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI”.

[24]   Summarized in Switzerland, Federal Department of Foreign Affairs, “Artificial intelligence and international rules” (13 April 2022), online: Switzerland, Federal Department of Foreign Affairs; Baker McKenzie, “UNESCO members adopt a global agreement on the ethics of artificial intelligence” (17 January 2022), online: Lexology.

[25]   What is unclear at this point is whether the AI Convention will be restricted to instruments affecting only the public sector, or will (contrary to the apparent views of the U.S.) also include rules affecting the private sector, and whether civil society organizations will be allowed to participate.

[26]   Withers, “Ethics in AI: Where it fits and Singapore’s approach” (9 September 2022), online: Lexology.

[27]   David Restrepo Amariles, “Regulating Artificial Intelligence – Is Global Consensus Possible?” (9 September 2022), online: Forbes.

[28]   Matt O’Shaughnessy, “One of the Biggest Problems in Regulating AI Is Agreeing on a Definition” (6 October 2022), online: Carnegie Endowment for International Peace.

[29]   Restrepo Amariles, supra note 27.

[30]   For a summary of international developments see also, Kerem Gulen, “Round Table: Will there be a global consensus over AI Act?” (24 October 2022), online: Dataconomy; Tydd, supra note 22; Switzerland, Federal Department of Foreign Affairs, “Artificial intelligence and international rules” (13 April 2022), online: Switzerland, Federal Department of Foreign Affairs. For information about Chinese developments, see, Covington & Burling LLP, “China Takes the Lead on Regulating Novel Technologies: New Regulations on Algorithmic Recommendations and Deep Synthesis Technologies” (8 February 2022), online: Lexology.

[31]   For a comparison of AIDA and the AI Act, see Fasken, “The Regulation of Artificial Intelligence in Canada and Abroad: Comparing the Proposed AIDA and EU AI Act” (18 October 2022), online: Fasken – Privacy & Cybersecurity Bulletin.

[32]   Recital 6a explains what is intended by machine learning “(6a) Machine learning approaches focus on the development of systems capable of learning and inferring from data to solve an application problem without being explicitly programmed with a set of step‑by‑step instructions from input to output. Learning refers to the computational process of optimizing from data the parameters of the model, which is a mathematical construct generating an output based on input data. The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalization of the problem, or because the resolution of the problem is intractable with non‑learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning with neural networks, statistical techniques for learning and inference (including for instance logistic regression, Bayesian estimation) and search and optimization methods.”

[33]   Recital 6b explains what is meant by logic‑ and knowledge‑based approaches: “Logic‑ and knowledge based approaches focus on the development of systems with logical reasoning capabilities on knowledge to solve an application problem. Such systems typically involve a knowledge base and an inference engine that generates outputs by reasoning on the knowledge base. The knowledge base, which is usually encoded by human experts, represents entities and logical relationships relevant for the application problem through formalisms based on rules, ontologies, or knowledge graphs. The inference engine acts on the knowledge base and extracts new information through operations such as sorting, searching, matching or chaining. Logic‑ and knowledge based approaches include for instance knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems and search and optimization methods.”

[34]   The AI Act described the criteria as follows: “When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high‑risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria: (a) the intended purpose of the AI system; (b) the extent to which an AI system has been used or is likely to be used; (c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialization of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities; (d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons; (e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt‑out from that outcome; (f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age; (g) the extent to which the outcome produced with an AI system is not easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible; (h) the extent to which existing Union legislation provides for: (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages; (ii) effective measures to prevent or substantially minimise those risks; (i) the magnitude and likelihood of benefit of the AI use for individuals, groups, or society at large.”

[35]   The rationale is explained in Recital 63 “It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high‑risk AI systems related to products which are covered by existing Union harmonization legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation. With regard to high‑risk AI systems related to products covered by Regulations 745/2017 and 746/2017 on medical devices, the applicability of the requirements of this Regulation should be without prejudice and take into account the risk management logic and benefit‑risk assessment performed under the medical device framework.”

[36]   The AI Act defines the term “general purpose AI system” to mean “an AI system that ‑ irrespective of how it is placed on the market or put into service, including as open source software ‑ is intended by the provider to perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, translation and others; a general purpose AI system may be used in a plurality of contexts and be integrated in a plurality of other AI systems”.

[37]   For a more complete summary, see Deborah J. Kirk et al., “European Commission Proposes Reform on Liability Rules for Artificial Intelligence” (22 December 2022), online: Latham & Watkins.

[38]   See also, Marianna Drake et al., “UK Government Sets Out Sector‑Specific Vision for Regulating AI” (10 August 2022), online (blog): Covington – Inside Privacy; Morgan, Lewis & Bockius LLP, “AI and Regulation: UK Government Proposes Pro‑Innovation Approach” (18 August 2022), online: Lexology.

[39]   Pearl Cohen Zedek et al., “Israel Innovation Authority Publishes Draft Policy for Regulation and Ethics in Artificial Intelligence” (30 November 2022), online: Lexology.  Note the latter part of this summary was taken from a query to OpenAI.

[40]   Benjamin Cedric Larsen, “The geopolitics of AI and the rise of digital sovereignty” (8 December 2022), online: Brookings.

[41]   Alex Engler, “The EU and U.S. are starting to align on AI regulation” (1 February 2022), online: Brookings; J. Edward Moreno, “EEOC Targets AI‑Based Hiring Bias in Draft Enforcement Plan (1)” (12 January 2023), online: Bloomberg Law.

[42]   Bishop Garrison, “Regulating Artificial Intelligence Requires Balancing Rights, Innovation” (11 January 2023), online: Just Security.

[43]   While there was a dearth of laws to regulate AI, an AI Index analysis shows that in the U.S. and around the world there was a plethora of laws passed and bills proposed that used the words “artificial intelligence”. See, Stanford Institute for Human‑Centered Artificial Intelligence, “Artificial Intelligence Index Report” (March 2022), online: Stanford University at Ch 5.

[44]   For a summary of the ADPPA, see Mayer Brown, “The American Data Privacy and Protection Act: Is Federal Regulation of AI Finally on the Horizon?” (19 October 2022), online: Lexology.

[45]   The ADPPA defines the term “algorithm” to mean “a computational process that uses machine learning, natural language processing, artificial intelligence techniques, or other computational processing techniques of similar or greater complexity that makes a decision or facilitate human decision making with respect to covered data, including to determine the provision of products or services or to rank, order, promote, recommend, amplify, or similarly determine the delivery or display of information to an individual.”

[46]   The Impact Assessment scope in the ADPPA is set out as follows: “The impact assessment required under subparagraph (A) shall provide the following:

(i) A detailed description of the design process and methodologies of the algorithm.

(ii) A statement of the purpose, proposed uses, and foreseeable capabilities outside of the articulated proposed use of the algorithm.

(iii) A detailed description of the data used by the algorithm, including the specific categories of data that will be processed as input and any data used to train the model that the algorithm relies on.

(iv) A description of the outputs produced by the algorithm.

(v) An assessment of the necessity and proportionality of the algorithm in relation to its stated purpose, including reasons for the superiority of the algorithm over nonautomated decision‑making methods.

(vi) A detailed description of steps the large data holder has taken or will take to mitigate potential harms to individuals, including potential harms related to—

(I) any individual under the age of 17;

(II) making or facilitating advertising for, or determining access to, or restrictions on the use of housing, education, employment, healthcare, insurance, or credit opportunities;

(III) determining access to, or restrictions on the use of, any place of public accommodation, particularly as such harms relate to the protected characteristics of individuals, including race, color, religion, national origin, sex, or disability; or

(IV) disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status.”

[47]   See s207(c)(1)(B)(vi) of the ADPPA which listed these factors:

(I) any individual under the age of 17;

(II) making or facilitating advertising for, or determining access to, or restrictions on the use of housing, education, employment, healthcare, insurance, or credit opportunities;

(III) determining access to, or restrictions on the use of, any place of public accommodation, particularly as such harms relate to the protected characteristics of individuals, including race, color, religion, national origin, sex, or disability; or

(IV) disparate impact on the basis of individuals’ race, color, religion, national origin, sex, or disability status.

[48]   The Blueprint defines “An “automated system” as “any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure. “Passive computing infrastructure” is any intermediary technology that does not influence or determine the outcome of decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity. Throughout this framework, automated systems that are considered in scope are only those that have the potential to meaningfully impact individuals’ or communities’ rights, opportunities, or access.”

[49]   The Blueprint defines “Rights, opportunities, or access” as follows: “to indicate the scoping of this framework. It describes the set of: civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or, access to critical resources or services, such as healthcare, financial services, safety, social services, non‑deceptive information about goods and services, and government benefits.”

[50]   According to the Blueprint, “‘Algorithmic discrimination’ occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Throughout this framework the term ‘algorithmic discrimination’ takes this meaning (and not a technical understanding of discrimination as distinguishing between items).”

[51]   See, Willis Towers Watson, “United States: New York City regulating artificial intelligence in employment decisions” (21 November 2022), online: Lexology; also, Hogan Lovells, “New York City delays enforcement of law on artificial intelligence in employment decisions” (21 December 2022), online: Lexology.

[52]   See generally, Davis Wright Tremaine LLP, “California FEHC Proposes Sweeping Regulations Regarding Use of Artificial Intelligence and Machine Learning in Connection With Employment Decision Making” (5 April 2022), online: Lexology; Littler Mendelson PC, “Two Developments Could Impact California’s Proposed Regulations Governing AI and Automated Decision‑making” (4 April 2022), online: Lexology.

[53]   The FEHC draft modifications include these new definitions:

“Algorithm.” A process or set of rules or instructions, typically used by a computer, to make a calculation, solve a problem, or render a decision.

“Automated‑Decision System.” A computational process, including one derived from machine‑learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants. An “Automated‑Decision System” includes, but is not limited to, the following: (1) Algorithms that screen resumes for particular terms or patterns; (2) Algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices; (3) Algorithms that employ gamified testing that include questions, puzzles, or other challenges used to make predictive assessments about an employee or applicant, or to measure characteristics including but not limited to dexterity, reaction‑time, or other physical or mental abilities or characteristics; (4) Algorithms that employ online tests meant to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit.

“Machine Learning Algorithms.” Algorithms that identify patterns in existing datasets and use those patterns to analyze and assess new information, and revise the algorithms themselves based upon their operations. (m) “Machine‑Learning Data.” All data used in the process of developing and/or applying machine‑learning algorithms that are utilized as part of an automated‑decision system, including but not limited to the following: (1) Datasets used to train a machine‑learning algorithm utilized as part of an automated‑decision system; (2) Data provided by individual applicants or employees, or that includes information about individual applicants and employees that has been analyzed by an automated decision system; (3) Data produced from the application of an automated‑decision system operation.

[54]   Examples of products regulated under other federal laws include:

[55]   For a summary, see, Cassels, “At a glance: the sources of product liability law in Canada” (16 September 2022), online: Lexology.

[56]   See, CN v. Canada (Canadian Human Rights Commission), [1987] 1 SCR 1114

[57]   See, Arleen Huggins, “Individual Human Rights Remedies” (11 May 2017), online: KM Law.

[58]   See Beaulieu v. Facebook inc., 2022 QCCA 1736, certifying a class action against Facebook (now Meta) identifying these grounds of alleged discrimination:

  1. In enabling or facilitating the use of its advertising services so that group members are deprived of receiving advertisements for jobs or housing based on race, gender or age, did Facebook, Inc. and Facebook Canada Ltd. infringe the rights that the Quebec Charter of Human Rights and Freedoms confers on members of the group?

  2. In distributing job or housing advertisements on a preferential basis to certain people based on race, gender or age, did Facebook, Inc. and Facebook Canada Ltd. infringe the rights that the Quebec Charter of Human Rights and Freedoms confers on members of the group? (English translations via Google Translate)

[59]   Beardwood, J., “Bill C‑27 Births New Regulation of Artificial Intelligence in Canada – Part 2”, CRi 2022;23:230502, doi: 10.9785/cri‑2022‑230502.

[60]   The definition of AI system is also broader than that proposed in the EU AI Act which focuses on systems with elements of autonomy that “infers how to achieve a given set of objectives using machine learning and/or logic‑ and knowledge based approaches, and produces system‑generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.

[61]   See, References re Greenhouse Gas Pollution Pricing Act, 2021 SCC 11 (CanLII).

[62]   (1883), 9 App. Cas. 117.

[63]   See, Gundy v. US, 139 S. Ct. 2116, summarizing the U.S. law.

[64]   Wikipedia, “Nondelegation doctrine”, online: Wikipedia (last modified: 2 December 2022). “The Legislative cannot transfer the Power of Making Laws to any other hands. For it being but a delegated Power from the People, they, who have it, cannot pass it over to others. … And when the people have said, We will submit to rules, and be govern’d by Laws made by such Men, and in such Forms, no Body else can say other Men shall make Laws for them; nor can the people be bound by any Laws but such as are Enacted by those, whom they have Chosen, and Authorised to make Laws for them. The power of the Legislative being derived from the People by a positive voluntary Grant and Institution, can be no other, than what the positive Grant conveyed, which being only to make Laws, and not to make Legislators, the Legislative can have no power to transfer their Authority of making laws, and place it in other hands.”

[65]   For a summary of the possible federal powers to enact AIDA, see Teresa Scassa, “Regulating AI in Canada ‑ The Federal Government and the AIDA” (11 October 2022), online (blog): Teresa Scassa; and Teresa Scassa, “Canada’s Proposed AI & Data Act ‑ Purpose and Application” (8 August 2022), online (blog): Teresa Scassa.

[66]   Orly Lobel, The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future (New York: PublicAffairs, 2022).

[67]   Ajay Agrawal et al, Power and Prediction: The Disruptive Economics of Artificial Intelligence (Boston, MA: Harvard Business Review, 2022).

[68]   See, for example, Stahl B.C. Responsible innovation ecosystems: Ethical implications of the application of the ecosystem concept to artificial intelligence. Int. J. Inf. Manag. 2022;62:102441. doi: 10.1016/j.ijinfomgt.2021.102441.

[69]   Christian Djeffal, “The Regulation of Artificial Intelligence in the EU” (30 December 2021), online: Heinrich‑Böll‑Stiftung.

[70]   Christelle Tessono et al., “AI Oversight, Accountability and Protecting Human Rights” (November 2022), online: Cybersecure Policy Exchange.

[71]   Tom Whittaker, “EU AI Act: how will startups be impacted?” (4 January 2023), online: Burges Salmon.

[72]   Andrew McAfee, “How EU Proposals to Regulate AI Will Stifle Innovation” (10 September 2021), online: MIT IDE.

[73]   Ajay Agrawal et al, Power and Prediction: The Disruptive Economics of Artificial Intelligence (Boston, MA: Harvard Business Review, 2022).

[74]   AIDA, s.6
