On March 13, 2023, Innovation, Science and Economic Development Canada (“ISED”) released a companion document (the “Companion Document”) seeking to provide greater clarity on the Artificial Intelligence and Data Act (the “AIDA”), the AIDA’s path to enactment, and how the AIDA is expected to function.
In June 2022, the Minister of ISED tabled Bill C-27, introducing a new law on artificial intelligence (“AI”). If and when passed, the AIDA will be Canada’s first national regulatory scheme for AI systems.
As currently drafted, however, the AIDA lacks substance, leaving much of its essential content to be settled in yet-to-be-created regulations. Recognizing the significance the AIDA will have for Canadians and Canadian businesses, and cognizant that both individuals and businesses will need to prepare adequately for compliance with the AIDA and its related regulations, ISED released the Companion Document.
Timelines to enactment
Bill C-27 is currently at second reading in the House of Commons. The Companion Document highlights that the development and assessment of the regulations to accompany the AIDA (in effect, where its most important substantive content will be fleshed out) is expected to be completed on a 24-month timeline, broken down approximately as follows:
- consultation on regulations (six months);
- development of draft regulations (twelve months);
- consultation on draft regulations (three months); and
- coming into force of initial set of regulations (three months).
It is therefore expected that the AIDA will come into force in or around 2025 (assuming Bill C-27 passes this year, which is looking increasingly unlikely). Nonetheless, this timetable should not deter individuals and businesses from better understanding the AIDA and its implications, or from taking steps to ready themselves for its debut.
The focus of the statute
The AIDA concentrates on three primary objectives:
- aligning high-impact AI systems with commonly held Canadian expectations for safety and human rights;
- establishing the office of the AI and Data Commissioner (the “Commissioner”), tasked with advancing AI policies, educating the public with respect to the AIDA, and carrying out a compliance and enforcement role; and
- restricting uses of AI that cause serious harm by way of the establishment of new criminal sanctions.
Recall that the AIDA is limited to designated activities “carried out in the course of international or interprovincial trade and commerce”, squarely positioning the AIDA within the competence of the federal government. There is no mention in the Companion Document of how the AIDA proposes to operate in respect of provincially regulated activities, or of its approach to provincial AI legislation, should any be contemplated.
Defining high-impact AI systems
Bill C-27 does not provide a definition for a “high-impact system”, noting that such an AI system will meet certain criteria to be settled in the regulations to the statute. Organizations in the AI ecosystem have been left struggling to understand whether they will be affected by the AIDA. The Companion Document assists in clarifying organizations’ status by identifying the key factors that will be among the assessment criteria for determining whether a system is “high impact” (and thus regulated):
- evidence of risks of harm to health and safety, or a risk of adverse impact on human rights, based on both the intended purpose and potential unintended consequences;
- the severity of potential harms;
- the scale of use;
- the nature of harms or adverse impacts that have already taken place;
- the extent to which, for practical or legal reasons, it is not reasonably possible to opt out of the AI system;
- imbalances of economic or social circumstances, or age of impacted persons; and
- the degree to which the risks are adequately regulated under another law.
Types of systems likely to be the focus of regulation
In addition to these insights into how a high-impact system will be assessed, the Companion Document gives individuals and businesses in Canada a better awareness of the particular systems that the current Government considers to be of most interest, given their possible impacts:
- Screening systems: systems that are designed to make decisions, recommendations, or predictions for purposes linked to the accessing of services, as they can potentially lead to discriminatory outcomes and economic harm.
- Biometric systems: systems that are designed to make predictions about individuals using biometric data, as they can have consequential impacts on mental health.
- Influencing systems: systems that are designed to make online content recommendations, as they can negatively impact psychological and physical health.
- Health and safety systems: systems that are designed to make decisions or recommendations in reliance on data collected from sensors (which could include self-driving vehicles or body sensors designed to monitor certain health issues), as they can potentially precipitate physical harm.
Pursuant to the AIDA, Canadian businesses operating in these spaces will need to establish suitable processes and procedures to ensure compliance with the statute. This will include crafting and implementing appropriate policies and governance practices aligned with established international norms for the administration of AI systems.
Notably, organizations offering employment, student or other applicant-based screening systems will be impacted by the first category and should consider making submissions during the consultations, and on the draft AIDA regulations when they are released.
“Predictions based on biometric systems” is less clear. It would be helpful to have a definition of what constitutes a “biometric” or “biometric system”, as that will be the initial threshold, and there is currently a great deal of ambiguity in case law and Canadian statutes, as well as in the Reports of Findings of the Office of the Privacy Commissioner of Canada. There is no definition in the current draft of Bill C-27, and the lack of a consistent understanding creates uncertainty as to what is being regulated. Furthermore, the restriction of the scope to “predictions” is curious: expanding this to “predictions, recommendations or decisions” would parallel the proposed language in Bill C-27 regarding automated decision systems. As drafted, the AIDA could potentially apply to AI-based filters on social media that “predict” users’ future career or celebrity lifestyle. Presumably it is the more insidious uses of biometrics that are intended to be captured (for instance, use by employers or advertisers to assess areas such as an individual’s stress levels, engagement, and excitement, allowing a range of calculated assumptions to be made about an individual based on an image or a live stream during an interview or interaction).
“Influencing systems” is equally broad, potentially capturing song and movie recommendation engines. Notably, because the AIDA is not limited to personal information, it potentially captures the use of AI by political parties (and others in the political ecosystem) to influence voters, even though the use of personal information by political parties is not currently covered by PIPEDA/Bill C-27. However, AIDA “regulated activities” are limited to those “carried out in the course of international or interprovincial trade and commerce” so political parties (and others) may escape regulation. The scope of the trade and commerce power is increasingly being challenged (for instance, in areas such as climate change) and it seems likely this will be challenged as well.
The identification of “health and safety systems” as systems of interest will have a broad impact on manufacturers of items such as wearables and connected vehicles (for instance, vehicle safety systems that monitor excessive breakdown or driver fatigue). Certain smart home functions may also be implicated.
There is a specific call-out for open source software and open access AI systems, likely in response to initial concerns from organizations that open source, in many respects the lifeblood of innovation, would be stifled by regulation. The Companion Document specifically addresses this, stating:
It is common for researchers to publish models or other tools as open source software, which can then be used by anyone to develop AI systems based on their own data and objectives. As these models alone do not constitute a complete AI system, the distribution of open source software would not be subject to obligations regarding “making available for use.” However, these obligations would apply to a person making available for use a fully-functioning high-impact AI system, including if it was made available through open access.
This should allay initial concerns raised by the AIDA, although considerable refinement is still required.
AI design and development norms
The Companion Document also sets out “norms” related to AI systems. These norms include:
- designing and developing systems that enable meaningful human oversight and monitoring;
- providing applicable information to the public respecting how a particular system is being used;
- being aware of the possibility of discriminatory outcomes when designing and constructing an AI system (and working to mitigate against such possibilities);
- routinely and proactively appraising high-impact AI systems to identify harms;
- making regulatory compliance a primary operational focus; and
- ensuring high-impact AI systems perform in a manner consistent with expected objectives.
More specifically, businesses designing or developing a high-impact AI system will need to analyze for and attend to risks with respect to harm and bias, taking corrective actions as and when needed.
Further, businesses making a high-impact AI system available for use will need to closely consider probable uses for the system and work to confirm that users are aware of restrictions on how the system is supposed to be used, and its limitations.
Businesses that manage the operations of an AI system will be required to use such systems in accordance with their specifications, and routinely monitor such systems to identify and mitigate risk.
Oversight and enforcement
The initial approach to enforcement will be a soft one – the Companion Document states that the Commissioner will concentrate on educational initiatives, aiming to help businesses voluntarily comply with the AIDA. The Companion Document makes note of the Government of Canada’s understanding that there needs to be an adequate period of adjustment for businesses operating within this new regulatory structure.
Over time, however, it is anticipated that administrative monetary penalties will be used to spur compliance with the AIDA. The AIDA provides for the creation of such a system, but it will need to be built out in regulations, following the consultation process.
For more egregious offences, the Commissioner may look to prosecute offenders for the commission of regulatory offences, and where it can be proved that intentional behaviour has caused harm, a criminal prosecution may be undertaken.
This is good news for businesses, which often struggle to understand what the regulatory expectations are (particularly with respect to novel technologies or business models) and often underestimate the resources required to achieve compliance. However, this was the approach taken to privacy legislation (e.g., the ombudsman model, limited enforcement powers, etc.), and it took over two decades to meaningfully change that legislation to reflect global norms.
This soft approach, particularly in light of the minimum two-year timeline to enactment of the AIDA, is out of touch with the pace of technology and of global measures. There already exist numerous (and rigorous) AI regimes elsewhere, as well as international standards and other regulatory frameworks (for instance, the European Union’s AI framework).
Businesses, particularly multinational businesses, would be well advised to look to these global norms when designing and implementing AI systems, and not to rely on the proposed softer Canadian approach, as there is a risk that their systems and business models may not be portable outside Canada.
Absent from the Companion Document (and the AIDA itself) is any mention of a private right of action. There is a private right of action contemplated in Bill C-27, but it is limited to the privacy protection portion of the Bill, the proposed Consumer Privacy Protection Act (and Quebec’s Law 25). This is somewhat unusual, in that the private right of action seems to have become a popular bogeyman to insert into legislation as a way of indicating that non-compliance will have serious consequences such as litigation, in particular class actions (see, for instance, Canada’s Anti-Spam Law, where the private right of action remains in the text but was never declared in force). This approach effects compliance while sparing the need for (and the expense of) the government regulatory body stepping in. Unfortunately, it can also mean that the law develops in unusual and unanticipated directions. Businesses should pay attention during the Committee hearings on Bill C-27 (and the AIDA in particular) to see whether this approach is being discussed.
Takeaways for business
The Companion Document makes clear that significant work lies ahead in order to craft supporting regulations to the AIDA that adequately and appropriately meet the expectations of Canadians for a strong framework regulating AI systems in this country. That work is not going to be completed rapidly, as the Companion Document points to a relatively lengthy process that will be advanced and refined through several rounds of consultation with stakeholders. This is not inappropriate for a piece of legislation that deals with a complex technology with systemic impacts.
The Companion Document nonetheless does provide Canadians and Canadian businesses with valuable insights into what are likely to be the criteria for assessing a high-impact AI system, what AI systems are of the greatest concern to the current Government, and what the key practice and process actions will need to be for persons involved in the design, development, use or management of such systems.
However, Canada has chosen to be a “fast follower” in this area rather than an early adopter, and organizations developing, designing, or implementing this technology (or using data sets in conjunction with this technology) should consider closely the developments in other jurisdictions that are further along in the regulatory process.
For more information about this and other topics and how we can help, please see our unique Dentons Data suite of data solutions for every business, including enterprise privacy audits, privacy program reviews and implementation, data mapping and gap analysis, and training in respect of personal information.