On April 12, 2019, the UK’s Information Commissioner’s Office published comprehensive guidance (“Guidance”) titled Automated Decision Making: the role of meaningful human reviews, one of the first posts on its recently launched AI Auditing Framework Blog. Although not binding on Canadian companies (which are subject to different laws), the post (and the blog generally) provides helpful information for companies implementing artificial intelligence (“AI”).
Also relevant to Canadian organizations is the Canadian federal government’s Directive on Automated Decision-Making (“Directive”), published April 1, 2019, which applies to any Automated Decision System developed or procured by the federal government after April 1, 2020.
Automated decision-making and privacy legislation
An increasing number of businesses use artificial intelligence for decision-making (“automated decision-making” or an “automated decision-maker”). Such automated decision-making tools face increasing scrutiny as regulators respond to the commercialization of data. In particular, the European Union’s (“EU”) General Data Protection Regulation (“GDPR”) specifically contemplates this issue.
While the GDPR is not necessarily applicable to Canadian companies, it nonetheless informs business and policy discussions on data in both EU and non-EU jurisdictions. Canada’s Personal Information Protection and Electronic Documents Act (“PIPEDA”) has yet to adopt many of the requirements of the arguably more rigorous GDPR. However, certain legislative changes are being contemplated in the context of PIPEDA and its effectiveness in governing the use of data and the role of machine learning.
How privacy laws regulate automated decision-making
An automated decision-maker is a process used to make a decision from data, such as an algorithm designed to select a decision from a range of possibilities. The process may use machine learning and massive quantities of data to predict a given outcome, rely on a data profile, or some combination thereof. Automated systems are in wide use, from financial institutions to administrative decision-makers.
According to Article 22(1) of the GDPR, a solely automated decision-making system cannot be used where the decision has a legal or other similarly significant effect on an individual. The qualifier “solely automated” means that automated systems can still be used if they involve some human input. However, there are specific requirements for what constitutes human input to an automated system.
PIPEDA does not contain a similar requirement, although it does prohibit the use of data for discriminatory decisions; it does not strictly require human review of automated processes. As a result, the current law requiring meaningful human review applies to Canadian companies primarily where they are domiciled in, or collect data in, the EU.
Components of meaningful human review
The Guidance sets out the UK Information Commissioner’s view on what will (and what will not) constitute meaningful human review. An individual cannot simply review, or “rubber-stamp”, the decision presented by an automated decision-maker. A human reviewer should apply an entirely different analysis than the one the automated system followed to reach its decision, and may need to consider data beyond what the automated decision-maker used. Though artificial intelligence is often considered to be value-neutral, the inherent biases present in all humans, including those who design automated decision-making systems, mean those same biases creep into the design of the system. In the contest between the range of information that allows humans to recognize nuances and automated decision-makers to recognize patterns, nuance must prevail.
The components of an automated decision-making system incorporating meaningful human review include the following:
- Thorough training of the human reviewer on the system, its functions, and its limitations.
- Designing a system that is not so opaque in its decision-making processes that a human cannot understand how decisions are made.
- Testing the assumptions used in an automated decision-making system.
- Review of each decision by a trained human reviewer.
- “Meaningful influence” of the human reviewer over the decision, including the power to override the decision.
In short, any automated decision-making system that has a legal or similar effect on a person must not only involve, but require by its design, a human to confirm or alter the results of the process.
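As a purely illustrative sketch (not drawn from the Guidance or the Directive, and using hypothetical names throughout), a system built around this principle might refuse to finalize any decision until a trained human reviewer has confirmed or overridden the automated outcome:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    outcome: str            # outcome proposed by the automated system
    rationale: str          # explanation a reviewer can understand
    finalized: bool = False # takes effect only after human review

def finalize(decision: Decision,
             review: Callable[[Decision], str]) -> Decision:
    """Require a human reviewer to confirm or override the outcome.

    The reviewer has "meaningful influence": they may accept the
    proposed outcome or substitute their own.
    """
    reviewed_outcome = review(decision)  # human confirms or overrides
    return Decision(outcome=reviewed_outcome,
                    rationale=decision.rationale,
                    finalized=True)

# Example: the reviewer overrides an automated denial after weighing
# nuance the system could not capture.
proposed = Decision(outcome="deny", rationale="income below threshold")
final = finalize(proposed, review=lambda d: "approve")
print(final.outcome, final.finalized)  # approve True
```

The design choice here is that no code path produces a `finalized` decision without passing through the review step, reflecting the Guidance’s view that human involvement must be built into the system rather than bolted on.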
The adoption of automated decision-making policy by the Canadian government
The Canadian government is making use of automated decision-making processes by external providers in its administrative decisions. The recently issued Directive on the subject, in line with GDPR requirements, makes clear that human intervention is a required component of any service being delivered to the government. Employees must be trained in the automated decision-making system, understand how it works, be prepared to review and explain decisions, and be permitted to override decisions as necessary.
Notably, the requirements of the Directive include:
- completing an Algorithmic Impact Assessment prior to the production of any Automated Decision System;
- ensuring transparency (including providing notice before automated decisions are made, providing explanations after automated decisions are made, allowing the government to access and test the Automated Decision System, and releasing source code owned by the government);
- quality assurance (including testing and monitoring outcomes, validating data quality, expert review, employee training, establishing contingency systems, conducting risk assessments during the development cycle of the system and establishing appropriate safeguards, consulting with legal services, and ensuring opportunities for human intervention);
- providing recourse options; and
- publishing information on the effectiveness and efficiency of the Automated Decision Systems.
The Directive impacts only the Canadian government and its external service providers, but it is reflective of the current policy landscape as it applies to automated decision-making. The government’s own policy, combined with the discussions in the wake of the GDPR, suggests that human involvement in automated decision-making processes will continue to be a salient topic.
Takeaways for business
Companies that are subject to the GDPR should already be in compliance with the requirements for meaningful human review if their product results in legal or similar effects on an individual. Non-compliance not only means liability under the GDPR, but also potentially impacts companies seeking investment or planning an exit. Most transactions will require, at a minimum, a representation that the company complies with all laws applicable to it. If this representation cannot be made and a disclosure of GDPR non-compliance is required, that potential exposure may alarm investors or acquirers.
Companies subject only to the privacy laws of non-GDPR jurisdictions, Canada included, are well advised to consider the Guidance around automated decision-making. The principles enshrined in the GDPR and expressed in the Canadian government’s Directive suggest that a shift in data regulation may be coming for automated decision-making systems. If human review becomes mandated by PIPEDA or any other legislation governing data practices, early adopters of these principles will reap the benefits of that preparation.
For more information about Dentons’ data expertise and how we can help, please see our Transformative Technologies and Data Strategy page and our unique Dentons Datasuite of data solutions for every business.