Privacy Commissioner Issues Notice of Consultation on Artificial Intelligence

The Office of the Privacy Commissioner of Canada (“OPC”) has released a Consultation Paper on artificial intelligence (“AI”), stating that it is of the opinion that “responsible innovation involving AI systems must take place in a regulatory environment that respects fundamental rights and creates the conditions for trust in the digital economy to flourish.”

The OPC intends to examine AI in the context of its policy analysis on legislative reform, specifically as it relates to the Personal Information Protection and Electronic Documents Act (“PIPEDA”). The OPC is clear that it has concerns about AI, stating:

“In our view, AI presents fundamental challenges to all PIPEDA principles and we have identified several areas where the Act could be enhanced.”

Deepfakes, Risk and Liability

“A lie gets halfway around the world before the truth has a chance to get its pants on.”

– variously attributed

As artificial intelligence (AI) advances, it creates both benefits and dangers, and few applications illustrate that fact better than the emergence of “deepfakes.” A portmanteau of “deep learning” and “fake,” deepfakes are AI-generated audio or video recordings of real people appearing to say things they never said. While, in the case of video at least, it is still reasonably easy to distinguish between a deepfake and the genuine article, the gap is narrowing and it may not be long before seeing is no longer believing.

Towards “Trustworthy” Artificial Intelligence

Governments have become concerned with the responsible use of artificial intelligence (“AI”) given the continued proliferation of AI technologies across industries. Domestically, Canada has published a Directive on Automated Decision-Making. Globally, the European Commission’s High-Level Expert Group on AI (the “HLEGA”) released its Ethics Guidelines for Trustworthy AI (the “Guidelines”) in 23 languages earlier this year. The Guidelines follow a first draft released in December 2018, which received more than 500 comments through an open consultation.

HLEGA Guidelines

The Guidelines’ starting point is a helpful definition of AI systems, one that underscores the importance of data for AI.

Use of AI Algorithm Triggers Lawsuit and Countersuit

As artificial intelligence (AI) becomes less of a curiosity and more of an everyday tool, disputes are increasingly arising over its operation and, when things go wrong, the question inevitably arises: whose fault is this, and who is liable? One high-profile example is the ongoing dispute between Hong Kong businessman Samathur Li Kin-kan and London-based Tyndaris Investments, in which Tyndaris is suing its client for $3 million in allegedly unpaid fees. In a countersuit, Mr. Li is claiming $23 million in damages allegedly resulting from Tyndaris’ use of algorithmic trading in managing his portfolio.

Tyndaris Case

The dispute centers on whether Tyndaris misled its client as to the AI’s capabilities, which means that the AI’s performance itself will be adjudicated.

Canada Announces Advisory Council on Artificial Intelligence

On May 14, 2019, the Government of Canada announced the creation of its Advisory Council on Artificial Intelligence (“Council”). The objectives of the Council include creating more jobs for Canadians, supporting entrepreneurs, and improving Canada’s global position in artificial intelligence (“AI”) research and development.

The Council will consider and identify: (i) innovative approaches to build on Canada’s current AI strengths and to further develop AI; (ii) opportunities for economic growth in Canada; and (iii) other opportunities in the AI sector that will benefit Canadians.

The Council will also establish working groups, including a working group with respect to commercializing value from Canadian-owned AI and data analytics.
