
What FRFIs need to know about adopting AI: Key take-aways from the OSFI-FCAC Risk Report

By Jaime Cardy
October 24, 2024

In late 2023, the Office of the Superintendent of Financial Institutions (OSFI) and the Financial Consumer Agency of Canada (FCAC) shared a voluntary questionnaire with federally regulated financial institutions (FRFIs) regarding their use of artificial intelligence (AI). The survey results informed the Risk Report – AI Uses and Risks at Federally Regulated Financial Institutions (the Report), released by OSFI and FCAC in September 2024, which outlines the survey findings, identifies key risks raised by AI adoption, and suggests a number of risk mitigation practices.

Unsurprisingly, the survey revealed that the use of AI among financial institutions is rapidly increasing, with 75% of respondents indicating that they plan to invest in AI over the next three years, and 70% expecting to be using AI by 2026. In particular, the survey showed that FRFIs are integrating AI across multiple core functions such as risk assessment, underwriting and claims management, fraud detection, customer service, regulatory compliance, and cybersecurity. And, for the most part, respondents indicated that they are adopting AI models provided by third party providers, rather than developing systems in-house.

Key risks associated with AI adoption by FRFIs

The Report highlights both internal and external risks from AI adoption. The key risks identified include:

  • Data governance risks – Especially those related to data privacy and data quality, particularly where data ownership is fragmented and spread across different jurisdictions.
  • Explainability – FRFIs may have difficulty providing meaningful information about how their AI models make decisions, particularly when they implement deep learning and generative AI models.
  • Legal, ethical, and reputational risks – Including those posed by consumer privacy and consent considerations, biased outputs, and the reputational harms that may result.
  • Third party risks – FRFIs are responsible for their use of AI models provided by third party providers, which exposes them to risks related to oversight and compliance with legal and regulatory obligations and internal standards, as well as data security and service interruption concerns.
  • Cybersecurity risks – The highly interconnected nature of AI systems, and the customization of AI models using customer data and FRFI trade secrets, increase the risk of cyber vulnerabilities and of resulting data breaches and financial exposure.
  • Business risks – As AI becomes more prevalent in the industry, FRFIs without in-house expertise may have difficulty adapting and may face financial and competitive pressures as a result.
  • Credit, market and liquidity risks – Including credit impacts resulting from AI-related job losses, and “herding risks” created by algorithmic trading modalities.

Risk mitigation recommendations

The Report includes a number of recommendations for FRFIs to manage or mitigate the most pressing risks presented by AI adoption. Those recommendations include:

  • Prioritizing the establishment of AI governance frameworks that are both robust and agile enough to adapt to the rapidly evolving realities of AI technologies;
  • Involving multiple stakeholders in designing AI systems and integrating them into an FRFI’s operations, including collaboration between AI developers and users to ensure open lines of communication about model use, training, and assumptions;
  • Engaging in ongoing risk assessments through all stages of the AI lifecycle;
  • Ensuring transparency by informing customers about how and when AI models may impact them;
  • Obtaining appropriate consent to use customer data in AI models, including for AI model training, and to provide a personalized or tailored experience, if applicable; and
  • Implementing appropriate security measures and other safeguards, such as keeping a human in the loop, maintaining appropriate backup systems, and implementing alerts to detect AI model malfunctions and ensure system resiliency.

What’s next

Many survey respondents expressed a desire for greater clarity and consistency in regulations related to the adoption of AI technologies before committing to further AI-related actions. The Report notes that a second financial industry forum on AI is being planned, building on the momentum from the Financial Industry Forum on Artificial Intelligence that OSFI co-hosted with the Global Risk Institute in 2022. The aim of the forum will be to further advance and establish best practices in this rapidly evolving area. In the meantime, however, alignment with the above-noted recommendations should help ensure FRFIs are on stable footing when future regulations or guidelines are introduced.


For more information on this topic, please contact Jaime Cardy or other members of the Dentons Privacy and Cybersecurity group. 


About Jaime Cardy

Jaime Cardy is a senior associate in the Privacy and Cybersecurity group in Dentons’ Toronto office. She has particular expertise in providing risk management and compliance advice under various legislative privacy regimes, including in both the public and healthcare sectors.
