Snapshot:
- The Framework is a voluntary, flexible guide designed to assist organizations in implementing trustworthy and responsible AI systems.
- Part I helps organizations assess AI-related impacts and risks; Part II outlines functions that will allow AI actors to address those risks in practice.
- Part I sets out the characteristics of trustworthy AI systems: they are valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair (with harmful bias managed).
- Part II sets out the four “functions” that need to be in place for organizations to manage the risks posed by an AI system: Govern, Map, Measure, and Manage.
- Concepts from the Framework are likely to provide a roadmap for jurisdictions seeking to regulate the AI space – as is currently underway in Canada. Organizations should take the Framework into account when developing or acquiring AI systems, and should consider building AI governance programs that align with its concepts.
Background
The U.S. National Institute of Standards and Technology (“NIST”) recently released version 1.0 of its Artificial Intelligence Risk Management Framework (“AI RMF” or “Framework”), a practical, flexible, and adaptable set of guidelines for AI actors across the AI lifecycle to use when they design, develop, deploy, or use AI systems. The goal of the AI RMF is to provide a voluntary, rights-preserving, sector- and use-case-agnostic guide that AI actors can implement to promote trustworthy and responsible AI systems.
This is a notable development in an area where, to date, few standards, whether voluntary or binding, have applied to such actors. It is particularly relevant in the Canadian context as Bill C-27, which includes the proposed Artificial Intelligence and Data Act (“AIDA”), makes its way through second reading in the federal legislature.
AI RMF
The Framework is divided into two parts: the first equips AI actors, including individuals and organizations, with the tools needed to assess AI-related impacts and risks, and the second outlines functions that will allow AI actors to address those risks in practice.
Part 1: Foundational Information – Framing risk, defining the audience, and articulating key characteristics of “trustworthy” AI
The AI RMF defines “risk” as a combination of likelihood and impact of an event: “the composite measure of an event’s probability of occurring and the magnitude or degree of the consequences of the corresponding event.” The Framework explains that such consequences, or impacts, of AI systems may be both positive and negative, and can be experienced by people, organizations, and ecosystems. Notably, the Framework provides tools for actors across the AI lifecycle to collaborate in order to minimize anticipated negative impacts, while also identifying opportunities to maximize positive ones.
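To make the definition concrete, below is a minimal sketch of such a composite measure, expressed as the product of an event’s likelihood and the magnitude of its consequences. The function name, the scoring scales, and the multiplicative combination are illustrative assumptions on our part; the AI RMF itself does not prescribe a formula.

```python
# Hypothetical illustration only: the AI RMF does not prescribe a formula.
# We assume likelihood on a 0.0-1.0 scale and magnitude on a 1-5 scale.

def composite_risk_score(likelihood: float, magnitude: int) -> float:
    """Combine an event's probability with the degree of its consequences."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be between 0.0 and 1.0")
    if magnitude not in range(1, 6):
        raise ValueError("magnitude must be an integer from 1 to 5")
    return likelihood * magnitude

# A moderately likely event with severe consequences can outscore
# a near-certain event with minor consequences.
print(composite_risk_score(0.4, 5))  # 2.0
print(composite_risk_score(0.9, 1))  # 0.9
```

The toy example illustrates why both components matter: ranking risks by likelihood alone would invert the order of these two events.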
Part 1 acknowledges existing challenges in assessing AI-related impacts that should be taken into account when managing risk. Such challenges include difficulties in measuring risk in real-world versus laboratory settings, as well as the blind spots that can arise when third-party data or systems are used in developing an AI system. Part 1 also recognizes that AI actors will have varying degrees of risk tolerance depending on their goals, as well as their legal or regulatory requirements, which is why the AI RMF is intended to provide a flexible and adaptable framework for prioritizing such risks.
As mentioned above, the Framework is designed to help AI actors increase the trustworthiness of AI systems, with the aim of subsequently cultivating public trust such that the benefits of AI systems can be maximized. In furtherance of this goal, the AI RMF sets out the following characteristics of trustworthy AI systems:
- Valid and reliable – The AI system produces accurate and consistent results, which are able to be confirmed through objective evidence.
- Safe – The AI system does not harm or endanger human life, health, property or the environment.
- Secure and resilient – The AI system can withstand unexpected changes in its environment while maintaining the system’s confidentiality, integrity, and availability.
- Accountable and transparent – AI actors have access to sufficient information about an AI system, thereby providing increased understanding of the system’s decisions and actions.
- Explainable and interpretable – AI actors have insight into the functionality of an AI system, including the mechanisms underlying the system’s outputs and the meaning of those outputs, which is tailored to the actor’s role, knowledge, and skill level.
- Privacy-enhanced – The AI system is designed, developed and deployed with privacy values, including data minimization, confidentiality, and control, in mind.
- Fair, with harmful bias managed – The AI system is continually evaluated for equity and equality issues, with the aim of minimizing harmful biases, such as systemic, computational/statistical, and human-cognitive bias.
Rarely will all seven characteristics apply in a given setting. Depending on the nature of an AI system, some characteristics will require greater emphasis or attention than others and, at times, trade-offs may need to be made. Accordingly, the Framework advises that human judgment is required when assessing these metrics related to trustworthiness.
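As a loose illustration of how the emphasis and trade-offs described above might be recorded in practice, the sketch below lists the seven characteristics alongside hypothetical emphasis weights for a single system. The enum, the example system, and the weights are assumptions for illustration only; the Framework does not mandate any such representation.

```python
# Illustrative sketch only: the AI RMF does not define data structures.
from enum import Enum

class Characteristic(Enum):
    VALID_AND_RELIABLE = "valid and reliable"
    SAFE = "safe"
    SECURE_AND_RESILIENT = "secure and resilient"
    ACCOUNTABLE_AND_TRANSPARENT = "accountable and transparent"
    EXPLAINABLE_AND_INTERPRETABLE = "explainable and interpretable"
    PRIVACY_ENHANCED = "privacy-enhanced"
    FAIR = "fair (harmful bias managed)"

# Hypothetical emphasis weights for a medical-diagnosis AI system:
# safety and validity dominate, and because the weights must sum to 1,
# emphasizing one characteristic necessarily de-emphasizes others.
emphasis = {
    Characteristic.VALID_AND_RELIABLE: 0.25,
    Characteristic.SAFE: 0.25,
    Characteristic.SECURE_AND_RESILIENT: 0.10,
    Characteristic.ACCOUNTABLE_AND_TRANSPARENT: 0.10,
    Characteristic.EXPLAINABLE_AND_INTERPRETABLE: 0.10,
    Characteristic.PRIVACY_ENHANCED: 0.10,
    Characteristic.FAIR: 0.10,
}
assert abs(sum(emphasis.values()) - 1.0) < 1e-9  # trade-offs are explicit
```

The weights themselves are the human judgment the Framework calls for; no formula can decide, for example, how much explainability to trade for accuracy in a given deployment.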
Part 2: Core and Profiles
Part 2 of the AI RMF outlines four “functions” to help AI actors manage the risks posed by an AI system in order to develop trustworthy AI. The first function, Govern, applies to all stages of an organization’s risk management process, while the other three, Map, Measure, and Manage, can be applied at context- and stage-specific points of the AI lifecycle. Each function is broken down into categories and subcategories outlining specific actions that AI actors can perform to manage risk throughout the AI lifecycle (a simplified illustration follows the list below).
- Govern – This function is focused on cultivating a culture of risk management, compliance, and accountability amongst AI actors. Strong governance requires having teams, policies, structures, and processes in place to identify, evaluate and mitigate AI risks throughout the AI lifecycle.
- Map – This function enhances AI actors’ ability to identify risks and contributing factors by mapping the interplay amongst the network of interdependent activities involved in an AI system. The outcomes of the Map function provide the contextual knowledge that forms the basis of the Measure and Manage functions.
- Measure – This function requires the use of qualitative and quantitative tools to benchmark and monitor AI system risks and impacts, including metrics related to trustworthiness, during the AI system’s development and while in operation. The knowledge generated through regular assessment informs the Manage function.
- Manage – This function entails allocating resources to mapped and measured risks in order to maximize the benefits and minimize the risks of an AI system. Policies from the Govern function, together with contextual information from the Map and Measure functions, will guide how risks are prioritized.
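The simplified sketch referenced above models how the four functions might interact in a rudimentary risk register: Govern supplies an organization-wide policy, Map captures identified risks and their context, Measure attaches likelihood and magnitude metrics, and Manage prioritizes risks by their composite score. Every name and field here is a hypothetical assumption; the AI RMF and its Playbook describe the functions in organizational, not programmatic, terms.

```python
# Hypothetical sketch of the four functions as a simple risk-register
# workflow. None of these names come from the AI RMF itself.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str          # Map: identified risk and its context
    likelihood: float = 0.0   # Measure: benchmarked probability (0-1)
    magnitude: int = 1        # Measure: degree of consequences (1-5)
    mitigation: str = ""      # Manage: resources allocated to the risk

@dataclass
class RiskRegister:
    policy: str               # Govern: policy applying across all stages
    risks: list[Risk] = field(default_factory=list)

    def prioritized(self) -> list[Risk]:
        """Manage: order risks by composite score (likelihood x magnitude)."""
        return sorted(self.risks,
                      key=lambda r: r.likelihood * r.magnitude,
                      reverse=True)

register = RiskRegister(policy="AI risks reviewed quarterly by the AI committee")
register.risks.append(Risk("Training data may embed historical bias", 0.6, 4,
                           "Bias audit before deployment"))
register.risks.append(Risk("Model drift in production", 0.8, 2,
                           "Monthly performance monitoring"))
for risk in register.prioritized():
    print(risk.description)
```

The point of the sketch is the dependency order: Manage is only as useful as the context Map records and the metrics Measure supplies, which mirrors the Framework’s description of the functions feeding one another.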
In addition to the AI RMF, NIST published companion resources, including an AI RMF Playbook, to help organizations navigate and apply these concepts in their own contexts.
Takeaways for Business
While the AI RMF is voluntary, NIST is an influential and well-respected organization worldwide. AI actors should therefore anticipate that this Framework may be adopted by international stakeholders, including here in Canada, as has occurred with other NIST standards. Vendors of AI systems should expect customers to start asking questions that reflect the concepts in the Framework, and to see contractual clauses that incorporate those concepts or specifically reference the AI RMF.
Even if the AI RMF is not formally adopted by international stakeholders, proactive AI actors in organizations of all sizes and across the public, private, and non-profit sectors should take note of the guidance provided in this Framework and its companion resources. There is a strong likelihood that the concepts from these documents will provide a roadmap for jurisdictions seeking to regulate the AI space – as is currently underway in Canada. For example, the foundational information in Part 1 of the Framework may be instructive for Canadian legislators looking to define the pivotal, as yet undefined, “high-impact system” concept under AIDA. As a result, AI actors that adopt the Framework now will not only be better prepared to manage the risks inherent in AI systems today, but will also likely be better positioned for regulatory compliance in the future.
For more information about Dentons’ data expertise and how we can help with AI governance or AI agreements, please contact the authors. In addition, please see our unique Dentons Data suite of data solutions for every business, including enterprise privacy audits, privacy program reviews and implementation, data mapping and gap analysis, and training in respect of personal information.