Towards “Trustworthy” Artificial Intelligence


Governments have become concerned with the responsible use of artificial intelligence ("AI") given the continued proliferation of AI technologies across industries. Domestically, Canada has published a Directive on Automated Decision-Making. Globally, the European Commission's High-Level Expert Group on AI (the "HLEGA") released its Ethics Guidelines for Trustworthy AI (the "Guidelines") in 23 languages earlier this year. The Guidelines follow a first draft released in December 2018, which received more than 500 comments through an open consultation.

HLEGA Guidelines

The Guidelines’ starting point is a helpful definition of AI systems which underscores the importance of data for AI. The Guidelines define AI systems as:

"…software (and possibly also hardware) designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal."

A separate document elaborating on this definition, A definition of AI: Main capabilities and scientific disciplines, is also available.

In order to provide a framework for trustworthy AI, the Guidelines are split into three chapters. The first chapter identifies ethical principles and their correlated values which ought to be respected in the development, deployment and use of trustworthy AI systems. In this regard, the Guidelines provide that AI systems should adhere to the core principles of (i) respect for human autonomy, (ii) prevention of harm, (iii) fairness and (iv) explicability.

The second chapter of the Guidelines provides guidance on how AI can be made trustworthy through the implementation of mechanisms that meet the following seven requirements:

  • Human agency and oversight: AI systems should foster humans’ fundamental rights and enable them to make informed decisions. At the same time, proper oversight mechanisms need to be in place.
  • Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, accurate, reliable and reproducible. These qualities help minimize and prevent unintentional harm.
  • Privacy and data governance: AI systems ought to ensure full respect for privacy and data protection. Adequate data governance mechanisms, taking into account the quality and integrity of the data, and ensuring legitimised access to data, must be in place.
  • Transparency: AI systems should have transparent data, system and AI business models. Traceability mechanisms should be put in place, while AI systems should be explained in a clear manner for stakeholders. Humans need to be aware when interacting with an AI system and informed of an AI system’s capabilities and limitations.
  • Diversity, non-discrimination and fairness: AI systems should avoid unfair bias and foster diversity. AI systems should be accessible to all stakeholders.
  • Societal and environmental well-being: AI systems should benefit all human beings and future generations. They must be sustainable and environmentally friendly.
  • Accountability: AI systems should include mechanisms to ensure responsibility and accountability for the systems and their outcomes. Auditability plays a key role, and AI systems should ensure adequate and accessible redress.

The third chapter of the Guidelines provides a concrete but non-exhaustive list for the assessment of the trustworthiness of AI based on the key requirements set out in chapter 2. While the Guidelines are not intended to substitute any form of current or future government AI policymaking or regulation, they aim to foster global research, reflection and discussion on an ethical framework for AI.

With the release of the Guidelines, the EU has also announced the launch of a comprehensive piloting phase involving many stakeholders to test the practical implementation of ethical guidance for AI development and use. Building on the evaluation of feedback received during the piloting phase, the HLEGA plans to review and update the Guidelines at the beginning of 2020, thereby helping to continue to move the dial on a framework for the responsible use of AI.

Takeaways for Business

Businesses onboarding AI applications have primarily focused on privacy concerns: in other words, where their data sets include personal information, whether privacy laws have been followed. Increasingly, privacy considerations are only a starting point. A business that is fully compliant with Canada's federal Personal Information Protection and Electronic Documents Act can still run afoul of the Guidelines, or similar frameworks regarding the ethical (and in some cases, legal) use of AI. An independent inquiry into any application of AI is highly recommended.

In many cases, businesses are buying AI services from service providers. A critical component of engaging with such service providers is a due diligence review of the inputs, outputs, and process used by such AI.


For more information about Dentons' data expertise and how we can help, please see our Transformative Technologies and Data Strategy page and our unique Dentons Data suite of data solutions for every business, which includes assessments of AI implementation and AI service providers.