4. Explainability

Ensure that automated and algorithmic decisions, and any associated data driving those decisions, can be explained to end users and other stakeholders in non-technical terms.
Principle: A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

Published by Personal Data Protection Commission (PDPC), Singapore

Related Principles

II. Technical robustness and safety

Trustworthy AI requires algorithms to be secure, reliable, and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable and secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or the algorithms themselves, and they must ensure a fallback plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible. In addition, AI systems should integrate safety and security by design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation and, where possible, the reversibility of unintended consequences or errors in the system’s operation. Processes to clarify and assess the potential risks associated with the use of AI systems, across various application areas, should be put in place.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

(Preamble)

Automated decision-making algorithms are now used throughout industry and government, underpinning many processes from dynamic pricing to employment practices to criminal sentencing. Given that such algorithmically informed decisions have the potential for significant societal impact, the goal of this document is to help developers and product managers design and implement algorithmic systems in publicly accountable ways. Accountability in this context includes an obligation to report, explain, or justify algorithmic decision-making, as well as to mitigate any negative social impacts or potential harms. We begin by outlining five equally important guiding principles that follow from this premise: algorithms and the data that drive them are designed and created by people, so there is always a human ultimately responsible for decisions made or informed by an algorithm. "The algorithm did it" is not an acceptable excuse if algorithmic systems make mistakes or have undesired consequences, including from machine learning processes.

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) in Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

Explainability

Ensure that algorithmic decisions, as well as any data driving those decisions, can be explained to end users and other stakeholders in non-technical terms.

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) in Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

Ensure “Interpretability” of AI systems

Principle: Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety or result in discriminatory practices.

Recommendations:

Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.

Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

Ensuring Accountability

Principle: Legal accountability has to be ensured when human agency is replaced by the decisions of AI agents.

Recommendations:

Ensure legal certainty: Governments should ensure legal certainty on how existing laws and policies apply to algorithmic decision-making and the use of autonomous systems, so as to create a predictable legal environment. This includes working with experts from all disciplines to identify potential gaps and run legal scenarios. Similarly, those designing and using AI should be in compliance with existing legal frameworks.

Put users first: Policymakers need to ensure that any laws applicable to AI systems and their use put users’ interests at the center. This must include the ability for users to challenge autonomous decisions that adversely affect their interests.

Assign liability up front: Governments, working with all stakeholders, need to make some difficult decisions now about who will be liable in the event that something goes wrong with an AI system, and how any harm suffered will be remedied.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017