Be transparent and produce explainable outputs

Principle: GE Healthcare AI principles, Oct 1, 2018 (unconfirmed)

Published by GE Healthcare

Related Principles

Article 6: Transparent and explainable.

Continuously improve the transparency of artificial intelligence systems. Ensure that system decision-making processes, data structures, and the intent of system developers and technology implementers can be accurately described, monitored, and reproduced, and that algorithmic logic, system decisions, and action outcomes are explainable, predictable, traceable, and verifiable.

Published by Artificial Intelligence Industry Alliance (AIIA), China in Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

I Interpretability

Interpretable and explainable AI will be essential for business and the public to understand, trust and effectively manage 'intelligent' machines. Organisations that design and use algorithms need to take care in producing models that are as simple as possible, to explain how complex machines work.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

3. New technology, including AI systems, must be transparent and explainable

For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithm’s recommendations. If we are to use AI to help make important decisions, it must be explainable.

Published by IBM in Principles for Trust and Transparency, May 30, 2018

10. Responsibility, accountability and transparency

a. Build trust by ensuring that designers and operators are responsible and accountable for their systems, applications and algorithms, and that such systems, applications and algorithms operate in a transparent and fair manner.
b. Make available externally visible and impartial avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate a person or office responsible for the timely remedy of such issues.
c. Incorporate downstream measures and processes for users or stakeholders to verify how and when AI technology is being applied.
d. Keep detailed records of design processes and decision making.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

1. Transparency:

in principle, AI systems must be explainable.

Published by The Pontifical Academy for Life, Microsoft, IBM, FAO, the Italian Government in Rome Call for AI Ethics, Feb 28, 2020