3. Auditability

Enable interested third parties to probe, understand, and review the behaviour of the algorithm through disclosure of information that enables monitoring, checking or criticism.
Principle: A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

Published by Personal Data Protection Commission (PDPC), Singapore

Related Principles

Transparency and explainability

There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.

This principle aims to ensure responsible disclosure when an AI system is significantly impacting on a person’s life. The definition of the threshold for ‘significant impact’ will depend on the context, impact and application of the AI system in question.

Achieving transparency in AI systems through responsible disclosure is important to each stakeholder group for the following reasons:

- for users, to know what the system is doing and why
- for creators, including those undertaking the validation and certification of AI, to know the system’s processes and input data
- for those deploying and operating the system, to understand processes and input data
- for an accident investigator, if accidents occur
- for regulators, in the context of investigations
- for those in the legal process, to inform evidence and decision making
- for the public, to build confidence in the technology

Responsible disclosures should be provided in a timely manner, and provide reasonable justifications for AI system outcomes. This includes information that helps people understand outcomes, like key factors used in decision making. This principle also aims to ensure people have the ability to find out when an AI system is engaging with them (regardless of the level of impact), and are able to obtain a reasonable disclosure regarding the AI system.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

Auditability

Enable interested third parties to probe, understand, and review the behavior of the algorithm through disclosure of information that enables monitoring, checking, or criticism, including through provision of detailed documentation, technically suitable APIs, and permissive terms of use.

Published by Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) in Principles for Accountable Algorithms, Jul 22, 2016 (unconfirmed)

3. Artificial intelligence systems transparency and intelligibility should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence,
b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience,
c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided,
d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems, and
e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration on Ethics and Data Protection in Artificial Intelligence, Oct 23, 2018

8. Disclosure and Transparency

In addition to improving the rulemaking process, transparency and disclosure can increase public trust and confidence in AI applications. At times, such disclosures may include identifying when AI is in use, for instance, if appropriate for addressing questions about how the application impacts human end users. Agencies should be aware that some applications of AI could increase human autonomy. Agencies should carefully consider the sufficiency of existing or evolving legal, policy, and regulatory environments before contemplating additional measures for disclosure and transparency. What constitutes appropriate disclosure and transparency is context specific, depending on assessments of potential harms, the magnitude of those harms, the technical state of the art, and the potential benefits of the AI application.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Jan 13, 2020

5. Data Provenance

A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data gathering process. Public scrutiny of the data provides maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorized individuals.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017