3. provide meaningful explanations about AI decision making, while also offering opportunities to review results and challenge these decisions

Principle: Responsible use of artificial intelligence (AI): Our guiding principles, 2019 (unconfirmed)

Published by Government of Canada

Related Principles

Ensure “Interpretability” of AI systems

Principle: Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety, or result in discriminatory practices.

Recommendations:
● Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent’s behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
● Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.

Published by Internet Society in "Artificial Intelligence and Machine Learning: Policy Paper", Guiding Principles and Recommendations, Apr 18, 2017

8. Principle of user assistance

Developers should take into consideration that AI systems will support users and should make it possible to give them opportunities for choice in appropriate manners.

[Comment] In order to support users of AI systems, it is recommended that developers pay attention to the following:
● Make efforts to provide interfaces that are easy to use and that supply, in a timely and appropriate manner, information that can help users’ decisions.
● Make efforts to provide functions that give users opportunities for choice in a timely and appropriate manner (e.g., default settings, easy to understand options, feedback, emergency warnings, handling of errors, etc.).
● Make efforts to take measures, such as universal design, to make AI systems easier to use for socially vulnerable people.
In addition, it is recommended that developers make efforts to provide users with appropriate information, considering the possibility that outputs or programs may change as a result of learning or other methods of the AI systems.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

Accountability

Decision making remains the responsibility of organisations and individuals.

AI is a powerful tool for analysing and looking for patterns in large quantities of data, undertaking high volume routine process work, or making recommendations based on complex information. However, AI based functions and decisions must always be subject to human review and intervention. Projects should clearly demonstrate:
● that the agency remains responsible for all AI informed decisions and will monitor them accordingly
● that human intervention in decision making and accountability in service delivery are key factors
● that AI projects are overseen by individuals with the relevant expertise in the technology and its benefits and risks
● that a review and assurance process has been put in place for both the development of the AI solution and its outcomes.

Published by Government of New South Wales, Australia in Mandatory Ethical Principles for the use of AI, 2024

1. Transparent and explainable

There must be transparent use and responsible disclosure around data enhanced technology like AI, automated decisions and machine learning systems to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used. When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

Why it matters

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it. Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups. For more on this, please consult the Transparency Guidelines.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023

2) Humanistic approach

Artificial intelligence should empower users to make their own decisions. We are committed to providing transparent, understandable interpretations of decisions and interactive tools, allowing users to join, monitor, or be involved in the decision making process.

Published by Youth Work Committee of Shanghai Computer Society in Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019