8. Principle of user assistance

Developers should take into consideration that AI systems will support users and give them opportunities for choice in appropriate manners.

[Comment] In order to support users of AI systems, it is recommended that developers pay attention to the following:
● To make efforts to make available interfaces that provide, in a timely and appropriate manner, information that can help users' decisions and that are easy for them to use.
● To make efforts to make available functions that provide users with opportunities for choice in a timely and appropriate manner (e.g., default settings, easy-to-understand options, feedback, emergency warnings, handling of errors, etc.).
● To make efforts to take measures, such as universal design, to make AI systems easier to use for socially vulnerable people.

In addition, it is recommended that developers make efforts to provide users with appropriate information, considering the possibility that outputs or programs may change as a result of learning or other methods of AI systems.
Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

Related Principles

4. Human centricity

AI systems should respect human-centred values and pursue benefits for human society, including human beings' well-being, nutrition, happiness, etc. It is key to ensure that people benefit from AI design, development, and deployment while being protected from potential harms. AI systems should be used to promote human well-being and ensure benefit for all. Especially in instances where AI systems are used to make decisions about humans or aid them, it is imperative that these systems are designed with human benefit in mind and do not take advantage of vulnerable individuals.

Human centricity should be incorporated throughout the AI system lifecycle, from design through development and deployment. Actions must be taken to understand how users interact with the AI system, how it is perceived, and whether any negative outcomes arise from its outputs. One way deployers can do this is to test the AI system with a small group of internal users from varied backgrounds and demographics and incorporate their feedback into the AI system.

AI systems should not be used for malicious purposes or to sway or deceive users into making decisions that are not beneficial to them or society. In this regard, developers and deployers (if developing or designing in-house) should also ensure that dark patterns are avoided. Dark patterns refer to the use of certain design techniques to manipulate users and trick them into making decisions that they would otherwise not have made. An example of a dark pattern is the use of default options that do not consider the end user's interests, such as for data sharing and tracking of the user's other online activities.

As an extension of human centricity as a principle, it is also important to ensure that the adoption of AI systems and their deployment at scale do not unduly disrupt labour and job prospects without proper assessment.
Deployers are encouraged to take up impact assessments to ensure a systematic, stakeholder-based review and to consider how jobs can be redesigned to incorporate the use of AI. The Personal Data Protection Commission of Singapore's (PDPC) Guide on Job Redesign in the Age of AI provides useful guidance to assist organisations in considering the impact of AI on their employees, and how work tasks can be redesigned to help employees embrace AI and move towards higher-value tasks.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024

Ensure “Interpretability” of AI systems

Principle: Decisions made by an AI agent should be possible to understand, especially if those decisions have implications for public safety or result in discriminatory practices.

Recommendations:
● Ensure Human Interpretability of Algorithmic Decisions: AI systems must be designed with the minimum requirement that the designer can account for an AI agent's behaviors. Some systems with potentially severe implications for public safety should also have the functionality to provide information in the event of an accident.
● Empower Users: Providers of services that utilize AI need to incorporate the ability for the user to request and receive basic explanations as to why a decision was made.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

4. Principle of safety

Developers should take into consideration that AI systems will not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. It is encouraged that developers refer to relevant international standards and pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:
● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.
● To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies adopted, that contribute to intrinsic safety (reduction of essential risk factors such as the kinetic energy of actuators) and functional safety (mitigation of risks by the operation of additional control devices such as automatic braking) when AI systems work with actuators or other devices.
● To make efforts to explain to stakeholders, such as users, the designers' intent for AI systems and the reasons for it, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize the life, body, or property to be protected at the time of an accident involving a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

9. Principle of accountability

Developers should make efforts to fulfill their accountability to stakeholders, including AI systems' users.

[Comment] Developers are expected to fulfill their accountability for the AI systems they have developed in order to gain users' trust in AI systems. Specifically, it is encouraged that developers make efforts to provide users with information that can help their choice and utilization of AI systems. In addition, in order to improve the acceptance of AI systems by society, including users, it is also encouraged that, taking into account the R&D principles (1) to (8) set forth in the Guidelines, developers make efforts: (a) to provide users and others with both information and explanations about the technical characteristics of the AI systems they have developed; and (b) to gain the active involvement of stakeholders (such as their feedback), for example by hearing various views through dialogues with diverse stakeholders. Moreover, it is advisable that developers make efforts to share information and cooperate with providers and others who offer services based on the AI systems the developers have built.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

1. Transparent and explainable

There must be transparent use and responsible disclosure around data-enhanced technology like AI, automated decisions and machine learning systems, to ensure that people understand outcomes and can discuss, challenge and improve them. This includes being open about how and why these technologies are being used.

When automation has been used to make or assist with decisions, a meaningful explanation should be made available. The explanation should be meaningful to the person requesting it. It should include relevant information about what the decision was, how the decision was made, and the consequences.

Why it matters

Transparent use is the key principle that helps enable other principles while building trust and confidence in government use of data-enhanced technologies. It also encourages a dialogue between those using the technology and those who are affected by it.

Meaningful explanations are important because they help people understand and potentially challenge outcomes. This helps ensure decisions are rendered fairly. It also helps identify and reverse adverse impacts on historically disadvantaged groups. For more on this, please consult the Transparency Guidelines.

Published by Government of Ontario, Canada in Principles for Ethical Use of AI [Beta], Sept 14, 2023