e. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.

Principle: Tenets, Sep 28, 2016 (unconfirmed)

Published by Partnership on AI

Related Principles

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens. They should be geared instead in their development and use towards augmenting access to knowledge and access to opportunities for individuals. Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

2 Accountability

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall respect and adopt the eight principles of this Policy Framework for Responsible AI (or other analogous accountability principles). In all instances, humans should remain accountable for the acts and omissions of AI systems.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

4 Fairness and Non-discrimination

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall ensure the non-discrimination of AI outcomes, and shall promote appropriate and effective measures to safeguard fairness in AI use.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

Chapter 5. The Norms of Use

18. Promote good use. Strengthen the justification and evaluation of AI products and services before use, fully understand their benefits, and fully consider the legitimate rights and interests of various stakeholders, so as to better promote economic prosperity, social progress and sustainable development.

19. Avoid misuse and abuse. Fully understand the scope of application and the potential negative effects of AI products and services, earnestly respect the rights of relevant entities not to use AI products or services, avoid improper use, misuse and abuse of AI products and services, and avoid unintentionally causing damage to the legitimate rights and interests of others.

20. Forbid malicious use. It is forbidden to use AI products and services that do not comply with laws, regulations, ethical norms and standards, or to use AI products and services to engage in illegal activities. It is strictly forbidden to endanger national security, public safety and production safety, and strictly forbidden to harm the public interest.

21. Timely and proactive feedback. Actively participate in the practice of AI ethics and governance; when technical safety and security flaws, policy and legal vacuums, or regulatory lags are found in the use of AI products and services, prompt feedback to the relevant parties and assistance in solving the problems are expected.

22. Improve the ability to use. Actively learn AI-related knowledge and master the skills required for the various phases of using AI products and services, such as operation, maintenance and emergency response, so as to ensure their safe and efficient use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

7. Human rights alignment

Ensure that the design, development and implementation of technologies do not infringe internationally recognised human rights.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020