1. AI shall not impair, and, where possible, shall advance the equality in rights, dignity, and freedom to flourish of all humans.

Accordingly, the purpose of governing artificial intelligence is to develop policy frameworks, voluntary codes of practice, practical guidelines, national and international regulations, and ethical norms that safeguard and promote the equality in rights, dignity, and freedom to flourish of all humans.
Principle: Principles for the Governance of AI, Oct 3, 2017 (unconfirmed)

Published by The Future Society, Science, Law and Society (SLS) Initiative

Related Principles

(f) Rule of law and accountability

Rule of law, access to justice and the right to redress and a fair trial provide the necessary framework for ensuring the observance of human rights standards and potential AI specific regulations. This includes protections against risks stemming from ‘autonomous’ systems that could infringe human rights, such as safety and privacy. The whole range of legal challenges arising in the field should be addressed with timely investment in the development of robust solutions that provide a fair and clear allocation of responsibilities and efficient mechanisms of binding law. In this regard, governments and international organisations ought to increase their efforts in clarifying with whom liabilities lie for damages caused by undesired behaviour of ‘autonomous’ systems. Moreover, effective harm mitigation systems should be in place.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

· 1.2. Human-centered values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

Published by G20 Ministerial Meeting on Trade and Digital Economy in G20 AI Principles, Jun 09, 2019

1. Principle 1 — Human Rights

Issue: How can we ensure that A/IS do not infringe upon human rights?
[Candidate Recommendations] To best honor human rights, society must assure the safety and security of A/IS so that they are designed and operated in a way that benefits humans:
1. Governance frameworks, including standards and regulatory bodies, should be established to oversee processes assuring that the use of A/IS does not infringe upon human rights, freedoms, dignity, and privacy, and of traceability to contribute to the building of public trust in A/IS.
2. A way to translate existing and forthcoming legal obligations into informed policy and technical considerations is needed. Such a method should allow for differing cultural norms as well as legal and regulatory frameworks.
3. For the foreseeable future, A/IS should not be granted rights and privileges equal to human rights: A/IS should always be subordinate to human judgment and control.

Published by The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems in Ethically Aligned Design (v2): General Principles, (v1) Dec 13, 2016. (v2) Dec 12, 2017

· 1.2. Human-centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

Published by The Organisation for Economic Co-operation and Development (OECD) in OECD Principles on Artificial Intelligence, May 22, 2019

· We will promote human values, freedom and dignity

1. AI should improve society, and society should be consulted in a representative fashion to inform the development of AI.
2. Humanity should retain the power to govern itself and make the final decision, with AI in an assisting role.
3. AI systems should conform to international norms and standards with respect to human values, people's rights and acceptable behaviour.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019