e. Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.

Principle: Tenets, Sep 28, 2016 (unconfirmed)

Published by Partnership on AI

Related Principles

(c) Responsibility

The principle of responsibility must be fundamental to AI research and application. ‘Autonomous’ systems should only be developed and used in ways that serve the global social and environmental good, as determined by outcomes of deliberative democratic processes. This implies that they should be designed so that their effects align with a plurality of fundamental human values and rights. As the potential misuse of ‘autonomous’ technologies poses a major challenge, risk awareness and a precautionary approach are crucial. Applications of AI and robotics should not pose unacceptable risks of harm to human beings, and not compromise human freedom and autonomy by illegitimately and surreptitiously reducing options for and knowledge of citizens. They should be geared instead in their development and use towards augmenting access to knowledge and access to opportunities for individuals. Research, design and development of AI, robotics and ‘autonomous’ systems should be guided by an authentic concern for research ethics, social accountability of developers, and global academic cooperation to protect fundamental rights and values and aim at designing technologies that support these, and not detract from them.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

2 Accountability

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall respect and adopt the eight principles of this Policy Framework for Responsible AI (or other analogous accountability principles). In all instances, humans should remain accountable for the acts and omissions of AI systems.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

4 Fairness and Non-discrimination

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall ensure the non-discrimination of AI outcomes, and shall promote appropriate and effective measures to safeguard fairness in AI use.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

7 Privacy

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall endeavour to ensure that AI systems are compliant with privacy norms and regulations, taking into account the unique characteristics of AI systems, and the evolution of standards on privacy.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

7. Human rights alignment

Ensure that the design, development and implementation of technologies do not infringe internationally recognised human rights.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020