· 16) Human Control

Humans should choose how and whether to delegate decisions to AI systems, to accomplish human-chosen objectives.
Principle: Asilomar AI Principles, Jan 3-8, 2017

Published by Future of Life Institute (FLI), Beneficial AI 2017

Related Principles

Human-centred values

Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals. This principle aims to ensure that AI systems are aligned with human values. Machines should serve humans, and not the other way around. AI systems should enable an equitable and democratic society by respecting, protecting and promoting human rights, enabling diversity, respecting human freedom and the autonomy of individuals, and protecting the environment. Human rights risks need to be carefully considered, as AI systems can equally enable and hamper such fundamental rights. It’s permissible to interfere with certain human rights where it’s reasonable, necessary and proportionate. All people interacting with AI systems should be able to keep full and effective control over themselves. AI systems should not undermine the democratic process, and should not undertake actions that threaten individual autonomy, like deception, unfair manipulation, unjustified surveillance, and failing to maintain alignment between a disclosed purpose and true action. AI systems should be designed to augment, complement and empower human cognitive, social and cultural skills. Organisations designing, developing, deploying or operating AI systems should ideally hire staff from diverse backgrounds, cultures and disciplines to ensure a wide range of perspectives, and to minimise the risk of missing important considerations only noticeable by some stakeholders.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

(b) Autonomy

The principle of autonomy implies the freedom of the human being. This translates into human responsibility and thus control over and knowledge about ‘autonomous’ systems as they must not impair freedom of human beings to set their own standards and norms and be able to live according to them. All ‘autonomous’ technologies must, hence, honour the human ability to choose whether, when and how to delegate decisions and actions to them. This also involves the transparency and predictability of ‘autonomous’ systems, without which users would not be able to intervene or terminate them if they would consider this morally required.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

· (9) Responsibility for Human

AI needs to keep humans safe, provided that this safety consideration does not directly or indirectly harm human society. AI also needs to help humans in their transformation toward future humanity.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

2. Autonomy

[QUESTIONS] How can AI contribute to greater autonomy for human beings? Must we fight against the phenomenon of attention seeking which has accompanied advances in AI? Should we be worried that humans prefer the company of AI to that of other humans or animals? Can someone give informed consent when faced with increasingly complex autonomous technologies? Must we limit the autonomy of intelligent computer systems? Should a human always make the final decision? [PRINCIPLES] The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

· Human oversight and determination

35. Member States should ensure that it is always possible to attribute ethical and legal responsibility for any stage of the life cycle of AI systems, as well as in cases of remedy related to AI systems, to physical persons or to existing legal entities. Human oversight refers thus not only to individual human oversight, but to inclusive public oversight, as appropriate.

36. It may be the case that sometimes humans would choose to rely on AI systems for reasons of efficacy, but the decision to cede control in limited contexts remains that of humans, as humans can resort to AI systems in decision making and acting, but an AI system can never replace ultimate human responsibility and accountability. As a rule, life and death decisions should not be ceded to AI systems.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021