(Preamble)

The Principles of Artificial Intelligence (AI) Ethics for the Intelligence Community (IC) are intended to guide personnel on whether and how to develop and use AI, to include machine learning, in furtherance of the IC’s mission. These Principles supplement the Principles of Professional Ethics for the IC and do not modify or supersede applicable laws, executive orders, or policies. Instead, they articulate the general norms that IC elements should follow in applying those authorities and requirements. To assist with the implementation of these Principles, the IC has also created an AI Ethics Framework to guide personnel who are determining whether and how to procure, design, build, use, protect, consume, and manage AI and other advanced analytics. The Intelligence Community commits to the design, development, and use of AI with the following principles:
Published by Intelligence Community (IC), United States in Principles of Artificial Intelligence Ethics for the Intelligence Community, Jul 23, 2020

Related Principles

· (1) Human centric

Utilization of AI should not infringe upon fundamental human rights guaranteed by the Constitution and international norms. AI should be developed, utilized, and implemented in society to expand people's abilities and to pursue the diverse concepts of happiness of diverse people. In a society that utilizes AI, it is desirable to implement appropriate mechanisms of literacy education and the promotion of proper use, so that people do not over-depend on AI and so that human decisions are not improperly manipulated through the exploitation of AI. AI can expand human abilities and creativity not only by replacing part of human tasks but also by assisting humans as an advanced instrument. When using AI, people must judge and decide for themselves how to use it. Appropriate stakeholders involved in the development, provision, and utilization of AI should be responsible for the results of AI utilization, depending on the nature of the issue. To avoid creating a digital divide and to allow all people to reap the benefits of AI regardless of their digital expertise, each stakeholder should take the user-friendliness of the system into consideration in the process of AI deployment.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

(Preamble)

We reaffirm that the use of AI must take place within the context of the existing DoD ethical framework. Building on this foundation, we propose the following principles, which are more specific to AI, and note that they apply to both combat and non-combat systems. AI is a rapidly developing field, and no organization that currently develops or fields AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded in the following principles. However, the Department should set the goal that its use of AI systems is:

Published by Defense Innovation Board (DIB), Department of Defense (DoD), United States in AI Ethics Principles for DoD, Oct 31, 2019

1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

Chapter 1. General Principles

  1. This set of norms aims to integrate ethics into the entire life cycle of AI, to promote fairness, justice, harmony, safety, and security, and to avoid issues such as prejudice, discrimination, and privacy and information leakage.

  2. This set of norms applies to natural persons, legal persons, and other related organizations engaged in activities such as the management, research and development, supply, and use of AI.
(1) Management activities mainly refer to strategic planning; the formulation and implementation of policies, laws, regulations, and technical standards; resource allocation; and supervision and inspection.
(2) Research and development activities mainly refer to scientific research, technology development, and product development related to AI.
(3) Supply activities mainly refer to the production, operation, and sale of AI products and services.
(4) Use activities mainly refer to the procurement, consumption, and operation of AI products and services.

  3. All AI activities shall abide by the following fundamental ethical norms.
(1) Enhancing the well-being of humankind. Adhere to a people-oriented vision, abide by the common values of humankind, respect human rights and the fundamental interests of humankind, and abide by national and regional ethical norms. Prioritize the public interest, promote human-machine harmony, improve people's livelihoods, enhance the sense of happiness, promote the sustainable development of the economy, society, and ecology, and jointly build a human community with a shared future.
(2) Promoting fairness and justice. Adhere to shared benefits and inclusivity, effectively protect the legitimate rights and interests of all relevant stakeholders, promote the fair sharing of the benefits of AI across society, and promote social fairness, justice, and equal opportunity. When providing AI products and services, fully respect and assist vulnerable and underrepresented groups, and provide corresponding alternatives as needed.
(3) Protecting privacy and security. Fully respect personal rights to information, knowledge, and consent; handle personal information and protect personal privacy and data security in accordance with the principles of lawfulness, justifiability, necessity, and integrity; do no harm to legitimate personal data rights; do not illegally collect or use personal information by stealing, tampering, or leaking; and do not infringe on personal privacy rights.
(4) Ensuring controllability and trustworthiness. Ensure that humans retain full decision-making power, the right to choose whether to accept the services provided by AI, the right to withdraw from interaction with AI at any time, and the right to suspend the operation of AI systems at any time, and ensure that AI is always under meaningful human control.
(5) Strengthening accountability. Hold that human beings are the ultimate liable subjects. Clarify the responsibilities of all relevant stakeholders, comprehensively enhance the awareness of responsibility, and practice introspection and self-discipline across the entire life cycle of AI. Establish an accountability mechanism for AI-related activities, and do not evade liability reviews or escape from responsibilities.
(6) Improving ethical literacy. Actively learn and popularize knowledge of AI ethics, objectively understand ethical issues, and neither underestimate nor exaggerate ethical risks. Actively carry out or participate in discussions of the ethical issues of AI, deeply promote the practice of AI ethics and governance, and improve the ability to respond to related issues.

  4. The ethical norms to be followed in specific AI-related activities include the norms of management, the norms of research and development, the norms of supply, and the norms of use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

(Preamble)

New developments in Artificial Intelligence are transforming the world, from science and industry to government administration and finance. The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI. The Guidelines are intended to maximize the benefits of AI, to minimize the risk, and to ensure the protection of human rights. These Guidelines should be incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems. We state clearly that the primary responsibility for AI systems must reside with those institutions that fund, develop, and deploy these systems.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018