6. Democracy

[QUESTIONS] How should AI research and its applications, at the institutional level, be controlled? In what areas would this be most pertinent? Who should decide, and according to which modalities, the norms and moral values determining this control? Who should establish ethical guidelines for self-driving cars? Should ethical labeling that respects certain standards be developed for AI, websites and businesses?

[PRINCIPLES] The development of AI should promote informed participation in public life, cooperation and democratic debate.
Principle: The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

Published by University of Montreal, Forum on the Socially Responsible Development of AI

Related Principles

(6) Fairness, Accountability, and Transparency

Under the "AI Ready society", when using AI, fair and transparent decision making and accountability for the results should be appropriately ensured, and trust in technology should be secured, in order that people using AI will not be discriminated on the ground of the person's background or treated unjustly in light of human dignity. Under the AI design concept, all people must be treated fairly without unjustified discrimination on the grounds of diverse backgrounds such as race, sex, nationality, age, political beliefs, religion, etc. Appropriate explanations should be provided such as the fact that AI is being used, the method of obtaining and using the data used in AI, and the mechanism to ensure the appropriateness of the operation results of AI according to the situation AI is used. In order for people to understand and judge AI proposals, there should be appropriate opportunities for open dialogue on the use, adoption and operation of AI, as needed. In order to ensure the above viewpoints and to utilize AI safely in society, a mechanism must be established to secure trust in AI and its using data.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

Public Empowerment

Principle: The public's ability to understand AI-enabled services, and how they work, is key to ensuring trust in the technology.

Recommendations:

"Algorithmic Literacy" must be a basic skill: Whether it is the curating of information in social media platforms or self-driving cars, users need to be aware of, and have a basic understanding of, the role of algorithms and autonomous decision making. Such skills will also be important in shaping societal norms around the use of the technology, for example, in identifying decisions that may not be suitable to delegate to an AI.

Provide the public with information: While full transparency around a service's machine learning techniques and training data is generally not advisable due to security risks, the public should be provided with enough information to make it possible for people to question its outcomes.

Published by Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper" in Guiding Principles and Recommendations, Apr 18, 2017

Chapter 1. General Principles

1. This set of norms aims to integrate ethics into the entire life cycle of AI, to promote fairness, justice, harmony, safety and security, and to avoid issues such as prejudice, discrimination, and privacy and information leakage.

2. This set of norms applies to natural persons, legal persons, and other related organizations engaged in activities related to AI, such as its management, research and development, supply, and use.
(1) Management activities mainly refer to strategic planning; the formulation and implementation of policies, laws, regulations, and technical standards; resource allocation; and supervision and inspection.
(2) Research and development activities mainly refer to scientific research, technology development, and product development related to AI.
(3) Supply activities mainly refer to the production, operation, and sales of AI products and services.
(4) Use activities mainly refer to the procurement, consumption, and operation of AI products and services.

3. All activities related to AI shall abide by the following fundamental ethical norms.
(1) Enhancing the well-being of humankind. Adhere to the people-oriented vision, abide by the common values of humankind, respect human rights and the fundamental interests of humankind, and abide by national and regional ethical norms. Adhere to the priority of public interests, promote human-machine harmony, improve people's livelihoods, enhance the sense of happiness, promote the sustainable development of the economy, society and ecology, and jointly build a human community with a shared future.
(2) Promoting fairness and justice. Adhere to shared benefits and inclusivity, effectively protect the legitimate rights and interests of all relevant stakeholders, promote the fair sharing of the benefits of AI across the whole of society, and promote social fairness and justice and equal opportunities. When providing AI products and services, fully respect and help vulnerable and underrepresented groups, and provide corresponding alternatives as needed.
(3) Protecting privacy and security. Fully respect personal rights to information, to know, and to consent; handle personal information and protect personal privacy and data security in accordance with the principles of lawfulness, justifiability, necessity, and integrity; do no harm to individuals' legitimate data rights; do not illegally collect or use personal information by stealing, tampering, or leaking; and do not infringe on rights to personal privacy.
(4) Ensuring controllability and trustworthiness. Ensure that humans retain full power of decision making, the right to choose whether to accept the services provided by AI, the right to withdraw from interaction with AI at any time, and the right to suspend the operation of AI systems at any time, and ensure that AI is always under meaningful human control.
(5) Strengthening accountability. Uphold human beings as the ultimately liable subjects. Clarify the responsibilities of all relevant stakeholders, comprehensively enhance awareness of responsibility, and practice introspection and self-discipline across the entire life cycle of AI. Establish accountability mechanisms in AI-related activities, and do not evade liability reviews or escape from responsibilities.
(6) Improving ethical literacy. Actively learn and popularize knowledge related to AI ethics, objectively understand ethical issues, and do not underestimate or exaggerate ethical risks. Actively carry out or participate in discussions on the ethical issues of AI, deeply promote the practice of AI ethics and governance, and improve the ability to respond to related issues.

4. The ethical norms that should be followed in specific AI-related activities include the norms of management, the norms of research and development, the norms of supply, and the norms of use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

3 Ensure transparency, explainability and intelligibility

AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and the explainability of AI technology.

Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology. Such information should facilitate meaningful public consultation and debate on how the AI technology is designed and how it should be used, and it should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use. Transparency will improve system quality and protect patient and public health safety: for instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight. It must be possible to audit an AI technology, including when something goes wrong. Transparency should include accurate information about the assumptions and limitations of the technology, operating protocols, the properties of the data (including methods of data collection, processing and labelling) and development of the algorithmic model.

AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed. Data protection laws already create specific obligations of explainability for automated decision making. Those who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalized populations. Many AI technologies are complex, and the complexity might frustrate both the explainer and the person receiving the explanation. There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).

All algorithms should be tested rigorously in the settings in which the technology will be used in order to ensure that it meets standards of safety and efficacy. The examination and validation should include the assumptions, operational protocols, data properties and output decisions of the AI technology. Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics. There should be robust, independent oversight of such tests and evaluations to ensure that they are conducted safely and effectively. Health care institutions, health systems and public health agencies should regularly publish information about how decisions have been made to adopt an AI technology, how the technology will be evaluated periodically, its uses, its known limitations and the role of decision making, which can facilitate external auditing and oversight.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021
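The WHO principle above calls for evaluations broad enough to surface differences in an algorithm's performance across race, ethnicity, gender, age and other relevant characteristics. The sketch below is a minimal, illustrative example of one way such disaggregated evaluation can be reported; the record layout, group labels and data are hypothetical and are not part of the WHO text.

```python
# Illustrative sketch only: per-subgroup accuracy for a classifier's predictions,
# the kind of disaggregated check the principle asks evaluations to cover.
# Group names and records below are hypothetical placeholders.
from collections import defaultdict

# Hypothetical evaluation records: (subgroup label, true outcome, model prediction)
records = [
    ("group_a", 1, 1),
    ("group_a", 0, 0),
    ("group_a", 1, 0),
    ("group_b", 1, 1),
    ("group_b", 0, 1),
    ("group_b", 0, 0),
]

# Tally correct predictions and totals per subgroup.
correct = defaultdict(int)
total = defaultdict(int)
for group, y_true, y_pred in records:
    total[group] += 1
    correct[group] += int(y_true == y_pred)

# Report per-group accuracy and the largest gap between groups, which an
# independent reviewer could compare against a pre-agreed threshold.
accuracy = {g: correct[g] / total[g] for g in total}
for g, acc in sorted(accuracy.items()):
    print(f"{g}: accuracy = {acc:.2f} (n = {total[g]})")
print(f"max gap between groups: {max(accuracy.values()) - min(accuracy.values()):.2f}")
```

In practice, such checks would run on real evaluation data for each deployment setting and for each relevant characteristic, and the results would feed the independent oversight and periodic publication the principle describes.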

4 Foster responsibility and accountability

Humans require clear, transparent specification of the tasks that systems can perform and the conditions under which they can achieve the desired level of performance; this helps to ensure that health care providers can use an AI technology responsibly. Although AI technologies perform specific tasks, it is the responsibility of human stakeholders to ensure that they can perform those tasks and that they are used under appropriate conditions. Responsibility can be assured by application of "human warranty", which implies evaluation by patients and clinicians in the development and deployment of AI technologies. In human warranty, regulatory principles are applied upstream and downstream of the algorithm by establishing points of human supervision. The critical points of supervision are identified by discussions among professionals, patients and designers. The goal is to ensure that the algorithm remains on a machine-learning development path that is medically effective, can be interrogated and is ethically responsible; it involves active partnership with patients and the public, such as meaningful public consultation and debate (101). Ultimately, such work should be validated by regulatory agencies or other supervisory authorities.

When something does go wrong in the application of an AI technology, there should be accountability. Appropriate mechanisms should be adopted to ensure questioning by, and redress for, individuals and groups adversely affected by algorithmically informed decisions. This should include access to prompt, effective remedies and redress from governments and companies that deploy AI technologies for health care. Redress should include compensation, rehabilitation, restitution, sanctions where necessary and a guarantee of non-repetition.

The use of AI technologies in medicine requires attribution of responsibility within complex systems in which responsibility is distributed among numerous agents. When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and clinical users in the harm. This is an evolving challenge and remains unsettled in the laws of most countries. Institutions have not only legal liability but also a duty to assume responsibility for decisions made by the algorithms they use, even if it is not feasible to explain in detail how the algorithms produce their results. To avoid diffusion of responsibility, in which "everybody's problem becomes nobody's responsibility", a faultless responsibility model ("collective responsibility"), in which all the agents involved in the development and deployment of an AI technology are held responsible, can encourage all actors to act with integrity and minimize harm. In such a model, the actual intentions of each agent (or actor) or their ability to control an outcome are not considered.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021