· Algorithmic fairness

Ethics by design (EBD): ensure that the algorithm is reasonable and that the data are accurate, up to date, complete, relevant, unbiased and representative, and take technical measures to identify, resolve and eliminate bias. Formulate guidelines and principles for addressing bias and discrimination; potential mechanisms include algorithmic transparency, quality review, impact assessment, algorithmic audit, supervision and review, ethics boards, etc.
Principle: "ARCC": An Ethical Framework for Artificial Intelligence, Sep 18, 2018

Published by Tencent Research Institute

Related Principles

· Article 6: Transparent and explainable.

Continuously improve the transparency of artificial intelligence systems. Regarding system decision-making processes, data structures, and the intent of system developers and technological implementers: be capable of accurate description, monitoring, and reproduction; and realize explainability, predictability, traceability, and verifiability for algorithmic logic, system decisions, and action outcomes.

Published by Artificial Intelligence Industry Alliance (AIIA), China in Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment), May 31, 2019

Chapter 3. The Norms of Research and Development

  10. Strengthen the awareness of self-discipline. Strengthen self-discipline in activities related to AI research and development, actively integrate AI ethics into every phase of technology research and development, consciously carry out self-censorship, strengthen self-management, and do not engage in AI research and development that violates ethics and morality.

  11. Improve data quality. In the phases of data collection, storage, use, processing, transmission, provision, disclosure, etc., strictly abide by data-related laws, standards and norms. Improve the completeness, timeliness, consistency, normativeness and accuracy of data.

  12. Enhance safety, security and transparency. In the phases of algorithm design, implementation, and application, etc., improve transparency, interpretability, understandability, reliability, and controllability, enhance the resilience, adaptability, and anti-interference ability of AI systems, and gradually realize verifiable, auditable, supervisable, traceable, predictable and trustworthy AI.

  13. Avoid bias and discrimination. During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination of AI systems.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

10. Responsibility, accountability and transparency

a. Build trust by ensuring that designers and operators are responsible and accountable for their systems, applications and algorithms, and that such systems, applications and algorithms operate in a transparent and fair manner.

b. Make available externally visible and impartial avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate a person or office responsible for the timely remedy of such issues.

c. Incorporate downstream measures and processes for users or stakeholders to verify how and when AI technology is being applied.

d. Keep detailed records of design processes and decision-making.

Published by Personal Data Protection Commission (PDPC), Singapore in A compilation of existing AI ethical principles (Annex A), Jan 21, 2020

3 Ensure transparency, explainability and intelligibility

AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and explainability of AI technology.

Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology. Such information should facilitate meaningful public consultation and debate on how the AI technology is designed and how it should be used. Such information should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use. Transparency will improve system quality and protect patient and public health safety. For instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight. It must be possible to audit an AI technology, including if something goes wrong. Transparency should include accurate information about the assumptions and limitations of the technology, operating protocols, the properties of the data (including methods of data collection, processing and labelling) and development of the algorithmic model.

AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed. Data protection laws already create specific obligations of explainability for automated decision-making. Those who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalized populations. Many AI technologies are complex, and the complexity might frustrate both the explainer and the person receiving the explanation. There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).
All algorithms should be tested rigorously in the settings in which the technology will be used in order to ensure that they meet standards of safety and efficacy. The examination and validation should include the assumptions, operational protocols, data properties and output decisions of the AI technology. Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics. There should be robust, independent oversight of such tests and evaluations to ensure that they are conducted safely and effectively. Health care institutions, health systems and public health agencies should regularly publish information about how decisions have been made for adoption of an AI technology and how the technology will be evaluated periodically, its uses, its known limitations and its role in decision-making, which can facilitate external auditing and oversight.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021