8. Fair and equal

We aspire to embed the principles of fairness and equality in the datasets and algorithms applied in all phases of AI design, implementation, testing and usage – fostering fairness and diversity and avoiding unfair bias at both the input and output levels of AI.
Principle: Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Published by Telia Company AB

Related Principles

(d) Justice, equity, and solidarity

AI should contribute to global justice and equal access to the benefits and advantages that AI, robotics and ‘autonomous’ systems can bring. Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible. We need a concerted global effort towards equal access to ‘autonomous’ technologies and fair distribution of benefits and equal opportunities across and within societies. This includes formulating new models of fair distribution and benefit sharing apt to respond to the economic transformations caused by automation, digitalisation and AI, ensuring accessibility to core AI technologies, and facilitating training in STEM and digital disciplines, particularly with respect to disadvantaged regions and societal groups. Vigilance is required with respect to the downside of the detailed and massive data on individuals that accumulates and that will put pressure on the idea of solidarity, e.g. on systems of mutual assistance such as social insurance and healthcare. These processes may undermine social cohesion and give rise to radical individualism.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

· 4. The Principle of Justice: “Be Fair”

For the purposes of these Guidelines, the principle of justice imparts that the development, use, and regulation of AI systems must be fair. Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination. Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination. Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences. Lastly, the principle of justice requires that those developing or implementing AI be held to high standards of accountability. Humans might benefit from procedures enabling the benchmarking of AI performance against (ethical) expectations.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

2. Fairness and Justice

The development of AI should promote fairness and justice, protect the rights and interests of all stakeholders, and promote equal opportunities. Through technological advancement and improved management, prejudice and discrimination should be eliminated as much as possible in the processes of data acquisition, algorithm design, technology development, and product development and application.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

Chapter 3. The Norms of Research and Development

10. Strengthen the awareness of self-discipline. Strengthen self-discipline in activities related to AI research and development, actively integrate AI ethics into every phase of technology research and development, consciously carry out self-censorship, strengthen self-management, and do not engage in AI research and development that violates ethics and morality.

11. Improve data quality. In the phases of data collection, storage, use, processing, transmission, provision, disclosure, etc., strictly abide by data-related laws, standards and norms. Improve the completeness, timeliness, consistency, normativeness and accuracy of data.

12. Enhance safety, security and transparency. In the phases of algorithm design, implementation, and application, etc., improve transparency, interpretability, understandability, reliability, and controllability; enhance the resilience, adaptability, and anti-interference capability of AI systems; and gradually realize verifiable, auditable, supervisable, traceable, predictable and trustworthy AI.

13. Avoid bias and discrimination. During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination in AI systems.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

· 2) Diversity and Fairness:

Artificial intelligence should provide non-discriminatory services to various groups of people in accordance with the principles of fairness, equity and inclusion. We aim to start from the disciplined approach of systems engineering and to construct AI systems with diverse data and unbiased algorithms, thus improving the fairness of the user experience.

Published by Youth Work Committee of Shanghai Computer Society in Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence, Aug 29, 2019