· 2. Avoid creating or reinforcing unfair bias.

AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
Principle: Artificial Intelligence at Google: Our Principles, Jun 7, 2018

Published by Google

Related Principles

· Fairness and inclusion

AI systems should make the same recommendations for everyone with similar characteristics or qualifications. Employers should be required to test AI in the workplace on a regular basis to ensure that the system is built for purpose and is not harmfully influenced by bias of any kind — gender, race, sexual orientation, age, religion, income, family status and so on. AI should adopt inclusive design efforts to anticipate any potential deployment issues that could unintentionally exclude people. Workplace AI should be tested to ensure that it does not discriminate against vulnerable individuals or communities. Governments should review the impact of workplace, governmental and social AI on the opportunities and rights of poor people, Indigenous peoples and vulnerable members of society. In particular, the impact of overlapping AI systems toward profiling and marginalization should be identified and countered.

Published by Centre for International Governance Innovation (CIGI), Canada in Toward a G20 Framework for Artificial Intelligence in the Workplace, Jul 19, 2018

· 5. Non-Discrimination

Discrimination concerns the variability of AI results between individuals or groups of people based on the exploitation of differences in their characteristics that can be considered either intentionally or unintentionally (such as ethnicity, gender, sexual orientation or age), which may negatively impact such individuals or groups. Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups. Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons. Intentional harm can, for instance, be achieved by explicit manipulation of the data to exclude certain groups. Harm may also result from exploitation of consumer biases or unfair competition, such as homogenisation of prices by means of collusion or a non-transparent market. Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models. Machine learning algorithms identify patterns or regularities in data, and will therefore also follow the patterns resulting from biased and/or incomplete data sets. An incomplete data set may not reflect the target group it is intended to represent. While it might be possible to remove clearly identifiable and unwanted bias when collecting data, data always carries some kind of bias. Therefore, the upstream identification of possible bias, which later can be rectified, is important to build into the development of AI. Moreover, it is important to acknowledge that AI technology can be employed to identify this inherent bias, and hence to support awareness training on our own inherent bias. Accordingly, it can also assist us in making less biased decisions.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018

· 3. Justice

[QUESTIONS] How do we ensure that the benefits of AI are available to everyone? Must we fight against the concentration of power and wealth in the hands of a small number of AI companies? What types of discrimination could AI create or exacerbate? Should the development of AI be neutral or should it seek to reduce social and economic inequalities? What types of legal decisions can we delegate to AI? [PRINCIPLES] The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental and physical abilities, sexual orientation, ethnic and social origins and religious beliefs.

Published by University of Montreal, Forum on the Socially Responsible Development of AI in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Nov 3, 2017

· 1. Fair AI

We seek to ensure that the applications of AI technology lead to fair results. This means that they should not lead to discriminatory impacts on people in relation to race, ethnic origin, religion, gender, sexual orientation, disability or any other personal condition. We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination. When optimizing a machine learning algorithm for accuracy in terms of false positives and negatives, we will consider the impact of the algorithm in the specific domain.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018