Bias

Companies should strive to avoid bias in A.I. by drawing on diverse data sets.
Principle: Seeking Ground Rules for A.I.: The Recommendations, Mar 1, 2019

Published by New Work Summit, hosted by The New York Times

Related Principles

V. Diversity, non-discrimination and fairness

Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models. The continuation of such biases could lead to direct or indirect discrimination. Harm can also result from the intentional exploitation of (consumer) biases or from engaging in unfair competition. Moreover, the way in which AI systems are developed (e.g. the way in which the programming code of an algorithm is written) may also suffer from bias. Such concerns should be tackled from the beginning of the system's development. Establishing diverse design teams and setting up mechanisms ensuring participation, in particular of citizens, in AI development can also help to address these concerns. It is advisable to consult stakeholders who may directly or indirectly be affected by the system throughout its life cycle. AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility through a universal design approach, striving to achieve equal access for persons with disabilities.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019
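
One practical starting point for the data set concerns raised in this principle is auditing who is actually represented in a training set before it is used. Below is a minimal sketch of such a check, assuming a tabular data set held in a pandas DataFrame; the column name "gender" and the 15% threshold are illustrative assumptions, not part of the principle itself.

```python
# Minimal sketch: flag under-represented groups in a training set.
# The column name "gender" and the threshold are illustrative.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str,
                          min_share: float = 0.10) -> pd.DataFrame:
    """Each group's share of the data, with a flag for groups whose
    share falls below the chosen minimum."""
    report = df[group_col].value_counts(normalize=True).rename("share").to_frame()
    report["under_represented"] = report["share"] < min_share
    return report

df = pd.DataFrame({"gender": ["f", "f", "m", "m", "m",
                              "m", "m", "m", "m", "nb"]})
print(representation_report(df, "gender", min_share=0.15))
# shares: m 0.7, f 0.2, nb 0.1 -- only "nb" is flagged
```

A check like this does not establish fairness on its own; it only surfaces gaps in representation early enough to be addressed during data collection rather than after deployment.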

5. Non-Discrimination

Discrimination concerns the variability of AI results between individuals or groups of people, based on the exploitation, whether intentional or unintentional, of differences in their characteristics (such as ethnicity, gender, sexual orientation or age), which may negatively impact such individuals or groups. Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups. Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons. Intentional harm can, for instance, be achieved by explicit manipulation of the data to exclude certain groups. Harm may also result from exploitation of consumer biases or unfair competition, such as homogenisation of prices by means of collusion or a non-transparent market. Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models. Machine learning algorithms identify patterns or regularities in data, and will therefore also follow the patterns resulting from biased and/or incomplete data sets. An incomplete data set may not reflect the target group it is intended to represent. While it might be possible to remove clearly identifiable and unwanted bias when collecting data, data always carries some kind of bias. Therefore, the upstream identification of possible bias, which can later be rectified, is important to build into the development of AI. Moreover, it is important to acknowledge that AI technology can be employed to identify this inherent bias, and hence to support awareness training on our own inherent bias. Accordingly, it can also assist us in making less biased decisions.

Published by The European Commission’s High-Level Expert Group on Artificial Intelligence in Draft Ethics Guidelines for Trustworthy AI, Dec 18, 2018
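
The "upstream identification of possible bias" this principle calls for can be made concrete with simple measurements on a model's outputs. The sketch below computes one widely used signal, the demographic parity gap (the difference in positive-prediction rates between groups); the group labels and predictions are illustrative, and a real audit would combine several complementary metrics rather than rely on this one.

```python
# Minimal sketch: measure the demographic parity gap, i.e. the
# difference in positive-prediction rates between groups.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Positive-prediction rate per group."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for g, p in zip(groups, predictions):
        total[g] += 1
        pos[g] += int(p == 1)
    return {g: pos[g] / total[g] for g in total}

def demographic_parity_gap(groups, predictions):
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values())

groups      = ["a", "a", "a", "b", "b", "b"]
predictions = [1, 1, 0, 1, 0, 0]
print(selection_rates(groups, predictions))        # {'a': 0.666..., 'b': 0.333...}
print(demographic_parity_gap(groups, predictions)) # 0.333...
```

Because this runs on predictions alone, it can be applied before deployment and re-run periodically in operation, matching the principle's point that bias identification should happen upstream where it can still be rectified.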

6. Unlawful biases or discrimination that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

a. ensuring the respect of international legal instruments on human rights and non-discrimination,
b. investing in research into technical ways to identify, address and mitigate biases (one such technique is sketched below),
c. taking reasonable steps to ensure the personal data and information used in automated decision making is accurate, up to date and as complete as possible, and
d. elaborating specific guidance and principles in addressing biases and discrimination, and promoting individuals’ and stakeholders’ awareness.

Published by 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Declaration On Ethics And Data Protection In Artificial Intelligence, Oct 23, 2018
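
As an illustration of item b, the sketch below implements one published mitigation technique, reweighing (Kamiran & Calders, 2012), which assigns each training example a weight so that the protected attribute and the outcome label appear statistically independent to the learner. The column names are illustrative assumptions; the declaration itself does not prescribe any particular technique.

```python
# Minimal sketch of reweighing (Kamiran & Calders, 2012):
# weight each row by w(g, y) = P(g) * P(y) / P(g, y), so that
# over-represented (group, label) combinations are down-weighted
# and rare ones up-weighted.
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str,
                       label_col: str) -> pd.Series:
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / len(df)

    def weight(row):
        g, y = row[group_col], row[label_col]
        return p_group[g] * p_label[y] / p_joint[(g, y)]

    return df.apply(weight, axis=1)

df = pd.DataFrame({"group": ["a", "a", "a", "b", "b", "b"],
                   "label": [1, 1, 0, 1, 0, 0]})
df["weight"] = reweighing_weights(df, "group", "label")
print(df)  # (a,1) and (b,0) get 0.75; (a,0) and (b,1) get 1.5
```

The resulting weights can be passed to any learner that accepts per-sample weights; the point of the sketch is only that "technical ways to mitigate biases" exist as concrete, published procedures, not that this one suffices on its own.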

3. Scientific Integrity and Information Quality

The government’s regulatory and non-regulatory approaches to AI applications should leverage scientific and technical information and processes. Agencies should hold information, whether produced by the government or acquired by the government from third parties, that is likely to have a clear and substantial influence on important public policy or private sector decisions (including those made by consumers) to a high standard of quality, transparency, and compliance. Consistent with the principles of scientific integrity in the rulemaking and guidance processes, agencies should develop regulatory approaches to AI in a manner that both informs policy decisions and fosters public trust in AI. Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results. Agencies should also be mindful that, for AI applications to produce predictable, reliable, and optimized outcomes, data used to train the AI system must be of sufficient quality for the intended use.

Published by The White House Office of Science and Technology Policy (OSTP), United States in Principles for the Stewardship of AI Applications, Nov 17, 2020
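
The closing point of this principle, that training data must be of sufficient quality for the intended use, can be partly operationalised with routine checks run before training. Below is a minimal, assumption-laden sketch covering three common problems (missing values, duplicate rows, out-of-range values); the column names and the valid range are illustrative, and real quality standards would be far broader.

```python
# Minimal sketch: basic quality checks on a training table before use.
# The column names and the 0-120 age range are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame, value_col: str,
                   valid_min: float, valid_max: float) -> dict:
    """Count common data-quality problems in a training table."""
    values = df[value_col].dropna()  # range-check only non-missing values
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "out_of_range": int((~values.between(valid_min, valid_max)).sum()),
    }

# Illustrative data: one duplicate row, one missing age, one impossible age.
df = pd.DataFrame({"age": [34, 34, None, 210], "label": [1, 1, 0, 0]})
print(quality_report(df, "age", valid_min=0, valid_max=120))
# {'rows': 4, 'missing_values': 1, 'duplicate_rows': 1, 'out_of_range': 1}
```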