The document "Draft Ethics Guidelines for Trustworthy AI" mentions the topic "fairness" in the following places:

    · 1. The Principle of Beneficence: “Do Good”

    At the same time, beneficent AI systems can contribute to wellbeing by seeking achievement of a fair, inclusive and peaceful society, by helping to increase citizens’ mental autonomy, with equal distribution of economic, social and political opportunity.

    · 2. The Principle of Non-maleficence: “Do no Harm”

    To avoid harm, data must be collected and used for the training of AI algorithms in a way that avoids discrimination, manipulation, or negative profiling.

    · 4. The Principle of Justice: “Be Fair”

    For the purposes of these Guidelines, the principle of justice imparts that the development, use, and regulation of AI systems must be fair.

    · 4. The Principle of Justice: “Be Fair”

    Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination.

    · 4. The Principle of Justice: “Be Fair”

    Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination.

    · 4. The Principle of Justice: “Be Fair”

    Justice also means that AI systems must provide users with effective redress if harm occurs, or effective remedy if data practices are no longer aligned with human beings’ individual or collective preferences.

    · 4. The Principle of Justice: “Be Fair”

    Lastly, the principle of justice also commands those developing or implementing AI to be held to high standards of accountability.

    · 1. Accountability

    In a case of discrimination, however, an explanation and apology might be at least as important.

    · 2. Data Governance

    The datasets gathered inevitably contain biases, and one has to be able to prune these away before engaging in training.
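
    A minimal sketch of what such pruning could look like in practice, assuming a tabular training set with a hypothetical protected-attribute field "group" and a binary "label": it applies the reweighing idea of Kamiran & Calders, giving each (group, label) combination the weight it would have if group and label were independent. The data and field names are illustrative, not taken from the Guidelines.

        from collections import Counter

        # Illustrative records: "group" is a hypothetical protected attribute,
        # "label" the training target; both names are assumptions for this sketch.
        records = [
            {"group": "A", "label": 1}, {"group": "A", "label": 1},
            {"group": "A", "label": 0}, {"group": "B", "label": 0},
            {"group": "B", "label": 0}, {"group": "B", "label": 1},
        ]

        n = len(records)
        group_counts = Counter(r["group"] for r in records)
        label_counts = Counter(r["label"] for r in records)
        cell_counts = Counter((r["group"], r["label"]) for r in records)

        # Reweighing: expected frequency of each (group, label) cell under
        # independence, divided by its observed frequency in the data.
        def weight(record):
            g, y = record["group"], record["label"]
            expected = (group_counts[g] / n) * (label_counts[y] / n)
            observed = cell_counts[(g, y)] / n
            return expected / observed

        weights = [weight(r) for r in records]
        print(weights)  # pass these as sample weights to the training step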

    · 2. Data Governance

    Instead, the findings of bias should be used to look forward and lead to better processes and instructions – improving our decision-making and strengthening our institutions.

    · 5. Non-Discrimination

    Discrimination concerns the variability of AI results between individuals or groups of people, based on the exploitation, whether intentional or unintentional, of differences in their characteristics (such as ethnicity, gender, sexual orientation or age), which may negatively impact such individuals or groups.
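
    As a rough sketch of how this variability of results can be made measurable, the snippet below computes the rate of favourable outcomes per group and their largest difference (often called the demographic parity difference). The groups and decisions are invented for illustration and are not part of the Guidelines.

        # Hypothetical model decisions (1 = favourable outcome) for individuals
        # belonging to two groups; the data and group names are illustrative.
        decisions = {
            "group_a": [1, 1, 0, 1, 0, 1],
            "group_b": [0, 1, 0, 0, 1, 0],
        }

        rates = {g: sum(d) / len(d) for g, d in decisions.items()}
        parity_difference = max(rates.values()) - min(rates.values())

        print(rates)              # favourable-outcome rate per group
        print(parity_difference)  # 0 would mean identical rates across groups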

    · 5. Non-Discrimination

    Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups.

    · 5. Non-Discrimination

    Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons.

    · 5. Non-Discrimination

    Harm may also result from the exploitation of consumer biases or unfair competition, such as the homogenisation of prices by means of collusion or a non-transparent market.

    · 5. Non-Discrimination

    Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models.

    · 5. Non-Discrimination

    Machine learning algorithms identify patterns or regularities in data, and will therefore also follow the patterns resulting from biased and/or incomplete data sets.

    · 5. Non-Discrimination

    While it might be possible to remove clearly identifiable and unwanted bias when collecting data, data always carries some kind of bias.

    · 5. Non-Discrimination

    Therefore, the upstream identification of possible bias, which can later be rectified, is important to build into the development of AI.
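
    One possible shape for such an upstream, built-in check is a guard that runs before training and flags a dataset whose favourable-label rates differ too strongly between groups, using the common "four-fifths" disparate-impact heuristic. The field names and the 0.8 threshold are assumptions made only for this sketch.

        def disparate_impact(records, group_key="group", label_key="label", favourable=1):
            """Ratio of the lowest to the highest favourable-label rate across groups."""
            totals, favourable_counts = {}, {}
            for r in records:
                g = r[group_key]
                totals[g] = totals.get(g, 0) + 1
                favourable_counts[g] = favourable_counts.get(g, 0) + (r[label_key] == favourable)
            rates = {g: favourable_counts[g] / totals[g] for g in totals}
            return min(rates.values()) / max(rates.values())

        def check_before_training(records, threshold=0.8):
            # Flag the data for review if the disparate-impact ratio falls below
            # the (illustrative) four-fifths threshold.
            ratio = disparate_impact(records)
            if ratio < threshold:
                raise ValueError(f"Possible bias detected (disparate impact {ratio:.2f}); review the data before training.")
            return ratio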

    · 5. Non-Discrimination

    Moreover, it is important to acknowledge that AI technology can be employed to identify this inherent bias, and hence to support awareness training on our own inherent bias.

    · 5. Non-Discrimination

    Accordingly, it can also assist us in making less biased decisions.

    · 6. Respect for (& Enhancement of) Human Autonomy

    AI systems should be designed not only to uphold rights, values and principles, but also to protect citizens in all their diversity from governmental and private abuses made possible by AI technology, to ensure a fair distribution of the benefits created by AI technologies, to protect and enhance a plurality of human values, and to enhance the self-determination and autonomy of individual users and communities.

    · 8. Robustness

    The lack of reproducibility can lead to unintended discrimination in AI decisions.

    · 8. Robustness

    Poor governance, which makes it possible to intentionally or unintentionally tamper with the data or to grant unauthorised entities access to the algorithms, can also result in discrimination, erroneous decisions, or even physical harm.