The document "Draft Ethics Guidelines for Trustworthy AI" mentions the topic "discrimination" in the following places:

    · 2. The Principle of Non-maleficence: “Do no Harm”

    To avoid harm, data collected and used for training AI algorithms must be gathered and handled in a way that avoids discrimination, manipulation, or negative profiling.

    · 4. The Principle of Justice: “Be Fair”

    Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination.

    · 4. The Principle of Justice: “Be Fair”

    Additionally, the positives and negatives resulting from AI should be evenly distributed, avoiding placing vulnerable demographics in a position of greater vulnerability and striving for equal opportunity in terms of access to education, goods, services and technology amongst human beings, without discrimination.

    · 1. Accountability

    In a case of discrimination, however, an explanation and apology might be at least as important.

    · 5. Non Discrimination

    Discrimination concerns the variability of AI results between individuals or groups of people, based on the exploitation of differences in their characteristics (such as ethnicity, gender, sexual orientation or age), whether intentional or unintentional, which may negatively impact such individuals or groups.

    · 5. Non Discrimination

    Direct or indirect discrimination through the use of AI can serve to exploit prejudice and marginalise certain groups.

    · 5. Non Discrimination

    Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons.

    · 5. Non Discrimination

    Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models.

    · 8. Robustness

    The lack of reproducibility can lead to unintended discrimination in AI decisions.

    · 8. Robustness

    Poor governance, which makes it possible to intentionally or unintentionally tamper with the data or to grant unauthorised entities access to the algorithms, can also result in discrimination, erroneous decisions, or even physical harm.