The principle "Key ethical principles for use of artificial intelligence for health" has mentioned the topic "bias" in the following places:

    5 Ensure inclusiveness and equity

    AI technologies should not be biased.

    5 Ensure inclusiveness and equity

    Bias is a threat to inclusiveness and equity because it represents a departure, often arbitrary, from equal treatment.

    5 Ensure inclusiveness and equity

    Unintended biases that may emerge with AI should be avoided or identified and mitigated.

    5 Ensure inclusiveness and equity

    AI developers should be aware of the possible biases in their design, implementation and use and the potential harm that biases can cause to individuals and society.

    5 Ensure inclusiveness and equity

    These parties also have a duty to address potential bias and avoid introducing or exacerbating health care disparities, including when testing or deploying new AI technologies in vulnerable populations.

    5 Ensure inclusiveness and equity

    AI developers should ensure that AI data, and especially training data, do not include sampling bias and are therefore accurate, complete and diverse.

    5 Ensure inclusiveness and equity

    The effects of use of AI technologies must be monitored and evaluated, including disproportionate effects on specific groups of people when they mirror or exacerbate existing forms of bias and discrimination.

    5 Ensure inclusiveness and equity

    Special provision should be made to protect the rights and welfare of vulnerable persons, with mechanisms for redress if such bias and discrimination emerges or is alleged.
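
The monitoring called for in the excerpts above can be made concrete. The following is a minimal, illustrative sketch in Python, not part of the principle text: it compares a model's true positive rate across demographic groups to surface disproportionate effects. The group labels, records and metric choice are assumptions made purely for illustration.

    from collections import defaultdict

    def per_group_true_positive_rate(records):
        """Compute the true positive rate (sensitivity) for each group.

        `records` is an iterable of (group, y_true, y_pred) tuples with 0/1
        labels; the group labels used here are hypothetical.
        """
        counts = defaultdict(lambda: {"tp": 0, "positives": 0})
        for group, y_true, y_pred in records:
            if y_true == 1:
                counts[group]["positives"] += 1
                if y_pred == 1:
                    counts[group]["tp"] += 1
        return {
            group: c["tp"] / c["positives"] if c["positives"] else float("nan")
            for group, c in counts.items()
        }

    if __name__ == "__main__":
        # Hypothetical evaluation records: (group, true label, model prediction).
        records = [
            ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
            ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
        ]
        rates = per_group_true_positive_rate(records)
        print({g: round(r, 2) for g, r in rates.items()})  # {'group_a': 0.67, 'group_b': 0.33}
        # A large gap between groups would warrant investigation and mitigation,
        # consistent with the monitoring and redress provisions quoted above.
        gap = max(rates.values()) - min(rates.values())
        print(f"Gap in true positive rate across groups: {gap:.2f}")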