2. Equitable

The department will take deliberate steps to minimize unintended bias in AI capabilities.
Principle: DoD's AI Ethical Principles, Feb 24, 2020

Author: Department of Defense (DoD), United States

Related Principles

2. Fairness and Equity

Deployers should have safeguards in place to ensure that algorithmic decisions do not further exacerbate or amplify existing discriminatory or unjust impacts across different demographics, and the design, development, and deployment of AI systems should not result in unfair bias or discrimination. An example of such safeguards would include human interventions and checks on the algorithms and their outputs. Deployers of AI systems should conduct regular testing of such systems to confirm whether bias is present and, where bias is confirmed, make the necessary adjustments to rectify imbalances and ensure equity.

With the rapid developments in the AI space, AI systems are increasingly used to aid decision making. For example, AI systems are currently used to screen resumes in job application processes, predict the creditworthiness of consumers, and provide agronomic advice to farmers. If not properly managed, an AI system's outputs used to make decisions with significant impact on individuals could perpetuate existing discriminatory or unjust impacts on specific demographics.

To mitigate discrimination, it is important that the design, development, and deployment of AI systems align with fairness and equity principles. In addition, the datasets used to train the AI systems should be diverse and representative. Appropriate measures should be taken to mitigate potential biases during data collection and pre-processing, training, and inference. For example, the training and test datasets for an AI system used in the education sector should be adequately representative of the student population by including students of different genders and ethnicities.

Published by ASEAN in the ASEAN Guide on AI Governance and Ethics, 2024

2. Equitable.

DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.

Published by the Defense Innovation Board (DIB), Department of Defense (DoD), United States, in AI Ethics Principles for DoD, Oct 31, 2019

F. Bias Mitigation:

Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.

Published by the North Atlantic Treaty Organization (NATO) in the NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

Principle 7 – Accountability & Responsibility

The accountability and responsibility principle holds designers, vendors, procurers, developers, owners, and assessors of AI systems, as well as the technology itself, ethically responsible and liable for decisions and actions that may result in potential risk and negative effects on individuals and communities. Human oversight, governance, and proper management should be demonstrated across the entire AI System Lifecycle to ensure that proper mechanisms are in place to avoid harm and misuse of this technology. AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice.

The designers, developers, and people who implement the AI system should be identifiable and assume responsibility and accountability for any potential damage the technology causes to individuals or communities, even if the adverse impact is unintended. The liable parties should take necessary preventive actions as well as set a risk assessment and mitigation strategy to minimize the harm due to the AI system.

The accountability and responsibility principle is closely related to the fairness principle. The parties responsible for the AI system should ensure that the fairness of the system is maintained and sustained through control mechanisms. All parties involved in the AI System Lifecycle should consider and act on these values in their decisions and execution.

Published by SDAIA in the AI Ethics Principles, Sept 14, 2022

· We will make AI systems fair

1. Data ingested should, where possible, be representative of the affected population
2. Algorithms should avoid non-operational bias
3. Steps should be taken to mitigate and disclose the biases inherent in datasets
4. Significant decisions should be provably fair

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019