Search results for the keyword "bias"

· Article 3: Fair and just.

...The development of artificial intelligence should ensure fairness and justice, avoid bias or discrimination against specific groups or individuals, and avoid placing disadvantaged people in an even more unfavorable position....

Author: Artificial Intelligence Industry Alliance (AIIA), China

2. Fairness and Equity

...Deployers of AI systems should conduct regular testing of such systems to confirm if there is bias and where bias is confirmed, make the necessary adjustments to rectify imbalances to ensure equity....

Author: ASEAN

2. Fairness and Equity

...Appropriate measures should be taken to mitigate potential biases during data collection and pre-processing, training, and inference....

Author: ASEAN

Responsibility:

...We will ensure that we design for inclusiveness and assess the impact of potentially unfair, discriminatory, or inaccurate results, which might perpetuate harmful biases and stereotypes....

Author: Adobe

Responsibility:

...We understand that special care must be taken to address bias if a product or service will have a significant impact on an individual's life, such as with employment, housing, credit, and health....

Author: Adobe

Fairness

...This entails that the datasets they use be equitable; that their model architectures only include reasonable features, processes, and analytical structures; that they do not have inequitable impact; and that they are implemented in an unbiased way....

Author: The Alan Turing Institute

Fairness and non-discrimination.

...AI actors should make all reasonable efforts to minimise and avoid reinforcing or perpetuating discriminatory or biased applications and outcomes throughout the lifecycle of AI systems, in order to ensure the fairness of such systems....

Author: OFFICE OF THE CHIEF OF MINISTERS UNDERSECRETARY OF INFORMATION TECHNOLOGIES

· Be Ethical

...This may include, but not limited to: making the system as fair as possible, reducing possible discrimination and biases, improving its transparency, explainability and predictability, and making the system more traceable, auditable and accountable....

Author: Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc.

Fairness Obligation

...The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair....

Author: Center for AI and Digital Policy

· Fairness and inclusion

...Employers should be required to test AI in the workplace on a regular basis to ensure that the system is built for purpose and is not harmfully influenced by bias of any kind — gender, race, sexual orientation, age, religion, income, family status and so on....

Author: Centre for International Governance Innovation (CIGI), Canada

· Transparency

...Their involvement will help to identify potential bias, errors and unintended outcomes....

Author: Centre for International Governance Innovation (CIGI), Canada

(b) Inclusiveness:

...AI should be inclusive, aiming to avoid bias and allowing for diversity and avoiding a new digital divide....

Author: The Extended Working Group on Ethics of Artificial Intelligence (AI) of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), UNESCO

· (2) Education

...Literacy education provides the following contents: 1) Data used by AI are usually contaminated by bias, 2) AI is easy to generate unwanted bias in its use, and 3) The issues of impartiality, fairness, and privacy protection which are inherent to actual use of AI....

Author: Cabinet Office, Government of Japan

· (5) Fair Competition

... By using AI, influence for wealth and society should not be overly biased on some stakeholders in the society....

Author: Cabinet Office, Government of Japan

We should adhere to the principles of fairness and non-discrimination, and avoid biases and discrimination based on ethnicities, beliefs, nationalities, genders, etc., during the process of data collection, algorithm design, technology development, and product development and application.

... We should adhere to the principles of fairness and non-discrimination, and avoid biases and discrimination based on ethnicities, beliefs, nationalities, genders, etc., during the process of data collection, algorithm design, technology development, and product development and application....

Author: Cyberspace Administration of China

· Fairness and justice

...When it is used to assess, analyze and predict the impact of countries, regions and industries on climate change, their characteristics and development stages should be considered to avoid introducing bias....

Author: International Research Center for AI Ethics and Governance, Institute of Automation, Chinese Academy of Sciences and other 10 entities

7. We maintain control.

...Additionally, we remove inappropriate data to avoid bias....

Author: Deutsche Telekom

7. We maintain control.

...We are also able to “reset” our AI systems in order to remove false or biased data....

Author: Deutsche Telekom

2. Equitable.

...DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons....

Author: Defense Innovation Board (DIB), Department of Defense (DoD), United States

III. Privacy and Data Governance

...When data is gathered, it may reflect socially constructed biases, or contain inaccuracies, errors and mistakes....

Author: European Commission

V. Diversity, non-discrimination and fairness

...Data sets used by AI systems (both for training and operation) may suffer from the inclusion of inadvertent historic bias, incompleteness and bad governance models....

Author: European Commission

V. Diversity, non-discrimination and fairness

...The continuation of such biases could lead to (in)direct discrimination....

Author: European Commission

V. Diversity, non-discrimination and fairness

...Harm can also result from the intentional exploitation of (consumer) biases or by engaging in unfair competition....

Author: European Commission

V. Diversity, non-discrimination and fairness

...the way in which the programming code of an algorithm is written) may also suffer from bias....

Author: European Commission

(d) Justice, equity, and solidarity

...Discriminatory biases in data sets used to train and run AI systems should be prevented or detected, reported and neutralised at the earliest stage possible....

Author: European Group on Ethics in Science and New Technologies, European Commission

· 1.4. Robustness, security and safety

...c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias....

Author: G20 Ministerial Meeting on Trade and Digital Economy

· 2.1. Investing in AI research and development

...b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards....

Author: G20 Ministerial Meeting on Trade and Digital Economy

Guard against creating or reinforcing bias

... Guard against creating or reinforcing bias...

Author: GE Healthcare

· 2. Avoid creating or reinforcing unfair bias.

...Avoid creating or reinforcing unfair bias....

Author: Google

· 2. Avoid creating or reinforcing unfair bias.

...AI algorithms and datasets can reflect, reinforce, or reduce unfair biases....

Author: Google

· 2. Avoid creating or reinforcing unfair bias.

...We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies....

Author: Google

· (8) Bias on Human

...· (8) Bias on Human...

Author: HAIP Initiative

· (8) Bias on Human

...AI cannot introduce bias to understand and interact with humanity, and should actively interact with human to remove generated potential bias....

Author: HAIP Initiative

· (15) Bias on Machine

...· (15) Bias on Machine...

Author: HAIP Initiative

· (15) Bias on Machine

...Without clear technical judgement, human cannot have bias on AI when human and AI shows similar risks....

Author: HAIP Initiative

· 4. The Principle of Justice: “Be Fair”

...Developers and implementers need to ensure that individuals and minority groups maintain freedom from bias, stigmatisation and discrimination....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 2. Data Governance

...The datasets gathered inevitably contain biases, and one has to be able to prune these away before engaging in training....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 2. Data Governance

...Instead, the findings of bias should be used to look forward and lead to better processes and instructions – improving our decision making and strengthening our institutions....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Those in control of algorithms may intentionally try to achieve unfair, discriminatory, or biased outcomes in order to exclude certain groups of persons....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Harm may also result from exploitation of consumer biases or unfair competition, such as homogenisation of prices by means of collusion or non-transparent market....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Discrimination in an AI context can occur unintentionally due to, for example, problems with data such as bias, incompleteness and bad governance models....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Machine learning algorithms identify patterns or regularities in data, and will therefore also follow the patterns resulting from biased and/or incomplete data sets....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...While it might be possible to remove clearly identifiable and unwanted bias when collecting data, data always carries some kind of bias....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Therefore, the upstream identification of possible bias, which later can be rectified, is important to build in to the development of AI....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Moreover, it is important to acknowledge that AI technology can be employed to identify this inherent bias, and hence to support awareness training on our own inherent bias....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

· 5. Non-Discrimination

...Accordingly, it can also assist us in making less biased decisions....

Author: The European Commission’s High-Level Expert Group on Artificial Intelligence

8. Avoid biases and unfair impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability and political or religious beliefs.

...Avoid biases and unfair impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability and political or religious beliefs....

Author: IA Latam

A Accuracy

...They need to be free from biases and systematic errors deriving, for example, from an unfair sampling of a population, or from an estimation process that does not give accurate results....

Author: Institute of Business Ethics (IBE)

4. Fairness:

...AI should be designed to minimize bias and promote inclusive representation....

Author: IBM

6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

...Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:...

Author: 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

...b. investing in research into technical ways to identify, address and mitigate biases,...

Author: 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:

...d. elaborating specific guidance and principles in addressing biases and discrimination, and promoting individuals’ and stakeholders’ awareness....

Author: 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC)

· 1.3 Robust and Representative Data

...To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems....

Author: Information Technology Industry Council (ITI)

· 1.4 Interpretability

...We are committed to partnering with others across government, private industry, academia, and civil society to find ways to mitigate bias, inequity, and other potential harms in automated decision making systems....

Author: Information Technology Industry Council (ITI)

Responsible Deployment

...AI systems should not be trained with data that is biased, inaccurate, incomplete or misleading....

Author: Internet Society, "Artificial Intelligence and Machine Learning: Policy Paper"

4. Fairness

...Members of the JSAI will acknowledge that the use of AI may bring about additional inequality and discrimination in society which did not exist before, and will not be biased when developing AI....

Author: The Japanese Society for Artificial Intelligence (JSAI)

8. Principle of fairness

...AI service providers, business users, and data providers may be expected to pay attention to the representativeness of data used for learning or other methods of AI and the social bias inherent in the data so that individuals should not be unfairly discriminated against due to their race, religion, gender, etc....

Author: Ministry of Internal Affairs and Communications (MIC), the Government of Japan

8. Principle of fairness

...In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is attention expected to be paid to the representativeness of data used for learning or other methods and the social bias inherent in the data?...

Author: Ministry of Internal Affairs and Communications (MIC), the Government of Japan

4. Fairness and diversity

...Developers of AI technology should minimize systemic biases in AI solutions that may result from deviations inherent in data and algorithms used to develop solutions....

Author: Megvii

5. Accountability and timely correction

...Errors, defects, biases, or other negative effects of AI solutions should be recognized and actively addressed as soon as they are discovered....

Author: Megvii

F. Bias Mitigation:

... F. Bias Mitigation:...

Author: The North Atlantic Treaty Organization (NATO)

F. Bias Mitigation:

...Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets....

Author: The North Atlantic Treaty Organization (NATO)

Chapter 3. The Norms of Research and Development

...Avoid bias and discrimination....

Author: National Governance Committee for the New Generation Artificial Intelligence, China

Chapter 3. The Norms of Research and Development

...During the process of data collection and algorithm development, strengthen ethics review, fully consider the diversity of demands, avoid potential data and algorithmic bias, and strive to achieve inclusivity, fairness and non-discrimination of AI systems....

Author: National Governance Committee for the New Generation Artificial Intelligence, China

Fairness

...Use of AI will include safeguards to manage data bias or data quality risks...

Author: Government of New South Wales, Australia

Fairness

...It will also rely on careful data management to ensure potential data biases are identified and appropriately managed....

Author: Government of New South Wales, Australia

Bias

... bias...

Author: New Work Summit, hosted by The New York Times

Bias

...Companies should strive to avoid bias in A.I....

Author: New Work Summit, hosted by The New York Times

· 6. A.I. must guard against bias, ensuring proper, and representative research so that the wrong heuristics cannot be used to discriminate.

...must guard against bias, ensuring proper, and representative research so that the wrong heuristics cannot be used to discriminate....

Author: Satya Nadella, CEO of Microsoft

· 1.4. Robustness, security and safety

...c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias....

Author: The Organisation for Economic Co-operation and Development (OECD)

· 2.1. Investing in AI research and development

...b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards....

Author: The Organisation for Economic Co-operation and Development (OECD)

4. Accountable and responsible

...Algorithmic systems should also be regularly peer reviewed or audited to ensure that unwanted biases have not inadvertently crept in over time....

Author: Government of Ontario, Canada

4. Accountable and responsible

...Issues around bias may not be evident when AI systems are initially designed or developed, so it's important to consider this requirement throughout the lifecycle of the system....

Author: Government of Ontario, Canada

Practice holism and do not reduce our ethical focus to components

...We routinely employ statistical analyses to search for unwarranted data, model, and outcome bias....

Author: Rebellion Defense

Ensure fairness

...We are fully determined to combat all types of reducible bias in data collection, derivation, and analysis....

Author: Rebellion Defense

Ensure fairness

...Our teams are trained to identify and challenge biases in our own decision making and in the data we use to train and test our models....

Author: Rebellion Defense

Ensure fairness

...We execute statistical tests to look for imbalance and skewed datasets and include methods to augment datasets to combat these statistical biases....

Author: Rebellion Defense

Ensure fairness

...This review includes in-sample and out-of-sample testing to mitigate the risk of model overfitting to the training data, and biased outcomes in production....

Author: Rebellion Defense

4. Impartiality:

...do not create or act according to bias, thus safeguarding fairness and human dignity;...

Author: The Pontifical Academy for Life, Microsoft, IBM, FAO, the Italian Government

3. We enable businesses beyond bias

...We enable businesses beyond bias...

Author: SAP

3. We enable businesses beyond bias

...bias can negatively impact AI software and, in turn, individuals and our customers....

Author: SAP

3. We enable businesses beyond bias

...We seek to increase the diversity and interdisciplinarity of our teams, and we are investigating new technical methods for mitigating biases....

Author: SAP

3. We enable businesses beyond bias

...We are also deeply committed to supporting our customers in building even more diverse businesses by leveraging AI to build products that help move business beyond bias....

Author: SAP

1. AI should reflect the diversity of the users it serves

...Both industry and community must develop effective mechanisms to filter bias as well as negative sentiment in the data that AI learns from – ensuring AI does not perpetuate stereotypes....

Author: Sage

1. Fairness

...The company will strive not to reinforce nor propagate negative or unfair bias....

Author: Samsung

Principle 1 – Fairness

...The fairness principle requires taking necessary actions to eliminate bias, discrimination or stigmatization of individuals, communities, or groups in the design, data, development, deployment and use of AI systems....

Author: SDAIA

Principle 1 – Fairness

...bias may occur due to data, representation or algorithms and could lead to discrimination against the historically disadvantaged groups....

Author: SDAIA

Principle 1 – Fairness

...When designing, selecting, and developing AI systems, it is essential to ensure just, fair, non-biased, non-discriminatory and objective standards that are inclusive, diverse, and representative of all or targeted segments of society....

Author: SDAIA

Principle 1 – Fairness

...To ensure consistent AI systems that are based on fairness and inclusiveness, AI systems should be trained on data that are cleansed from bias and is representative of affected minority groups....

Author: SDAIA

Principle 1 – Fairness

...AI algorithms should be built and developed in a manner that makes their composition free from bias and correlation fallacy....

Author: SDAIA

· Plan and Design:

...The fairness principle requires taking necessary actions to eliminate bias, discrimination or stigmatization of individuals, communities, or groups in the design, data, development, deployment and use of AI systems....

Author: SDAIA

· Plan and Design:

...bias may occur due to data, representation or algorithms and could lead to discrimination against the historically disadvantaged groups....

Author: SDAIA

· Plan and Design:

...When designing, selecting, and developing AI systems, it is essential to ensure just, fair, non-biased, non-discriminatory and objective standards that are inclusive, diverse, and representative of all or targeted segments of society....

Author: SDAIA

· Plan and Design:

...To ensure consistent AI systems that are based on fairness and inclusiveness, AI systems should be trained on data that are cleansed from bias and is representative of affected minority groups. AI algorithms should be built and developed in a manner that makes their composition free from bias and correlation fallacy....

Author: SDAIA

· Plan and Design:

...During this phase, it is important to implement a fairness-aware design that takes appropriate precautions across the AI system algorithm, processes, and mechanisms to prevent biases from having a discriminatory effect or lead to skewed and unwanted results or outcomes....

Author: SDAIA

· Prepare Input Data:

...2 Sensitive personal data attributes which are defined in the plan and design phase should not be included in the model data not to feed the existing bias on them....

Author: SDAIA

· Prepare Input Data:

...4 Automated decision support technologies present major risks of bias and unwanted application at the deployment phase, so it is critical to set out mechanisms to prevent harmful and discriminatory results at this phase....

Author: SDAIA

· Deploy and Monitor:

...Periodic UI and UX testing should be conducted to avoid the risk of confusion, confirmation of biases, or cognitive fatigue of the AI system....

Author: SDAIA

· Prepare Input Data:

...Furthermore, the data should be cleansed from societal biases....

Author: SDAIA

3.4 Fairness

...The real dimension includes protection against unjustified bias, discrimination and stigmatization....

Author: Republic of Serbia

· 2) Diversity and Fairness:

...We aim to initiate from the disciplined approach of system engineering, and to construct the AI system with diverse data and unbiased algorithms, thus improving the fairness of user experience....

Author: Youth Work Committee of Shanghai Computer Society

· We will make AI systems fair

...Algorithms should avoid non-operational bias...

Author: Smart Dubai

· We will make AI systems fair

...Steps should be taken to mitigate and disclose the biases inherent in datasets...

Author: Smart Dubai

· ③ Respect for Diversity

...Throughout every stage of AI development and utilization, the diversity and representativeness of the AI users should be ensured, and bias and discrimination based on personal characteristics, such as gender, age, disability, region, race, religion, and nationality, should be minimized....

Author: The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI)

· ⑦ Data Management

...Throughout the entire process of data collection and utilization, data quality and risks should be carefully managed so as to minimize data bias....

Author: The Ministry of Science and ICT (MSIT) and the Korea Information Society Development Institute (KISDI)

1. Fair AI

...We will apply technology to minimize the likelihood that the training data sets we use create or reinforce unfair bias or discrimination....

Author: Telefónica

3. Human centric AI

...We are concerned about the potential use of AI for the creation or spreading of fake news, technology addiction, and the potential reinforcement of societal bias in algorithms in general....

Author: Telefónica

8. Fair and equal

...We aspire to embed the principles of fairness and equality in datasets and algorithms applied in all phases of AI design, implementation, testing and usage – fostering fairness and diversity and avoiding unfair bias both at the input and output levels of AI....

Author: Telia Company AB

· Algorithmic fairness

... Ethics by design (EBD): ensure that algorithm is reasonable, and data is accurate, up to date, complete, relevant, unbiased and representative, and take technical measures to identify, solve and eliminate bias...

Author: Tencent Research Institute

· Algorithmic fairness

... Formulate guidelines and principles on solving bias and discrimination, potential mechanisms include algorithmic transparency, quality review, impact assessment, algorithmic audit, supervision and review, ethical board, etc....

Author: Tencent Research Institute

4. Fairness Obligation.

...Institutions must ensure that AI systems do not reflect unfair bias or make impermissible discriminatory decisions....

Author: The Public Voice coalition, established by Electronic Privacy Information Center (EPIC)

4. Fairness Obligation.

...The Fairness Obligation recognizes that all automated systems make decisions that reflect bias and discrimination, but such decisions should not be normatively unfair....

Author: The Public Voice coalition, established by Electronic Privacy Information Center (EPIC)

Fairness & equality

...unbiased, fair and inclusive AI fostering diversity and equality among people....

Author: Tieto

Fourth principle: Bias and Harm Mitigation

... Fourth principle: Bias and Harm Mitigation...

Author: The Ministry of Defence (MOD), United Kingdom

Fourth principle: Bias and Harm Mitigation

...Those responsible for AI enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed....

Author: The Ministry of Defence (MOD), United Kingdom

Fourth principle: Bias and Harm Mitigation

...Of particular concern is the risk of discriminatory outcomes resulting from algorithmic bias or skewed data sets....

Author: The Ministry of Defence (MOD), United Kingdom

Fourth principle: Bias and Harm Mitigation

...Defence must ensure that its AI enabled systems do not result in unfair bias or discrimination, in line with the MOD’s ongoing strategies for diversity and inclusion....

Author: The Ministry of Defence (MOD), United Kingdom

Fourth principle: Bias and Harm Mitigation

...A principle of bias and harm mitigation requires the assessment and, wherever possible, the mitigation of these biases or harms....

Author: The Ministry of Defence (MOD), United Kingdom

Fourth principle: Bias and Harm Mitigation

...This includes addressing bias in algorithmic decision making, carefully curating and managing datasets, setting safeguards and performance thresholds throughout the system lifecycle, managing environmental effects, and applying strict development criteria for new systems, or existing systems being applied to a new context....

Author: The Ministry of Defence (MOD), United Kingdom

Fairness and non-discrimination

...United Nations system organizations should aim to promote fairness to ensure the equal and just distribution of the benefits, risks and costs, and to prevent bias, discrimination and stigmatization of any kind, in compliance with international law....

Author: United Nations System Chief Executives Board for Coordination

· Fairness and non-discrimination

...AI actors should make all reasonable efforts to minimize and avoid reinforcing or perpetuating discriminatory or biased applications and outcomes throughout the life cycle of the AI system to ensure fairness of such systems....

Author: The United Nations Educational, Scientific and Cultural Organization (UNESCO)

· Fairness and non-discrimination

...Effective remedy should be available against discrimination and biased algorithmic determination....

Author: The United Nations Educational, Scientific and Cultural Organization (UNESCO)

5. Ensure a Genderless, Unbiased AI

...Ensure a Genderless, Unbiased AI...

Author: UNI Global Union

5. Ensure a Genderless, Unbiased AI

...In the design and maintenance of AI, it is vital that the system is controlled for negative or harmful human bias, and that any bias—be it gender, race, sexual orientation, age, etc.—is identified and is not propagated by the system....

Author: UNI Global Union

3. Prioritize fairness and non-discrimination for children

...Seek to eliminate any prejudicial bias against children, or against certain groups of children, that leads to discrimination and exclusion....

Author: United Nations Children's Fund (UNICEF) and the Ministry of

2. Equitable

...The department will take deliberate steps to minimize unintended bias in AI capabilities....

Author: Department of Defense (DoD), United States

Objective and Equitable.

...Consistent with our commitment to providing objective intelligence, we will take affirmative steps to identify and mitigate bias....

Author: Intelligence Community (IC), United States

3. Scientific Integrity and Information Quality

...Best practices include transparently articulating the strengths, weaknesses, intended optimizations or outcomes, bias mitigation, and appropriate uses of the AI application’s results....

Author: The White House Office of Science and Technology Policy (OSTP), United States

7. Fairness and Non-Discrimination

...At the same time, applications can, in some instances, introduce real world bias that produces discriminatory outcomes or decisions that undermine public trust and confidence in AI....

Author: The White House Office of Science and Technology Policy (OSTP), United States

1. Awareness

...Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society....

Author: ACM US Public Policy Council (USACM)

5. Data Provenance

...A description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data gathering process....

Author: ACM US Public Policy Council (USACM)

Be Unbiased.

... Be Unbiased....

Author: Unity Technologies

5 Ensure inclusiveness and equity

...AI technologies should not be biased....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...bias is a threat to inclusiveness and equity because it represents a departure, often arbitrary, from equal treatment....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...Unintended biases that may emerge with AI should be avoided or identified and mitigated....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...AI developers should be aware of the possible biases in their design, implementation and use and the potential harm that biases can cause to individuals and society....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...These parties also have a duty to address potential bias and avoid introducing or exacerbating health care disparities, including when testing or deploying new AI technologies in vulnerable populations....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...AI developers should ensure that AI data, and especially training data, do not include sampling bias and are therefore accurate, complete and diverse....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...The effects of use of AI technologies must be monitored and evaluated, including disproportionate effects on specific groups of people when they mirror or exacerbate existing forms of bias and discrimination....

Author: World Health Organization (WHO)

5 Ensure inclusiveness and equity

...Special provision should be made to protect the rights and welfare of vulnerable persons, with mechanisms for redress if such bias and discrimination emerges or is alleged....

Author: World Health Organization (WHO)