· Ensuring diversity and inclusiveness

19. Respect, protection and promotion of diversity and inclusiveness should be ensured throughout the life cycle of AI systems, consistent with international law, including human rights law. This may be done by promoting active participation of all individuals or groups regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds.

20. The scope of lifestyle choices, beliefs, opinions, expressions or personal experiences, including the optional use of AI systems and the co-design of these architectures, should not be restricted during any phase of the life cycle of AI systems.

21. Furthermore, efforts, including international cooperation, should be made to overcome, and never take advantage of, the lack of necessary technological infrastructure, education and skills, as well as legal frameworks, particularly in LMICs, LDCs, LLDCs and SIDS, affecting communities.
Principle: The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO)

Related Principles

Fairness

Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups. This principle aims to ensure that AI systems are fair and that they enable inclusion throughout their entire lifecycle. AI systems should be user-centric and designed in a way that allows all people interacting with them to access the related products or services. This includes both appropriate consultation with stakeholders, who may be affected by the AI system throughout its lifecycle, and ensuring people receive equitable access and treatment. This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups including, but not limited to, groups relating to age, disability, race, sex, intersex status, gender identity and sexual orientation. Measures should be taken to ensure the AI-produced decisions are compliant with anti-discrimination laws.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· 1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

1.1. Human-centered and humanistic approach. In the development of AI technologies, the rights and freedoms of the individual should be given the greatest value. AI technologies developed by AI Actors should promote or not hinder the realization of humans' capabilities to achieve harmony in social, economic and spiritual spheres, as well as in the highest self-fulfillment of human beings. They should take into account key values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples and ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors, which are listed in Section 2 of this Code.

1.2. Respect for human autonomy and freedom of will. AI Actors should take all necessary measures to preserve the autonomy and free will of a human's decision-making ability, the right to choose and, in general, the intellectual abilities of a human as an intrinsic value and a system-forming factor of modern civilization. AI Actors should, during AIS creation, assess the possible negative consequences for the development of human cognitive abilities and prevent the development of AIS that purposefully cause such consequences.

1.3. Compliance with the law. AI Actors must know and comply with the provisions of the legislation of the Russian Federation in all areas of their activities and at all stages of the creation, development and use of AI technologies, including in matters of the legal responsibility of AI Actors.

1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not intentionally discriminate. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life. (At the same time, rules that an AI Actor explicitly declares for the functioning or application of an AIS for different groups of users, with such factors taken into account for segmentation, cannot be considered discrimination.)

1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to assess the potential risks of using an AIS, including the social consequences for individuals, society and the state, as well as the humanitarian impact of the AIS on human rights and freedoms at different stages, including during the formation and use of datasets. AI Actors should also carry out long-term monitoring of the manifestations of such risks and take into account the complexity of the behavior of AIS during risk assessment, including the relationship and interdependence of processes in the AIS's life cycle. For critical applications of the AIS, in special cases, it is encouraged that a risk assessment be conducted through the involvement of a neutral third party or authorized official body, when doing so would not harm the performance and information security of the AIS and would ensure the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 1. THE KEY PRIORITY OF AI TECHNOLOGIES DEVELOPMENT IS PROTECTION OF THE INTERESTS AND RIGHTS OF HUMAN BEINGS AT LARGE AND EVERY PERSON IN PARTICULAR

1.1. Human-centered and humanistic approach. Human rights and freedoms and the human as such must be treated as the greatest value in the process of AI technologies development. AI technologies developed by AI Actors should promote or not hinder the full realization of all human capabilities to achieve harmony in social, economic and spiritual spheres, as well as the highest self-fulfillment of human beings. AI Actors should regard core values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples, ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors listed in Section 2 of this Code.

1.2. Recognition of human autonomy and free will. AI Actors should take necessary measures to preserve the autonomy and free will of humans in the process of decision making, their right to choose, as well as preserve human intellectual abilities in general as an intrinsic value and a system-forming factor of modern civilization. AI Actors should forecast possible negative consequences for the development of human cognitive abilities at the earliest stages of AI systems creation and refrain from the development of AI systems that purposefully cause such consequences.

1.3. Compliance with the law. AI Actors must know and comply with the provisions of the national legislation in all areas of their activities and at all stages of creation, integration and use of AI technologies, inter alia in the sphere of legal responsibility of AI Actors.

1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data that concern individuals or groups do not entail intentional discrimination. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent manifestations of discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life (at the same time, the rules of functioning or application of AI systems for different groups of users wherein such factors are taken into account for user segmentation, which are explicitly declared by an AI Actor, cannot be defined as discrimination).

1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to:
• assess the potential risks of the use of an AI system, including social consequences for individuals, society and the state, as well as the humanitarian impact of an AI system on human rights and freedoms at different stages of its life cycle, inter alia during the formation and use of datasets;
• monitor the manifestations of such risks in the long term;
• take into account the complexity of AI systems' actions, including the interconnection and interdependence of processes in the AI systems' life cycle, during risk assessment.
In special cases concerning critical applications of an AI system, it is encouraged that the risk assessment be conducted with the involvement of a neutral third party or authorized official body, provided that this does not harm the performance and information security of the AI system and ensures the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· Respect, protection and promotion of human rights and fundamental freedoms and human dignity

13. The inviolable and inherent dignity of every human constitutes the foundation for the universal, indivisible, inalienable, interdependent and interrelated system of human rights and fundamental freedoms. Therefore, respect, protection and promotion of human dignity and rights as established by international law, including international human rights law, is essential throughout the life cycle of AI systems. Human dignity relates to the recognition of the intrinsic and equal worth of each individual human being, regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds.

14. No human being or human community should be harmed or subordinated, whether physically, economically, socially, politically, culturally or mentally during any phase of the life cycle of AI systems. Throughout the life cycle of AI systems, the quality of life of human beings should be enhanced, while the definition of "quality of life" should be left open to individuals or groups, as long as there is no violation or abuse of human rights and fundamental freedoms, or the dignity of humans in terms of this definition.

15. Persons may interact with AI systems throughout their life cycle and receive assistance from them, such as care for vulnerable people or people in vulnerable situations, including but not limited to children, older persons, persons with disabilities or the ill. Within such interactions, persons should never be objectified, nor should their dignity be otherwise undermined, or human rights and fundamental freedoms violated or abused.

16. Human rights and fundamental freedoms must be respected, protected and promoted throughout the life cycle of AI systems. Governments, the private sector, civil society, international organizations, technical communities and academia must respect human rights instruments and frameworks in their interventions in the processes surrounding the life cycle of AI systems. New technologies need to provide new means to advocate, defend and exercise human rights, not to infringe them.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021

· Fairness and non-discrimination

28. AI actors should promote social justice and safeguard fairness and non-discrimination of any kind in compliance with international law. This implies an inclusive approach to ensuring that the benefits of AI technologies are available and accessible to all, taking into consideration the specific needs of different age groups, cultural systems, different language groups, persons with disabilities, girls and women, and disadvantaged, marginalized and vulnerable people or people in vulnerable situations. Member States should work to promote inclusive access for all, including local communities, to AI systems with locally relevant content and services, and with respect for multilingualism and cultural diversity. Member States should work to tackle digital divides and ensure inclusive access to and participation in the development of AI. At the national level, Member States should promote equity between rural and urban areas, and among all persons regardless of race, colour, descent, gender, age, language, religion, political opinion, national origin, ethnic origin, social origin, economic or social condition of birth, or disability and any other grounds, in terms of access to and participation in the AI system life cycle. At the international level, the most technologically advanced countries have a responsibility of solidarity with the least advanced to ensure that the benefits of AI technologies are shared such that access to and participation in the AI system life cycle for the latter contributes to a fairer world order with regard to information, communication, culture, education, research and socio-economic and political stability.

29. AI actors should make all reasonable efforts to minimize and avoid reinforcing or perpetuating discriminatory or biased applications and outcomes throughout the life cycle of the AI system to ensure the fairness of such systems. Effective remedy should be available against discrimination and biased algorithmic determination.

30. Furthermore, digital and knowledge divides within and between countries need to be addressed throughout an AI system life cycle, including in terms of access and quality of access to technology and data, in accordance with relevant national, regional and international legal frameworks, as well as in terms of connectivity, knowledge and skills and meaningful participation of the affected communities, such that every person is treated equitably.

Published by The United Nations Educational, Scientific and Cultural Organization (UNESCO) in The Recommendation on the Ethics of Artificial Intelligence, Nov 24, 2021