· 6. MAXIMUM TRANSPARENCY AND RELIABILITY OF INFORMATION CONCERNING THE LEVEL OF AI TECHNOLOGIES DEVELOPMENT, THEIR CAPABILITIES AND RISKS ARE CRUCIAL

6.1. Reliability of information about AI systems. AI Actors are encouraged to provide users of AI systems with reliable information about the AI systems, the acceptable and most effective methods of their use, and the harms, benefits, and existing limitations of their use.

6.2. Awareness raising in the field of ethical AI application. AI Actors are encouraged to carry out activities aimed at increasing the level of trust and awareness among citizens who use AI systems, and among society at large, regarding the technologies being developed, the specifics of the ethical use of AI systems and other issues related to AI systems development, by all available means, i.a. by working on scientific and journalistic publications, organizing scientific and public conferences or seminars, and adding provisions on ethical behavior to the rules of AI systems operation for users and (or) operators.
Principle: AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

Published by AI Alliance Russia

Related Principles

· 5. INTERESTS OF DEVELOPING AI TECHNOLOGIES ABOVE THE INTERESTS OF COMPETITION

5.1. Correctness of AIS comparisons. To maintain fair competition and effective cooperation among developers, AI Actors should use the most reliable and comparable information about the capabilities of AISs in relation to a task and ensure the uniformity of measurement methodologies.

5.2. Development of competencies. AI Actors are encouraged to follow practices adopted by the professional community, to maintain the level of professional competence necessary for safe and effective work with AIS, and to promote the professional competence of workers in the field of AI, including within the framework of programs and educational disciplines on AI ethics.

5.3. Collaboration of developers. AI Actors are encouraged to develop cooperation within the AI Actor community, particularly among developers, including by informing each other about identified critical vulnerabilities in order to prevent their wide distribution. They should also make efforts to improve the quality and availability of resources in the field of AIS development, including by increasing the availability of data (including labeled data), ensuring the compatibility of developed AISs where applicable, and creating conditions for the formation of a national school of AI technology development that includes publicly available national repositories of libraries and network models, available national development tools, open national frameworks, etc. They are also encouraged to share information on best practices in the development of AI technologies and to organize and hold conferences, hackathons and public competitions, as well as high school and student Olympiads. They should increase the availability of knowledge and encourage the use of open knowledge databases, creating conditions for attracting investment in the development of AI technologies from Russian private investors, business angels, venture funds and private equity funds, while stimulating scientific and educational activities in the field of AI through participation in the projects and activities of leading Russian research centers and educational organizations.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 6. IMPORTANCE OF MAXIMUM TRANSPARENCY AND TRUTHFULNESS IN INFORMATION ON THE LEVEL OF DEVELOPMENT, CAPABILITIES AND RISKS OF AI TECHNOLOGIES

6.1. Credibility of information about AIS. AI Actors are encouraged to provide AIS users with credible information about the AIS, the acceptable and most effective methods of using it, and the harm, benefits, and existing limitations of its use.

6.2. Raising awareness of the ethics of AI application. AI Actors are encouraged to carry out activities aimed at increasing the level of trust and awareness among citizens who use AISs and in society in general. This should include raising awareness of the technologies being developed, the features of the ethical use of AISs, and other provisions accompanying AIS development. Such promotion could include the development of journal articles, the organization of scientific and public conferences and seminars, and the inclusion of rules of ethical behavior for users and operators in the rules of the AIS.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk that specific AI technologies and systems pose to the interests of individuals and society. Risk level assessment shall take into account both known and possible risks, considering the probability of threats as well as their possible scale in the short and long term. Decisions on the use of AI that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by an examination of possible changes in the paradigm of value and cultural development of society. The development and use of an AI systems risk assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly: • issues related to the influence of AI systems on society and citizens at every stage of the AI systems' life cycle, i.a. on privacy and on the ethical, safe and responsible use of personal data; • the nature, degree and extent of damage that may result from the use of AI technologies and systems; • the selection and use of hardware and software utilized at different stages of AI systems' life cycles. At the same time, the responsibility of AI Actors should correspond to the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role of a particular AI Actor in the life cycle of the AI system, as well as the degree of its possible and actual influence on causing damage and the extent of that damage, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society that can be reasonably predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration or operation of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when such interaction affects human rights and critical areas of people's lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with national legislation on personal data and legally protected secrets when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors for the purpose of developing and improving AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained lawfully from reliable sources.

2.7. Information security. AI Actors should ensure the maximum possible protection against unauthorized interference by third parties in the operation of AI systems; integrate adequate information security technologies, i.a. by using internal mechanisms designed to protect the AI system from unauthorized interventions and by informing users and developers about such interventions; and promote informing users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and meet quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about the ways and forms of design of so-called universal ("general") AI systems and in preventing the possible threats they carry. The issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· 5. INTERESTS OF AI TECHNOLOGIES DEVELOPMENT OUTWEIGH THE INTERESTS OF COMPETITION

5.1. Accuracy of AI systems comparisons. In order to maintain fair competition and effective cooperation among developers, AI Actors are encouraged to use the most reliable and comparable information about the capabilities of AI systems with regard to a given task and to ensure uniform measurement methodologies when comparing AI systems with one another.

5.2. Development of competencies. AI Actors are encouraged to follow practices adopted in the professional community, maintain the level of professional competence required for safe and effective work with AI systems, and promote the professional competence of experts in the field of AI, i.a. within programs and educational disciplines on AI ethics.

5.3. Cooperation of developers. AI Actors are encouraged to cooperate within their community, and among developers in particular, i.a. by informing each other about identified critical vulnerabilities in order to prevent them from spreading, and to make efforts to improve the quality and availability of resources in the field of AI systems development, i.a. by: • increasing the availability of data (including marked-up data); • ensuring the compatibility of developed AI systems where applicable; • creating conditions for the formation of an international school of AI technology development, including publicly available repositories of libraries and network models, available development tools, open international frameworks, etc.; • sharing information about best practices in AI technology development; • organizing and hosting conferences, hackathons and public competitions, as well as high school and student Olympiads, or participating in them; • increasing the availability of knowledge and encouraging the use of open knowledge databases; • creating conditions for attracting investment in AI technology development from private investors, business angels, venture funds and private equity funds; • stimulating scientific, educational and awareness-raising activities in the field of AI by participating in the projects and activities of leading research centers and educational organizations.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)

· 1. THE BASICS OF THE CODE

1.1. Legal basis of the Code. The Code duly regards the national legislation of the AI Actors and international treaties.

1.2. Terminology. Terms and definitions in this Code are determined in accordance with applicable international regulatory legal acts and technical regulations in the field of AI.

1.3. AI Actors. For the purposes of this Code, AI Actors are defined as persons and entities involved in the life cycle of AI systems, including those involved in the provision of goods and services. These include, but are not limited to, the following: • developers who create, train or test AI models/systems, develop or implement such models/systems, software and/or hardware systems, and take responsibility for their design; • customers (individuals or organizations) who receive a product or a service; • data providers and persons/entities engaged in the formation of datasets for further use in AI systems; • experts who measure and/or assess the parameters of the developed models/systems; • manufacturers engaged in the production of AI systems; • AI systems operating entities who legally own the relevant systems, use them for their intended purpose and directly solve practical tasks using AI systems; • operators (individuals or organizations) who ensure the functioning of AI systems; • persons/entities with a regulatory impact in the field of AI, including those who work on regulatory and technical documents, manuals, various regulations, requirements and standards in the field of AI; • other persons/entities whose actions can affect the results of the actions of AI systems or who make decisions using AI systems.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)