b) Security

Security: the impermissibility of using artificial intelligence for the purpose of intentionally inflicting harm on individuals and legal entities, as well as the prevention and minimization of the risks of negative consequences from the use of artificial intelligence technologies;
Principle: Basic Principles of the Development and Use of Artificial Intelligence Technologies, Oct 10, 2019

Published by the Office of the President of the Russian Federation in the Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation

Related Principles

5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

There is a significant risk that well-intentioned AI research will be misused in ways which harm people. AI researchers and developers must consider the ethical implications of their work. The Cabinet Office's final Cyber Security & Technology Strategy must explicitly consider the risks of AI with respect to cyber security, and the Government should conduct further research into how to protect data sets from any attempts at data sabotage. The Government and Ofcom must commission research into the possible impact of AI on conventional and social media outlets, and investigate, as a matter of urgency, measures which might counteract the use of AI to mislead or distort public opinion.

Published by House of Lords of United Kingdom, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018
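
The Committee's call to protect data sets "from any attempts at data sabotage" is commonly operationalised, at a minimum, by fingerprinting training data so that later tampering becomes detectable. A minimal sketch in Python, assuming the dataset lives under a directory such as training_data/ and that a trusted manifest is recorded before any attack occurs; the file names are illustrative, not drawn from the report:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: Path, manifest: Path) -> None:
    """Record a trusted fingerprint for every file in the dataset."""
    hashes = {str(p): sha256_of(p)
              for p in sorted(data_dir.rglob("*")) if p.is_file()}
    manifest.write_text(json.dumps(hashes, indent=2))

def verify_manifest(manifest: Path) -> list[str]:
    """Return the files whose contents no longer match the trusted manifest."""
    expected = json.loads(manifest.read_text())
    return [name for name, digest in expected.items()
            if not Path(name).is_file() or sha256_of(Path(name)) != digest]

if __name__ == "__main__":
    # e.g. build_manifest(Path("training_data"), Path("manifest.json")) once,
    # then verify before every training run:
    tampered = verify_manifest(Path("manifest.json"))
    if tampered:
        raise SystemExit(f"Possible data sabotage detected in: {tampered}")
```

A checksum manifest only detects sabotage after the fact; it does not identify the attacker or prevent poisoning that happens before the manifest is built, which is why the report also calls for further research.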

5. Principle of security

Developers should pay attention to the security of AI systems.
[Comment] In addition to respecting international guidelines on security such as the “OECD Guidelines for the Security of Information Systems and Networks,” developers are encouraged to pay attention to the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning or other methods:
● To pay attention, as necessary, to the reliability (that is, whether the operations are performed as intended and not steered by unauthorized third parties) and robustness (that is, tolerance to physical attacks and accidents) of AI systems, in addition to (a) the confidentiality, (b) the integrity and (c) the availability of information, which are usually required for ensuring the information security of AI systems.
● To make efforts to conduct verification and validation in advance in order to assess and control the risks related to the security of AI systems.
● To make efforts to take measures to maintain security to the extent possible, in light of the characteristics of the technologies to be adopted, throughout the process of the development of AI systems (“security by design”).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017
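
The comment's call to "conduct verification and validation in advance" can be made concrete as an automated release gate that checks both reliability (operations performed as intended) and a crude proxy for robustness (tolerance to perturbed inputs). A minimal sketch, assuming a classifier with a scikit-learn-style predict method; the accuracy thresholds and the Gaussian noise model are illustrative assumptions, not part of the MIC principles:

```python
import numpy as np

def validate_before_release(model, x_val, y_val,
                            min_accuracy=0.95, noise_scale=0.05):
    """Pre-deployment gate: check reliability on clean validation data and
    a crude proxy for robustness (accuracy under Gaussian input noise)."""
    clean_acc = float(np.mean(model.predict(x_val) == y_val))
    rng = np.random.default_rng(0)  # fixed seed keeps the check reproducible
    noisy = x_val + rng.normal(0.0, noise_scale, x_val.shape)
    noisy_acc = float(np.mean(model.predict(noisy) == y_val))
    return {
        "clean_accuracy": clean_acc,
        "noisy_accuracy": noisy_acc,
        # Release only if the model is accurate and degrades gracefully.
        "release_ok": clean_acc >= min_accuracy
                      and noisy_acc >= min_accuracy - 0.05,
    }
```

A real verification and validation process would add adversarial and out-of-distribution tests, but even a small gate like this makes "security by design" checkable rather than aspirational.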

· 1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

1.1. Human-centered and humanistic approach. In the development of AI technologies, the rights and freedoms of the individual should be given the greatest value. AI technologies developed by AI Actors should promote or not hinder the realization of humans’ capabilities to achieve harmony in social, economic and spiritual spheres, as well as the highest self-fulfillment of human beings. They should take into account key values such as the preservation and development of human cognitive abilities and creative potential; the preservation of moral, spiritual and cultural values; the promotion of cultural and linguistic diversity and identity; and the preservation of traditions and the foundations of nations, peoples, and ethnic and social groups. A human-centered and humanistic approach is the basic ethical principle and central criterion for assessing the ethical behavior of AI Actors, which are listed in Section 2 of this Code.

1.2. Respect for human autonomy and freedom of will. AI Actors should take all necessary measures to preserve the autonomy and free will of a human’s decision-making ability, the right to choose, and, in general, the intellectual abilities of a human as an intrinsic value and a system-forming factor of modern civilization. During AIS creation, AI Actors should assess the possible negative consequences for the development of human cognitive abilities and prevent the development of AIS that purposefully cause such consequences.

1.3. Compliance with the law. AI Actors must know and comply with the provisions of the legislation of the Russian Federation in all areas of their activities and at all stages of the creation, development and use of AI technologies, including in matters of the legal responsibility of AI Actors.

1.4. Non-discrimination. To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not entail intentional discrimination. AI Actors are encouraged to create and apply methods and software solutions that identify and prevent discrimination based on race, nationality, gender, political views, religious beliefs, age, social and economic status, or information about private life. (At the same time, rules that are explicitly declared by an AI Actor for the functioning or application of an AIS to different groups of users, with such factors taken into account for segmentation, cannot be considered discrimination.)

1.5. Assessment of risks and humanitarian impact. AI Actors are encouraged to assess the potential risks of using an AIS, including the social consequences for individuals, society and the state, as well as the humanitarian impact of the AIS on human rights and freedoms at different stages, including during the formation and use of datasets. AI Actors should also carry out long-term monitoring of the manifestations of such risks and take into account the complexity of AIS behavior during risk assessment, including the relationship and interdependence of processes in the AIS’s life cycle. For critical applications of the AIS, in special cases, it is encouraged that a risk assessment be conducted with the involvement of a neutral third party or authorized official body, when doing so would not harm the performance and information security of the AIS and would ensure the protection of the intellectual property and trade secrets of the developer.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021
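
Paragraph 1.4 encourages "methods and software solutions that identify and prevent discrimination" without prescribing any particular metric. One common starting point is a demographic parity check: compare positive-outcome rates across the groups an AIS affects. A minimal sketch in Python; the column names and the 0.05 threshold are hypothetical, not drawn from the Code:

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str,
                           outcome_col: str) -> float:
    """Largest difference in positive-outcome rates between any two groups.
    A gap near 0 is one coarse signal that a pipeline does not treat groups
    differently; it is evidence, not proof, of non-discrimination."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Illustrative usage with hypothetical column names:
# decisions = pd.DataFrame({"region": [...], "approved": [...]})
# if demographic_parity_gap(decisions, "region", "approved") > 0.05:
#     ...  # threshold is a policy choice; flag the AIS for review
```

Demographic parity is only one of several competing fairness criteria, so a check like this would complement, not replace, the long-term risk monitoring described in paragraph 1.5.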

· 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

2.1. Risk-based approach. The level of attention to ethical issues in AI and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk posed by specific technologies and AISs to the interests of individuals and society. Risk level assessment must take into account both known and possible risks; the level of probability of threats should be taken into account, as well as their possible scale in the short and long term. In the field of AI development, decisions that are significant to society and the state should be accompanied by scientifically verified and interdisciplinary forecasting of socio-economic consequences and risks, as well as by the examination of possible changes in the value and cultural paradigm of the development of society, while taking into account national priorities. In pursuance of this Code, the development and use of an AIS risk assessment methodology is recommended.

2.2. Responsible attitude. AI Actors should take a responsible approach to the aspects of AIS that influence society and citizens at every stage of the AIS life cycle. These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may follow from the use of the technology and AIS; and the selection and use of companion hardware and software. The responsibility of AI Actors must correspond to the nature, degree and amount of damage that may occur as a result of the use of technologies and AIS, taking into account the role of the AI Actor in the life cycle of the AIS, as well as the degree of possible and real impact of a particular AI Actor on causing damage, and its size.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, the occurrence of which the corresponding AI Actor can reasonably foresee, measures should be taken to prevent or limit the occurrence of such consequences. To assess the moral acceptability of consequences and the possible measures to prevent them, AI Actors can use the provisions of this Code, including the mechanisms specified in Section 2.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life, the environment and/or the health or property of citizens and legal entities. Any application of an AIS capable of purposefully causing harm to the environment, human life or health, or the property of citizens and legal entities, during any stage, including design, development, testing, implementation or operation, is unacceptable.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are informed of their interactions with an AIS when it affects their rights and critical areas of their lives, and to ensure that such interactions can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the legislation of the Russian Federation in the field of personal data and secrets protected by law when using an AIS. Furthermore, they must ensure the security and protection of personal data processed by an AIS or by AI Actors in order to develop and improve the AIS, by developing and implementing innovative methods of controlling unauthorized access by third parties to personal data and by using high-quality and representative datasets obtained from reliable sources without breaking the law.

2.7. Information security. AI Actors should provide the maximum possible protection against unauthorized interference in the work of the AIS by third parties by introducing adequate information security technologies, including the use of internal mechanisms for protecting the AIS from unauthorized interventions and informing users and developers about such interventions. They must also inform users about the rules regarding information security when using the AIS.

2.8. Voluntary certification and Code compliance. AI Actors can implement voluntary certification for the compliance of developed AI technologies with the standards established by the legislation of the Russian Federation and this Code. AI Actors can create voluntary certification and AIS labeling systems that indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AISs. AI Actors are encouraged to collaborate in the identification and verification of methods and forms of creating universal ("strong") AIS and in the prevention of the possible threats that such AIS carry. The use of "strong" AI technologies should be under the control of the state.

Published by AI Alliance Russia in Artificial Intelligence Code of Ethics, Oct 26, 2021
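
Paragraph 2.7 asks for "internal mechanisms for protecting the AIS from unauthorized interventions and informing users and developers about such interventions". One conventional building block, not mandated by the Code, is a keyed integrity check on deployed artifacts so that tampering is detected at start-up. A minimal sketch using Python's standard library; SECRET_KEY, model.bin and alert_users_and_developers are hypothetical placeholders:

```python
import hashlib
import hmac
from pathlib import Path

def sign_artifact(path: Path, key: bytes) -> str:
    """Compute a keyed fingerprint of a deployed model artifact."""
    return hmac.new(key, path.read_bytes(), hashlib.sha256).hexdigest()

def check_artifact(path: Path, key: bytes, expected_sig: str) -> bool:
    """Detect unauthorized modification; compare_digest avoids timing leaks."""
    return hmac.compare_digest(sign_artifact(path, key), expected_sig)

# At deployment time (key management is out of scope for this sketch):
#   signature = sign_artifact(Path("model.bin"), SECRET_KEY)
# On every start-up, refuse to serve and notify people on a mismatch:
#   if not check_artifact(Path("model.bin"), SECRET_KEY, signature):
#       alert_users_and_developers()  # hypothetical notification hook
```

Unlike a plain hash, an HMAC cannot be recomputed by an attacker who alters the artifact but lacks the key, which is what makes it usable as evidence of the "unauthorized interventions" the paragraph says users and developers must be told about.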

· 2. RESPONSIBILITY MUST BE FULLY ACKNOWLEDGED WHEN CREATING AND USING AI

2.1. Risk-based approach. The degree of attention paid to ethical AI issues and the nature of the relevant actions of AI Actors should be proportional to the assessed level of risk posed by specific AI technologies and systems to the interests of individuals and society. Risk level assessment shall take into account both known and possible risks; the probability level of threats, as well as their possible scale in the short and long term, shall be considered. Decisions in the field of AI use that significantly affect society and the state should be accompanied by a scientifically verified, interdisciplinary forecast of socio-economic consequences and risks, and by an examination of possible changes in the paradigm of the value and cultural development of society. The development and use of an AI systems risk assessment methodology are encouraged in pursuance of this Code.

2.2. Responsible attitude. AI Actors should treat responsibly:
• issues related to the influence of AI systems on society and citizens at every stage of the AI systems’ life cycle, inter alia on privacy and on the ethical, safe and responsible use of personal data;
• the nature, degree and extent of damage that may result from the use of AI technologies and systems;
• the selection and use of hardware and software utilized at different stages of the AI systems’ life cycle.
At the same time, the responsibility of AI Actors should correspond with the nature, degree and extent of damage that may occur as a result of the use of AI technologies and systems. The role in the life cycle of the AI system, as well as the degree of possible and real influence of a particular AI Actor on causing damage, and its extent, should also be taken into account.

2.3. Precautions. When the activities of AI Actors can lead to morally unacceptable consequences for individuals and society, which can be reasonably predicted by the relevant AI Actor, the latter should take measures to prohibit or limit the occurrence of such consequences. AI Actors shall use the provisions of this Code, including the mechanisms specified in Section 2, to assess the moral unacceptability of such consequences and to discuss possible preventive measures.

2.4. No harm. AI Actors should not allow the use of AI technologies for the purpose of causing harm to human life and/or health, the property of citizens and legal entities, or the environment. Any use, including the design, development, testing, integration or operation, of an AI system capable of purposefully causing harm to the environment, human life and/or health, or the property of citizens and legal entities, is prohibited.

2.5. Identification of AI in communication with a human. AI Actors are encouraged to ensure that users are duly informed of their interactions with AI systems when this affects human rights and critical areas of people’s lives, and to ensure that such interaction can be terminated at the request of the user.

2.6. Data security. AI Actors must comply with the national legislation in the field of personal data and secrets protected by law when using AI systems; ensure the security and protection of personal data processed by AI systems or by AI Actors in order to develop and improve the AI systems; develop and integrate innovative methods to counter unauthorized access to personal data by third parties; and use high-quality and representative datasets obtained from reliable sources without breaking the law.

2.7. Information security. AI Actors should ensure the maximum possible protection against unauthorized interference of third parties in the operation of AI systems; integrate adequate information security technologies, inter alia using internal mechanisms designed to protect the AI system from unauthorized interventions and informing users and developers about such interventions; and promote the informing of users about the rules of information security during the use of AI systems.

2.8. Voluntary certification and Code compliance. AI Actors may implement voluntary certification systems to assess the compliance of developed AI technologies with the standards established by the national legislation and this Code. AI Actors may create voluntary certification and labeling systems for AI systems to indicate that these systems have passed voluntary certification procedures and confirm quality standards.

2.9. Control of the recursive self-improvement of AI systems. AI Actors are encouraged to cooperate in identifying and verifying information about ways and forms of designing so-called universal ("general") AI systems, and in the prevention of the possible threats they carry. Issues concerning the use of "general" AI technologies should be under the control of the state.

Published by AI Alliance Russia in AI Ethics Code (revised version), Oct 21, 2022 (unconfirmed)