The principle "Artificial Intelligence Code of Ethics" mentions the topic of "safety" in the following places:

    · 1. THE MAIN PRIORITY OF THE DEVELOPMENT OF AI TECHNOLOGIES IS PROTECTING THE INTERESTS AND RIGHTS OF HUMAN BEINGS COLLECTIVELY AND AS INDIVIDUALS

    To ensure fairness and non-discrimination, AI Actors should take measures to verify that the algorithms, datasets and processing methods for machine learning that are used to group and/or classify data concerning individuals or groups do not intentionally discriminate.

    · 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

    In the field of AI development, making decisions that are significant to society and the state should be accompanied by scientifically verified and interdisciplinary forecasting of socio-economic consequences and risks, as well as by the examination of possible changes in the value and cultural paradigm of the development of society, while taking into account national priorities.

    · 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

    These include privacy; the ethical, safe and responsible use of personal data; the nature, degree and amount of damage that may follow as a result of the use of the technology and AIS; and the selection and use of companion hardware and software.

    · 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

    during any stage, including design, development, testing, implementation or operation of AIS.

    · 2. NEED FOR CONSCIOUS RESPONSIBILITY WHEN CREATING AND USING AI

    AI Actors are encouraged to collaborate in the identification and verification of methods and forms of creating universal ("strong") AIS and the prevention of the possible threats that AIS carry.

    · 4. AI TECHNOLOGIES SHOULD BE APPLIED AND IMPLEMENTED WHERE IT WILL BENEFIT PEOPLE

    AI Actors should encourage and incentivize the design, implementation, and development of safe and ethical AI technologies, taking into account national priorities.

    · 5. INTERESTS OF DEVELOPING AI TECHNOLOGIES ABOVE THE INTERESTS OF COMPETITION

    AI Actors are encouraged to follow practices adopted by the professional community, to maintain the proper level of professional competence necessary for safe and effective work with AIS and to promote the improvement of the professional competence of workers in the field of AI, including within the framework of programs and educational disciplines on AI ethics.

    · 1. Foundation of the Code's action

    Such persons include, but are not limited to, the following: developers who create, train, or test AI models/systems and develop or implement such models/systems, software and/or hardware systems and take responsibility for their design; customers (individuals or organizations) receiving a product or a service; data providers and persons involved in the formation of datasets for their use in AIS; experts who measure and/or evaluate the parameters of the developed models/systems; manufacturers engaged in the production of AIS; AIS operators who legally own the relevant systems, use them for their intended purpose and directly implement the solution to the problems that arise from using AIS; operators (individuals or organizations) carrying out the work of the AIS; persons with a regulatory impact in the field of AI, including the developers of regulatory and technical documents, manuals, various regulations, requirements, and standards in the field of AI; and other persons whose actions can affect the results of the actions of an AIS or persons who make decisions on the use of AIS.

    · 2. MECHANISM OF ACCESSION AND IMPLEMENTATION OF THE CODE

    For the timely exchange of best practices, the useful and safe application of AIS built on the basic principles of this Code, increasing the transparency of developers' activities, and maintaining healthy competition in the AIS market, AI Actors may create a set of best and/or worst practices for solving emerging ethical issues in the AI life cycle, selected according to the criteria established by the professional community.