3.2 Dignity

It is the duty of all members of society to mutually respect and protect this right as one of the basic and inviolable rights of every human being. Every individual has the right to protect their own dignity; violation or disregard of this right is sanctioned by law. Human dignity (hereinafter: dignity) should be understood as a starting principle focused on the preservation of human integrity. On that premise, the persons to whom these Guidelines refer should at all times, regardless of the stage of the concrete artificial intelligence solution (development, application or use), keep the person and his integrity in mind as a central concept. In this regard, it is necessary to develop systems that, at every stage, make it imperative to respect the person's personality, freedom and autonomy. Respecting human personality means creating a system that will respect the cognitive, social and cultural characteristics of each individual. The artificial intelligence systems being developed must be in accordance with the above; it is therefore necessary to ensure that they cannot in any way lead to the subordination of man to the functions of the system, or endanger his dignity and integrity. In order to ensure respect for the principle of dignity, artificial intelligence systems must not be such that, in the processes of their work and application, they grossly ignore the autonomy of human choice.

The Constitution of the Republic of Serbia emphasizes that dignity "is inviolable and everyone is obliged to respect and protect it. Everyone has the right to free personal development, if it does not violate the rights of others guaranteed by the Constitution." The Convention on Human Rights states the following: "Human dignity (dignity) is not only a basic human right but also the foundation of human rights." Human dignity is inherent in every human being. In the Republic of Serbia, this term is regulated in the following ways: "The dignity of the person (honor, reputation, or piety) of the person to whom the information refers is legally protected." "Whoever abuses another or treats him in a way that offends human dignity shall be punished by imprisonment for up to one year." "Work in the public interest is any socially useful work that does not offend human dignity and is not done for the purpose of making a profit."

This principle emphasizes that the integrity and dignity of all who may be affected by the artificial intelligence system must be taken care of at all times. As it is a general concept, to which life, in addition to the law, gives different facets although the essence is the same, it is appropriate to attach to the concept itself: honor, reputation, that is, piety.
Principle: ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February, 2023

Published by Republic of Serbia

Related Principles

Proportionality and harmlessness.

It should be recognised that AI technologies do not necessarily, in and of themselves, guarantee the prosperity of humans or the environment and ecosystems. In the event that any harm to humans may occur, risk assessment procedures should be applied and measures taken to prevent such harm from occurring. In other words, for a human person to be legally responsible for the decisions he or she makes to carry out one or more actions, there must be discernment (full human mental faculties), intention (human drive or desire) and freedom (to act in a calculated and premeditated manner). Therefore, to avoid falling into anthropomorphisms that could hinder eventual regulations and/or wrong attributions, it is important to establish the conception of artificial intelligences as artifices, that is, as technology, a thing, an artificial means to achieve human objectives but which should not be confused with a human person. That is, the algorithm can execute, but the decision must necessarily fall on the person and therefore, so must the responsibility. Consequently, it emerges that an algorithm does not possess self-determination and/or agency to make decisions freely (although in colloquial language the concept of "decision" is often used to describe a classification executed by an algorithm after training), and therefore it cannot be held responsible for the actions that are executed through the algorithm in question.

Published by OFFICE OF THE CHIEF OF MINISTERS UNDERSECRETARY OF INFORMATION TECHNOLOGIES in Recommendations for reliable artificial intelligence, June 2, 2023

Transparency Principle

The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by Center for AI and Digital Policy in Universal Guidelines for AI, Oct, 2018

(a) Human dignity

The principle of human dignity, understood as the recognition of the inherent human state of being worthy of respect, must not be violated by ‘autonomous’ technologies. This means, for instance, that there are limits to determinations and classifications concerning persons, made on the basis of algorithms and ‘autonomous’ systems, especially when those affected by them are not informed about them. It also implies that there have to be (legal) limits to the ways in which people can be led to believe that they are dealing with human beings while in fact they are dealing with algorithms and smart machines. A relational conception of human dignity, which is characterised by our social relations, requires that we are aware of whether and when we are interacting with a machine or another human being, and that we reserve the right to vest certain tasks to the human or the machine.

Published by European Group on Ethics in Science and New Technologies, European Commission in Ethical principles and democratic prerequisites, Mar 9, 2018

3.1 Explainability and verifiability

One of the basic characteristics of human consciousness is that it perceives the environment and seeks answers to questions, i.e. explanations of why and how something is or is not. That trait influenced the evolution of man and the development of science, and therefore of artificial intelligence. Man's need to understand and have things made clear to him finds its foothold in this principle. Clarity in the context of these Guidelines means that all processes (development, testing, commissioning, system monitoring and shutdown) must be transparent. The purpose and capabilities of the artificial intelligence system itself must be explainable, especially the decisions (recommendations) which it brings (to the extent that this is expedient) to all who are affected by the System, directly or indirectly. If certain results of the System's work cannot be explained, it is necessary to mark the System as one with a "black box" model. Verifiability is a complementary element of this principle, ensuring that the System can be checked in all processes, i.e. during the entire life cycle. Verifiability includes the actions and procedures of checking artificial intelligence systems during testing and implementation, as well as checking the short-term and long-term impact that such a system has on humans.

Published by Republic of Serbia in ETHICAL GUIDELINES FOR THE DEVELOPMENT, APPLICATION AND USE OF RELIABLE AND RESPONSIBLE ARTIFICIAL INTELLIGENCE, February, 2023

1. Right to Transparency.

All individuals have the right to know the basis of an AI decision that concerns them. This includes access to the factors, the logic, and techniques that produced the outcome. [Explanatory Memorandum] The elements of the Transparency Principle can be found in several modern privacy laws, including the US Privacy Act, the EU Data Protection Directive, the GDPR, and the Council of Europe Convention 108. The aim of this principle is to enable independent accountability for automated decisions, with a primary emphasis on the right of the individual to know the basis of an adverse determination. In practical terms, it may not be possible for an individual to interpret the basis of a particular decision, but this does not obviate the need to ensure that such an explanation is possible.

Published by The Public Voice coalition, established by Electronic Privacy Information Center (EPIC) in Universal Guidelines for Artificial Intelligence, Oct 23, 2018