2. AI developers are responsible for their projects and must take into account the impact that each project may have on society. We will work to avoid potentially harmful or abusive applications. As we develop and implement AI technologies, we will evaluate probable uses in light of the following factors: purpose, nature, and impact.

Principle: Declaration of Ethics for the Development and Use of Artificial Intelligence (unofficial translation), Feb 8, 2019 (unconfirmed)

Published by IA Latam

Related Principles

· 1. Be socially beneficial.

The expanded reach of new technologies increasingly touches society as a whole. Advances in AI will have transformative impacts in a wide range of fields, including healthcare, security, energy, transportation, manufacturing, and entertainment. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides. AI also enhances our ability to understand the meaning of content at scale. We will strive to make high quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate. And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· 7. Be made available for uses that accord with these principles.

Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications. As we develop and deploy AI technologies, we will evaluate likely uses in light of the following factors:

● Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use.

● Nature and uniqueness: whether we are making available technology that is unique or more generally available.

● Scale: whether the use of this technology will have significant impact.

● Nature of Google’s involvement: whether we are providing general purpose tools, integrating tools for customers, or developing custom solutions.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

AI Applications We Will Not Pursue

In addition to the above objectives, we will not design or deploy AI in the following application areas:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

As our experience in this space deepens, this list may evolve.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

4. Principle of safety

Developers should take into consideration that AI systems must not harm the life, body, or property of users or third parties through actuators or other devices.

[Comment] The AI systems subject to this principle are those that might harm the life, body, or property of users or third parties through actuators or other devices. Developers are encouraged to refer to relevant international standards and to pay attention to the following, with particular consideration of the possibility that outputs or programs might change as a result of learning or other methods of AI systems:

● To make efforts to conduct verification and validation in advance in order to assess and mitigate the risks related to the safety of the AI systems.

● To make efforts to implement measures, throughout the development stage of AI systems to the extent possible in light of the characteristics of the technologies to be adopted, that contribute to intrinsic safety (reduction of essential risk factors, such as the kinetic energy of actuators) and functional safety (mitigation of risks through the operation of additional control devices, such as automatic braking) when AI systems work with actuators or other devices.

● To make efforts to explain to stakeholders such as users the designers’ intent for AI systems and the reasons for it, when developing AI systems to be used for making judgments regarding the safety of the life, body, or property of users and third parties (for example, judgments that prioritize which life, body, or property to protect at the time of an accident involving a robot equipped with AI).

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

First principle: Human Centricity

The impact of AI-enabled systems on humans must be assessed and considered, covering the full range of effects, both positive and negative, across the entire system lifecycle. Whether they are MOD personnel, civilians, or targets of military action, humans interacting with or affected by AI-enabled systems for Defence must be treated with respect. This means assessing and carefully considering the effects on humans of AI-enabled systems, taking full account of human diversity, and ensuring those effects are as positive as possible. These effects should prioritise human life and wellbeing, as well as wider concerns for humankind such as environmental impacts, while taking account of military necessity. This applies across all uses of AI-enabled systems, from the back office to the battlefield. The choice to develop and deploy AI systems is an ethical one, which must be taken with human implications in mind. It may be unethical to use certain systems where negative human impacts outweigh the benefits. Conversely, there may be a strong ethical case for the development and use of an AI system where it would be demonstrably beneficial or result in a more ethical outcome.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022