· 5) Race Avoidance

Teams developing AI systems should actively cooperate to avoid corner cutting on safety standards.
Principle: Asilomar AI Principles, Jan 3-8, 2017

Published by Future of Life Institute (FLI), Beneficial AI 2017

Related Principles

· 3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· (6) Safety

Artificial Intelligence should be designed concretely to avoid known and potential safety issues (for itself, other AI systems, and humans) at different levels of risk.

Published by HAIP Initiative in Harmonious Artificial Intelligence Principles (HAIP), Sep 16, 2018

5 Safety and Reliability

Organisations that develop, deploy or use AI systems, and any national laws that regulate such use, shall adopt design regimes and standards that ensure high safety and reliability of AI systems on the one hand, while limiting the exposure of developers and deployers on the other.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

Chapter 5. The Norms of Use

  18. Promote good use. Strengthen the justification and evaluation of AI products and services before use, fully understand their benefits, and fully consider the legitimate rights and interests of various stakeholders, so as to better promote economic prosperity, social progress and sustainable development.

  19. Avoid misuse and abuse. Fully understand the scope of application and potential negative effects of AI products and services, earnestly respect the right of relevant entities not to use AI products or services, avoid improper use, misuse and abuse of AI products and services, and avoid unintended damage to the legitimate rights and interests of others.

  20. Forbid malicious use. It is forbidden to use AI products and services that do not comply with laws, regulations, ethical norms and standards, and to use AI products and services to engage in illegal activities. It is strictly forbidden to endanger national security, public safety and production safety, and strictly forbidden to harm the public interest.

  21. Timely and proactive feedback. Actively participate in the practice of AI ethics and governance; when technical safety and security flaws, policy and legal vacuums, or regulatory lags are found in the use of AI products and services, promptly give feedback to the relevant parties and assist in solving the problems.

  22. Improve the ability to use. Actively learn AI-related knowledge and master the skills required for the various phases of using AI products and services, such as operation, maintenance and emergency response, so as to ensure their safe and efficient use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

· AI systems should not be able to autonomously hurt, destroy or deceive humans

1. AI systems should be built to serve and inform, and not to deceive and manipulate.
2. Nations should collaborate to avoid an arms race in lethal autonomous weapons, and such weapons should be tightly controlled.
3. Active cooperation should be pursued to avoid corner cutting on safety standards.
4. Systems designed to inform significant decisions should do so impartially.

Published by Smart Dubai in Dubai's AI Principles, Jan 08, 2019