1. Future oriented

The development of AI requires coordination between innovation and safety, so that security protects innovation and innovation drives security. While ensuring the safety of artificial intelligence itself, we will actively apply artificial intelligence technology to solve the security problems of human society.
Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019

Related Principles

· (7) Innovation

To realize Society 5.0 and continuous innovation in which people evolve along with AI, it is necessary to look beyond national, industry-academia, and public-private borders, as well as differences of race, sex, nationality, age, and political and religious belief. Beyond these boundaries, we must take a global perspective and promote diversification and cooperation across industry, academia, and the public and private sectors, through the development of human capabilities and technology. We must encourage mutual collaboration and partnership between universities, research institutions and the private sector, as well as the flexible movement of talent. To implement AI efficiently and securely in society, methods for confirming the quality and reliability of AI, and for the efficient collection and maintenance of data utilized in AI, must be promoted. Additionally, the establishment of AI engineering, encompassing methods for the development, testing and operation of AI, should be promoted. To ensure the sound development of AI technology, it is necessary to establish an accessible platform on which data from all fields can be mutually utilized across borders without monopolies, while ensuring privacy and security. In addition, research and development environments should be created in which computing resources and high-speed networks are shared and utilized, to promote international collaboration and accelerate AI research. To promote the implementation of AI technology, governments must pursue regulatory reform to reduce impeding factors in AI-related fields.

Published by Cabinet Office, Government of Japan in Social Principles of Human-centric AI (Draft), Dec 27, 2018

PREAMBLE

For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable: processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we designate these wide-ranging systems under the general name of artificial intelligence.

Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate. Intelligent machines are not limited to performing better calculations than human beings; they can also interact with sentient beings, keep them company and take care of them.

However, the development of artificial intelligence does pose major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress, and living in a society, always carry a risk, it is up to the citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world. The lower the risks of its deployment, the greater the benefits of artificial intelligence will be. The first danger of artificial intelligence development consists in giving the illusion that we can master the future through calculations.
Reducing society to a series of numbers and ruling it through algorithmic procedures is an old pipe dream that still drives human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable.

The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence towards morally and socially desirable ends. They also offer an ethical framework that promotes internationally recognized human rights in the fields affected by the rollout of artificial intelligence. Taken as a whole, the principles articulated lay the foundation for cultivating social trust towards artificially intelligent systems.

The principles of the current declaration rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities. It is incumbent on the various public and private stakeholders and policymakers at the local, national and international level to ensure that the development and deployment of artificial intelligence are compatible with the protection of fundamental human capacities and goals, and contribute toward their fuller realization. With this goal in mind, one must interpret the proposed principles in a coherent manner, while taking into account the specific social, cultural, political and legal contexts of their application.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

(Preamble)

The global development of Artificial Intelligence (AI) has reached a new stage, with features such as cross-disciplinary integration, human-machine coordination, and open collective intelligence, which are profoundly changing our daily lives and the future of humanity. In order to promote the healthy development of the new generation of AI, to better balance development and governance, to ensure the safety, reliability and controllability of AI, to support the economic, social, and environmental pillars of the UN Sustainable Development Goals, and to jointly build a human community with a shared future, all stakeholders concerned with AI development should observe the following principles:

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

8. Open cooperation

The development of artificial intelligence requires the concerted efforts of all countries and all parties. We should actively establish norms and standards for the safe development of artificial intelligence at the international level, so as to avoid the security risks caused by incompatibilities between technologies and policies.

Published by Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in Shanghai Initiative for the Safe Development of Artificial Intelligence, Aug 30, 2019

6 Promote artificial intelligence that is responsive and sustainable

Responsiveness requires that designers, developers and users continuously, systematically and transparently examine an AI technology to determine whether it is responding adequately, appropriately and according to communicated expectations and requirements in the context in which it is used. Thus, identification of a health need requires that institutions and governments respond to that need and its context with appropriate technologies, with the aim of achieving the public interest in health protection and promotion. When an AI technology is ineffective or engenders dissatisfaction, the duty to be responsive requires an institutional process to resolve the problem, which may include terminating use of the technology.

Responsiveness also requires that AI technologies be consistent with wider efforts to promote health systems and environmental and workplace sustainability. AI technologies should be introduced only if they can be fully integrated and sustained in the health care system. Too often, especially in under-resourced health systems, new technologies are not used or are not repaired or updated, thereby wasting scarce resources that could have been invested in proven interventions. Furthermore, AI systems should be designed to minimize their ecological footprints and increase energy efficiency, so that use of AI is consistent with society's efforts to reduce the impact of human beings on the earth's environment, ecosystems and climate.

Sustainability also requires governments and companies to address anticipated disruptions to the workplace, including training of health care workers to adapt to the use of AI, and potential job losses due to the use of automated systems for routine health care functions and administrative tasks.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021