(Preamble)

OpenAI’s mission is to ensure that artificial general intelligence (AGI) — by which we mean highly autonomous systems that outperform humans at most economically valuable work — benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome. To that end, we commit to the following principles:
Published by OpenAI in OpenAI Charter, Apr 9, 2018

Related Principles

9. We share and enlighten.

We acknowledge the transformative power of AI for our society. We will support people and society in preparing for this future world. We live our digital responsibility by sharing our knowledge and pointing out the opportunities of the new technology without neglecting its risks. We will engage with our customers, other companies, policy makers, education institutions and all other stakeholders to ensure we understand their concerns and needs and can set up the right safeguards. We will engage in AI and ethics education, thereby preparing ourselves, our colleagues and our fellow human beings for the new tasks ahead. Many tasks that are currently carried out by humans will be automated in the future, leading to a shift in the skills that are in demand. Jobs will be reshaped rather than replaced by AI. While this seems certain, only a minority knows exactly what AI technology is capable of achieving. Prejudice and superficial knowledge lead either to a demonization of progress or to blind acceptance, and both call for educational work. We at Deutsche Telekom feel responsible for enlightening people and helping society deal with the digital shift, so that appropriate new skills can be developed and new jobs can be taken up. And we start from within, by enabling our colleagues and employees. But we are aware that this task cannot be solved by one company alone. Therefore we will engage in partnerships with other companies and offer our know-how to policy makers and education providers to jointly tackle the challenges ahead.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

(Preamble)

Google aspires to create technologies that solve important problems and help people in their daily lives. We are optimistic about the incredible potential for AI and other advanced technologies to empower people, widely benefit current and future generations, and work for the common good. We believe that these technologies will promote innovation and further our mission to organize the world’s information and make it universally accessible and useful. We recognize that these same technologies also raise important challenges that we need to address clearly, thoughtfully, and affirmatively. These principles set out our commitment to develop technology responsibly and establish specific application areas we will not pursue.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

(Preamble)

AI has arrived, and it is a technology with the potential to change the world as we know it. As it develops, it can generate an immense range of benefits and improve the quality of life of humanity. At the same time, AI opens the way to risky situations and raises questions about its use and its effects. At this point, it becomes necessary to incorporate the ethical dimension to illuminate the development of AI and to distinguish between its correct and incorrect use. At IA Latam, we have collaboratively created ethical criteria of voluntary adherence that help and guide all of us to follow the best possible path, always keeping a better planet for the new generations as our guiding goal. As these technologies advance, their ethical ramifications will become more relevant, and the conversation will no longer be about mere compliance but about whether we are doing the right thing and getting better. For this reason, at IA Latam we present below our first Declaration of Ethical Principles for Latin American AI, which we hope will be a starting point and a great help for all.

Published by IA Latam in Declaration Of Ethics For The Development And Use Of Artificial Intelligence (unofficial translation), Feb 8, 2019 (unconfirmed)

1. Broadly Distributed Benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Published by OpenAI in OpenAI Charter, Apr 9, 2018

Third principle: Understanding

AI-enabled systems, and their outputs, must be appropriately understood by relevant individuals, with mechanisms to enable this understanding made an explicit part of system design.

Effective and ethical decision-making in Defence, from the frontline of combat to back-office operations, is always underpinned by an appropriate understanding of context by those making decisions. Defence personnel must have an appropriate, context-specific understanding of the AI-enabled systems they operate and work alongside. This level of understanding will naturally differ depending on the knowledge required to act ethically in a given role and with a given system. It may include an understanding of the general characteristics, benefits and limitations of AI systems. It may require knowledge of a system's purposes and correct environment for use, including scenarios where a system should not be deployed or used. It may also demand an understanding of system performance and potential fail states. Our people must be suitably trained and competent to operate or understand these tools.

To enable this understanding, we must be able to verify that our AI-enabled systems work as intended. While the 'black box' nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable. Mechanisms to interpret and understand our systems must be a crucial and explicit part of system design across the entire lifecycle.

This requirement for context-specific understanding based on technically understandable systems must also reach beyond the MOD, to commercial suppliers, allied forces and civilians. Whilst absolute transparency as to the workings of each AI-enabled system is neither desirable nor practicable, public consent and collaboration depend on context-specific shared understanding. What our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible, within the necessary constraints of the national security context.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022