2. Be transparent about how and when we are using AI, starting with a clear user need and public benefit

Principle: Responsible use of artificial intelligence (AI): Our guiding principles, 2019 (unconfirmed)

Published by Government of Canada

Related Principles

4. We are transparent.

In no case do we hide it when the customer’s counterpart is an AI, and we are transparent about how we use customer data. As Deutsche Telekom, we always have the customer’s trust in mind; trust is what we stand for. We act openly towards our customers. It is obvious to our customers that they are interacting with an AI when they do. In addition, we make clear how, and to what extent, they can choose the way their personal data is further processed.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

2) Research Funding

Investments in AI should be accompanied by funding for research on ensuring its beneficial use, including thorny questions in computer science, economics, law, ethics, and social studies, such as: How can we make future AI systems highly robust, so that they do what we want without malfunctioning or getting hacked? How can we grow our prosperity through automation while maintaining people’s resources and purpose? How can we update our legal systems to be more fair and efficient, to keep pace with AI, and to manage the risks associated with AI? What set of values should AI be aligned with, and what legal and ethical status should it have?

Published by Future of Life Institute (FLI), Beneficial AI 2017 in Asilomar AI Principles, Jan 3-8, 2017

L Learning

To maximise the potential of AI, people need to learn how it works and the most efficient and effective ways to use it. Employees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so.

Published by Institute of Business Ethics (IBE) in IBE interactive framework of fundamental values and principles for the use of Artificial Intelligence (AI) in business, Jan 11, 2018

3. New technology, including AI systems, must be transparent and explainable

For the public to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training and, most importantly, what went into their algorithm’s recommendations. If we are to use AI to help make important decisions, it must be explainable.

Published by IBM in Principles for Trust and Transparency, May 30, 2018

2. AI must be held to account – and so must users

Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable. We don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?

Published by Sage in The Ethics of Code: Developing AI for Business with Five Core Principles, Jun 27, 2017