4. Control

We monitor AI solutions so that we are continuously ready to intervene in AI, datasets and algorithms, to identify needs for improvement, and to prevent or reduce damage.
Principle: Telia Company Guiding Principles on trusted AI ethics, Jan 22, 2019

Published by Telia Company AB

Related Principles

4. Data Governance

o Understanding how we use data, and the sources from which we obtain it, is key to our AI and ML principles. We maintain processes and systems to track and manage our data usage and retention across ADP systems and processes. If we use external information in our models, such as government reports or industry terminologies, we understand the processes behind that information and its impact on our models. All data included in our ML models is monitored for changes that could alter the desired outcomes.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)
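The monitoring described in the principle above, watching model data for changes that could alter outcomes, can be sketched as a simple statistical drift check. This is a minimal illustration, not ADP's actual process; the feature values and the z-score threshold are assumptions.

```python
# Hypothetical sketch: flag when a batch of model input data drifts away
# from a reference distribution. Thresholds and data are illustrative.
from statistics import mean, stdev

def drift_detected(reference, current, z_threshold=3.0):
    """Flag drift when the current batch mean deviates from the reference
    mean by more than z_threshold reference standard deviations."""
    ref_mean = mean(reference)
    ref_std = stdev(reference)
    if ref_std == 0:
        return mean(current) != ref_mean
    z = abs(mean(current) - ref_mean) / ref_std
    return z > z_threshold

# Example: a stable feature batch vs. one whose distribution has shifted.
reference = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]
stable_batch = [10.1, 9.9, 10.0]
shifted_batch = [14.8, 15.2, 15.0]

print(drift_detected(reference, stable_batch))   # expect False
print(drift_detected(reference, shifted_batch))  # expect True
```

In practice a check like this would run per feature on each ingestion batch, with an alert or pipeline halt replacing the boolean return.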

6. We set the framework.

Our AI solutions are developed and enhanced on grounds of deep analysis and evaluation. They are transparent, auditable, fair, and fully documented. We consciously initiate the AI's development for the best possible outcome. The essential paradigm for our AI systems' impact analysis is "privacy and security by design". This is accompanied, for example, by risk and opportunity scenarios or reliable disaster scenarios. We take great care with the initial algorithms of our own AI solutions to prevent so-called "black boxes" and to make sure that our systems do not unintentionally harm the users.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

7. We maintain control.

We are able to deactivate and stop AI systems at any time (kill switch). Additionally, we remove inappropriate data to avoid bias. We keep an eye on the decisions made and the information fed to the system in order to enhance decision quality. We take responsibility for a diverse and appropriate data input. In case of inconsistencies, we would rather stop the AI system than proceed with potentially manipulated data. We are also able to "reset" our AI systems in order to remove false or biased data. In this way, we install a lever that reduces (unintended) unsuitable decisions or actions to a minimum.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018
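The controls named in the principle above (kill switch, removal of inappropriate data, reset to a known-good state) can be sketched in code. This is a minimal illustration under assumed names; it is not Deutsche Telekom's implementation.

```python
# Hypothetical sketch of kill-switch, data-removal, and reset controls.
# Class and method names are illustrative assumptions.

class ControlledModel:
    def __init__(self, clean_training_data):
        self._clean_data = list(clean_training_data)  # pristine copy kept for resets
        self._data = list(clean_training_data)
        self.active = True

    def kill(self):
        """Kill switch: deactivate the system immediately."""
        self.active = False

    def remove(self, predicate):
        """Remove inappropriate or biased records matching the predicate."""
        self._data = [r for r in self._data if not predicate(r)]

    def reset(self):
        """Reset: discard potentially manipulated data, restore the clean copy."""
        self._data = list(self._clean_data)
        self.active = True

    def predict(self, x):
        """Placeholder decision logic: refuses to run while deactivated."""
        if not self.active:
            raise RuntimeError("system deactivated via kill switch")
        return x in self._data
```

The key design point is that the pristine training data is retained separately, so a reset is always possible even after manipulated data has entered the live set.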

II. Technical robustness and safety

Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of the AI system, and to adequately cope with erroneous outcomes. AI systems need to be reliable, secure enough to be resilient against both overt attacks and more subtle attempts to manipulate data or algorithms themselves, and they must ensure a fallback plan in case of problems. Their decisions must be accurate, or at least correctly reflect their level of accuracy, and their outcomes should be reproducible. In addition, AI systems should integrate safety and security by design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned. This includes the minimisation and, where possible, the reversibility of unintended consequences or errors in the system's operation. Processes to clarify and assess potential risks associated with the use of AI systems, across various application areas, should be put in place.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019
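The requirement above that decisions "correctly reflect their level of accuracy" and that a fallback plan exist can be illustrated with a confidence gate: act on a prediction only when the model is confident enough, otherwise defer to a safe default. The threshold and action names are assumptions for illustration only.

```python
# Hedged sketch of a confidence-gated decision with a fallback action.
# Threshold and labels are illustrative, not prescribed by the guidelines.

def decide(confidence, prediction, threshold=0.9, fallback="defer_to_human"):
    """Return the model's prediction only when its reported confidence
    meets the threshold; otherwise fall back to a safe default action."""
    if confidence >= threshold:
        return prediction
    return fallback

print(decide(0.97, "approve"))  # confident: act on the prediction
print(decide(0.55, "approve"))  # uncertain: fall back to human review
```

This only works if the reported confidence is itself calibrated, which is exactly the "correctly reflect their level of accuracy" requirement in the text.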

3. Be built and tested for safety.

We will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm. We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research. In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018
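The practice above of testing AI technologies in constrained environments before deployment can be sketched as a shadow test: evaluate a candidate model against held-out cases, without affecting users, and only promote it if its error rate stays within a budget. All names, cases, and thresholds here are assumptions for illustration; this is not Google's actual practice.

```python
# Illustrative sketch of constrained-environment testing before deployment.
# Names, test cases, and the error budget are hypothetical.

def shadow_test(candidate, cases, error_budget=0.05):
    """Run candidate(x) against expected outputs without affecting users.
    Returns (promote?, observed error rate)."""
    errors = sum(1 for x, expected in cases if candidate(x) != expected)
    error_rate = errors / len(cases)
    return error_rate <= error_budget, error_rate

# A hypothetical candidate model and its held-out test cases.
def candidate(x):
    return x * 2

cases = [(1, 2), (2, 4), (3, 6), (4, 8)]

promote, rate = shadow_test(candidate, cases)
print(promote, rate)  # expect: True 0.0
```

The same harness can keep running after deployment on sampled live traffic, which covers the "monitor their operation after deployment" half of the principle.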