1. Accountability and Transparency

o ADP believes that human oversight is core to providing reliable ML results. We have implemented audit and risk assessments to test our models as the baseline of our oversight methodologies. We continue to actively monitor and improve our models and systems to ensure that changes in the underlying data or model conditions do not inappropriately affect the desired results.
o ADP provides information as to how we handle personal data in the relevant privacy statement that is made available to our clients' employees, consumers or job applicants.
Principle: ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

Published by ADP

Related Principles

3. Explainability

o We strive to develop ML solutions that are explainable and direct. Our ML data discovery and data usage models are designed with understanding as a key attribute, measured against an expressed desired outcome. For example, if the ML model is to provide an employee with specific learning or training recommendations, we actively measure both the selection of those recommendations and the outcome or results of the learning module for that individual. In turn, we provide supporting information to outline the effectiveness of the recommendation. ADP is also committed to providing individuals with the right to question an automated decision, and to require a human review of the decision.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

4. Data Governance

o Understanding how we use data, and the sources from which we obtain it, is key to our AI and ML principles. We maintain processes and systems to track and manage our data usage and retention across ADP systems and processes. If we use external information in our models, such as government reports or industry terminologies, we understand the processes and impact of that information in our models. All data included in our ML models is monitored for changes that could alter the desired outcomes.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

5. We are secure.

Data security is a prime quality of Deutsche Telekom. In order to maintain this asset, we ensure that our security measures are up to date while having a full overview of how customer-related data is used and who has access to which kind of data. We never process privacy-relevant data without legal permission. This policy applies to our AI systems just as much as it does to all of our activities. Additionally, we limit usage to appropriate use cases and thoroughly secure our systems to obstruct external access and ensure data privacy.

Published by Deutsche Telekom in Deutsche Telekom’s guidelines for artificial intelligence, May 11, 2018

2. Transparency

For cognitive systems to fulfill their world-changing potential, it is vital that people have confidence in their recommendations, judgments and uses. Therefore, the IBM company will make clear:

o When and for what purposes AI is being applied in the cognitive solutions we develop and deploy.
o The major sources of data and expertise that inform the insights of cognitive solutions, as well as the methods used to train those systems and solutions.
o The principle that clients own their own business models and intellectual property and that they can use AI and cognitive systems to enhance the advantages they have built, often through years of experience. We will work with our clients to protect their data and insights, and will encourage our clients, partners and industry colleagues to adopt similar practices.

Published by IBM in Principles for the Cognitive Era, Jan 17, 2017

3 Ensure transparency, explainability and intelligibility

AI should be intelligible or understandable to developers, users and regulators. Two broad approaches to ensuring intelligibility are improving the transparency and explainability of AI technology.

Transparency requires that sufficient information (described below) be published or documented before the design and deployment of an AI technology. Such information should facilitate meaningful public consultation and debate on how the AI technology is designed and how it should be used, and it should continue to be published and documented regularly and in a timely manner after an AI technology is approved for use. Transparency will improve system quality and protect patient and public health safety. For instance, system evaluators require transparency in order to identify errors, and government regulators rely on transparency to conduct proper, effective oversight. It must be possible to audit an AI technology, including when something goes wrong. Transparency should include accurate information about the assumptions and limitations of the technology, operating protocols, the properties of the data (including methods of data collection, processing and labelling) and the development of the algorithmic model.

AI technologies should be explainable to the extent possible and according to the capacity of those to whom the explanation is directed. Data protection laws already create specific obligations of explainability for automated decision-making. Those who might request or require an explanation should be well informed, and the educational information must be tailored to each population, including, for example, marginalized populations. Many AI technologies are complex, and the complexity might frustrate both the explainer and the person receiving the explanation. There is a possible trade-off between full explainability of an algorithm (at the cost of accuracy) and improved accuracy (at the cost of explainability).

All algorithms should be tested rigorously in the settings in which the technology will be used in order to ensure that they meet standards of safety and efficacy. The examination and validation should include the assumptions, operational protocols, data properties and output decisions of the AI technology. Tests and evaluations should be regular, transparent and of sufficient breadth to cover differences in the performance of the algorithm according to race, ethnicity, gender, age and other relevant human characteristics. There should be robust, independent oversight of such tests and evaluations to ensure that they are conducted safely and effectively. Health care institutions, health systems and public health agencies should regularly publish information about how decisions have been made to adopt an AI technology and how the technology will be evaluated periodically, its uses, its known limitations and the role of decision-making, which can facilitate external auditing and oversight.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021