· Test and validation

Ensure AI systems undergo rigorous testing and validation to achieve reasonable expectations of performance
Principle: "ARCC": An Ethical Framework for Artificial Intelligence, Sep 18, 2018

Published by Tencent Research Institute

Related Principles

· 1.3 Robust and Representative Data

To promote the responsible use of data and ensure its integrity at every stage, industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems. AI systems need to leverage large datasets, and the availability of robust and representative data for building and improving AI and machine learning systems is of utmost importance.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017
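
To make "test for potential bias before and throughout the deployment of AI systems" concrete, below is a minimal sketch of one such check: comparing positive-prediction rates across protected groups. The four-fifths (0.8) threshold and the group labels are illustrative assumptions, not part of the ITI text.

```python
# Minimal sketch of a pre-/post-deployment bias check (illustrative only).
# The 0.8 "four-fifths" threshold and the group labels are assumptions.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ok(predictions, groups, threshold=0.8):
    """Flag whether every group's selection rate is at least `threshold`
    times the highest group's rate (four-fifths rule)."""
    rates = selection_rates(predictions, groups)
    highest = max(rates.values())
    return all(rate >= threshold * highest for rate in rates.values()), rates

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    groups = ["a", "a", "a", "b", "b", "b", "b", "a", "b", "a"]
    ok, rates = disparate_impact_ok(preds, groups)
    print("selection rates:", rates, "| within threshold:", ok)
```

The same check can be run on historical training data before deployment and on live predictions afterwards, which is the "before and throughout" cadence the principle calls for.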

C. Explainability and Traceability:

AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at either a NATO and/or national level.

Published by The North Atlantic Treaty Organization (NATO) in NATO Principles of Responsible Use of Artificial Intelligence in Defence, Oct 22, 2021

5. We uphold quality and safety standards

As with any of our products, our AI software is subject to our quality assurance process, which we continuously adapt when necessary. Our AI software undergoes thorough testing under real-world scenarios to validate that it is fit for purpose and that the product specifications are met. We work closely with our customers and users to uphold and further improve our systems’ quality, safety, reliability, and security.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018
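
One way to read "thorough testing under real-world scenarios" against "product specifications" is as scenario-based acceptance tests with explicit pass thresholds. The sketch below assumes a pytest-style harness; `load_model`, the scenarios, and the accuracy targets are hypothetical placeholders, not SAP's actual QA process.

```python
# Minimal sketch of scenario-based acceptance testing (pytest style).
# The model, scenarios, and accuracy targets are hypothetical.
import pytest

# Each scenario: (list of (input, expected_label) cases, required accuracy)
SCENARIOS = {
    "typical_input": ([("hello world", "greeting")], 0.95),
    "noisy_input":   ([("heello wrld", "greeting")], 0.90),
}

def load_model():
    # Placeholder: return any object exposing a .predict(text) method.
    class Echo:
        def predict(self, text):
            return "greeting"
    return Echo()

@pytest.mark.parametrize("name", SCENARIOS)
def test_real_world_scenario(name):
    cases, required_accuracy = SCENARIOS[name]
    model = load_model()
    correct = sum(model.predict(text) == label for text, label in cases)
    accuracy = correct / len(cases)
    assert accuracy >= required_accuracy, (
        f"{name}: accuracy {accuracy:.2f} below spec {required_accuracy:.2f}"
    )
```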

Fifth principle: Reliability

AI-enabled systems must be demonstrably reliable, robust and secure. The MOD’s AI-enabled systems must be suitably reliable; they must fulfil their intended design and deployment criteria and perform as expected, within acceptable performance parameters. Those parameters must be regularly reviewed and tested so that reliability can be assured on an ongoing basis, particularly as AI-enabled systems learn and evolve over time, or are deployed in new contexts. Given Defence’s unique operational context and the challenges of the information environment, this principle also requires AI-enabled systems to be secure, together with a robust approach to cybersecurity, data protection and privacy. MOD personnel working with or alongside AI-enabled systems can build trust in those systems by ensuring that they have a suitable level of understanding of the performance and parameters of those systems, as articulated in the principle of understanding.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022
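
The requirement that systems "perform as expected, within acceptable performance parameters" and be re-tested "on an ongoing basis" can be pictured as a recurring review cycle that compares fresh evaluation metrics against bounds agreed at deployment. The metric names and bounds below are assumptions for illustration, not MOD values.

```python
# Minimal sketch of ongoing reliability assurance: re-evaluate a deployed
# model against previously agreed performance parameters and flag any
# breach for review. Metric names and bounds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PerformanceParameter:
    name: str
    lower_bound: float  # minimum acceptable value agreed at deployment

AGREED_PARAMETERS = [
    PerformanceParameter("accuracy", 0.92),
    PerformanceParameter("recall_on_critical_class", 0.85),
]

def review_cycle(current_metrics):
    """Return the parameters that have drifted outside acceptable bounds."""
    breaches = []
    for param in AGREED_PARAMETERS:
        value = current_metrics.get(param.name)
        if value is None or value < param.lower_bound:
            breaches.append(f"{param.name}: {value} < {param.lower_bound}")
    return breaches

if __name__ == "__main__":
    # In practice these metrics would come from a fresh evaluation on recent
    # data, repeated on a fixed schedule and whenever the system is retrained
    # or deployed in a new context.
    latest = {"accuracy": 0.94, "recall_on_critical_class": 0.83}
    for breach in review_cycle(latest):
        print("OUT OF BOUNDS - escalate for review:", breach)
```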

7. Validation and Testing

Institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm. Institutions are encouraged to make the results of such tests public.

Published by ACM US Public Policy Council (USACM) in Principles for Algorithmic Transparency and Accountability, Jan 12, 2017
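
Below is a minimal sketch of how an institution might "document those methods and results" from a discrimination test in a machine-readable report that could later be made public. The chosen metric (gap in positive prediction rates between two illustrative groups), the 0.10 gap threshold, and the file name are assumptions, not part of the USACM text.

```python
# Minimal sketch of documenting a validation run so methods and results
# can be reviewed or published. Metric, threshold, and file name are
# illustrative assumptions.
import json
from datetime import datetime, timezone

def positive_rate(predictions):
    return sum(predictions) / len(predictions)

def validate_and_document(preds_group_a, preds_group_b,
                          report_path="validation_report.json"):
    gap = abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))
    report = {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "method": "difference in positive prediction rates between groups",
        "group_a_rate": positive_rate(preds_group_a),
        "group_b_rate": positive_rate(preds_group_b),
        "rate_gap": gap,
        "within_0.10_gap_threshold": gap <= 0.10,  # threshold is an assumption
    }
    with open(report_path, "w") as fh:
        json.dump(report, fh, indent=2)
    return report

if __name__ == "__main__":
    print(validate_and_document([1, 1, 0, 1], [1, 0, 0, 1]))
```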