· Test and validation

Ensure AI systems go through rigorous testing and validation to achieve reasonable expectations of performance.
Principle: "ARCC": An Ethical Framework for Artificial Intelligence, Sep 18, 2018

Published by Tencent Research Institute

Related Principles

7. Robustness and Reliability

AI systems should be sufficiently robust to cope with errors during execution, with unexpected or erroneous input, and with stressful environmental conditions. They should also perform consistently: where possible, AI systems should work reliably and produce consistent results across a range of inputs and situations. AI systems may have to operate in real-world, dynamic conditions where input signals and conditions change quickly. To prevent harm, AI systems need to be resilient to unexpected data inputs, must not exhibit dangerous behaviour, and should continue to perform according to their intended purpose. Notably, AI systems are not infallible; deployers should ensure proper access control and protection of critical or sensitive systems, and take action to prevent or mitigate negative outcomes arising from unreliable performance. Deployers should conduct rigorous testing before deployment to ensure robustness and consistent results across a range of situations and environments. Measures such as proper documentation of data sources, tracking of data processing steps, and data lineage can help with troubleshooting AI systems.

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2021
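A deployer-side robustness check of the kind the principle above calls for can be sketched in a few lines; the `safe_predict` wrapper, the [0, 1] operating range, and the fallback value below are illustrative assumptions, not part of the ASEAN guidance:

```python
# Illustrative robustness harness: feed unexpected inputs to a model
# wrapper and confirm it degrades safely rather than crashing or
# emitting dangerous output. The wrapper and fallback are assumptions
# for this sketch, not a prescribed implementation.
import math

def safe_predict(model, x, fallback=0.0):
    """Validate input before calling the model; return fallback on bad input."""
    if not isinstance(x, (int, float)) or isinstance(x, bool):
        return fallback          # wrong type (string, None, bool, ...)
    if math.isnan(x):
        return fallback          # missing / corrupted signal
    if not (0.0 <= x <= 1.0):    # assumed expected operating range
        return fallback          # out-of-range input
    return model(x)

model = lambda x: 2 * x  # stand-in for a trained model
for bad in [float("nan"), -5.0, "text", None]:
    assert safe_predict(model, bad) == 0.0
print(safe_predict(model, 0.25))  # 0.5
```

A real deployment would log each rejected input for the troubleshooting and data-lineage measures the principle describes, rather than silently substituting a fallback.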

Ensure fairness

We are fully determined to combat all types of reducible bias in data collection, derivation, and analysis. Our teams are trained to identify and challenge biases in our own decision making and in the data we use to train and test our models. All data sets are evaluated for fairness, for possible inclusion of sensitive data, and for implicitly discriminatory collection models. We run statistical tests to detect imbalanced and skewed datasets and apply augmentation methods to counter these statistical biases. We pressure-test our decisions through peer review of model design, execution, and outcomes, including peer review of model training and performance metrics. Before a model graduates from one development stage to the next, a review is conducted against required acceptance criteria. This review includes in-sample and out-of-sample testing to mitigate the risk of the model overfitting to the training data and of biased outcomes in production. We subscribe to the principles laid out in the Department of Defense's AI ethical principles: that AI technologies should be responsible, equitable, traceable, reliable, and governable.

Published by Rebellion Defense in AI Ethical Principles, January 2023
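A minimal sketch of the kind of imbalance test described above; the labels and the threshold are invented for illustration and are not the publisher's actual methodology:

```python
# Hypothetical illustration: flag label imbalance before training so
# that augmentation can be triggered. Threshold is an assumption.
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of most to least frequent class; 1.0 means perfectly balanced."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

labels = ["approve"] * 90 + ["deny"] * 10   # skewed toy dataset
ratio = imbalance_ratio(labels)
print(ratio)  # 9.0
needs_augmentation = ratio > 3.0  # example project-specific threshold
```

In practice such a check would sit alongside formal statistical tests (e.g. a chi-square goodness-of-fit test against the expected class distribution) and feed into the stage-gate review the principle mentions.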

5. We uphold quality and safety standards

As with any of our products, our AI software is subject to our quality assurance process, which we continuously adapt as necessary. Our AI software undergoes thorough testing under real-world scenarios to validate that it is fit for purpose and that the product specifications are met. We work closely with our customers and users to uphold and further improve our systems' quality, safety, reliability, and security.

Published by SAP in SAP's Guiding Principles for Artificial Intelligence, Sep 18, 2018

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system's technical construct should be accompanied by a comprehensive methodology for testing the quality of its predictive, data-based systems and models according to standard policies and protocols.

2. Ensuring the technical robustness of an AI system requires rigorous testing, validation, and re-assessment, as well as the integration of adequate oversight and control mechanisms into its development. System integration test sign-off should be done with relevant stakeholders to minimize risks and liability.

3. Automated AI systems involving decisions whose impact is irreversible, difficult to reverse, or potentially life-or-death should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in AI Ethics Principles, Sept 14, 2022
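The human-oversight trigger in point 3 above amounts to a routing rule: irreversible decisions are escalated instead of auto-executed. A toy sketch, in which the flag, the 0.5 cut-off, and the outcome labels are all assumptions for illustration:

```python
# Illustrative oversight gate: decisions flagged as irreversible are
# routed to a human reviewer for final determination; only reversible,
# low-stakes decisions are automated. Flag names are assumptions.
def decide(score, irreversible):
    """Route a model score to an action, escalating irreversible cases."""
    if irreversible:
        return "escalate_to_human"  # human oversight and final determination
    return "auto_apply" if score >= 0.5 else "auto_reject"

assert decide(0.9, irreversible=True) == "escalate_to_human"
assert decide(0.9, irreversible=False) == "auto_apply"
assert decide(0.2, irreversible=False) == "auto_reject"
```

The point of the gate is that a high model score never bypasses review when the decision cannot be undone.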

Fifth principle: Reliability

AI-enabled systems must be demonstrably reliable, robust and secure. The MOD's AI-enabled systems must be suitably reliable; they must fulfil their intended design and deployment criteria and perform as expected, within acceptable performance parameters. Those parameters must be regularly reviewed and tested so that reliability can be assured on an ongoing basis, particularly as AI-enabled systems learn and evolve over time or are deployed in new contexts. Given Defence's unique operational context and the challenges of the information environment, this principle also requires AI-enabled systems to be secure, supported by a robust approach to cybersecurity, data protection and privacy. MOD personnel working with or alongside AI-enabled systems can build trust in those systems by ensuring that they have a suitable level of understanding of the performance and parameters of those systems, as articulated in the principle of understanding.

Published by The Ministry of Defence (MOD), United Kingdom in Ethical Principles for AI in Defence, Jun 15, 2022