1) Accountability under the rule of law:

Artificial intelligence should be auditable and traceable. We are committed to confirming testing standards, deployment processes, and specifications, ensuring that algorithms are verifiable, and gradually improving accountability and supervision mechanisms for artificial intelligence systems.
Principle: 中国青年科学家2019人工智能创新治理上海宣言 (Chinese Young Scientists’ Declaration on the Governance and Innovation of Artificial Intelligence), Aug 29, 2019

Author: Youth Work Committee of Shanghai Computer Society

Related Principles

· Article 6: Transparent and explainable.

Continuously improve the transparency of artificial intelligence systems. System decision-making processes, data structures, and the intentions of system developers and technical implementers should be capable of being accurately described, monitored, and reproduced, so that algorithmic logic, system decisions, and action outcomes are explainable, predictable, traceable, and verifiable.

Published by Artificial Intelligence Industry Alliance (AIIA), China in 人工智能行业自律公约(征求意见稿) (Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment)), May 31, 2019

3. The transparency and intelligibility of artificial intelligence systems should be improved, with the objective of effective implementation, in particular by:

a. investing in public and private scientific research on explainable artificial intelligence,

b. promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience,

c. making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided,

d. guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems, and

e. providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with the expectations of individuals and to enable overall human control over such systems.

Published by the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) in 人工智能伦理与数据保护宣言 (Declaration on Ethics and Data Protection in Artificial Intelligence), Oct 23, 2018

Design for human control, accountability, and intended use

Humans should have ultimate control of our technology, and we strive to prevent unintended use of our products. Our user experience enforces accountability, responsible use, and transparency of consequences. We build protections into our products to detect and avoid unintended system behaviors. We achieve this through modern software engineering and rigorous testing on our entire systems including their constituent data and AI products, in isolation and in concert. Additionally, we rely on ongoing user research to help ensure that our products function as expected and can be appropriately disabled when necessary. Accountability is enforced by providing customers with insight into the provenance of data sources, methodologies, and design processes in easily understood and transparent language. Effective governance — of data, models, and software — is foundational to the ethical and accountable deployment of AI.

Published by Rebellion Defense in 人工智能道德原则 (AI Ethical Principles), January 2023

3. Clear responsibility

The development of artificial intelligence should establish a complete framework of safety responsibility. Laws, regulations, and ethical norms for the application of artificial intelligence need to be developed, and mechanisms for identifying and sharing the safety responsibility of artificial intelligence should be clarified.

Published by the Shanghai Advisory Committee of Experts on Artificial Intelligence Industry Security in 人工智能安全发展上海倡议 (Shanghai Initiative for the Safe Development of Artificial Intelligence), Aug 30, 2019

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system’s technical construct should be accompanied by a comprehensive methodology to test the quality of predictive data-based systems and models according to standard policies and protocols.

2. To ensure the technical robustness of an AI system, rigorous testing, validation, and re-assessment, as well as the integration of adequate oversight and control mechanisms into its development, are required. System integration test sign-off should be done with relevant stakeholders to minimize risks and liability.

3. Automated AI systems used in scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life-and-death decisions, should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in 人工智能伦理准则 (AI Ethics Principles), Sept 14, 2022