The principle "Key requirements for trustworthy AI" mentions the topic of "safety" in the following places:

    I. Human agency and oversight

    All other things being equal, the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.

    II. Technical robustness and safety

    In addition, AI systems should integrate safety and security by design mechanisms to ensure that they are verifiably safe at every step, taking to heart the physical and mental safety of all concerned.

    III. Privacy and Data Governance

    Processes and data sets used must be tested and documented at each step, such as planning, training, testing and deployment.

    VII. Accountability

    External auditability should especially be ensured in applications affecting fundamental rights, including safety critical applications.