The "Recommendation on the Ethics of Artificial Intelligence" mentions the topic "safety" in the following places:

    · Living in peaceful, just and interconnected societies

    This value demands that peace, inclusiveness and justice, equity and interconnectedness should be promoted throughout the life cycle of AI systems, in so far as the processes of the life cycle of AI systems should not segregate, objectify or undermine freedom and autonomous decision making as well as the safety of human beings and communities, divide and turn individuals and groups against each other, or threaten the coexistence between humans, other living beings and the natural environment.

    · Safety and security

    Unwanted harms (safety risks), as well as vulnerabilities to attack (security risks) should be avoided and should be addressed, prevented and eliminated throughout the life cycle of AI systems to ensure human, environmental and ecosystem safety and security.

    · Safety and security

    Safe and secure AI will be enabled by the development of sustainable, privacy protective data access frameworks that foster better training and validation of AI models utilizing quality data.

    · Transparency and explainability

    While efforts need to be made to increase transparency and explainability of AI systems, including those with extra-territorial impact, throughout their life cycle to support democratic governance, the level of transparency and explainability should always be appropriate to the context and impact, as there may be a need to balance between transparency and explainability and other principles such as privacy, safety and security.

    · Transparency and explainability

    People should be fully informed when a decision is informed by or is made on the basis of AI algorithms, including when it affects their safety or human rights, and in those circumstances should have the opportunity to request explanatory information from the relevant AI actor or public sector institutions.

    · Transparency and explainability

    It may also include insight into factors that affect a specific prediction or decision, and whether or not appropriate assurances (such as safety or fairness measures) are in place.