The Asilomar AI Principles mention the topic of safety in the following places:

    · 5) Race Avoidance

    Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards.

    · 6) Safety

    AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible.

    · 16) Human Control

    · 22) Recursive Self-Improvement

    AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.