The "Principles for the Stewardship of AI Applications" mention the topic "security" in the following places:

    1. Public Trust in AI

    AI is expected to have a positive impact across sectors of social and economic life, including employment, transportation, education, finance, healthcare, personal security, and manufacturing.

    2. Public Participation

    Agencies should provide ample opportunities for the public to provide information and participate in all stages of the rulemaking process, to the extent feasible and consistent with legal requirements (including legal constraints on participation in certain situations, for example, national security, preventing imminent threat to, or responding to emergencies).

    9. Safety and Security

    Agencies should promote the development of AI systems that are safe, secure, and operate as intended, and encourage the consideration of safety and security issues throughout the AI design, development, deployment, and operation process.

    9. Safety and Security

    Agencies should give additional consideration to methods for guaranteeing systemic resilience, and for preventing bad actors from exploiting AI system weaknesses, including cybersecurity risks posed by AI operation, and adversarial use of AI against a regulated entity’s AI technology.

    9. Safety and Security

    When evaluating or introducing AI policies, agencies should be mindful of any potential safety and security risks, as well as the risk of possible malicious deployment and use of AI applications.