The principle "Ethical Principles for AI in Defence" has mentioned the topic "for human" in the following places:

    Preamble: Our intent for the ethical use of AI in Defence

    The MOD will develop and deploy AI-enabled systems for purposes that are demonstrably beneficial: driving operational improvements, supporting the Defence Purpose, and upholding human rights and democratic values.

    Preamble: Our intent for the ethical use of AI in Defence

    The MOD’s existing obligations under UK law and international law, including, as applicable, international humanitarian law (IHL) and international human rights law, act as a foundation for Defence’s development, deployment and operation of AI-enabled systems.

    First principle: Human Centricity

    These effects should prioritise human life and wellbeing, as well as wider concerns for humankind such as environmental impacts, while taking account of military necessity.

    First principle: Human Centricity

    Conversely, there may be a strong ethical case for the development and use of an AI system where it would be demonstrably beneficial or result in a more ethical outcome.

    Third principle: Understanding

    What our systems do, how we intend to use them, and our processes for ensuring beneficial outcomes result from their use should be as transparent as possible, within the necessary constraints of the national security context.