The document "Ethical Principles for AI in Defence" mentions the topic of "accountability" in the following places:

    Second principle: Responsibility

    Human responsibility for AI-enabled systems must be clearly established, ensuring accountability for their outcomes, with clearly defined means by which human control is exercised throughout their lifecycles.

    The increased speed, complexity and automation of AI-enabled systems may complicate our understanding of pre-existing concepts of human control, responsibility and accountability.

    Nevertheless, as unique moral agents, humans must always be responsible for the ethical use of AI in Defence.

    Human responsibility for the use of AI-enabled systems in Defence must be underpinned by a clear and consistent articulation of the means by which human control is exercised, and the nature and limitations of that control.

    Irrespective of the use case, responsibility for each element of an AI-enabled system, and an articulation of risk ownership, must be clearly defined from development, through deployment – including redeployment in new contexts – to decommissioning.

    In this way, certain aspects of responsibility may reach beyond the team deploying a particular system, to other functions within the MOD, or beyond, to the third parties which build or integrate AI-enabled systems for Defence.

    Collectively, these articulations of human control, responsibility and risk ownership must enable clear accountability for the outcomes of any AI-enabled system in Defence.

    There must be no deployment or use without clear lines of responsibility and accountability, which should not be accepted by the designated duty holder unless they are satisfied that they can exercise control commensurate with the various risks.

    Third principle: Understanding

    While the ‘black box’ nature of some machine learning systems means that they are difficult to fully explain, we must be able to audit either the systems or their outputs to a level that satisfies those who are duly and formally responsible and accountable.

    Fourth principle: Bias and Harm Mitigation

    Those responsible for AI-enabled systems must proactively mitigate the risk of unexpected or unintended biases or harms resulting from these systems, whether through their original rollout, or as they learn, change or are redeployed.