· Multi-stakeholder collaboration

Children, parents, legal guardians, and other caregivers, as well as governments, relevant companies, civil society, and people from all walks of life, should be encouraged to participate in discussions of the potential impact of AI on children. A multi-stakeholder, shared-responsibility governance model should be encouraged, and interdisciplinary, cross-domain, cross-sectoral, and cross-organizational AI governance for children should be implemented.
Principle: Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development.

Related Principles

Fairness

Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups. This principle aims to ensure that AI systems are fair and that they enable inclusion throughout their entire lifecycle. AI systems should be user-centric and designed in a way that allows all people interacting with them to access the related products or services. This includes both appropriate consultation with stakeholders who may be affected by the AI system throughout its lifecycle, and ensuring people receive equitable access and treatment. This is particularly important given concerns about the potential for AI to perpetuate societal injustices and have a disparate impact on vulnerable and underrepresented groups, including, but not limited to, groups relating to age, disability, race, sex, intersex status, gender identity and sexual orientation. Measures should be taken to ensure that AI-produced decisions comply with anti-discrimination laws.

Published by Department of Industry, Innovation and Science, Australian Government in AI Ethics Principles, Nov 7, 2019

· Be responsible

The development of AI should uphold a responsible attitude towards future generations, fully considering, and striving to reduce and avoid, the potential ethical, legal, and social impacts and risks that AI might bring to children. The research, design, deployment, and use of AI should invite children and their parents, legal guardians, and other caregivers to participate in the discussion, actively respond to the attention and concerns from all sectors of society, and establish timely and effective error-correction mechanisms.

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

1. Artificial intelligence should be developed for the common good and benefit of humanity.

The UK must seek to actively shape AI's development and utilisation, or risk passively acquiescing to its many likely consequences. A shared ethical AI framework is needed to give clarity as to how AI can best be used to benefit individuals and society. By establishing these principles, the UK can lead by example in the international community. We recommend that the Government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence. The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning, with input from as diverse a group of people as possible.

Published by House of Lords, Select Committee on Artificial Intelligence in AI Code, Apr 16, 2018

7. DIVERSITY INCLUSION PRINCIPLE

The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences. 1) AIS development and use must not lead to the homogenization of society through the standardization of behaviours and opinions. 2) From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in society. 3) AI development environments, whether in research or industry, must be inclusive and reflect the diversity of the individuals and groups of the society. 4) AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development, especially in fields such as education, justice, or business. 5) AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both of which are essential conditions of a democratic society. 6) For each service category, the AIS offering must be diversified to prevent de facto monopolies from forming and undermining individual freedoms.

Published by University of Montreal in The Montreal Declaration for a Responsible Development of Artificial Intelligence, Dec 4, 2018

7. Open Collaboration

Cross-disciplinary and cross-boundary exchanges and cooperation should be encouraged in the development of AI. Coordinated interactions should be fostered among international organizations, government agencies, research institutions, educational institutions, industries, social organizations and the general public in the development and governance of AI. With full respect for the principles and practices of AI development in various countries, international dialogues and cooperation should be encouraged to promote the formation of an international AI governance framework with broad consensus.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019