· Privacy protection

The development of AI should protect children's privacy to a stricter standard. The collection of children's information should follow the principle of being "legal, proper and necessary", ensure the informed consent of their guardians, and avoid the illegal collection and abuse of children's information. AI systems should ensure that children, parents, legal guardians, or other caregivers have the rights to consent, refuse, erase data, revoke authorizations, and so on.
Principle: Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development.

Related Principles

· Informed consent

Measures should be taken to ensure that stakeholders of AI systems are given sufficient information to provide informed consent about the impact of the system on their rights and interests. When unexpected circumstances occur, reasonable data and service revocation mechanisms should be established to ensure that users' rights and interests are not infringed.

Published by Beijing Academy of Artificial Intelligence (BAAI); Peking University; Tsinghua University; Institute of Automation, Chinese Academy of Sciences; Institute of Computing Technology, Chinese Academy of Sciences; Artificial Intelligence Industry Innovation Strategy Alliance (AITISA); etc. in Beijing AI Principles, May 25, 2019

· Be responsible

The development of AI should uphold a responsible attitude towards the next generations, fully consider and try to reduce and avoid the potential ethical, legal, and social impacts and risks that AI might bring to children. The research, design, deployment, and use of AI should invite children and their parents, legal guardians, and other caregivers to participate in the discussion, actively respond to the attention and concerns from all sectors of society, and establish a timely and effective error correction mechanism.

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

· Legal system improvement

Stakeholders of AI should consciously and strictly abide by the codes of conduct, laws and regulations, and technical specifications related to children. AI legislation should pay attention to the impact of AI on children's rights and interests and ensure that this impact is clearly and effectively addressed in the legal system. Governance institutions as well as strict review and accountability mechanisms should be established to severely punish individuals and groups that abuse AI to infringe upon children's rights and interests.

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

4. Respect for Privacy

AI development should respect and protect the privacy of individuals and fully protect an individual's rights to know and to choose. Boundaries and rules should be established for the collection, storage, processing and use of personal information. Personal privacy authorization and revocation mechanisms should be established and kept up to date. Stealing, tampering with, leaking and other forms of illegal collection and use of personal information should be strictly prohibited.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence, Jun 17, 2019

1. Protect Autonomy

Adoption of AI can lead to situations in which decision making could be, or in fact is, transferred to machines. The principle of autonomy requires that any extension of machine autonomy not undermine human autonomy. In the context of health care, this means that humans should remain in full control of health care systems and medical decisions. AI systems should be designed demonstrably and systematically to conform to the principles and human rights with which they cohere; more specifically, they should be designed to assist humans, whether they be medical providers or patients, in making informed decisions. Human oversight may depend on the risks associated with an AI system but should always be meaningful and should thus include effective, transparent monitoring of human values and moral considerations. In practice, this could include deciding whether to use an AI system for a particular health care decision, varying the level of human discretion and decision making, and developing AI technologies that can rank decisions when appropriate (as opposed to providing a single decision). These practices can ensure that a clinician can override decisions made by AI systems and that machine autonomy can be restricted and made "intrinsically reversible".

Respect for autonomy also entails the related duties to protect privacy and confidentiality and to ensure informed, valid consent by adopting appropriate legal frameworks for data protection. These frameworks should be fully supported and enforced by governments and respected by companies and their system designers, programmers, database creators and others. AI technologies should not be used for experimentation on or manipulation of humans in a health care system without valid informed consent. The use of machine learning algorithms in diagnosis, prognosis and treatment plans should be incorporated into the process of obtaining informed and valid consent.
Essential services should not be circumscribed or denied if an individual withholds consent, and additional incentives or inducements should not be offered by either governments or private parties to individuals who do provide consent. Data protection laws are one means of safeguarding individual rights; they place obligations on data controllers and data processors. Such laws are necessary to protect privacy and the confidentiality of patient data and to establish patients' control over their data. Construed broadly, data protection laws should also make it easy for people to access their own health data and to move or share those data as they like. Because machine learning requires large amounts of data (big data), these laws are increasingly important.

Published by World Health Organization (WHO) in Key ethical principles for use of artificial intelligence for health, Jun 28, 2021