6. Provide transparency, explainability and accountability for children

Strive to explicitly address children when promoting the explainability and transparency of AI systems. Use age-appropriate language to describe AI. Make AI systems transparent to the extent that children and their caregivers can understand the interaction. Develop AI systems so that they protect and empower child users in accordance with legal and policy frameworks, regardless of children's understanding of the system. Review, update and develop AI-related regulatory frameworks to integrate child rights. Establish AI oversight bodies that comply with relevant principles and regulations, and set up mechanisms for redress.
Published by United Nations Children's Fund (UNICEF) and the Ministry of in Requirements for child-centred AI, Sep 16, 2020

Related Principles

· Legal system improvement

Stakeholders of AI should consciously and strictly abide by the codes of conduct, laws and regulations, and technical specifications related to children. AI legislation should pay attention to the impact of AI on children's rights and interests, and should ensure that this impact is clearly and effectively reflected in the legal system. Governance institutions and strict review and accountability mechanisms should both be established to severely punish individuals and groups that abuse AI to infringe upon children's rights and interests.

Published by Beijing Academy of Artificial Intelligence (BAAI), Peking University, Tsinghua University and the Chinese Academy of Sciences, together with enterprises that focus on AI development, in Artificial Intelligence for Children: Beijing Principles, Sep 14, 2020

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
[Recommendations]
• Standing for "Accountable Artificial Intelligence": Governments, industry and academia should apply the Information Accountability Foundation's principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.
• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

Chapter 2. The Norms of Management

  5. Promotion of agile governance. Respect the laws of AI development, fully understand the potential and limitations of AI, and continuously optimize the governance mechanisms and methods of AI. In strategic decision making, institution building, and resource allocation, do not become divorced from reality or rush for quick success and instant benefits. Promote the healthy and sustainable development of AI in an orderly manner.
  6. Active practice. Comply with AI-related laws, regulations, policies and standards, actively integrate AI ethics into the entire management process, take the lead in becoming practitioners and promoters of AI ethics and governance, summarize and promote AI governance experiences in a timely manner, and actively respond to society's concerns about the ethics of AI.
  7. Exercise and use power correctly. Clarify the responsibilities and power boundaries of AI-related management activities, and standardize the conditions and procedures under which power is exercised. Fully respect and protect the privacy, freedom, dignity, safety and other rights and legal interests of relevant stakeholders, and prohibit the improper use of power to infringe on the legal rights of natural persons, legal persons and other organizations.
  8. Strengthen risk prevention. Enhance bottom-line thinking and risk awareness, strengthen research on and assessment of the potential risks in the development of AI, carry out systematic risk monitoring and evaluation in a timely manner, establish an effective early-warning mechanism for risks, and enhance the ability to manage, control, and dispose of the ethical risks of AI.
  9. Promote inclusivity and openness. Pay full attention to the rights and demands of all stakeholders related to AI, encourage the application of diverse AI technologies to solve practical problems in economic and social development, encourage cross-disciplinary, cross-domain, cross-regional, and cross-border exchanges and cooperation, and promote the formation of AI governance frameworks, standards and norms with broad consensus.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

Chapter 5. The Norms of Use

  18. Promote good use. Strengthen the justification and evaluation of AI products and services before use, become fully aware of their benefits, and fully consider the legitimate rights and interests of various stakeholders, so as to better promote economic prosperity, social progress and sustainable development.
  19. Avoid misuse and abuse. Fully understand the scope of application and potential negative effects of AI products and services, earnestly respect the right of relevant entities not to use AI products or services, avoid the improper use, misuse and abuse of AI products and services, and avoid unintended damage to the legitimate rights and interests of others.
  20. Forbid malicious use. It is forbidden to use AI products and services that do not comply with laws, regulations, ethical norms, and standards. It is forbidden to use AI products and services to engage in illegal activities. It is strictly forbidden to endanger national security, public safety and production safety, and it is strictly forbidden to do harm to public interests.
  21. Timely and proactive feedback. Actively participate in the practice of AI ethics and governance; when technical safety and security flaws, policy and legal vacuums, or regulatory lags are found in the use of AI products and services, prompt feedback to the relevant parties and assistance in solving the problems are expected.
  22. Improve the ability to use. Actively learn AI-related knowledge and actively master the skills required in the various phases of using AI products and services, such as operation, maintenance, and emergency response, so as to ensure their safe and efficient use.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021

2. Ensure inclusion of and for children

Strive for diversity amongst those who design, develop, collect and process data, implement, research, regulate and oversee AI systems. Adopt an inclusive design approach when developing AI products that will be used by children or impact them. Support meaningful child participation, both in AI policies and in the design and development processes.

Published by United Nations Children's Fund (UNICEF) and the Ministry of in Requirements for child-centred AI, Sep 16, 2020