6 Open Data and Fair Competition

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall promote (a) open access to datasets which could be used in the development of AI systems and (b) open source frameworks and software for AI systems. AI systems must be developed and deployed on a “compliance by design” basis in relation to competition / antitrust law.
Principle: The Eight Principles of Responsible AI, May 23, 2019

Published by International Technology Law Association (ITechLaw)

Related Principles

3. Security and Safety

AI systems should be safe and sufficiently secure against malicious attacks.

Safety refers to ensuring the safety of developers, deployers, and users of AI systems by conducting impact or risk assessments and ensuring that known risks have been identified and mitigated. A risk prevention approach should be adopted, and precautions should be put in place so that humans can intervene to prevent harm, or so that the system can safely disengage itself in the event it makes unsafe decisions (autonomous vehicles that cause injury to pedestrians are an illustration of this). Ensuring that AI systems are safe is essential to fostering public trust in AI. The safety of the public and of the users of AI systems should be of utmost priority in the decision making of AI systems, and risks should be assessed and mitigated to the greatest extent possible. Before deploying AI systems, deployers should conduct risk assessments and relevant testing or certification, and implement the appropriate level of human intervention to prevent harm when unsafe decisions take place. The risks, limitations, and safeguards of the use of AI should be made known to the user. For example, in AI-enabled autonomous vehicles, developers and deployers should put in place mechanisms for the human driver to easily resume manual driving whenever they wish.

Security refers to ensuring the cybersecurity of AI systems, which includes mechanisms against malicious attacks specific to AI, such as data poisoning, model inversion, the tampering of datasets, and byzantine attacks in federated learning, as well as other attacks designed to reverse engineer the personal data used to train the AI. Deployers of AI systems should work with developers to put in place technical security measures like robust authentication mechanisms and encryption. Just like any other software, deployers should also implement safeguards to protect AI systems against cyberattacks, data security attacks, and other digital security risks. These may include ensuring regular software updates to AI systems and proper access management for critical or sensitive systems. Deployers should also develop incident response plans to safeguard AI systems from the above attacks, and should establish a minimum list of security tests (e.g. vulnerability assessment and penetration testing) and other applicable security testing tools. Some other important considerations include:
a. Business continuity plan
b. Disaster recovery plan
c. Zero-day attacks
d. IoT devices

Published by ASEAN in ASEAN Guide on AI Governance and Ethics, 2024
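As one concrete illustration of the technical safeguards this principle calls for, the sketch below shows a pre-training integrity check that guards against tampering of datasets. It is a minimal example, not part of the ASEAN guide: the manifest file, its location, and the `verify_dataset` helper are all hypothetical, and the assumption is that file digests were recorded when the dataset was approved.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping dataset files to the SHA-256 digests
# recorded when the dataset was approved for training.
MANIFEST_PATH = Path("data/manifest.json")

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_dataset(manifest_path: Path = MANIFEST_PATH) -> None:
    """Refuse to proceed if any dataset file deviates from the approved manifest."""
    manifest = json.loads(manifest_path.read_text())
    for filename, expected in manifest.items():
        actual = sha256_of(manifest_path.parent / filename)
        if actual != expected:
            raise RuntimeError(
                f"Integrity check failed for {filename}: "
                f"expected {expected}, got {actual}"
            )

if __name__ == "__main__":
    verify_dataset()  # run before every training job
    print("All dataset files match the approved manifest.")
```

Running such a check at the start of every training job is one inexpensive way to detect dataset tampering before a poisoned model is ever produced; it complements, rather than replaces, the access controls and incident response plans the principle describes.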

7 Privacy

Organisations that develop, deploy or use AI systems and any national laws that regulate such use shall endeavour to ensure that AI systems are compliant with privacy norms and regulations, taking into account the unique characteristics of AI systems, and the evolution of standards on privacy.

Published by International Technology Law Association (ITechLaw) in The Eight Principles of Responsible AI, May 23, 2019

1. Principle of collaboration

Developers should pay attention to the interconnectivity and interoperability of AI systems.

[Comment] Developers should give consideration to the interconnectivity and interoperability between the AI systems that they have developed and other AI systems, with consideration of the diversity of AI systems, so that: (a) the benefits of AI systems increase through the sound progress of AI networking; and (b) multiple developers’ efforts to control risks are well coordinated and operate effectively. For this, developers should pay attention to the following:
• To make efforts to cooperate in sharing relevant information that is effective in ensuring interconnectivity and interoperability.
• To make efforts to develop AI systems conforming to international standards, if any.
• To make efforts to address the standardization of data formats and the openness of interfaces and protocols, including application programming interfaces (APIs).
• To pay attention to the risks of unintended events resulting from the interconnection or interoperation between the AI systems that they have developed and other AI systems.
• To make efforts to promote the open and fair treatment of license agreements, and their conditions, for intellectual property rights such as standard essential patents that contribute to ensuring interconnectivity and interoperability between AI systems, while taking into consideration the balance between the protection and the utilization of intellectual property related to the development of AI.

[Note] Interoperability and interconnectivity in this context mean that the AI systems which developers have developed can be connected to information and communication networks, and can thereby operate with other AI systems in a mutually and appropriately harmonized manner.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017
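To make the standardization point above concrete, the following sketch shows one way a developer might publish an explicit, versioned data format for exchanging predictions between independently developed AI systems. It is a hedged illustration only: the `PredictionRecord` schema, its field names, and the version string are invented for this example, not taken from the MIC principles.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical interchange record: publishing and versioning a schema like
# this is one concrete way to standardize data formats and keep interfaces
# open between independently developed AI systems.
@dataclass
class PredictionRecord:
    schema_version: str  # lets consumers detect incompatible changes
    model_id: str        # identifies the producing system
    inputs: dict         # named input features, JSON-serializable
    prediction: float    # model output
    confidence: float    # calibrated confidence in [0, 1]

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "PredictionRecord":
        return cls(**json.loads(payload))

# One system serializes a record...
record = PredictionRecord("1.0", "credit-risk-model", {"income": 52000}, 0.31, 0.87)
wire = record.to_json()

# ...and any other system that knows the published schema can consume it.
print(PredictionRecord.from_json(wire))
```

Because the schema is plain JSON with a version field, systems from different developers can interoperate over ordinary networks and detect, rather than silently misread, incompatible changes.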

Chapter 4. The Norms of Supply

14. Respect market rules. Strictly abide by the various rules and regulations for market access, competition, and trading activities; actively maintain market order; and create a market environment conducive to the development of AI. Data monopolies, platform monopolies, etc. must not be used to disrupt orderly market competition, and any means that infringe on the intellectual property rights of others are forbidden.

15. Strengthen quality control. Strengthen quality monitoring and the evaluation of AI products and services in use; avoid infringements on personal safety, property safety, user privacy, etc. caused by product defects introduced during the design and development phases; and do not operate, sell, or provide products and services that fail to meet quality standards.

16. Protect the rights and interests of users. Users should be clearly informed when AI technology is used in products and services. The functions and limitations of AI products and services should be clearly identified, and users’ rights to know and to consent should be ensured. Simple and easy-to-understand options for users to choose to use or to quit the AI mode should be provided, and it is forbidden to set obstacles to users’ fair use of AI products and services.

17. Strengthen emergency protection. Emergency mechanisms and loss compensation plans and measures should be investigated and formulated. AI systems should be monitored in a timely manner, user feedback should be responded to and processed promptly, systemic failures should be prevented in time, and providers should be ready to assist relevant entities to intervene in AI systems in accordance with laws and regulations to reduce losses and avoid risks.

Published by National Governance Committee for the New Generation Artificial Intelligence, China in Ethical Norms for the New Generation Artificial Intelligence, Sep 25, 2021
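Norms 16 and 17 above (a user-selectable AI mode, timely monitoring, and emergency intervention) can be illustrated with a simple circuit-breaker pattern. The sketch below is a hypothetical example under assumed thresholds, not a mechanism prescribed by the norms: the `AICircuitBreaker` class, its window size, and its failure-rate limit are all invented for illustration.

```python
# Hypothetical emergency mechanism: monitor recent outcomes of an AI
# service and disable AI mode (falling back to a manual process) when
# the failure rate over a sliding window crosses a threshold.
class AICircuitBreaker:
    def __init__(self, window: int = 100, max_failure_rate: float = 0.05):
        self.window = window
        self.max_failure_rate = max_failure_rate
        self.outcomes: list[bool] = []  # True = success, False = failure
        self.tripped = False

    def record(self, success: bool) -> None:
        """Record one outcome; trip the breaker if failures exceed the limit."""
        self.outcomes.append(success)
        self.outcomes = self.outcomes[-self.window:]
        failures = self.outcomes.count(False)
        if len(self.outcomes) == self.window and failures / self.window > self.max_failure_rate:
            self.tripped = True  # stays tripped until operators investigate and reset

    def ai_mode_enabled(self) -> bool:
        return not self.tripped

# Usage: 3 failures in a 10-outcome window exceeds the 20% limit,
# so the service falls back and operators can be alerted.
breaker = AICircuitBreaker(window=10, max_failure_rate=0.2)
for ok in [True] * 7 + [False] * 3:
    breaker.record(ok)
print(breaker.ai_mode_enabled())  # False: serve the manual fallback instead
```

Keeping the breaker tripped until a human resets it matches the norms' emphasis on lawful, human-directed intervention rather than silent automatic recovery.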

· Build and Validate:

1. To develop a sound and functional AI system that is both reliable and safe, the AI system’s technical construct should be accompanied by a comprehensive methodology to test the quality of the predictive, data-based systems and models according to standard policies and protocols.
2. To ensure the technical robustness of an AI system, rigorous testing, validation, and re-assessment are required, as is the integration of adequate mechanisms of oversight and control into its development. System integration test sign-off should be done with relevant stakeholders to minimize risks and liability.
3. Automated AI systems involving scenarios where decisions are understood to have an impact that is irreversible or difficult to reverse, or that may involve life-and-death decisions, should trigger human oversight and final determination. Furthermore, AI systems should not be used for social scoring or mass surveillance purposes.

Published by SDAIA in AI Ethics Principles, Sep 14, 2022
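Items 2 and 3 of the SDAIA principle can be made concrete with a small pre-deployment gate. The sketch below is a minimal illustration under assumed names and thresholds: `ACCURACY_FLOOR`, `validate_for_signoff`, and `decide` are hypothetical, and a real gate would use the quality bar agreed with stakeholders at sign-off.

```python
# Hypothetical pre-deployment gate: block sign-off unless the candidate
# model clears an agreed quality bar on a held-out test set, and route
# irreversible (or life-and-death) decisions to a human for final
# determination rather than automating them.

ACCURACY_FLOOR = 0.95  # quality bar agreed with relevant stakeholders

def validate_for_signoff(y_true: list[int], y_pred: list[int]) -> bool:
    """Return True only if held-out accuracy meets the agreed floor."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true) >= ACCURACY_FLOOR

def decide(score: float, irreversible: bool, threshold: float = 0.5) -> str:
    """Automate reversible decisions; escalate irreversible ones to a human."""
    if irreversible:
        return "ESCALATE_TO_HUMAN"  # final determination stays with a person
    return "APPROVE" if score >= threshold else "REJECT"

if __name__ == "__main__":
    print(validate_for_signoff([1, 0, 1, 1], [1, 0, 1, 1]))  # True: 100% accuracy
    print(decide(0.9, irreversible=True))                    # ESCALATE_TO_HUMAN
```

Separating the validation gate from the runtime escalation rule mirrors the principle's structure: testing and sign-off happen before deployment, while human oversight of irreversible decisions remains in force for as long as the system runs.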