2. Principle of data quality

Users and data providers should pay attention to the quality of data used for learning or other methods of AI systems.

[Main points to discuss]

A) Attention to the quality of data used for learning or other methods of AI
Users and data providers may be expected to pay attention to the quality of data (e.g., the accuracy and completeness of data) used for learning or other methods of AI, with consideration of the characteristics of the AI to be used and its usage. If the accuracy of the judgments of AI is impaired or declines, users may be expected to retrain the AI system while paying attention to the quality of data. In what cases and to what extent are users and data providers expected to pay attention to the quality of data used for learning or other methods?

B) Attention to security vulnerabilities of AI caused by learning inaccurate or inappropriate data
Users and data providers may be expected to pay attention to the risk that the security of AI might become vulnerable through learning inaccurate or inappropriate data.
Principle: Draft AI Utilization Principles, Jul 17, 2018

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan

Related Principles

III. Privacy and Data Governance

Privacy and data protection must be guaranteed at all stages of the AI system's life cycle. Digital records of human behaviour may allow AI systems to infer not only individuals' preferences, age and gender but also their sexual orientation and religious or political views. To allow individuals to trust the data processing, it must be ensured that they have full control over their own data, and that data concerning them will not be used to harm or discriminate against them. In addition to safeguarding privacy and personal data, requirements must be fulfilled to ensure high-quality AI systems. The quality of the data sets used is paramount to the performance of AI systems. When data is gathered, it may reflect socially constructed biases, or contain inaccuracies, errors and mistakes. This needs to be addressed prior to training an AI system with any given data set. In addition, the integrity of the data must be ensured. Processes and data sets used must be tested and documented at each step, such as planning, training, testing and deployment. This should also apply to AI systems that were not developed in house but acquired elsewhere. Finally, access to data must be adequately governed and controlled.

Published by European Commission in Key requirements for trustworthy AI, Apr 8, 2019

3. Principle of collaboration

AI service providers, business users, and data providers should pay attention to the collaboration of AI systems or AI services. Users should take into consideration that risks might occur and even be amplified when AI systems are networked.

[Main points to discuss]

A) Attention to the interconnectivity and interoperability of AI systems
AI network service providers may be expected to pay attention to the interconnectivity and interoperability of AI, with consideration of the characteristics of the AI to be used and its usage, in order to promote the benefits of AI through the sound progress of AI networking.

B) Addressing the standardization of data formats, protocols, etc.
AI service providers and business users may be expected to address the standardization of data formats, protocols, etc. in order to promote cooperation among AI systems and between AI systems and other systems. Data providers may also be expected to address the standardization of data formats.

C) Attention to problems caused and amplified by AI networking
Although collaboration among AI systems is expected to promote benefits, users may be expected to pay attention to the possibility that risks (e.g., the risk of loss of control caused by interconnecting or collaborating their AI systems with other AI systems through the Internet or other networks) might be caused or amplified by AI networking.

[Problems (examples) over risks that might be realized and amplified by AI networking]
• Risks that one AI system's trouble spreads to the entire system.
• Risks of failures in the cooperation and adjustment between AI systems.
• Risks of failures in verifying the judgments and decision making of AI (risks of failing to analyze the interactions between AI systems because the interactions have become complicated).
• Risks that the influence of a small number of AI systems becomes too strong (risks of enterprises and individuals suffering disadvantage from the judgments of a few AI systems).
• Risks of infringement of privacy as a result of information sharing across fields and the concentration of information in one specific AI.
• Risks of unexpected actions of AI.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

5. Principle of security

Users and data providers should pay attention to the security of AI systems or AI services.

[Main points to discuss]

A) Implementation of security measures
Users may be expected to pay attention to the security of AI and take reasonable measures in light of the state of technology at the time. In addition, users may be expected to consider in advance the measures to be taken against security breaches of AI.

B) Service provision, etc. for security measures
AI service providers may be expected, with regard to their AI services, to provide security-measure services to end users and to share incident information with them.

C) Attention to security vulnerabilities of AI caused by learning inaccurate or inappropriate data
Users and data providers may be expected to pay attention to the risk that AI's security might become vulnerable through learning inaccurate or inappropriate data. [as referred to in supra 2) Principle of data quality, Main point B)]

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

6. Principle of privacy

Users and data providers should take into consideration that the utilization of AI systems or AI services must not infringe on the privacy of users or others.

[Main points to discuss]

A) Respect for the privacy of others
With consideration of social contexts and the reasonable expectations of people in the utilization of AI, users may be expected to respect the privacy of others in the utilization of AI. In addition, users may be expected to consider in advance the measures to be taken against privacy infringement caused by AI.

B) Respect for the privacy of others in the collection, analysis, provision, etc. of personal data
Users and data providers may be expected to respect the privacy of others in the collection, analysis, provision, etc. of personal data used for learning or other methods of AI.

C) Consideration for the privacy, etc. of the subjects of profiling using AI
In the case of profiling using AI in fields where the judgments of AI might have significant influence on individual rights and interests, such as personnel evaluation, recruitment, and financing, AI service providers and business users may be expected to give due consideration to the privacy, etc. of the subjects of profiling.

D) Attention to infringement of the privacy of users or others
Consumer users may be expected to take care not to carelessly give highly confidential information (including information on others as well as on users themselves) to AI, for example by excessively empathizing with AI such as pet robots.

E) Prevention of personal data leakage
AI service providers, business users, and data providers may be expected to take appropriate measures so that personal data is not provided to third parties by the judgments of AI without the consent of the person concerned.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018

8. Principle of fairness

AI service providers, business users, and data providers should take into consideration that individuals must not be unfairly discriminated against by the judgments of AI systems or AI services.

[Main points to discuss]

A) Attention to the representativeness of data used for learning or other methods of AI
AI service providers, business users, and data providers may be expected to pay attention to the representativeness of data used for learning or other methods of AI, and to the social bias inherent in the data, so that individuals are not unfairly discriminated against on the basis of their race, religion, gender, etc. as a result of the judgments of AI. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is attention expected to be paid to the representativeness of the data and the social bias inherent in it? Note: The representativeness of data refers to the property that the data sampled and used do not distort the propensity of the population of data.

B) Attention to unfair discrimination by algorithms
AI service providers and business users may be expected to pay attention to the possibility that individuals may be unfairly discriminated against on the basis of their race, religion, gender, etc. by the algorithms of AI.

C) Human intervention
Regarding the judgments made by AI, AI service providers and business users may be expected to decide whether to use the judgments of AI, how to use them, and other matters, with consideration of social contexts and the reasonable expectations of people in the utilization of AI, so that individuals are not unfairly discriminated against on the basis of their race, religion, gender, etc. In light of the characteristics of the technologies to be used and their usage, in what cases and to what extent is human intervention expected?

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in Draft AI Utilization Principles, Jul 17, 2018