• Rethink Privacy

Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to “rethink” how we apply these models to new technology.

[Recommendations]
• Adopt Robust Privacy Laws: Base them on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: Follow Intel’s Rethinking Privacy approach to build Privacy by Design into AI product and project development.
• Keep data secure: Policies should help enable cutting-edge AI technology with robust cyber and physical security to mitigate the risk of attacks and promote trust from society.
• It takes data for AI to protect data: Governments should adopt policies that reduce barriers to the sharing of data for cybersecurity purposes.
Principle: AI public policy principles, Oct 18, 2017

Published by Intel

Related Principles

· 5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· 2.4 Cybersecurity and Privacy

Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach, as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe that for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation, to protect personally identifiable information whenever possible.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.

[Recommendations]
• Standing for “Accountable Artificial Intelligence”: Governments, industry and academia should apply the Information Accountability Foundation’s principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.
• Transparent decisions: Governments should determine which AI implementations require algorithmic explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

6. Principle of privacy

Developers should take into consideration that AI systems do not infringe the privacy of users or third parties.

[Comment]
The privacy referred to in this principle includes spatial privacy (peace of personal life), information privacy (personal data), and secrecy of communications. Developers should consult international guidelines on privacy, such as the “OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data,” as well as the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning and other methods:
● Make efforts to evaluate the risks of privacy infringement and to conduct privacy impact assessments in advance.
● Make efforts to take necessary measures, to the extent possible in light of the characteristics of the technologies adopted, throughout the process of development of the AI systems (“privacy by design”), to avoid infringement of privacy at the time of utilization.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017

4. Privacy and security by design

AI systems are fuelled by data, and Telefónica is committed to respecting people’s right to privacy and to their personal data. The data used in AI systems can be personal, or anonymized and aggregated. When processing personal data, in accordance with Telefónica’s privacy policy, we will at all times comply with the principles of lawfulness, fairness and transparency, data minimisation, accuracy, storage limitation, integrity and confidentiality. When using anonymized and/or aggregated data, we will apply the principles set out in this document. To ensure compliance with our Privacy Policy, we use a Privacy by Design methodology. When building AI systems, as with other systems, we follow Telefónica’s Security by Design approach. In accordance with Telefónica’s privacy policy, in all phases of the processing cycle we apply the technical and organizational measures required to guarantee a level of security adequate to the risk to which the personal information may be exposed and, in any case, in accordance with the security measures established by the law in force in each of the countries and/or regions in which we operate.

Published by Telefónica in AI Principles of Telefónica, Oct 30, 2018