• Rethink Privacy

Privacy approaches like the Fair Information Practice Principles and Privacy by Design have withstood the test of time and the evolution of new technology. But with innovation, we have had to "rethink" how we apply these models to new technology.
[Recommendations]
• Adopt robust privacy laws: Base them on the OECD Fair Information Practice Principles.
• Implement Privacy by Design: Follow Intel's Rethinking Privacy approach to build Privacy by Design into AI product and project development.
• Keep data secure: Policies should enable cutting-edge AI technology with robust cyber and physical security to mitigate the risk of attacks and promote trust from society.
• It takes data for AI to protect data: Governments should adopt policies that reduce barriers to the sharing of data for cybersecurity purposes.
Principle: AI public policy principles, Oct 18, 2017

Published by Intel

Related Principles

2. Privacy Principles: Privacy by Design

• We have implemented an enterprise-wide Privacy by Design approach that incorporates privacy and data security into our ML and associated data processing systems. Our ML models seek to minimize access to identifiable information to ensure we use only the personal data we need to generate insights. ADP is committed to providing individuals with a reasonable opportunity to examine their own personal data and to update it if it is incorrect.

Published by ADP in ADP: Ethics in Artificial Intelligence, 2018 (unconfirmed)

· 5. Incorporate privacy design principles.

We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

Published by Google in Artificial Intelligence at Google: Our Principles, Jun 7, 2018

· 2.4 Cybersecurity and Privacy

Just like technologies that have come before it, AI depends on strong cybersecurity and privacy provisions. We encourage governments to use strong, globally accepted and deployed cryptography and other security standards that enable trust and interoperability. We also promote voluntary information sharing on cyberattacks or hacks to better enable consumer protection. The tech sector incorporates strong security features into our products and services to advance trust, including using published algorithms as our default cryptography approach, as they have the greatest trust among global stakeholders, and limiting access to encryption keys. Data and cybersecurity are integral to the success of AI. We believe that for AI to flourish, users must trust that their personal and sensitive data is protected and handled appropriately. AI systems should use tools, including anonymized data, de-identification, or aggregation, to protect personally identifiable information whenever possible.

Published by Information Technology Industry Council (ITI) in AI Policy Principles, Oct 24, 2017
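
The ITI principle above names de-identification and aggregation as concrete tools for protecting personally identifiable information. As a minimal illustrative sketch (not part of any published principle), the following Python snippet shows one common way these ideas are applied: replacing a direct identifier with a salted one-way hash and reporting only group-level counts. The record fields, salt handling, and minimum group size are assumptions made for the example.

```python
# Illustrative sketch of de-identification (pseudonymization) and aggregation.
# Field names, the salted SHA-256 approach, and the group-size threshold are
# assumptions for this example, not requirements from the quoted principles.
import hashlib
import secrets
from collections import defaultdict

SALT = secrets.token_hex(16)  # per-dataset salt; store it separately from the data


def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a salted one-way hash."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]


def aggregate_by_region(records, min_group_size=5):
    """Report only per-region counts, suppressing groups too small to be safe."""
    counts = defaultdict(int)
    for record in records:
        counts[record["region"]] += 1
    return {region: n for region, n in counts.items() if n >= min_group_size}


if __name__ == "__main__":
    raw = [
        {"email": "alice@example.com", "region": "EU"},
        {"email": "bob@example.com", "region": "EU"},
    ]
    # Drop the raw identifier; keep only the pseudonym and a coarse attribute.
    deidentified = [
        {"user": pseudonymize(r["email"]), "region": r["region"]} for r in raw
    ]
    print(deidentified)
    print(aggregate_by_region(deidentified, min_group_size=2))
```

In practice the salt (or a keyed hash) would be managed like an encryption key and kept out of the analytical dataset, consistent with the principle's emphasis on limiting access to keys.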

• Require Accountability for Ethical Design and Implementation

The social implications of computing have grown and will continue to expand as more people have access to implementations of AI. Public policy should work to identify and mitigate discrimination caused by the use of AI and encourage designing in protections against these harms.
[Recommendations]
• Standing for "Accountable Artificial Intelligence": Governments, industry and academia should apply the Information Accountability Foundation's principles to AI. Organizations implementing AI solutions should be able to demonstrate to regulators that they have the right processes, policies and resources in place to meet those principles.
• Transparent decisions: Governments should determine which AI implementations require algorithm explainability to mitigate discrimination and harm to individuals.

Published by Intel in AI public policy principles, Oct 18, 2017

6. Principle of privacy

Developers should take into consideration that AI systems do not infringe the privacy of users or third parties.
[Comment]
The privacy referred to in this principle includes spatial privacy (peace of personal life), information privacy (personal data), and secrecy of communications. Developers should consider international guidelines on privacy, such as the "OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data," as well as the following, with consideration of the possibility that AI systems might change their outputs or programs as a result of learning and other methods:
● To make efforts to evaluate the risks of privacy infringement and to conduct privacy impact assessments in advance.
● To make efforts to take necessary measures, to the extent possible in light of the characteristics of the technologies to be adopted, throughout the process of development of the AI systems ("privacy by design"), so as to avoid infringement of privacy at the time of utilization.

Published by Ministry of Internal Affairs and Communications (MIC), the Government of Japan in AI R&D Principles, Jul 28, 2017