Industry Insights

AI and Data Protection Laws: A Primer for Security Leaders

April 15, 2024

Yesterday, Italy’s data protection authority, Garante, issued a notice to OpenAI for potential violations of the EU General Data Protection Regulation (GDPR). This event is not isolated; it reflects an increasing global focus on the regulatory challenges posed by artificial intelligence (AI). 

Here, we explore the implications of these developments for technology companies and their use of generative AI tools, against the backdrop of existing data protection laws.

Garante’s GDPR Warning: A Wake-Up Call

Garante's action, echoing a previous 30-day data processing ban on OpenAI, signals a heightened regulatory interest in AI. TechCrunch highlights concerns rooted in several GDPR articles, ranging from the principles of personal data processing to specific conditions related to children's consent and the necessity of Data Protection Impact Assessments for high-risk processing activities. (We should note that there’s not been confirmation of the allegations, nor a response from OpenAI at the time of writing.)

This scrutiny is not limited to Italy. Poland's Office for Personal Data Protection (UODO) is investigating a complaint about ChatGPT, indicating a broader European concern over AI and privacy.

The Global Landscape of AI Regulation

The regulatory landscape is adapting as we learn more about the risks of AI. In Europe, the imminent EU AI Act, with its risk-based approach, promises to directly impact companies working with Large Language Models (LLMs) and other AI technologies. 

Across the Atlantic, the US is also moving forward. The Biden administration's Executive Order and the proposed "AI Bill of Rights" hint at future federal privacy legislation, although its final form remains uncertain. 

At regional levels, laws like Illinois’ Biometric Information Privacy Act and Canada’s Directive on Automated Decision-Making are setting precedents for AI usage, particularly concerning consent and automated decision-making in the public sector.

There are plenty of risks associated with AI to unpack, and these laws are trying to balance meaningful data privacy legislation with fostering innovation.

Existing Data Privacy Laws: Understanding The Current Battlefield

GenAI tools need data. The models underlying these tools need lots of data to train on, which – as we have seen – can lead to compliance headaches. 

At the same time, these tools are asking employees to upload documents, spreadsheets, and other files that may easily contain sensitive data that can land companies in hot water. Our own research, for example, found that 40% of AI apps require some sort of data upload

Because of this, we should note that existing data protection laws like GDPR, CCPA, HIPAA, and various state-specific acts remain crucial for security and compliance teams. In fact, these regulations should remain the immediate focus. (If you want to dig into specifics about GDPR and data privacy, scroll down to the bottom of the page.)

These laws, with their specific provisions on personal data, automated decision-making, and data protection assessments, lay the foundational framework within which AI systems must operate. 

The Industry's Response: Self-Regulation and Best Practices

While we’re all excitedly following to see what governments do, the industry is not waiting idly. Initiatives like the National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework (AI RMF 1.0) is a fantastic start

OWASP’s Top 10 for LLM is another great initiative that includes measures for protecting against challenges like prompt injection and data leakage. 

Finally, Gartner has been advocating for greater efforts around trust, risk and security management for AI (known as AI TRiSM), for several years. More recently they have made this even more specific towards generative AI to cater to the current needs of security leaders. 

Practical Steps for Security Leaders

Garante's GDPR warning is just one example of the increasing complexity in AI compliance. 

While it can be hard to keep updated with these emerging frameworks and understand what this means for us, there are some practical things security leaders can do:. 

  • Understand and Monitor AI Tool Usage. It's crucial for companies to know what GenAI tools they are using internally and what data is being shared.
  • Align with Existing Data Protection Laws. Organizations should continue to operate under existing data protection laws while staying alert to any changes.
  • Embrace Industry Guidelines. While not legally binding, industry-led advice provides valuable insights for AI compliance.
  • Contribute! If you’re passionate about securing this new area, don’t be passive and get involved. For example, the OWASP Top 10 is looking for security leaders to contribute. 

Appendix: Specific Articles Relevant to AI in GDPR

  • Article 9: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.”
  • Article 22: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
  • Article 35: “Where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall, prior to the processing, carry out an assessment of the impact of the envisaged processing operations on the protection of personal data.”

Request a demo

Alastair Paterson