
What CISOs Are Concerned About: Generative AI and Data Protection in 2024

June 17, 2024

Companies are excited about generative AI (GenAI), and rightly so. McKinsey estimates that generative AI could boost labor productivity growth by 0.1 to 0.6 percentage points annually and add as much as a staggering $4.4 trillion to the global economy each year.

GenAI adoption is expected to soar over the next 12 months. According to Gartner, 55% of organizations have already implemented or are piloting GenAI, and security teams are under growing pressure to support that adoption.

However, if companies are to realize GenAI’s promise, they need to find a way to allay growing data privacy concerns. Alongside CIOs and AI steering committees, Chief Information Security Officers (CISOs) have been tasked with navigating these concerns and overcoming privacy and regulatory hurdles.

Before building the Harmonic product, we spoke with more than 70 CISOs to better understand some of these challenges. This article delves into their primary concerns and the strategies they are employing to mitigate risks associated with generative AI.

Data as the Primary Risk

One of the most pressing concerns for CISOs is the security and privacy of data. Because GenAI systems rely heavily on large datasets to function effectively, the potential for data breaches and leaks becomes a significant risk. One CISO captures this sentiment succinctly: “Data is the risk.” Even organizations that have implemented stringent measures often lack adequate controls to detect and respond to incidents effectively. Another CISO echoes this concern, highlighting the fear of personal or customer data leakage as a pervasive issue across industries.

Regulatory and Compliance Challenges

The regulatory landscape for data protection is complex and varies significantly across regions. In Europe, where data protection regulations are particularly stringent, CISOs face the dual challenge of ensuring compliance and managing the risk of data breaches. One CISO points out the difficulty of catching data breaches line-by-line with Data Loss Prevention (DLP) solutions, emphasizing the need for robust compliance measures. Another CISO further underscores this point, noting that CISOs are primarily responsible for managing legal and compliance risks associated with AI models.

Adoption and Internal Policies

CISOs tend to be cautious about adopting generative AI technologies, often favoring internal models to mitigate the risks associated with external AI services. One organization’s CISO, for instance, uses internal models via Microsoft and enforces strict guidelines against unsanctioned AI services. Similarly, another CISO notes that their company has developed specific AI usage policies and continuously trains employees on safe AI usage. To assist in this effort, we at Harmonic Security have created a simple and fast policy generator that can be used as a starting point for creating your internal policy. You can access it here.

Technical and Implementation Concerns

The technical challenges of securely integrating AI are another major concern for security leaders. One Security Engineering Manager highlights the difficulties of monitoring and classifying data used in AI models, despite having a pro-open policy. Another CISO points out that existing security policies need to adapt to the nuances of AI training processes and the associated risks. These technical hurdles underscore the need for robust frameworks to ensure the secure implementation of AI technologies.

Investment in AI Technology

Despite the risks, companies are investing in AI technologies, albeit with a strong focus on data protection. One CISO’s organization is investing in private Azure AI capabilities and running hackathons to explore generative AI use cases safely. Another is making a substantial investment in Microsoft AI models, including Copilot and Bing. These investments reflect a cautious but forward-thinking approach to leveraging AI’s potential while prioritizing data security.

Insights from Leading CISOs

The perspectives of leading CISOs provide valuable insights into the current state of generative AI and data protection:

  • CISO 1: “We are absolutely right to say that data is the risk… We have put up the speed limit signs and 80% of people will not really speed, but I have no way of identifying the 20% that might - we have no speed cameras.”
  • CISO 2: “Strong data regulations… every industry and company fear personal or customer data leakage.”
  • CISO 3: “Explosion of interest just meant we had to demonstrate we are on top of it… How can we get our clients more value from Gen AI.”
  • CISO 4: “The broader concern is what is training the LLMs… CISOs will be mostly responsible for legal & compliance risk. It’s inaccurate, doesn’t take into account minorities.”

Conclusion

The insights and quotes from these leading CISOs highlight the challenges and considerations involved in managing GenAI and data protection in 2024. Data security remains the primary concern, with regulatory compliance, technical implementation, and cautious adoption strategies also playing crucial roles. As companies continue to invest in AI technologies, the need for robust data protection measures and comprehensive internal policies becomes ever more critical.

By staying vigilant and proactive, CISOs can navigate the complexities of generative AI and ensure that their organizations harness its potential safely and responsibly.


Team Harmonic