Industry Insights

2025: What Lies Ahead in Securing GenAI

December 19, 2024

With only a couple of weeks left in 2024, let’s turn our gaze to what we might expect to see in 2025 around AI security. I think there are five themes that are likely to rear their heads over the next 12 months:

  1. Compliance looms large. Incoming AI compliance frameworks are going to cause a headache for organizations.
  2. Gradual shifts in third-party risk. This won’t be fixed overnight, but we’re going to see a step-change in how companies manage third parties – especially with regard to AI.
  3. “Security for AI” improvements. The tech companies behind the models will get even better at securing them and offsetting model attacks. 
  4. Data is the latest perimeter in vogue. With plenty of funding behind it, expect data security to be the talk of 2025.
  5. AI for security operations goes mainstream. AI agents for SOC automation represent a huge opportunity for security teams looking to achieve real productivity boosts.

For good measure, I also asked ChatGPT, Copilot, Claude, Perplexity, Gemini, and Groq for their predictions…you can find those at the end!

Compliance Looms Large

Compliance for AI is something we’re going to hear a lot about in 2025 and I just wrote about it in my Security Week column

There’s quite a bit to unpack here and it applies pretty much everywhere in the world. The EU AI Act is the obvious candidate, but there’s plenty to pay attention to in the US. National regulatory initiatives, such as the proposed SEC rules need attention but there is a growing patchwork of state-level legislation, such as Colorado’s Artificial Intelligence Act. This is just one example; there are no fewer than 15 US states that have enacted AI-related legislation, with more in development. 

If for you (like me) the EU AI Act is still a bit of an enigma, we’ve snagged a reprint license from Gartner called “Getting Ready for the EU AI Act, Phase 1: Discover & Catalog”. 

The EU Act isn’t going to come into force overnight. Much like GDPR, this will be a phased approach. The first stage is at the beginning of February when organizations operating in the EU must ensure that employees involved in AI use, deployment, or oversight possess adequate AI literacy. Later in 2025,  any new AI models based on GPAI standards must be fully compliant with the act. Just like GDPR, there will be significant fines for non-compliance. 

I think there are some great opportunities here. We’ve been busy working with the likes of KnowBe4 to create new, more intelligent ways of assigning security awareness training that is based on user behavior. 

More generally, aside from the EU AI Act, there is a huge opportunity for security awareness training. I’m excited to see what Human Risk Management platforms like CybSafe will conjure up in 2025.

Gradual Shifts in Third-Party Risk

Let’s face it: we’re long-due a shift in how we manage third parties. This is one of many pre-existing problems in the industry that AI has shone a light on. 

Going forward, I’d wager that we’re going to be speaking less about the “AI problem” and rather a third-party risk problem. Sure, you can block ChatGPT and buy an enterprise subscription to Copilot, but are you really going to block Grammarly, Canva, DocuSign, LinkedIn, or the ever-growing presence of Gemini through your Chrome browser?

As more organizations choose to buy rather than build, there’s going to be an awful lot of AI to manage. 

Yet (crucially) the way we do third-party risk fundamentally doesn’t work. We’re still sending rigid questionnaires that give the facade of security. 

To borrow from my previous section, we’re going to see frameworks emerge that will give these teeth. The Digital Operational Resilience Act (DORA) introduces industry-specific requirements that intersect with AI use, particularly in financial services and other regulated sectors.

Boost for Protecting against AI Model Threats 

At the beginning of 2024, we were told that there was a storm coming. We saw a proliferation of new frameworks and taxonomies for tracking these. It included prompt injections, model hallucinations, bias, and other attacks against AI models.

While there has been a host of new prompt injection techniques published, these have been relatively few public cases or organizations’ in-house AI models being compromised in this way. This could be because most organizations are opting to buy versus building their own models, and so the impetus to improve the security of the models falls largely on the model providers themselves.

On this side of the house, there have been some good improvements from model providers, such as Microsoft. Last month, they released a suite of new tools to detect “risky AI use” and deal with some of these oversharing and prompt injection risks. 

Data Security Solidifies as the “Perimeter du Jour”

When it comes to securing GenAI, one area that is still underserved is data protection. In reality, most approaches to this are using legacy technologies and approaches like regular expressions or labeling.

Data security has probably been underserved for the last decade, but some recent money and investments in the space – along with the need – will catapult it to be spoken about a good deal over the next 12 months. 

AI for Security Goes Mainstream

Finally, I’ve been speaking to a great number of security leaders over the past several months who are interested in exploring how AI can help their security programs. They want to take the load off the security and have security tools take actions more autonomously. 

I don’t think this is another SIEM/SOAR/XDR debate; this level of automation will be expected across the security stack. 

If 2024 was about experimentation and exploration, I anticipate that we’re going to start seeing security programs more earnestly embracing AI-for-defense. I'm particularly excited to see what will come of the agentic future of AI. We've already seen plans around Google’s Project Mariner, where enterprises will benefit from all sorts of AI use cases at the browser level.

What did AI think?

But who cares what I think?

I asked six GenAI tools “what is the most interesting prediction you can make for 2025”, and the answers were eerily…boring. Only Perplexity gave what I think is mildly interesting – and backed it up with sources.

But here you go:

  • ChatGPT. By 2025, artificial intelligence will likely play a central role in personalizing global education, tailoring learning experiences to each individual's cognitive strengths and weaknesses in real-time.
  • Google Gemini. 2025 will see a significant shift in the global energy landscape, with renewable energy sources surpassing fossil fuels in terms of installed capacity.
  • Copilot. In 2025, AI-powered cyber defenses will become so advanced that they will autonomously counteract and neutralize threats in real-time, transforming the cybersecurity landscape.
  • Claude. In 2025, I predict a breakthrough in quantum computing will enable complex molecular simulations that could dramatically accelerate drug discovery for previously intractable diseases like certain aggressive cancers.
  • Perplexity. Nvidia could potentially grow to twice the value of Apple, becoming the most profitable company of all time by leveraging its revolutionary AI chips.
  • Groq. The most interesting prediction for 2025 is that autonomous AI agents will become significantly more advanced, taking on more useful tasks and potentially transforming industries like finance, entertainment, and personal assistance.

Request a demo

Alastair Paterson