There’s little doubt that 2023 was the year for generative AI and Large Language Models (LLMs). It was a pivotal year that reshaped our interaction with technology. I won’t recap the AI explosion in depth here, but I’d encourage you to check out our recent research, ChatGPT one year on: How 10,000+ AI tools have changed the workplace and redefined data security.
As we approach 2024, the Year of the Dragon, I want to take this time to outline five predictions shaping the AI landscape. Don’t worry, they’re not all doom and gloom; you will see that there’s plenty of opportunity to be optimistic.
Prediction 1: Scaling and Consolidating AI Use
AI models—and LLMs in particular—have become a central focus for businesses and technology enthusiasts alike. With clear and demonstrative productivity gains to be had, even the most resistant security leaders will struggle to hold back the tides of AI adoption next year. (For what it's worth, our research shows that only 20% of CISOs are actively trying to block AI apps right now).
In 2023, we saw an overwhelming surge in AI applications adopted by enterprises, with over 10,000 apps emerging to create a complex ecosystem. This has created somewhat of a headache for security leaders, who are still grappling with understanding the unsanctioned, unmanaged AI in use (known now as Shadow AI).
Of course, leading the pack are Microsoft Copilot and OpenAI, though competition is brewing with the likes of Google, Meta, and even Byte Dance.
While the recent leadership turmoil at OpenAI shows that we should expect the unexpected, 2024 will still likely be a story of consolidation and clarity at the top of the tree.
Amidst all the hype, some organizations have attempted to develop in-house models that would be better served going with off-the-shelf approaches. When some of these DIY approaches fail or benefits are delayed, off the shelf will replace them later in 2024. The big providers are working hard on more enterprise-ready controls and deployment options, hastening adoption.
The initial novelty of some of the thousands of smaller apps will wear off in 2024—certainly those that are not differentiated and were really just wrappers for ChatGPT in one way or another. Some of those capabilities will be swallowed up by updates to ChatGPT in any case and, although there will still be thousands in existence, it’s doubtful how many will receive meaningful traffic.
It’s not all about the big handful of players at the top however: Given it’s been a year since ChatGPT arrived, that’s been about the right amount of time for founders of new enterprise startups to understand the technology, get funded and build a first version of their product to take to market in 2024. These enterprise apps will be more thoughtful than the first set of chat bot wrappers, Silicon Valley-backed, and targeting all business functions from HR and legal to sales and marketing. This is the wave the enterprise needs to keep the closest eye on, since there will be great business benefits for sure, but a lot of new companies that often have security as an afterthought.
With any hope, it'll feel a little less like the wild west and a clearer picture will emerge of what we need to protect as 2024 progresses.
Prediction 2: Breaching AI Defenses
Pen testers and red teamers have been getting their teeth into a variety of LLMs lately, and it seems like every week there's a new article about some security flaw. Take the recent example where DeepMind researchers managed to get ChatGPT to reveal snippets of its training data.
LLMs are inherently vulnerable to prompt injection attacks which can manipulate their behavior in a range of malicious ways. As such, I'm betting we'll continue to see experts uncover more and more cracks in Large Language Models (LLMs). Worse still, we’ll probably see news from attackers doing this, too.
But let’s not forget about the tens of thousands of other AI apps we’ve seen swirling around in 2023. My guess? We're going to see more than one of these apps, especially the smaller, boutique ones built on the early GPTs with shaky security, get hit next year. The ones that have access to sensitive data and are given permission to take automated actions will be top of the list. They're just too tempting for attackers to pass up.
Prediction 3: Regulations Get Teeth
Regulatory frameworks for AI are gaining momentum. Right now, a lot of these are pledges with nice words but zero penalties attached. This will change, and change fast.
This will likely start at the local level where we have already seen some movement, including with New York City’s Local Law 144 and Illinois’ Biometric Info Privacy Act. You can bet California and others will follow closely behind.
This is being accompanied by a big international effort. These important international efforts include the recent U.S. Executive Order, which signals a global move towards stronger AI governance. Additionally, the UK hosted the international AI Safety Summit, leading to the Bletchley Park declaration. These incremental developments will be vital for creating a secure and responsible AI ecosystem.
Europe is taking a leadership role here. The EU AI Act, which will hopefully be finalized by mid 2024, categorizes AI systems by risk level and imposes regulations to ensure their safety, transparency, non-discrimination, and environmental friendliness. You can also expect updates to EU GDPR as AI security and data security become more interwoven. Whatever is passed in Europe may become the standard for the rest of the world, as it was to a large extent for GDPR.
Prediction 4: Frameworks Take Off
The next twelve months will likely see the standardization and widespread adoption of security frameworks as security leaders grapple with getting their arms around the risks from AI. The top three that we’re seeing attract attention are:
- OWASP Top 10. The top security and safety issues that developers and security teams must consider when building applications leveraging LLMs.
- Mitre Atlas. A knowledge base of adversary tactics and techniques based on real-world attack observations and realistic demonstrations from AI red teams and security groups.
- NIST AI 100-1. A framework for those designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI.
I anticipate that these frameworks will become fundamental parts of security programs, offering a structured approach to managing the evolving threats in the AI space. To make this successful, we need to be active participants; sharing information and further contributing to these frameworks.
Prediction 5: Less 'Winging It’ from Security Pros
From the rapid onset of ChatGPT to the explosion of thousands of AI apps, security leaders have been doing enough to stay afloat. The majority of CISOs we spoke to (75%) already had a policy for AI usage. By the end of next year, that figure will be closer to 100%. However, almost no leaders are actually monitoring for policy violations. Yet.
The reality is that, despite the various international efforts to grapple with AI, it will be the private sector that will lead innovation in security. This will include embracing new ways to validate policies, monitor for violations, and improve employee training.
Indeed, there’s also a big opportunity here. AI has shone a light on how inadequate much of the cyber security industry already is, by raising the stakes further. The good news is that LLM technology can also be used to help fix security issues including finding vulnerabilities in code ahead of time, and doing a much better job of detecting sensitive material leaking out of organizations.
Summary
The upcoming year promises rapid innovation in AI and LLMs, but it's not without its challenges. The focus will be on embedding AI into our security frameworks, not just as an afterthought but as a cornerstone of our digital defenses.