Best Practices

GenAI security: A case for using NIST CSF

April 15, 2024

“Sooner or later, everything old is new again”

Stephen King, The Colorado Kid

This month we’ll finally witness the launch of NIST's Cybersecurity Framework 2.0, which introduces a new “govern” stage.

This update, although not directly focused on AI security, offers an effective approach to addressing some of the major challenges associated with the rise of Generative AI (GenAI). In fact, for the vast majority of organizations, NIST CSF 2.0 may well be more helpful than the new AI-specific frameworks and regulations, which mainly apply to the smaller subset of organizations building their own in-house AI models.

In this blog, we’ll explore:

  • To what extent AI changes things for security teams;
  • What is different about NIST CSF 2.0;
  • Best practices and Harmonic’s tips at each stage of the framework;
  • Why we need to learn past lessons and avoid being the “Department of No”.

The impact of AI on security

It’s not new for products to be built on AI; machine learning, natural language processing, and other techniques have been used in products for many years, albeit never in the mainstream. However, since OpenAI’s release of ChatGPT in November 2022, the number of products using GenAI has skyrocketed. Like it or not, no SaaS app now seems complete without some form of “AI”.

But what does this all mean for security?

We often rush to think about the security of AI itself. This is where you have the likes of HiddenLayer and Cranium that are doing some awesome things when it comes to building secure AI. We will be digging into some of the key themes around this topic in a future blog, so stay tuned! This is also where we see an increasing number of new regulations (such as the EU AI Act) and frameworks (such as NIST AI Risk Management Framework). These are all great for businesses building products with AI, but for the vast majority of organizations that do not build products with AI, it is a different challenge. 

Instead, it’s about how to keep data secure as employees flock to use the tens of thousands of AI apps for their day-to-day jobs. In many ways this is an extension of existing, known challenges: shadow IT, supply chain management, and the explosion of BYOD (Bring Your Own Device).

That’s not to say there’s nothing new about AI. The use of GenAI has thrown up some real, new challenges around intellectual property, of course. For example, for software engineers using code generation tools, the ownership and licensing of that code enters murky waters.

Looking ahead to NIST CSF 2.0

Ten years ago, NIST released version 1.0 of its Cybersecurity Framework, which it iterated on further in 2018 with version 1.1. Since its release, it has served as an excellent and well-respected guide for building a cybersecurity program. Its steps are not meant as a checkbox exercise but as a framework for building a narrative around your security program.

Last summer, NIST released a public draft of CSF version 2.0, which will be formally released this month. Although we do not know the final details of the framework, we can be fairly sure about one major change: the introduction of the “govern” stage.

“Govern” has been introduced to “cover organizational context; risk management strategy; cybersecurity supply chain risk management; roles, responsibilities, and authorities; policies, processes, and procedures; and oversight”. Given the scope of this new stage, the CSF’s suitability as a guide for securing GenAI is significantly boosted.

With the caveat that the details may change slightly by the time it is published, let’s explore each stage in a bit more detail.

Step 1. Govern

When it comes to securing GenAI, the Govern stage is a game-changer for NIST CSF; it elevates the conversation to business needs and risk management. Crucially, it is about roles and responsibilities, and about ensuring senior leaders are part of the conversation.

Areas of consideration:

  • Establish organizational context. Each business’s needs and reliance on GenAI will be different. Geography and industry will also have a bearing on which data privacy regulations apply.
  • Create GenAI policies: Define clear policies and guidelines covering the use and management of GenAI technologies. Learn from past security challenges: avoid becoming a “Department of No” and instead adopt a “yes, and” approach that enables users and heads off the inevitable rule circumvention that follows blanket blocks. Work with people, not against them. A minimal policy-as-code sketch follows this list.
  • Establish an AI Working Group: Create a cross-departmental group of senior leaders focused on AI, typically drawing from security, engineering, IT, and legal teams. Meet at a regular cadence to adapt policies, track changes, and communicate with leadership.
  • GenAI supply chain management. Supply chain management is now a significant part of Govern, so make sure there are processes in place for onboarding new suppliers and for verifying that those suppliers handle your data securely and do not use it to train their models.
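
One way to keep a GenAI policy actionable, rather than a document nobody reads, is to codify its key decisions so they can be reviewed by the working group and enforced by tooling. The sketch below is a minimal illustration in Python; the app names, data categories, and vendor attributes are hypothetical assumptions, not recommendations.

```python
# Minimal, illustrative sketch of a GenAI usage policy expressed as data.
# App names, data categories, and vendor attributes are assumptions.
from dataclasses import dataclass, field


@dataclass
class GenAIAppPolicy:
    name: str                               # e.g. a hypothetical app name
    approved: bool                          # has the AI working group approved it?
    sso_required: bool = True               # require corporate identity / SSO
    trains_on_customer_data: bool = False   # confirmed via supplier review
    allowed_data: set = field(default_factory=set)  # data categories users may submit


POLICY = [
    GenAIAppPolicy("ExampleAssistant", approved=True,
                   allowed_data={"public", "internal"}),
    GenAIAppPolicy("UnreviewedTool", approved=False,
                   trains_on_customer_data=True),
]


def is_use_permitted(app_name: str, data_category: str) -> bool:
    """Return True if the named app is approved for the given data category."""
    for app in POLICY:
        if app.name == app_name:
            return app.approved and data_category in app.allowed_data
    return False  # unknown apps are not permitted by default


if __name__ == "__main__":
    print(is_use_permitted("ExampleAssistant", "internal"))   # True
    print(is_use_permitted("UnreviewedTool", "public"))       # False
```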

Step 2. Identify

The next step is to actually understand what AI applications are being used in the business, and how users are interacting with them.

Areas of consideration:

  • Application audits. Know what people are using and what LLMs are behind those applications. Consider whether any apps are operating in geographies on sanctions lists. Don’t forget to focus on newly-adopted technologies. Proactively reaching out to users when they begin to use new tools to offer them guidance will go a long way toward fostering trust. A minimal log-audit sketch follows this list.
  • Use case mapping. Don’t just use technology–get closer to the user base to understand their use cases and needs, which can help in tailoring cybersecurity measures more effectively. Some of the security leaders we work with have hundreds of use cases defined and listed.
  • Software Bill of Materials. Consider how to track whether developers are using coding assistants to generate code that may be insecure or carry licensing implications. Sema Software has called this a GBOM (Generative Bill of Materials).
  • Monitor data privacy policies. More generally, know the data privacy policies of the AI apps in use, and review at least once a year to ensure the policies have not changed.
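
To make the application-audit idea concrete, here is a minimal sketch that scans outbound proxy or DNS logs for domains associated with GenAI apps and counts usage per user. The domain-to-app mapping, log format, and column names are assumptions for illustration; in practice you would use a maintained catalogue of AI apps and your own log schema.

```python
# Illustrative sketch of a lightweight application audit over proxy/DNS logs.
# The domain mapping and the "user"/"host" columns are assumptions.
import csv
import io
from collections import Counter

# Hypothetical mapping of domains to GenAI applications; a real audit would
# rely on a maintained catalogue of AI apps and the LLMs behind them.
GENAI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "gemini.google.com": "Gemini",
    "notetaker.example.com": "Hypothetical notetaker",
}


def audit_proxy_log(log_file) -> Counter:
    """Count (app, user) pairs from a CSV log with 'user' and 'host' columns."""
    usage = Counter()
    for row in csv.DictReader(log_file):
        app = GENAI_DOMAINS.get(row.get("host", "").lower())
        if app:
            usage[(app, row.get("user", "unknown"))] += 1
    return usage


if __name__ == "__main__":
    sample_log = io.StringIO(
        "user,host\n"
        "alice,chat.openai.com\n"
        "alice,chat.openai.com\n"
        "bob,notetaker.example.com\n"
    )
    for (app, user), hits in audit_proxy_log(sample_log).most_common():
        print(f"{user} -> {app}: {hits} requests")
```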

Step 3. Protect

This stage is all about using “safeguards to prevent or reduce cybersecurity risk.”

Areas of consideration:

  • Employee awareness and training. Regularly train employees on the importance of data security and the potential risks associated with GenAI technologies. GenAI-specific guidance should be provided in addition to existing awareness training. At the same time, update your current training on social engineering, phishing, and business email compromise to reflect how attackers are using GenAI to create convincing lures. Staying current on the threat landscape is a critical component of your overall GenAI security strategy. Challenge yourself to create a policy that isn’t just legalese but is written and communicated in a way users will actually understand.
  • Robust access controls. To avoid the risk of password reuse and account takeover, ensure that users are using strong forms of authentication. Favor using Single Sign-On (SSO) and phishing-resistant multi-factor authentication (MFA) to ensure that only authorized users can access sensitive data.
  • Block risky AI apps. Although excessively strict blocking policies can have adverse effects, consider blocking high risk AI apps that you know put your data and employees in danger. If you do block apps, make sure you don't give users a dead end. Provide an alternative or spark up a conversation about needs and requirements.
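
As a rough illustration of the point above about blocking without creating a dead end, the sketch below shows a category-based decision that a proxy or browser extension might make: high-risk apps are blocked with a message that points users to an approved alternative. The risk categories, hostnames, and alternative URL are hypothetical.

```python
# Illustrative sketch of category-based blocking of high-risk GenAI apps.
# Hostnames, risk labels, and the approved alternative are assumptions.
from typing import Optional, Tuple

APP_RISK = {
    "unvetted-ai-tool.example": "high",
    "approved-assistant.example": "low",
}

APPROVED_ALTERNATIVE = "https://approved-assistant.example"  # hypothetical


def decide(host: str) -> Tuple[bool, Optional[str]]:
    """Return (blocked, user_message) for an outbound request to `host`."""
    risk = APP_RISK.get(host, "unknown")
    if risk == "high":
        msg = (
            "This AI app is classified as high risk and has been blocked. "
            f"An approved alternative is available at {APPROVED_ALTERNATIVE}. "
            "If it doesn't cover your use case, contact the AI working group."
        )
        return True, msg
    return False, None


if __name__ == "__main__":
    blocked, message = decide("unvetted-ai-tool.example")
    print(blocked)
    print(message)
```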

Step 4. Detect

While the focus should be on the first three stages, any sensible defense-in-depth approach should have some measures in place to detect sensitive data being put into GenAI applications. Even near misses will help you gauge whether users understand your policies.

Areas of consideration:

  • Use existing monitoring tools. Although they may not be perfectly implemented and will not paint the full picture, consider using existing firewalls, cloud access security brokers (CASBs), and other monitoring tools to detect categories of data that might indicate sensitive data leakage. A simple pattern-matching sketch follows this list.
  • Detect anomalous activities. Combine threat insights from your identity provider, identity threat detection, or UEBA tools to identify suspicious user behavior that can indicate insider risk.
  • Dedicated GenAI security tools. Consider a tool like Harmonic Security to detect and categorize traffic to AI apps.
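
As a simple illustration of the pattern-matching approach mentioned above, the sketch below flags a few common indicators of sensitive data (email addresses, AWS access keys, private key headers) in captured prompts. The patterns are deliberately basic assumptions; production DLP classifiers are far more sophisticated.

```python
# Illustrative sketch of pattern-based detection of sensitive data in prompts
# captured by a proxy, CASB, or GenAI security tool. Patterns are simplistic
# assumptions for demonstration only.
import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def classify_prompt(text: str) -> list:
    """Return the list of sensitive-data categories detected in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    sample = "Please summarise this: contact jane.doe@example.com, key AKIAABCDEFGHIJKLMNOP"
    print(classify_prompt(sample))  # ['email_address', 'aws_access_key']
```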

Step 5. Respond

If you are in a position to detect an employee using a risky app or sharing sensitive data, think through potential response options.

Areas of consideration:

  • Communicate. Let the user know why you have blocked them from using the tool and offer them an alternative, approved tool. If no existing tool can serve their use case, consider purchasing one that can do so in a secure manner.
  • Block upload. Some tools have emerged that enable you to block sensitive data from being uploaded to platforms like ChatGPT. If you’re already using a DLP tool, ask whether it supports this functionality. Dedicated GenAI security tools, including Harmonic Security, also offer it.
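
Putting the two points above together, a minimal response workflow might look like the sketch below: block the upload, tell the user why, and point them at an approved alternative while logging the event for review. The data structures and app names are hypothetical.

```python
# Illustrative sketch of a response workflow after sensitive data is detected
# in an attempted upload. The notification fields and app names are
# hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class Detection:
    user: str
    app: str
    categories: list  # e.g. ["aws_access_key"]


def respond(detection: Detection) -> dict:
    """Build a response action: block the upload and craft a user-facing message."""
    message = (
        f"Your submission to {detection.app} was blocked because it appears to "
        f"contain sensitive data ({', '.join(detection.categories)}). "
        "Please use the approved assistant instead, or contact the security team "
        "if no approved tool covers your use case."
    )
    return {
        "action": "block_upload",
        "notify_user": detection.user,
        "message": message,
        "log_for_review": True,  # feed near misses back into policy and training
    }


if __name__ == "__main__":
    print(respond(Detection("jane.doe", "UnreviewedTool", ["aws_access_key"])))
```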

Step 6. Recover

Finally, if you know that users have been sharing data with an AI app and an incident has resulted, there are a few measures to consider.

Areas of consideration:

  • Gain control. If many employees are already using a tool that you want to gain control of, one of the more tactical options for recovering your data is to purchase the tool in question. Doing so will often enable you to ringfence the data and reduce the ongoing risk. This will likely have the added benefit of improved access controls. Of course, this will not be possible in a lot of cases. 
  • Business Continuity Planning. Develop and regularly update a business continuity plan that includes scenarios involving GenAI technologies to minimize downtime and ensure a quick return to normal operations. The industry has seen significant issues with hyperscale infrastructure in the past, including Dyn, AWS, and Google Cloud, and there have already been reports of DDoS attacks against OpenAI’s API and ChatGPT services. If your business depends heavily on processes that leverage OpenAI’s APIs, for example, ensure that they do not become a single point of failure. A minimal fallback sketch follows this list.
  • Think long term. Learn from incidents and near misses to iterate on and improve your cybersecurity program. Consider how you currently handle Shadow IT incidents and apply the same approach to GenAI apps. Measures can include implementing the principle of least privilege, deploying better monitoring capabilities, or building better processes for communicating with end users.
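
As a small illustration of avoiding a single point of failure, the sketch below wraps a hosted LLM call with retries and a fallback path. The provider functions are hypothetical stand-ins rather than real SDK calls; the point is the structure, not the specific API.

```python
# Illustrative sketch of continuity planning for a process that depends on a
# hosted LLM API: retry the primary provider, then fall back rather than fail.
# call_primary_llm / call_fallback_llm are hypothetical stand-ins.
import time


class ProviderError(Exception):
    pass


def call_primary_llm(prompt: str) -> str:
    raise ProviderError("primary provider unavailable")  # simulate an outage


def call_fallback_llm(prompt: str) -> str:
    return f"[fallback provider] response to: {prompt}"


def resilient_completion(prompt: str, retries: int = 2, backoff_s: float = 1.0) -> str:
    """Try the primary provider with retries, then fall back rather than failing hard."""
    for attempt in range(retries):
        try:
            return call_primary_llm(prompt)
        except ProviderError:
            time.sleep(backoff_s * (attempt + 1))
    return call_fallback_llm(prompt)


if __name__ == "__main__":
    print(resilient_completion("Summarise the quarterly report"))
```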

Summary: Update your focus as well as your frameworks

We’re always on the lookout for a shiny new tool or framework. However, when it comes to securing GenAI, it may well be a case of “everything old is new again”.

Given how much GenAI tools can improve productivity, security leaders cannot be perceived as the “Department of No” again. This is an opportunity to rethink the role of security and how we interact with end users in the pursuit of protecting the organization’s data. I hope that the guidance above shows that this isn’t just a technology problem. Security practitioners need to make an effort to meet users where they are, learn their processes, and develop compelling policies that are transparently delivered.

Finally, look out for the final publication of NIST CSF 2.0, as it may well provide a helpful structure for tackling these difficult challenges by ensuring you have the right blend of people, processes, and technology to reduce data privacy risk and act as an enabler.


Alastair Paterson