Best Practices

The Vanilla Ice Guide to AI Usage Policies: Critiquing 4 Common Approaches

April 15, 2024

You’ve probably seen a bunch of “free AI usage policy” templates floating around online. Pretty cool, right? It’s cool until you realize they’re pretty generic and that no one will read them anyway.

Given that we know AI usage policies are important, how do we create policies that are…good?

From speaking with dozens of security leaders, we’ve learned that there are four main approaches that shape how an AI usage policy looks:

  1. Block initially, then ease restrictions over time
  2. Legal-focused
  3. Trust and hope for the best
  4. Guide and collaborate with end users

We’ll dig into each of those approaches before sharing why Vanilla Ice might have been ahead of his time when it comes to AI usage policies.

Don’t leave your security wide open 

“Block and Wait”

A significant number of CISOs are in a holding pattern: they are waiting to understand the full risks of these GenAI tools and don’t want to be too hasty. From a security and risk perspective, it makes the most sense to block these tools, at least in the interim.

The benefit here is primarily securing data and not exposing employees to unnecessary risks. It also buys time for organizations to better understand the specifics of core technologies that are soon to be rolled out, like Copilot for Microsoft 365.

Of course, the pressure is on to enable employees with these tools. Productivity gains and competitive advantage mean that a failure to adopt GenAI could leave the organization behind.

A more nuanced variant of this approach comes from Chris Gunner, who recommends:

“implementing a block on web-based AI tools (using your web proxy, allowing the vendor to manage the plethora of tools available), but allowing use after staff read and accept a short policy and complete some training.” 

You can read more about the policy he recommends in Chris’ LinkedIn article, “AI Usage Policy”. 
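
To make this concrete, here is a minimal sketch of what proxy-level blocking with a policy notice could look like. It assumes mitmproxy as the web proxy purely for illustration (Chris’s advice applies to whatever proxy you run, ideally with a vendor-managed category feed), and the domain list and notice text are placeholders, not a complete ruleset.

```python
# block_genai.py - a minimal sketch of proxy-level blocking, assuming mitmproxy.
# The domain list and notice text below are illustrative placeholders only.
from mitmproxy import http

# In practice this would come from a vendor-managed category feed,
# not a hand-maintained set.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

POLICY_NOTICE = (
    b"Blocked pending policy acceptance. "
    b"Read and accept the AI usage policy on the intranet to request access."
)

def request(flow: http.HTTPFlow) -> None:
    # Intercept requests to known GenAI hosts and return a policy notice
    # instead of a silent failure, so users hit guidance, not a dead end.
    host = flow.request.pretty_host
    if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
        flow.response = http.Response.make(
            403, POLICY_NOTICE, {"Content-Type": "text/plain"}
        )
```

Run it with mitmdump -s block_genai.py. The same shape extends to the “read and accept the policy first” step: replace the static deny with a lookup against the set of staff who have completed the training.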

“Legal-Easy There”

The second (and my least favorite) approach is to write policies with a legal audience in mind instead of an employee audience. These policies assume that employees may misuse GenAI tools and aim to protect the organization from liability. Their focus, therefore, is on what is legally permissible.

While these may, in theory, create clear boundaries for employees, they are typically written in impenetrable language that employees a) don’t want to read and b) couldn’t understand even if they wanted to.

Instead, assume good intent and write policies in a way that supports user needs, rather than adopting a “tell it to the judge” mentality. More on that in a bit.

“Trust and Hope”

The third approach is by far the most lenient and puts the most trust in employees. Some security leaders have simply referred to this as a “Don’t be a D***” policy. The premise is simple: allow employees to use GenAI tools, but ensure they know not to upload sensitive data to these platforms. This assumes employees are mature, responsible, and consistent in their understanding of what counts as “sensitive data”.

While this approach clearly empowers employees and boosts their productivity, it runs the risk of being too vague to effectively protect sensitive data.

“Guide, Don’t Chide”

The fourth approach is one that places the end user at the center. It educates and guides users on responsible AI use, instead of creating excessively strict policies that they will inevitably bypass.

Done well, this approach actually engages users in understanding the potential risks and benefits of GenAI tools. It can even offer specific training and resources to ensure they use these tools effectively and safely. Finally, it ensures that you never give the user a dead end: if they can’t use a tool, you can direct them to an approved alternative. Even if there is no alternative in place, communicate this! Let the user know that “we don’t have anything here, but let’s see what we can do.”

This approach moves away from the concept of “policy violations” and instead thinks about how to bring that user into compliance.

Of course, this approach has its own downsides. First, there is no quick fix or switch you can flick: it’s hard work to truly understand users and their needs. Second, it still requires some level of monitoring to ensure you have the visibility needed to maintain compliance with global and local frameworks.

Stop, Collaborate, and List: 3 Factors to Consider for AI Usage Security Policies

The four approaches discussed above each have their own advantages and disadvantages, and they are by no means mutually exclusive. A sensible approach will draw inspiration from all of them to arrive at the right policy for your organization.

Vanilla Ice, in his infinite wisdom, offers some excellent guidance for shaping our AI security policies. “Stop, collaborate and listen” perfectly encapsulates the essence of what modern AI usage policies should embody. Let’s break it down:

  • Stop sensitive data from getting out by understanding how your users use these tools. This will help you gauge the scale of the problem and justify blocking a tool if you do decide to do so. Use monitoring tools (such as Harmonic) to get visibility into the problem, and have a clear definition of what constitutes sensitive data (see the sketch after this list).
  • Collaborate with different stakeholders. GenAI use cases stretch to all parts of the business, so AI usage policies will impact everyone. Effective policies come from understanding the needs and concerns of all parties involved (without over-focusing on certain departments, cough legal).
  • Listen to the needs and feedback of users. Once your policy is in place, it's not set in stone. Be open to feedback from those it affects the most – your employees. Their insights can help refine your approach, making it more effective and less intrusive. Regular review cycles can incorporate this feedback, ensuring your policy evolves alongside the AI landscape.
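
To ground the “Stop” step, here is an illustrative sketch of what a “clear definition of sensitive data” might look like once codified. The three patterns are assumptions for demonstration only; a real deployment would rely on a vetted DLP ruleset or a monitoring tool such as Harmonic rather than hand-rolled regexes.

```python
# sensitive_check.py - an illustrative sketch of codifying "sensitive data".
# These three patterns are examples, not a complete or production-grade ruleset.
import re

PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "ssn_like_number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(prompt: str) -> list[str]:
    """Return the categories of sensitive data found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    text = "Summarize this ticket from jane@example.com, key AKIAABCDEFGHIJKLMNOP"
    print(flag_sensitive(text))  # ['email_address', 'aws_access_key']
```

Even a toy check like this forces the useful conversation: agreeing in writing on what counts as sensitive before debating whether to block anything.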

In Summary

By embracing these different approaches and Vanilla Ice’s principles, your AI usage policy can evolve from an ignored, unread document into an effective working agreement that drives innovation.

Nailing the perfect AI usage policy isn’t exactly straightforward, but we can do better than pre-built templates. Give guidance that works for your users, include them in the process, and bolster the security of the entire organization.

Yo VIP. Let’s kick it. 


Michael Marriott