
Reducing Shadow AI Risks: Applying Lessons from the Prohibition Era

April 15, 2024

Earlier this year marked 105 years since the United States ratified the 18th Amendment, establishing the prohibition of alcohol. The goal? To tackle genuine societal issues, such as poverty, by banning alcohol.

We now know that this heavy-handed approach backfired spectacularly. It drove the alcohol industry underground, increased smuggling, and fueled the rise of organized crime. In fact, the 18th Amendment remains the only constitutional amendment ever repealed.

Now, as we explore the world of GenAI tools, we encounter similar challenges. If we create overzealous policies and blanket bans on AI tools, we risk forcing users underground to use unknown tools with unknown consequences. 

The Risks of Shadow AI 

GenAI tools are game-changers for workers, boosting productivity by up to 40%. For this reason, we should expect adoption to continue to increase. In fact, Forrester Research predicts that 60% of workers will use their own AI to perform their jobs and tasks by the end of 2024.

The issue arises when employees use tools that the business has not approved. You may already know this as Shadow IT, a concept that has been around for years. GenAI, however, makes the temptation far stronger: who wouldn’t want much of the drudgery of their job automated? Employees are constantly trying out and adopting new tools outside the watchful gaze of security and IT.

Shadow AI can be considered a subset of Shadow IT, but one that is growing exponentially. Last year, our research found that the number of GenAI tools had exploded to more than 10,000. This will only become more complex over the coming years, and industry analysts expect that organizations will struggle to manage the associated “regulatory, privacy and security issues.”

Security Teams are Right to be Skeptical 

While organizations know they will benefit from productivity gains, there are some significant risks to weigh as well.

If a software engineer generates code with a GenAI tool, who owns that intellectual property? What happens if a customer success manager uploads a spreadsheet containing customer PII? What if a product manager creates a slick set of roadmap slides, inadvertently exposing commercially sensitive information?

Armed with these concerns, a surprisingly large number of enterprises are banning the use of AI apps entirely.

Don’t get me wrong; these are valid concerns about real risks. However, slamming the door shut with strict policies won’t necessarily keep employees from using these tools. It may simply push them into the shadows, away from our watchful eyes, where maintaining security standards is tough. Embracing and recommending safe, vetted alternatives ensures that employees use tools with MFA, clear policies, and a lower risk profile.

Step 1: Understand Underlying User Needs

If you want to limit employees’ adoption of random GenAI tools, you need to begin by understanding why they adopt them. We need to get inside their heads to anticipate their needs and concerns.

Beyond the likes of OpenAI and Copilot, there’s a huge ecosystem of AI tools that cater to specific parts of employees’ jobs. Below, we have produced a market map of enterprise GenAI tools based on the usage we see in our client base. 

You can then start to understand how different roles may use these apps. For example:

Marketing - Produce SEO-friendly content, presentations, and visuals

Software Engineers - Review and suggest code, and automate workflows

Accounting - Enhance spreadsheets and create charts

Data Analysts - Crunch numbers in large datasets

Sales - Craft compelling email subject lines and copy, while tracking engagement

Customer Success - Enable customers with the right content

Legal - Analyze documents and ask questions of them

General/All - Improve timekeeping and manage email inboxes

Armed with this information, we can craft sensible policies that balance innovation with safeguarding our data. Once you know how employees wish to use these tools, you can offer reputable alternatives and provide a clear way to request new tools going forward.
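To make the “clear way to request tools” concrete, here is a minimal Python sketch of an approved-tools allowlist keyed by role. The role and tool names are purely illustrative placeholders, not recommendations, and a real process would live in your ticketing or access-management system rather than a script.

```python
# A minimal sketch of an approved-tools allowlist keyed by role.
# All role and tool names below are illustrative placeholders only.
APPROVED_TOOLS = {
    "marketing": {"ChatGPT Enterprise", "ExampleCopyAI"},
    "engineering": {"GitHub Copilot"},
    "legal": {"ChatGPT Enterprise"},
}

def review_tool_request(role: str, tool: str) -> str:
    """Approve a request outright, or route it to security for review."""
    if tool in APPROVED_TOOLS.get(role, set()):
        return f"'{tool}' is already approved for {role}."
    return f"'{tool}' is not yet approved for {role}; routing to security review."

print(review_tool_request("marketing", "ChatGPT Enterprise"))
print(review_tool_request("legal", "RandomNewAI"))
```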

Step 2: Education, Education, Education

Next, educating employees about AI policies and secure usage should be a component of any security awareness training. It’s not enough to simply say “don’t share sensitive stuff.”

First, there's the legal and ethical side of things. Employees need to be clued in on data privacy regulations like GDPR and HIPAA. Understanding these rules is key to steering clear of legal pitfalls and maintaining the trust of clients and the public. It's about respecting boundaries in the digital world. 

Second, employees need clear guidelines on how to handle sensitive data. This is especially important in industries like finance and healthcare where the stakes are high. Educating staff on what data is off-limits and the right way to manage the data they can access is essential in preventing breaches and safeguarding the company’s reputation. 
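To make this tangible in training, the sketch below shows a crude pre-submission check for obviously sensitive strings before text is pasted into an unapproved tool. The patterns are illustrative and nowhere near exhaustive; a real deployment would rely on a proper DLP product rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real DLP tooling uses far more robust detection.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this note for jane.doe@example.com, SSN 123-45-6789."
findings = flag_sensitive(prompt)
if findings:
    print("Hold on, this looks sensitive:", ", ".join(findings))
```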

Step 3: Audit

Managing shadow AI involves proactive measures to detect and control unauthorized AI tools within an organization. An important first step is to audit current usage to understand what is already in place, including any new or emerging tools that have been adopted without official sanction.

To effectively track shadow AI, organizations can leverage specialized tools that exist on the market. These tools are designed to monitor and report on unauthorized software usage, helping to maintain control over the IT environment. 
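As a rough illustration of the approach, the sketch below assumes you can export web proxy or DNS logs as a CSV with "user" and "domain" columns (an assumption, since log formats vary by vendor) and counts visits to a short, deliberately incomplete list of well-known GenAI domains.

```python
import csv
from collections import Counter

# A deliberately incomplete, illustrative list of well-known GenAI domains.
GENAI_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def audit_genai_usage(log_path: str) -> Counter:
    """Count requests per (user, domain) pair for known GenAI domains."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in GENAI_DOMAINS:
                hits[(row["user"], row["domain"])] += 1
    return hits

# "proxy_logs.csv" is a hypothetical export; adjust to your proxy's schema.
for (user, domain), count in audit_genai_usage("proxy_logs.csv").most_common():
    print(f"{user} accessed {domain} {count} times")
```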

While these can help identify users who have signed up for additional services, you may need to take further steps to understand actual usage. For more comprehensive guidance on managing shadow IT, the National Cyber Security Centre (NCSC) offers some great insights and recommendations; see NCSC's Shadow IT Guidance.

Conclusion

Let's not repeat a prohibition-esque approach by pushing these incredible tools into the shadows with excessive restrictions. 

There will always be tools that employees use on the side, outside the security team’s gaze. However, there’s plenty we can do to reduce the risks of Shadow AI: educate employees, offer vetted alternatives, and detect the risky applications already in use.


Michael Marriott