Best Practices

Shadow AI – Should I be Worried?

June 10, 2024

This is an except from an original blog posted on SecurityWeek. View the full blog here: https://www.securityweek.com/shadow-ai-should-i-be-worried/

Since OpenAI’s release of ChatGPT in November 2022, the number of products using Generative AI has skyrocketed. Right now there are some 12,000 AI tools available promising to help with over 16,000 job tasks and we’re seeing this number grow at around 1,000 every month.

The growth of these tools is fast outpacing the capability of employers to control them. In the UK alone Deloitte has found that 1m employees (or 8% of the adult workforce) have used GenAI tools for work. And only 23% of this number believe their employer would have approved of this use. This indicates that either their employer doesn’t have a policy about using AI safely or they’re just ignoring it in the hope that the perceived productivity gain is worth the risk. As we have seen with ‘Shadow IT’ – if employees believe there are productivity gains to be made by using their own devices or third party services then they’ll do it….unless firms come up with pragmatic policies and safeguards to work with new areas of technology.

Most GenAI apps are thin veneers of ChatGPT – minus the safeguards

Employers are entitled to be cautious. Most of the 12,000 tools listed above are in the main thin veneers over ChatGPT with clever prompt engineering designed to make them appear differentiated and aligned to help with a specific job function. However, unlike ChatGPT which at least has some safeguards against data protection, these GPTs offer no defence about where company data will ultimately end up – it can still be sent to any number of spurious third party websites with unknown security controls.

We analyzed the most popular GPTs and found that forty percent of the apps can involve uploading content, code, files, or spreadsheets for AI assistance. This raises some data leakage questions if employees are uploading corporate files of those types, such as:

  • How do we know that the data does not contain any sensitive information or PII?
  • Will this cause compliance issues?

As this ecosystem grows, the answers to these questions become more complex. Whilst the launch of the ‘official’ GPTstore might help vet some of these apps we still do not know enough about the review process and the security controls they have in place, if any. The privacy policy states that GPTs do not have access to chat history, but little else. For example, files uploaded into these GPTs could be seen by third parties. It’s unlikely that a carefully curated and largely secure ‘App Store’ such as we have seen for mobile apps will evolve. Instead it is likely that at best the GPT store could become a user’s first port of call but if they can’t find what they need, alternatives are readily available elsewhere.

Digging a little deeper into key concerns

Broadly speaking, security concerns are based around the following:

Privacy and Data Retention Policies – every GenAI app will have different privacy and data retention policies that employees are unlikely to assess before proceeding. Worse still, these policies shift and change over time, making it difficult to understand a firm’s risk exposure at any point. Leaving this up to your employees’ discretion will likely lead to compliance headaches later down the line, so these apps must be considered as part of a robust third party risk program. Some applications explicitly give themselves permission to train future models on the data uploaded to them for example.

Prompt Injection – AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed – such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments.  An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.

Continue reading on SecurityWeek.com

Request a demo

Alastair Paterson