Industry Insights

ChatGPT one year on: How 10,000+ AI tools have changed the workplace and redefined data security

April 15, 2024

Exactly one year ago, OpenAI released ChatGPT to the public. It was an incredible leap forward and within just the first 5 days, more than 1 million people used it. Now, after a year of rapid progress, it boasts 100 million weekly active users itself, and has spawned a plethora of applications based on the same type of Large Language Model (LLM) technology.  These applications promise significant productivity gains to the employees who use them.

But what does this mean for security teams in today's enterprises? 

In this blog, we analyze the proliferation of AI tools, their usage by employees, and the accompanying risks.

The Proliferation of AI Tools

To make this ecosystem even more complex, OpenAI have announced they will soon be releasing a store that will enable users to share and sell custom GPT bots.

And this is just Open AI! While ChatGPT dominates the landscape, there is a similar enormous adoption of Microsoft’s Copilot, and Google’s Bard–with Google set to release more functionality in its highly-anticipated ‘Gemini’ release next year. Just this week, ByteDance (the creator of TikTok) announced their own Feishu 7, an enterprise productivity AI application for the workplace. 

The explosion in AI tool availability and use has dramatically changed how employees work. Recent studies suggest these tools can boost worker performance by up to 40% compared to those not using them.

The Emergence of Shadow AI

However, if we are to truly understand the impact of Large Language Models (LLMs) within the workspace, we must look beyond the major tech players. There is a whole ecosystem of AI-powered apps out there Futurepedia lists 5,576 different AI apps available, and AITopTools.com more than 10,000. 

Security teams often don't know which apps their employees use daily, posing significant risks to third-party risk and data security programs. Many of these apps come from small companies lacking robust enterprise-level security. A recent Hacker News article referred to these as “Indie AI Startups.” 

This phenomenon has become known as “Shadow AI”, and is still an emerging concern for CISOs. In Forrester’s 2024 Predictions, it recommends that “companies must identify apps that could increase their exposure and double down on third-party risk management.”

Top 1,000 AI Tools: Looking Beyond the Behemoths

We're familiar with ChatGPT and similar solutions, but what about the multitude of other tools? What are they doing, and how does this impact employees and threat models? 

In order to better understand the types of tools employees are using, we analyzed the top 1,000 AI sites. We omitted existing products that had incorporated AI into their existing services, as well as ChatGPT, Bard, and Copilot. 

From that list, we were able to break this down into eight primary categories: Code Assistant, Content Creation, Customer Service, Email Assistant, File Analysis, General Productivity, Presentation Creation, and Spreadsheet Creation. You can see the most popular categories distributed below, but the most common are customer service, content creation, and coding assistant.

Code Assistant

Streamlines the coding process by providing features to generate, review, and manage code efficiently. It aids in writing code, debugging, and understanding complex code structures, making it easier for developers to maintain and improve their software. 

Content Creation and Copywriting
Simplifies the process of creating written content for various professional needs. It assists in drafting articles, blogs, marketing copy, and other written materials, promising to enhance creativity and saving time. 

Customer Service

Provides tools to handle customer inquiries, feedback, and support tickets automatically. It also includes a knowledge base to inform and assist customers, improving overall customer satisfaction. 

General Productivity

Tools that enhance overall productivity and efficiency. Features like time tracking, calendar management, and task prioritization help users manage their time better and increase their work output. 

Email Assistant

Helps organize and manage email inboxes. It offers features to sort, prioritize, and respond to emails. Additionally, it can generate email drafts and responses, aiming to make email management more efficient and less time-consuming. 

File Analysis

Provides in-depth analysis of various file types, including PDFs. It allows users to upload files and receive insights, summaries, or data extractions, making it easier to understand and utilize the information contained in these documents. 

Presentation Creation

Aids in creating professional and compelling presentations. It can generate presentation decks from provided content, aiming for visually appealing and coherent slides that effectively communicate the intended message. 

Spreadsheet Support 

Specializes in handling and interpreting spreadsheet data. It allows users to upload CSV files and other spreadsheet formats, providing analysis and insights, making sense of complex data sets and facilitating data-driven decision-making.

Understanding Risks of AI Adoption

All of these apps clearly have a huge potential to improve the productivity of the workforce, but we should not be blind to the risks. Instead, if we understand the apps that are out there and being used, we can better understand some of the likely security issues and create policies and processes to reduce risk.

Data Leakage

Beyond the risk of sensitive data being included in prompts themselves, forty percent of the apps (400) can involve uploading content, code, files, or spreadsheets for AI assistance. This raises some data leakage questions if employees are uploading corporate files of those types, such as: 

  • How do we know that the data does not contain any sensitive information or PII?
  • Will this cause compliance issues?

In fact, when we surveyed dozens of CISOs, data leakage was the most common concern when it comes to AI adoption. (85.7% of interviewed CISOs listed data leakage as a top concern.)

While in essence, this is not very different from data being sent to other SaaS applications or otherwise out of the enterprise, AI is so new, evolving so quickly and is poorly understood that it is a significant current concern.  In addition, anything that can automate tedious aspects of employee roles will be tempting for them to use.  

Privacy and Data Retention Policies 

Second, every one of these apps will have different privacy and data retention policies that employees are unlikely to assess before proceeding. Worse still, these policies shift and change over time, making it difficult to understand your risk exposure at any point. 

Fortunately, many of the apps we reviewed have clear data usage and retention policies. But there were plenty of exceptions. Leaving this up to your employee’s discretion will likely lead to compliance headaches later down the line, so these apps must be considered as part of a robust third party risk program.  Some applications explicitly give themselves permission to train future models on the data uploaded to them for example. 

Prompt Injection

AI tools built on LLMs are inherently prone to prompt-injection attacks which can cause them to behave in ways they were not designed - such as potentially revealing previously uploaded, sensitive information. This is particularly concerning as we give AI more autonomy and agency to take actions in our environments.  An AI email application with inbox access could start sending out confidential emails, or forward password reset emails to provide a route in for an attacker, for example.

Account Takeover

Finally, what happens if an attacker gains access to an employee’s account with full access to chat history, file uploads, code reviews, and more? Many offered social logins (login via Google, etc), but the option to sign up with an email and password was an option. Of these, very few apps we analyzed required MFA default. Given how frequently passwords are exposed, this raises the potential for accounts to be taken over.  While obtaining one single prompt may not be that interesting, the aggregation of a lot of prompts from a senior employee could give a more comprehensive view of a company’s plans or IP.

Secure By Design? Reducing Risk in the AI Age

As well as the anniversary of ChatGPT, this week also saw a pledge from 16 countries to make AI systems that are “Secure by Design”. While it’s easy to view the lack of teeth with cynicism, let’s be optimistic about the future. We have the opportunity to get ahead of this and consider security from the very start. 

Sure, some of this will come from government pledges and regulations, but a good deal needs to come from businesses and security leaders. There are practical steps security leaders are taking: 

  1. Shadow AI. Get visibility over which apps and “Shadow AI” your employees are using; and understand why they are using it to inform your threat model.  Understand key use cases and value drivers and provide safe alternatives to high-risk applications.
  2. Vendor Assessments. Be curious and demanding of your vendors. Hacker News published a list of questions to ask vendors, which is definitely worth checking out: https://thehackernews.com/2023/11/ai-solutions-are-new-shadow-it.html
  3. Create and Refine Policies. Most companies by now have some sort of AI Policy. Make sure this is iterated and updated in line with how your employees are actually using the tools.  Educating employees and nudging them in the right direction is as critical as ever.
  4. Monitor for Policy Violations. Implement systems to detect policy breaches. While many organizations have rules that stipulate you cannot share sensitive data, few are able to monitor for policy violations. 

AI has advanced significantly since ChatGPT's launch. The pace of change is accelerating, demanding innovative security tools and processes to meet these new challenges. Staying on top of new developments and sharing best practices will be key in navigating this evolving landscape.

Request a demo

Alastair Paterson