Another blog on DeepSeek!? Haven’t we had enough?
Well, here’s the thing—DeepSeek has completely disrupted how organizations approach GenAI governance. It has forced teams to rethink their AI policies and implementation strategies, showing just how flawed many current approaches are.
The AI Usage Policy Implementation Mess
Most organizations we speak to now have a) some form of AI policy and b) a cross-functional team/committee/effort to oversee their AI strategy.
Where this gets messy is implementation, which tends to fall into one of two camps:
- ‘Block Everything’ Approach.
Companies block GenAI apps entirely using CASB (cloud access security broker) categories, which struggle to keep up with new tools and services. What’s more, employees still want to use them, so they shift to personal devices, mobile apps, or other workarounds. And before long, security teams realize that most SaaS applications already have AI baked in, making the ban somewhat meaningless.
- ‘Trust and Hope’ Approach.
Organizations allow AI tools but warn employees not to enter sensitive data. The issue? It’s all based on trust. And then along comes DeepSeek, raising concerns about data security—especially when user data ends up being stored in China.
DeepSeek has made it clear that neither of these strategies truly works. Block everything? Employees will find a way around it, and you will always be reactive. Allow AI but rely on good intentions? Sensitive data will inevitably leak.
The Rise of DeepSeek: A Timeline
We’ve been talking incessantly about DeepSeek for the past two to three weeks, but its chatbot has been around since November 2023. DeepSeek is not new. It should have already been on the radar of any AI governance program, alongside thousands of other AI apps.
Let’s recap how this all came about.
- July 2023: DeepSeek was founded, funded and backed by the Chinese quantitative hedge fund High-Flyer.
- November 2023: DeepSeek released its first AI model, DeepSeek Coder, focused on code. It soon introduced its general-purpose large language model, and an alpha of its web-based chatbot became available.
- May 2024: DeepSeek launched DeepSeek-V2, praised for its strong performance and lower training cost.
- September 2024: DeepSeek released DeepSeek-V2.5, which merged DeepSeek-V2 Chat and DeepSeek Coder V2 into a unified model, improving performance in general capabilities and coding tasks.
- December 2024: The company unveiled DeepSeek-V3, a mixture-of-experts model designed for multi-domain language understanding with cost-effective performance.
- January 20, 2025: DeepSeek released the DeepSeek-R1 model, an open-source AI model designed for advanced reasoning, problem-solving, and real-time decision-making.
- January 27, 2025: Following the release of DeepSeek-R1, the company's AI assistant app surpassed ChatGPT as the most-downloaded free app on the U.S. iOS App Store. This rapid rise contributed to a significant sell-off in tech stocks, with Nvidia's share price dropping roughly 17% in a single day.
- January 29, 2025: Cybersecurity firm Wiz discovered a publicly accessible database belonging to DeepSeek, exposing over a million lines of sensitive data, including chat histories and API keys. Wiz promptly notified DeepSeek, which secured the database within an hour.
DeepSeek’s Data Sharing Policies
One of the biggest issues with DeepSeek isn’t just its technology; it’s the company’s data collection and privacy policies. (To be clear, we’re not talking about the open-source models you can find on Hugging Face.) DeepSeek’s privacy policy explicitly states that user data is used to train its models and that this data is stored in China.
And it’s not just DeepSeek: other AI tools, including Doubao, Ernie Bot, Kimi (from Moonshot AI), and Qwen Chat, have similarly vague or concerning data retention policies. But this isn’t just a “China problem.” Plenty of AI tools in the U.S. also train on user data with questionable transparency.
This growing concern has fueled interest in AI Asset Inventories, largely driven by emerging regulations like the EU AI Act. However, most organizations still rely on manual legal reviews to identify risky AI apps—a slow and unreliable process, given how quickly new tools emerge.
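To illustrate what moving beyond manual legal review can look like, here is a deliberately minimal sketch that matches egress proxy logs against a catalog of known GenAI domains. The catalog, log format, and column name are all assumptions for the example; a real inventory needs a continuously updated catalog and far richer signals.

```python
import csv
from collections import Counter

# Hypothetical catalog of known GenAI domains; a real one would be
# continuously updated and far larger.
GENAI_CATALOG = {
    "chat.openai.com": "ChatGPT",
    "chat.deepseek.com": "DeepSeek",
    "gemini.google.com": "Gemini",
}

def discover_ai_apps(proxy_log_path: str) -> Counter:
    """Count requests to known GenAI apps in a CSV proxy log
    that has a 'host' column (an assumed format)."""
    hits = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            app = GENAI_CATALOG.get(row["host"].lower())
            if app:
                hits[app] += 1
    return hits

# Example usage: surface any GenAI apps seen in today's logs.
# for app, count in discover_ai_apps("proxy_today.csv").most_common():
#     print(f"{app}: {count} requests")
```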
Should you block DeepSeek?
Yes, probably. And technically, it’s not hard to do.
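In practice, “blocking” usually amounts to a domain deny rule in your secure web gateway or DNS filter. Here is a minimal sketch of that matching logic, assuming a hand-picked domain list; check your own telemetry for the real set.

```python
# Illustrative only: the kind of domain-deny rule a secure web gateway
# or DNS filter applies. The domain list is an assumption, not exhaustive.
BLOCKED_DOMAINS = {
    "deepseek.com",
    "chat.deepseek.com",
    "api.deepseek.com",
}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname matches a blocked domain or subdomain."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == domain or hostname.endswith("." + domain)
        for domain in BLOCKED_DOMAINS
    )

assert is_blocked("chat.deepseek.com")
assert not is_blocked("example.com")
```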
But the real question is: How will you protect employees from the next DeepSeek?
Employees have a way of bypassing bans. Many organizations thought they had successfully blocked ChatGPT, only to discover that employees had found workarounds.
So, rather than just blocking individual tools, organizations need to focus on education—explaining why certain tools are restricted and offering safer alternatives.
Are your employees using DeepSeek?
Despite its sudden rise to the top of the App Store, DeepSeek’s usage isn’t necessarily widespread. However, constant coverage creates a “Streisand Effect”: the more we talk about it, the more people become curious and try it.
The core issue remains: organizations need an AI asset inventory that doesn’t just rely on procurement teams spotting red flags in privacy policies. Simply telling employees “don’t put sensitive data in AI tools” isn’t an effective strategy.
A more effective solution (sketched in code below) includes:
- A continuous AI asset inventory tracking GenAI tools and GenAI-enabled apps
- Controls to block sensitive data from being entered into AI applications
- Custom policies tailored to users, data types, and applications
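To make the second and third points concrete, here is a minimal sketch (not Harmonic’s implementation) of the kind of policy check such controls perform before a prompt reaches an AI app. The regex patterns and policy structure are illustrative assumptions only.

```python
import re

# Toy detectors for sensitive data types; real controls use far more
# robust detection than simple regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

# Hypothetical per-app policy: which data types are blocked for which
# destination. Unknown apps default to blocking everything.
APP_POLICIES = {
    "deepseek": {"credit_card", "api_key"},
    "approved-internal-llm": set(),  # nothing blocked for a vetted app
}

def check_prompt(app: str, prompt: str) -> list[str]:
    """Return the blocked data types found in a prompt bound for an app."""
    blocked_types = APP_POLICIES.get(app, set(SENSITIVE_PATTERNS))
    return [
        name for name in blocked_types
        if SENSITIVE_PATTERNS[name].search(prompt)
    ]

violations = check_prompt("deepseek", "my key is sk-abcdef1234567890XYZ")
if violations:
    print(f"Blocked: prompt contains {', '.join(violations)}")
```

In a real deployment, a check like this sits inline (in a browser extension or proxy) and ties decisions to user, data type, and application context rather than a static dictionary.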
Protect Sensitive Data from Leaking into DeepSeek with Harmonic Security
Want to learn how to track AI assets, safeguard sensitive data, and enforce AI policies effectively?
Check out this 3–4 minute overview of how Harmonic provides continuous AI asset monitoring—including apps like DeepSeek—and how it helps organizations protect data and enforce policies with precision.