Today we’re announcing the availability of Harmonic Protect – a new way of approaching data protection that is more suited to the GenAI era.
Let’s dig in.
The Background: Legacy Data Protection has Failed
Data privacy risk is the main concern holding back Generative AI adoption. And for good reason. Security teams fear that if they try to stop sensitive data from leaking into GenAI tools, they will add yet another workload to an already unmanageable list of responsibilities.
However, this all stems from an outdated approach to data protection, which has told you that you must:
- Label all your data;
- Create hundreds of complex regex rules;
- Aim to block users doing “bad” things.
In reality, though:
- Labeling is too hard;
- Rules are too noisy;
- Security teams are overwhelmed so they go into “monitor only” mode;
- Employees use the tools regardless, circumventing controls to do their jobs.
The result? Sensitive data goes undetected, while talented people waste their time clicking through meaningless PII false positives.
This was all true before the emergence of GenAI. Now it's untenable.
Harmonic’s Fresh Approach
Building data protection for today’s challenges gives us a chance to rethink things.
Harmonic Protect empowers security teams to safeguard sensitive data without extensive data labeling or complex rule-setting. Instead, we enable them to intervene with the end user at the point of data loss, preventing sensitive data from leaking while coaching end users toward compliance.
Turnkey Solution
With Harmonic, it’s the equivalent of “hitting the easy button for data protection”. One CISO has coined it “zero touch data protection”.
Instead of having to create and maintain hundreds of regex rules, security teams can simply toggle on pre-trained data protection models for PII, payroll data, source code, sensitive documents, and more. In doing so, we save teams from having to spend time labeling all their data. It’s as simple as turning on a switch.
What’s really cool about this is that we don’t train on customer data; the models are pre-trained on our unique dataset of publicly available data.
Directly Engage with End Users
Security teams are realizing that blocking GenAI tools is not enough; employees will use these tools regardless. By letting end users work with these tools while nudging them away from exposing sensitive data, security teams can safely enable the business’s AI use.
Harmonic’s models are exceptionally fast, making accurate assessments within 200 milliseconds. This is 300 times quicker than if you were to use an LLM like ChatGPT.
This means that Harmonic can intercept the data before it is leaked, and provide nudges to the end user. These nudges are customizable, can link to company policies, and can re-direct to approved applications.
Automate Workflows
Nudging users is just the beginning. With our webhooks, you can create additional workflows in security automation platforms. For example, if an employee is repeatedly uploading sensitive data to ChatGPT, you can automatically send them tailored security training.
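To make the idea concrete, here is a minimal sketch of what a webhook consumer might look like. The event field names (`type`, `user`, `destination`) and the three-strikes threshold are illustrative assumptions for this example, not Harmonic’s actual webhook schema:

```python
from collections import Counter

# Tracks repeat events per (user, destination) pair.
upload_counts = Counter()

# Assumed policy for this sketch: three detections trigger training.
TRAINING_THRESHOLD = 3

def handle_harmonic_event(event: dict):
    """Process one webhook event and return an action string for an
    automation platform, or None if no action is needed.

    The payload shape here is hypothetical -- adapt it to the real
    webhook schema documented for your deployment.
    """
    if event.get("type") != "sensitive_data_detected":
        return None
    key = (event["user"], event["destination"])
    upload_counts[key] += 1
    if upload_counts[key] >= TRAINING_THRESHOLD:
        return f"assign_training:{event['user']}"
    return None
```

A security automation platform would call something like this on each incoming webhook and route the returned action (e.g. assigning a training module or posting a Slack reminder) through its own connectors.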
To discover more, check out our Tines story: https://www.tines.com/library/stories/1247085/coach-users-to-use-ai-safely-with-harmonic-security-and-slack
Summary
With Harmonic, you can confidently support the secure adoption of Generative AI without overwhelming the security team.
To learn more about Harmonic Protect, arrange a demo or grab some time to meet us at Black Hat! We’d love to show you what it’s all about!