Company Updates

Harmonic Origins and the Failures of Legacy Data Protection

July 11, 2024

The origin story: a decade of data leaks

In 2011, it became clear that the traditional enterprise perimeter was vanishing, and data leaks soon became rampant. The company we started 13 years ago, Digital Shadows, detected this leaked data on behalf of more than 500 organizations. By 2022, we were detecting billions of leaked files every week, most of which were exposed through human error.

Throughout that journey, it was clear that data protection had failed. 

Digital Shadows was acquired in July 2022, and four months later, ChatGPT launched, exacerbating data leakage and shadow IT concerns. Together with my co-founder, Bryan Woolgar-O’Neil, we set about creating a company that could offer a fresh approach to data protection. 

Bryan, the former CTO at Digital Shadows, and I also shared a history in the same band. Naturally, we couldn’t resist the musically charged “Harmonic” name.

Bryan and me in the glory days

Why has data protection failed?

We’ve spent billions of dollars and decades of effort, and data protection still doesn’t work. I’ve been lucky enough to speak with a number of security leaders over the past decade to try to understand why. 

Here’s my best understanding:

  1. Labeling is too hard. It takes too long, lacks executive appetite, and will never be comprehensive enough to fully work.
  2. Regex is outdated. Legacy pattern-matching techniques like regex don’t do an effective job of detecting sensitive data, especially in unstructured formats (see the short sketch after this list).
  3. Security teams can’t cope. The false positive rate is extraordinarily high, with some CISOs estimating upwards of 99% false positives. The sad reality is that most security teams are too busy, which means that alerts get ignored.
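
To make that second point concrete, here is a minimal sketch (the patterns and messages are made up for illustration): classic DLP-style regexes catch well-formed identifiers, but they have nothing to match when the sensitive content is plain-English business context.

```python
import re

# Two classic DLP-style patterns (illustrative, not an exhaustive rule set)
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD = re.compile(r"\b(?:\d[ -]?){13,16}\b")

structured = "Customer SSN: 123-45-6789, card 4111 1111 1111 1111"
unstructured = ("The Acme acquisition closes Friday at a forty million dollar "
                "valuation and the CEO's severance terms are attached - keep this quiet.")

for text in (structured, unstructured):
    flagged = bool(SSN.search(text) or CARD.search(text))
    print("SENSITIVE" if flagged else "clean", "->", text[:50])

# The structured example is flagged, but the genuinely sensitive unstructured
# message passes: there is no pattern for context, intent, or business meaning.
```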

Obviously, DLP and other legacy tools still have a role to play: they’re important for adhering to the growing list of compliance and regulatory requirements. Similarly, these tools can provide valuable context as part of an insider risk investigation. 

But – let’s face it – they are not doing a good job of protecting sensitive data from leaking out. 

Shifting mindsets to be user-centric

Technology alone won't solve these challenges; we need a mindset shift from blocking users to securely enabling them. 

Gartner reports that 69% of employees intentionally bypass cybersecurity guidance. Most aren't malicious—they're simply trying to work more efficiently. If we don't meet users where they are, we risk driving them further away from existing controls. 

Everyone claims to be customer-obsessed, and rightly so. We're extending that obsession to the end user, ensuring security teams have what they need to provide the best possible user experience.

Expert specialization versus generic large language models

Of course, the emergence of ChatGPT and its peers presents an opportunity. These models can understand unstructured data in a way we’ve never been able to before. But LLMs aren’t the solution for everything. 

For example, if you were to feed every piece of data leaving the business to GPT-4o, it might do a reasonable job of judging whether that data is sensitive, provided you had the time to set it up right. Unfortunately, it’s expensive and slow, so it doesn’t work for security teams who want to provide a seamless, positive experience for end users. 
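To give a sense of what that brute-force approach looks like, here is a rough sketch using the public OpenAI Python SDK; the prompt and model choice are illustrative placeholders, and this is not how Harmonic’s product works.

```python
from openai import OpenAI  # assumes the openai>=1.x SDK and an API key in the environment

client = OpenAI()

def is_sensitive(text: str) -> bool:
    """Ask a general-purpose LLM whether a piece of outbound text looks sensitive."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer SENSITIVE or OK: does the following text contain "
                        "confidential business, customer, or personal data?"},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("SENSITIVE")

# Every outbound email, file upload, and AI prompt would need a network round trip
# like this, so token costs and added latency scale with all traffic leaving the business.
```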

We have trained smaller, more specialized models and made them incredibly fast – 300 times faster than ChatGPT and 90% more accurate. Instead of harvesting customer data to train these models, we’re pre-training them with our unique dataset.

We will soon be releasing a suite of turnkey expert models that can tell you if unstructured data is sensitive in near real time. 

Moving quickly with practitioner guidance

I’m fortunate to have a phenomenal team with me. Of the 23 Harmonic employees, 20 previously worked together at Digital Shadows. I like to think this gives us a bit of a head start, as we know a thing or two about building security products for enterprises. I cannot believe how quickly the team has been able to create an enterprise-ready offering that’s already deployed to tens of thousands of end users. 

At the same time, we need to stay agile and responsive to the needs of security teams. From day zero, our conversations with security practitioners have shaped what we build and how we build it. I’m fortunate to have an incredible network of CISOs as advisors and mentors to help guide our product. A great example of this is Mark Sutton, CISO of Bain Capital, who joined our board last month.

Join us on the journey

Our commercial launch is less than a week away, so keep your eyes peeled for some exciting announcements from the team.

If you’d like to learn more about what we’re building, request a demo or grab some time to speak with us at Black Hat in August.

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Alastair Paterson