Company Updates

Harmonic Security: New beginnings

April 15, 2024

Today I am delighted to announce the launch of Harmonic Security with a $7m seed round, helping companies to adopt Generative AI without risking the security and privacy of their data.

I’d like to thank our lead investors Ten Eleven Ventures who alongside Storm Ventures are backing us for a second time. Mark, Alex and Ryan were some of my biggest supporters in building Digital Shadows and it’s an honor to partner with them again. My co-founder and former Digital Shadows CTO Bryan Woolgar-O’Neil has helped me assemble an incredible founding team.

So why jump straight back into the startup world so soon after the Digital Shadows journey?

First, from my time running Digital Shadows I could see every day how much sensitive material from our clients was leaking onto the internet one way or another. Customer data, proprietary code, IP, corporate documents and strategy papers continuously make their way out of organizations. As such, it’s been clear to me for some time that existing data protection technology has failed.

Second, the arrival of the Generative AI wave has brought attention back to data protection like never before, and it brings with it some incredible opportunities for innovation in security. While the technology we are building can address any data leakage scenario, we are focusing initially on the risks of AI adoption, since we hear that concern come up again and again.

The proliferation of generative AI tools

I should say, I am a huge believer in the potential for Generative AI to do good in the world and I broadly buy into studies like the one from McKinsey forecasting that it will add $4.4 trillion annually to the global economy. However, it also brings new risks.

Since ChatGPT's launch, enterprises have worried about their data ending up in it and being exposed in some way. But ChatGPT is of course the thin end of the wedge when it comes to generative AI foundational models, and it is in fact better protected than most:

  • Many other commercial models are being developed by the likes of Google, Anthropic, Cohere and InflectionAI amongst many others
  • There is a plethora of open source models popularized on the model sharing site Hugging Face, including Meta’s Llama model amongst thousands of others
  • Many national governments are building models; Reuters reports, for example, that Chinese organizations launched 79 large language models (LLMs) in the country over the past three years.

This is just the beginning. These foundational models have only emerged in earnest since the beginning of 2023, and on top of them there is already an application ecosystem of more than 8,000 third-party apps. These apps serve a variety of purposes, with many targeting different business functions in enterprises. They also span a wide range of security maturity levels: some sites, for instance, encourage accounting employees to upload spreadsheets of confidential financial data to help them write annual reports, likely without considering security accreditations or compliance standards. Silicon Valley is funding a new wave of startups, many targeting enterprises with product-led growth approaches and security as an afterthought.

Legacy data protection tools fall short

The staggering growth of this ecosystem means that CISOs are struggling to ‘get their arms around the problem’. They lack visibility into which AI services their employees are using. They are under pressure to ‘be innovative’ and enable the enterprise to adopt AI rather than blocking everything. They are worried about security issues with AI but often lack the time and resources to get on top of the problem. Some are already rolling out AI security policies, but even they struggle to monitor compliance.

Even the most tech-savvy companies, such as Apple, have banned the use of generative AI, but this is likely a temporary position that will prove unsustainable over the long term.

Worse, organizations are finding that existing data protection tools are not up to the challenge. Legacy rules-based engines are still prevalent: a failed 20-year-old technology set. The streets are littered with companies whose half-finished attempts at data labeling and classification are doomed to failure. Existing approaches do not address the risks from AI, and fundamentally very few organizations are resourced well enough to deploy them effectively. Finally, few security teams have deep AI skill sets in-house or are likely to hire them in the near term.

How can companies adopt AI without sacrificing data security?

Companies need a way to see all the different AI services they are using and put appropriate controls around them to keep their data security and privacy intact. They need to do all of this with their existing, stretched team while enabling AI innovation for the rest of the business.

Harmonic Security does just that:

  • Visibility: Harmonic provides a complete picture of AI adoption in the enterprise, including which in-use services pose significant risks to security or privacy.
  • Constitutional Data Protection: Our unique approach to data protection allows organizations to specify a set of principles that govern their data, coupled with automated human-like decision making to keep their data secure. This is the revolution the data protection industry needs.
  • Automation and education: Harmonic’s virtual analyst reduces load on the existing security team by educating employees and resolving incidents automatically.

This turns the security team into innovation champions who can allow the business to embrace this technology revolution while sleeping well at night, confidently demonstrating to leadership that their sensitive data is protected.

Speak with us

If you would like to find out more about what we are doing, please do schedule a briefing here:

https://www.harmonic.security/book-a-meeting

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Alastair Paterson