Product Updates

Securing Sensitive Data in Perplexity AI

August 27, 2024

Perplexity is an AI-powered search engine that launched in 2022. Rather than building its own models, it focuses on letting users ask questions and receive succinct, cited summaries from reliable sources instead of just a list of links to explore.

Amid a rise in enterprise use, we dig into how Harmonic is helping to prevent sensitive data from being exposed.

The Rise of Perplexity AI

Interest in Perplexity AI has surged recently, with a reported eightfold increase in queries over the last year. According to recent research from Andreessen Horowitz, Perplexity AI has the third-most unique monthly visits of all GenAI web products.

However, it’s the engagement and personas that make it interesting from an enterprise perspective. Andreessen Horowitz’s research found that “Perplexity slightly edges out ChatGPT in visit duration (at over seven minutes) according to Similarweb data, suggesting that users are deeply engaged”. 

According to CNBC’s analysis of Perplexity’s pitch deck, “more than 8 in 10 Perplexity users have an undergraduate degree, while 3 in 10 are in a “senior leadership position” and 65% are in “high-income white-collar professions,” such as medicine, law and software engineering.”

It’s not surprising, therefore, that we have seen businesses investing in Perplexity AI instead of ChatGPT or Copilot.

Detecting sensitive data

Our team has been busy expanding our coverage to prevent sensitive data from being exposed to this application.

Harmonic now detects sensitive data uploaded into Perplexity AI. If interventions are enabled, these will also be displayed to end users. As with our OpenAI ChatGPT, Microsoft Copilot, and Google Gemini coverage, the browser extension intercepts the prompt before it is sent. You can see an example of an intervention below.
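To make the interception step concrete, here is a minimal, hypothetical sketch of how a browser-extension content script could hold a prompt for scanning before it leaves the page. The selectors, the SCAN_PROMPT message, and the scanPrompt helper are illustrative assumptions, not Harmonic's actual implementation.

```typescript
// Hypothetical content-script sketch: pause a prompt submission, ask the
// extension's background worker to scan the text, and only let the prompt
// through if nothing sensitive is flagged. All names here are placeholders.

const PROMPT_SELECTOR = 'textarea';               // assumed prompt input element
const SUBMIT_SELECTOR = 'button[type="submit"]';  // assumed submit button

async function scanPrompt(text: string): Promise<{ blocked: boolean; reason?: string }> {
  // Placeholder: hand the text to the extension's background worker for classification.
  return chrome.runtime.sendMessage({ type: 'SCAN_PROMPT', text });
}

document.addEventListener(
  'click',
  async (event) => {
    // Ignore the synthetic click we re-dispatch after a clean scan.
    if (!event.isTrusted) return;
    if (!(event.target instanceof HTMLElement)) return;

    const submit = event.target.closest<HTMLButtonElement>(SUBMIT_SELECTOR);
    if (!submit) return;

    const promptBox = document.querySelector<HTMLTextAreaElement>(PROMPT_SELECTOR);
    if (!promptBox || !promptBox.value.trim()) return;

    // Hold the submission until the scan verdict comes back.
    event.preventDefault();
    event.stopPropagation();

    const verdict = await scanPrompt(promptBox.value);
    if (verdict.blocked) {
      // Surface an intervention to the end user instead of sending the prompt.
      alert(`This prompt appears to contain sensitive data (${verdict.reason ?? 'policy match'}).`);
      return;
    }

    // Nothing flagged: re-dispatch the click so the prompt is sent as normal.
    submit.click();
  },
  { capture: true },
);
```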

The beauty of Harmonic’s data protection models is that they assess considerably more context and can detect sensitive data in highly unstructured formats. This is in stark contrast to the rigid, often ineffective rules possible with the regular expressions used by conventional tools.
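As a rough illustration of that limitation (and not a description of Harmonic's models), a regular-expression rule only fires on narrowly formatted patterns and misses the same information once it is phrased loosely:

```typescript
// A typical regex-style DLP rule: match US Social Security numbers in the
// canonical 123-45-6789 layout.
const SSN_RULE = /\b\d{3}-\d{2}-\d{4}\b/;

const structured = 'Candidate SSN: 123-45-6789';
const unstructured = 'her social is one two three, forty-five, sixty-seven eighty-nine';

console.log(SSN_RULE.test(structured));   // true  – the rigid pattern matches
console.log(SSN_RULE.test(unstructured)); // false – same data, different phrasing, rule misses it
```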

As with other alerts, sensitive data detections related to Perplexity will be displayed in the Alerts section of the portal. Here, security teams can:

  1. See what type of sensitive data was input
  2. View the full prompt
  3. Observe what actions the end user took
  4. View additional context on the user and app

Nudging end users towards secure use

By continuing to expand our coverage for data protection, we help security teams adopt GenAI securely while giving end users the right guardrails to use these tools safely.

To learn more about Harmonic’s data protection models, please reach out to arrange a meeting: https://www.harmonic.security/get-demo

Request a demo

Concerned about the data privacy implications of Generative AI? You're not alone. Get in touch to learn more about Harmonic Security's approach.
Michael Marriott