We collaborate with insurance companies worldwide to support the secure use of GenAI by their employees. In this blog, we explore some of the top AI applications in the insurance industry and explain how we help minimize the risk of sensitive data being exposed when using GenAI tools.
Popular AI use cases for insurance
The insurance industry is poised to benefit significantly from advances in AI; in fact, insurers are among the more advanced adopters. While the industry isn’t always known for fast adoption of new technologies, AI adoption in insurance has grown by 25% in the past year. By 2025, an estimated 95% of customer interactions in the insurance industry will be facilitated by AI.
AI can significantly improve productivity in underwriting, claims processing, customer service, and fraud detection.
We can glean specific AI case studies from a helpful roundup published by Google, “Real-world gen AI use cases from the world's leading organizations”, on the Google Cloud Blog. All of the examples are, of course, specific to the use of Google products, but they help to bring some of these use cases to life:
- Five Sigma “created an AI engine which frees up human claims handlers to focus on areas where a human touch is valuable, like complex decision-making and empathic customer service. This has led to an 80% reduction in errors, a 25% increase in adjustor’s productivity, and a 10% reduction in claims cycle processing time.”
- HDFC ERGO “built a pair of insurance ‘superapps’ for the Indian market. On the 1Up app, the insurer leverages Vertex AI to give insurance agents context-sensitive ‘nudges’ through different scenarios to facilitate the customer onboarding experience.”
- Loadsure “utilizes Google Cloud's Document AI and Gemini AI to automate insurance claims processing, extracting data from various documents and classifying them with high accuracy. This has led to faster processing times, increased accuracy, and improved customer satisfaction by settling claims in near real-time.”
- The Trumble Insurance Agency “used Gemini for Google Workspace to significantly improve its creativity and the value that it delivers to its clients with enhanced efficiency, productivity, and creativity.”
- Hiscox “used BigQuery and Vertex AI to create the first AI-enhanced lead underwriting model for insurers, automating and accelerating the quoting for complex risks from three days down to a few minutes.”
Risks of exposing sensitive data in AI tools
All of these advances are incredible and can lead to real growth for insurance companies. But there are some risks to manage. According to ESG, 82% of security leaders are concerned about data leakage as employees increasingly use generative AI.
For the insurance industry, there’s plenty of data that needs protecting.
There are certain types of information that will be sensitive for most companies, such as financial records, customer data, and intellectual property. In addition, any exposed personal health information (PHI) or claims data with sensitive financial details would fall under a myriad of data privacy regulations.
There is also data specific to insurers’ business models, such as claims histories, premium calculations, and coverage details.
While traditional data loss prevention (DLP) was designed to detect highly structured data like credit card and Social Security numbers, it cannot reliably detect and prevent these more unstructured types of data from leaking.
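To illustrate the gap, here is a minimal sketch of the pattern-matching approach traditional DLP relies on. The regex is a standard US Social Security number pattern, and both sample strings are invented for illustration:

```python
import re

# Classic DLP rules match highly structured identifiers with patterns.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

structured = "Customer SSN: 123-45-6789"
unstructured = ("Claimant reported water damage after the March storm; "
                "prior claims history and premium adjustments were discussed.")

print(bool(SSN_RE.search(structured)))    # True: the structured identifier is caught
print(bool(SSN_RE.search(unstructured)))  # False: the sensitive claims narrative slips through
```

The second string is exactly the kind of unstructured claims data described above: clearly sensitive to an insurer, but with no fixed pattern for a regex to anchor on.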
Specialized language models for detecting exposed insurance data
The Harmonic team has been working with AI and security teams at insurers across the US and UK to build a better way to protect data related to their industry. This has involved building small language models that are trained specifically to detect data like insurance claims. And because these models are pre-trained, there is no need to train on customer data.
The insurance claim model works with a simple, plain-English data definition. This explains what type of data is within scope, what type of data must be present, and why it is important to the business.
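As a purely hypothetical illustration (not Harmonic’s actual format), a plain-English data definition covering those three elements might look something like this:

```text
Name: Insurance claims data
In scope: Claims narratives, claim numbers, policyholder details,
  settlement amounts, and adjuster notes shared in GenAI prompts.
Must be present: A reference to a specific claim or policyholder,
  combined with personal or financial detail.
Why it matters: Claims data contains regulated personal and financial
  information; exposure creates privacy, regulatory, and competitive risk.
```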
Because these are small, lightweight models, they can also be placed “in-line”. This enables security teams to nudge end users at the point of potential data loss, taking the burden off the security team. And because the models carry context on why the data is sensitive, the rationale behind blocking a request is clearly explained to the end user.
One CIO, a Harmonic customer, estimated 75% fewer false positives compared to traditional DLP: “It would take forever with our current tooling to try and get feature parity or even come close to it. We’d be writing regexes forever.”
How insurance companies use Harmonic to prevent GenAI data leaks
We put together a 3-minute video to show how insurance companies work with Harmonic. This includes a demo of our “insurance claims data” model and how to operationalize it by enforcing your GenAI usage policy.
If you’d like to explore a partnership with Harmonic, find a time to speak with our team: https://www.harmonic.security/get-demo