In this blog, discover how to transform vague AI warnings into robust, secure controls that empower your team and safeguard sensitive data. We explore how clear, enforced real-world policies can drive innovation and efficiency in the insurance industry. Dive in to uncover the roadmap to secure, AI-driven success.
The Insurance Industry and AI Governance
Just like other enterprises, many insurance companies provide enterprise licenses for AI tools such as ChatGPT, Copilot, and other generative AI applications. However, often their only directive to employees is a vague warning: "Don’t upload sensitive data." This approach fails to acknowledge how employees actually interact with AI tools and does little to mitigate real risks. Without structured guidance, security controls, and training, employees may unknowingly expose confidential data, bypass enterprise policies, or develop workarounds that put sensitive customer and company information at risk.
A more detailed, enforceable, and user-centric AI usage policy is essential. Rather than simply restricting AI use or issuing blanket bans, insurance companies must develop policies that reflect real employee behavior, offer clear guidelines, and implement sensible security controls. This will not only reduce the risk of data leaks but will also foster a secure and compliant AI-driven innovation environment.
Step 1: Understand How Employees Actually Use AI (and Why Blocking Doesn’t Work)
Employees in the insurance industry turn to AI for efficiency, using it to draft emails, summarize documents, generate policy comparisons, or analyze claims data. In a high-pressure, customer-facing industry like insurance, time is money, and AI offers a way to reduce manual effort and increase productivity.
However, when AI tools are blocked at the enterprise level, employees don’t simply stop using them—they find alternative ways to access these tools.
Shadow AI—the use of AI applications outside of enterprise approval—has become a widespread issue. A staggering 71-78% of employees admit to using AI tools without official authorization. Many of them log into personal AI accounts or use free-tier versions of AI models, bypassing security controls. In fact, 90% of AI adoption in the workplace occurs through personal accounts, meaning companies have no visibility into how AI is being used, what data is being shared, or whether proprietary information is at risk.
The consequences of shadow AI use in insurance can be severe. When employees input claims data, customer PII, underwriting notes, or confidential risk models into AI systems outside enterprise control, the information may be stored, processed, or even used to train external AI models. This could lead to non-compliance with HIPAA, GDPR, and other privacy regulations. Furthermore, insurers risk regulatory penalties and reputational damage if sensitive customer data is exposed.
Instead of banning AI outright, insurers must adopt a governance approach that acknowledges AI's value while addressing security risks. This means understanding AI usage patterns, developing clear guidelines, and implementing real-time monitoring to ensure compliance without disrupting workflows.
Step 2: Get an AI Asset Inventory to Understand the Problem
Before insurers can enforce AI policies, they need to understand the scope of AI adoption within their organizations. Many companies struggle with AI governance because they lack visibility into how AI tools are being used. Conducting an AI asset inventory is the first step toward informed policy creation.
An AI usage audit should identify both sanctioned and unsanctioned AI applications within the organization. This includes analyzing network logs, employee surveys, and third-party software integrations to determine where AI is being used. A thorough audit will provide insight into how different teams and roles are leveraging AI—whether for claims processing, underwriting, fraud detection, or customer service automation.
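As a minimal, illustrative sketch of the log-analysis part of such an audit (not a complete inventory tool), the snippet below scans an exported proxy or DNS log for traffic to well-known generative AI domains and tallies it per user. The domain list, the CSV column names, and the file name are assumptions for the example; a real audit would pull from your own gateway or SIEM exports.

```python
import csv
from collections import Counter

# Hypothetical list of generative AI domains to look for; extend it with
# whatever tools are relevant to your environment.
AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "copilot.microsoft.com",
    "gemini.google.com",
    "claude.ai",
}

def inventory_ai_usage(log_path: str) -> Counter:
    """Count requests to known AI domains per (user, host) from a CSV proxy log.

    Assumes each row has at least 'user' and 'host' columns; adjust to match
    your proxy or DNS export format.
    """
    usage = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                usage[(row.get("user", "unknown"), host)] += 1
    return usage

if __name__ == "__main__":
    # Print the top 20 user/tool combinations as a starting point for the inventory.
    for (user, host), hits in inventory_ai_usage("proxy_log.csv").most_common(20):
        print(f"{user:20} {host:30} {hits}")
```

Even a rough tally like this quickly shows which teams are already relying on which tools, which is exactly the behavioral picture the policy needs to reflect.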
Once an inventory is established, teams should create an AI governance committee, if one does not already exist. This committee should include representatives from IT security, compliance, data privacy, legal, and business operations teams. Security teams should take a leading role in identifying risk levels, ensuring compliance with industry regulations, and defining the organization’s AI strategy.
Regulatory considerations must also be factored into the inventory process. The EU AI Act mandates AI asset inventories, requiring organizations to document AI usage, assess risks, and ensure compliance. Insurance firms operating internationally must align their AI governance strategies with evolving regulatory requirements.
Step 3: Build a Policy That Reflects Real Usage
A common mistake insurers make is adopting generic AI policies that don’t align with how employees actually use AI. Many companies rely on templates that outline broad restrictions but fail to provide practical guidance. This leads to confusion, noncompliance, and employees circumventing policies to continue using AI tools.
A robust AI policy should be tailored to the company’s specific workflows and risks. It should clearly define acceptable AI use cases, specify approved tools, and outline strict data-sharing guidelines.
Instead of vague directives like “don’t upload sensitive data,” policies should provide explicit instructions. For example, a claims adjuster should know whether they can use AI to summarize claim reports, which tools are approved for that task, and that raw customer data must never be entered. Similarly, underwriters should be aware of how AI can assist with risk modeling but must avoid inputting proprietary actuarial formulas into third-party tools.
Data sharing rules should also be clearly articulated. Employees need to understand what data classifications are considered sensitive, whether AI-generated content is subject to compliance review, and what safeguards must be in place before using AI-generated outputs in official communications.
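To make those data sharing rules concrete, a policy can be backed by a lightweight pre-submission check. The sketch below is illustrative only: it flags a few common PII patterns (US Social Security numbers, email addresses, card-like numbers) before a prompt is sent to an approved AI tool. A production control would rely on an enterprise DLP engine or AI gateway with validated detectors rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; a real control would use an enterprise DLP
# engine with validated detectors and your own data classifications.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the claim for John Doe, SSN 123-45-6789, filed last week."
findings = check_prompt(prompt)
if findings:
    print(f"Blocked: prompt contains {', '.join(findings)}. Remove or mask before submitting.")
else:
    print("Prompt passed the basic sensitive-data check.")
```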
Policies should also provide employees with a simple and transparent process for requesting new AI tools. If a department wants to adopt a new AI-powered risk assessment tool, there should be an established approval workflow that involves security vetting and compliance checks. This prevents employees from adopting unsanctioned tools and encourages responsible AI adoption.
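The approval workflow itself can stay simple. Below is a hypothetical sketch of the request record and review stages such a workflow might track; the field names and stages are assumptions for illustration, not a prescribed process, and most organizations would implement this in their existing ticketing or GRC platform.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    SUBMITTED = "submitted"
    SECURITY_REVIEW = "security_review"
    COMPLIANCE_REVIEW = "compliance_review"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AIToolRequest:
    tool_name: str
    requested_by: str
    business_use_case: str
    data_classifications: list[str]   # e.g. ["claims", "customer_pii"]
    stage: Stage = Stage.SUBMITTED
    notes: list[str] = field(default_factory=list)

    def advance(self, next_stage: Stage, note: str) -> None:
        """Move the request to the next review stage and record the reason."""
        self.notes.append(f"{self.stage.value} -> {next_stage.value}: {note}")
        self.stage = next_stage

# Hypothetical example: underwriting requests a new risk-modeling tool.
request = AIToolRequest(
    tool_name="RiskScorer AI",
    requested_by="underwriting",
    business_use_case="Portfolio-level risk modeling",
    data_classifications=["aggregated_policy_data"],
)
request.advance(Stage.SECURITY_REVIEW, "Vendor security questionnaire sent")
```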
Step 4: Conduct Interactive, Ongoing Training
For a successful governance program, insurers need to implement structured AI training programs. Unfortunately, this is often one of the most overlooked areas.
Policies alone won’t change behavior—effective training is essential for ensuring AI governance compliance. Yet, many still treat AI training as an afterthought, offering click-through compliance warnings that employees ignore. Real training must be engaging, role-specific, and continuous.
A successful AI training program should include interactive workshops that educate employees on AI risks, security controls, and data handling best practices. Training should be customized based on job function—claims processors, underwriters, and customer service representatives all interact with AI differently and require tailored guidance.
Step 5: Enforcement and Sensible Controls
With an approved AI policy in place, insurers must implement effective security controls to ensure compliance while enabling innovation. Employees want to use AI but need clear guidelines to avoid exposing customer data.
One customer of Harmonic found that, after adding controls, AI use actually increased by 300% while sensitive data exposure decreased by 72%.
When done effectively, AI controls do not hinder usage; rather, they increase adoption by providing employees with confidence in secure AI workflows.
Controls will differ for every organization, but there are some common denominators:
- Blocking risky AI platforms – Prevent data uploads to unapproved AI sites, particularly those hosted in jurisdictions with weaker data protections, such as China.
- Preventing sensitive data uploads to personal accounts – Ensure that AI interactions occur within enterprise-approved environments rather than free-tier AI models.
- Redirecting users to sanctioned AI tools – Provide direct, seamless access to approved AI applications that meet compliance and security requirements.
- Monitoring AI usage and engaging employees proactively – Track unauthorized AI interactions and reach out to employees to understand their needs. If a team relies on an unapproved AI tool, security teams should investigate and provide an enterprise-sanctioned alternative.
By implementing these controls, insurers can establish a governance framework that enables safe AI adoption while protecting sensitive data and maintaining regulatory compliance.
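As a rough illustration of the first three controls above, the sketch below shows allowlist-based egress logic that permits sanctioned AI tools, redirects users from recognized but unapproved ones to an internal portal, and leaves other traffic alone. The domain lists and redirect target are placeholders; in practice this logic lives in a secure web gateway, CASB, or browser extension rather than custom code.

```python
# Minimal sketch of allowlist-based egress logic for AI traffic.
# Domains and the redirect target are placeholders, not recommendations.

SANCTIONED_AI = {"copilot.microsoft.com"}             # enterprise-approved tools
KNOWN_AI = {"chatgpt.com", "chat.openai.com",
            "claude.ai", "gemini.google.com"}         # recognized but unapproved
ENTERPRISE_PORTAL = "https://ai.example-insurer.com"  # hypothetical internal AI portal

def decide(host: str) -> tuple[str, str]:
    """Return an (action, detail) pair for an outbound request to `host`."""
    if host in SANCTIONED_AI:
        return "allow", "approved enterprise AI tool"
    if host in KNOWN_AI:
        # Redirect users toward the sanctioned alternative instead of a hard block,
        # and log the event so security can follow up on the underlying need.
        return "redirect", ENTERPRISE_PORTAL
    return "allow", "not an AI destination (no AI policy applied)"

for host in ["chatgpt.com", "copilot.microsoft.com", "intranet.local"]:
    action, detail = decide(host)
    print(f"{host:25} -> {action:8} ({detail})")
```

The redirect-plus-log pattern matters more than the blocking itself: it keeps employees productive while giving security teams the signal they need to engage proactively.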
Get AI Governance Right, and AI Adoption Will Soar
A well-structured AI usage policy doesn’t just protect sensitive data—it empowers employees to use AI securely and efficiently. Organizations should embed AI security considerations into existing governance frameworks rather than creating isolated AI policies. By proactively managing AI adoption, insurance security teams can enhance efficiency, reduce regulatory exposure, and position themselves as leaders in the AI-driven future of insurance.