Demonstrably Compliant: Closing the AI Governance Gap at a Global Law Firm
A major international law firm had an AI usage policy, a security stack that included Zscaler and a browser security tool, and a growing problem: the policy existed, but there was no way to prove it was being followed.
The Starting Position
Law firms operate under a confidentiality obligation that runs deeper than in most industries. Client matter information is protected by both contractual and professional duties, and the consequences of a data breach extend well beyond regulatory fines into the trust that underpins every client relationship.
When generative AI entered legal workflows, it brought a specific complication on top of the general governance challenge. Many client engagements now include explicit contractual restrictions on the use of third-party AI systems with client data. Those restrictions apply regardless of what an individual lawyer considers reasonable. They are commitments the firm has made, and clients are increasingly asking how the firm intends to honor them.
This firm had taken the right steps by most visible measures. An AI usage policy was in place, specifying approved platforms, restricted data categories, and the obligations that applied to client matter content. Two security tools provided some visibility: a web gateway that filtered access to AI sites, and a browser security tool that monitored browser activity. The problem was that neither could tell the security team what was actually going into AI prompts.
There was a gap between the policy the firm had published and the assurance it could provide. Closing that gap was the objective.
What the Existing Stack Could Not See
Zscaler, in this deployment, could confirm that attorneys were visiting AI platforms. It could apply access controls at the domain level: allow this site, block that one. What it could not do was inspect the content being submitted. A lawyer on an approved platform, pasting in a summary of a client matter in active litigation, looked identical at the network level to a lawyer asking the same tool to help draft a cover letter.
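A minimal sketch of why (the hostnames and logic here are illustrative, not the firm's actual Zscaler configuration): a domain-level control reduces to a hostname lookup, and the request body carrying the prompt never enters the decision.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; the real deployment's approved domains are not public.
APPROVED_AI_DOMAINS = {"chat.openai.com", "copilot.microsoft.com"}

def gateway_decision(url: str) -> str:
    """Domain-level control: the only signal available is the hostname."""
    host = urlparse(url).hostname
    return "allow" if host in APPROVED_AI_DOMAINS else "block"

# A litigation summary and a cover-letter request produce the same decision,
# because the POST body is never inspected at this layer.
print(gateway_decision("https://chat.openai.com/backend-api/conversation"))  # allow
```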
The browser security tool in place offered broader activity visibility, but its classification capabilities were built around pattern matching and structured data detection. It could identify a Social Security number or a credit card format. It could not identify a narrative description of a client's negotiating position, a draft of a privileged communication, or a summary of matters under active legal dispute.
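A short illustration of the gap, using generic patterns of the kind such tools rely on (the patterns and prompts below are examples, not the tool's actual rule set):

```python
import re

# Structured-data detectors of the kind pattern-based DLP uses (illustrative).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

prompts = [
    "Redact the client's SSN 123-45-6789 from this exhibit.",         # flagged
    "Summarize our client's fallback position if the injunction is "
    "denied and settlement talks resume next week.",                   # missed
]

for prompt in prompts:
    flagged = bool(SSN_PATTERN.search(prompt) or CARD_PATTERN.search(prompt))
    print(f"{flagged}: {prompt[:50]}...")
```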
For a firm whose most sensitive content is professional prose rather than structured records, that limitation was decisive. The content that most needed protection was precisely the content that existing tools could not see.
The Proof of Value
Harmonic was deployed via browser extension across the firm's workforce using its existing device management infrastructure. An initial silent monitoring period gave the security team a complete view of AI activity before any policy controls were applied. This phase typically produces the most accurate data, because behavior has not yet been shaped by visible controls.
The findings showed meaningful deviation from the firm's stated policy across several dimensions.
Non-approved platform usage was material. Despite a policy specifying a defined set of approved tools, staff were using AI platforms outside that set, including some that had not been assessed against the firm's data handling requirements.
Personal account usage on enterprise-licensed platforms was present across multiple seniority levels. An attorney using the firm's ChatGPT Enterprise account operates under a Data Processing Agreement that governs how that data can be used. The same attorney logged into a personal account on the same platform does not. Consumer and personal tier terms of service for major AI platforms typically permit data use for model training. Without account-level visibility, the two are indistinguishable, and the security team had none.
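The mechanics of that distinction are simple once the authenticated identity is visible. A sketch, with field names and domains invented for illustration rather than taken from any real telemetry schema:

```python
# Hypothetical browser-telemetry event; field names are illustrative only.
session = {
    "platform": "chatgpt.com",
    "account_email": "jsmith@gmail.com",  # personal login on an enterprise-licensed tool
}

# Hypothetical corporate identity domain for the firm.
ENTERPRISE_DOMAINS = {"firm-example.com"}

def account_tier(event: dict) -> str:
    """Classify a session by the identity that authenticated, not the destination."""
    domain = event["account_email"].split("@")[-1]
    return "enterprise" if domain in ENTERPRISE_DOMAINS else "personal"

# At the network layer both logins are just traffic to chatgpt.com; only the
# authenticated identity separates DPA-covered use from consumer-tier use.
print(account_tier(session))  # personal
```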
Sensitive data categories relevant to client work appeared in prompts across multiple tools and account types. The categories included active client matter context, privileged communication drafts, litigation strategy, and transactional terms. None of this triggered any alert in the existing security stack.
In summary, the monitoring period confirmed:
- Deviation from the stated AI policy across tool type, platform, and account type
- Client-sensitive content categories detected in prompts on both approved and non-approved platforms
- Personal account usage on enterprise-licensed platforms across multiple practice groups
Why the Compliance Gap Matters
The contractual and professional obligations at stake here are specific. When a client agreement restricts the use of third-party AI systems with matter data, that restriction applies at the prompt level: it is the act of submitting client content to a third-party AI system that creates the breach, not the use of AI tools in general. A policy that prohibits such submissions but cannot detect them provides no real assurance.
The distinction Harmonic draws (between a SASE or web gateway that sees traffic and a prompt-level inspection system that understands content) is the difference between knowing that staff are using AI tools and knowing what they are doing with them. For a firm that needs to tell clients it is complying with AI-related contractual commitments, only the latter answers the question.
Harmonic's small language models read prompts contextually, identifying client-sensitive material from meaning rather than from pattern matches. A partner asking an AI to draft a response to opposing counsel does not trigger a regex rule. A summarization request for content from an active deal contains no structured data field. The only way to identify these as sensitive is to understand what they say, which is the function the SLMs perform.
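To make the contrast concrete, the sketch below stands a hypothetical classify() function in for that model. Its keyword checks exist only so the example runs; a real contextual model infers labels from meaning, and nothing here reflects Harmonic's actual models or label set.

```python
def classify(prompt: str) -> set[str]:
    """Stand-in for a contextual small language model (hypothetical).
    The keyword checks below exist only to make this sketch runnable; a real
    SLM infers these labels from meaning, with no fixed keyword list."""
    labels = set()
    if "opposing counsel" in prompt:
        labels.add("privileged_communication")
    if "indemnity" in prompt or "deal" in prompt:
        labels.add("transactional_terms")
    return labels

prompt = ("Draft a reply to opposing counsel proposing a 30-day extension "
          "while we finalize the indemnity carve-outs on the Meridian deal.")

# No structured data field, no regex hit, yet the content is plainly client-sensitive.
sensitive = {"privileged_communication", "litigation_strategy", "transactional_terms"}
if classify(prompt) & sensitive:
    print("intercept before submission")
```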
The Governance Framework
Following the proof of value, the firm put in place a tiered control framework aligned to the classification of the underlying work.
General productivity use (work unconnected to client matters) was permitted across a defined set of approved platforms, with ongoing monitoring to detect any drift toward sensitive content.
Client matter work required routing through enterprise-licensed accounts only, with prompt-level controls applied to intercept detected sensitive content categories before submission. When sensitive content is detected, the user is shown an intervention that explains what was found and offers guidance on how to proceed.
High-risk platforms and personal account usage on enterprise-licensed tools were restricted at the browser level, with a clear policy communication to staff explaining the rationale.
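Taken together, the three tiers reduce to a small policy table. The encoding below is descriptive shorthand for the framework above, not a real Harmonic policy schema:

```python
# Illustrative encoding of the tiered framework; keys and values are
# descriptive shorthand, not an actual policy configuration format.
AI_POLICY_TIERS = {
    "general_productivity": {  # work unconnected to client matters
        "platforms": "approved list only",
        "accounts": "enterprise or personal",
        "action": "monitor for drift toward sensitive content",
    },
    "client_matter_work": {
        "platforms": "approved list only",
        "accounts": "enterprise-licensed only",
        "action": "intercept detected sensitive categories before submission",
    },
    "high_risk": {  # non-assessed tools, personal accounts on enterprise platforms
        "platforms": "restricted at the browser level",
        "accounts": "blocked",
        "action": "block with a policy explanation shown to the user",
    },
}
```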
The firm can now produce a real answer to the question clients are increasingly asking: not "Do you have an AI policy?" but "How do you know it is being followed?" The monitoring data, the control framework, and the audit trail Harmonic provides constitute that answer in a form that survives scrutiny.
The Outcome
Within a defined proof of value window, the security team had moved from a position of policy-without-verification to one where AI usage across the firm was visible, controlled, and auditable. The findings were used to refine the firm's AI policy against real behavior data rather than assumed use patterns, and the governance framework put in place was aligned to the professional obligations the firm carries rather than to generic enterprise DLP categories.
For a business built on confidentiality, the ability to demonstrate that AI tools are being used within the boundaries of client commitments is a crucial professional requirement.
To find out more or discuss how this could apply to your organization, visit harmonic.security or get in touch with the team directly.


