From Invisible to Governed: Taking Control of AI Across a Major UK Rail Operator

A large UK passenger rail operator with thousands of employees, a permissive approach to AI, and no visibility into how it was being used. A three-week proof of value that changed the picture entirely.
The Starting Position
Like many large organisations, this passenger rail operator had adopted a pragmatic stance on AI: allow staff to explore and use tools freely while formal governance caught up. Microsoft Copilot was the approved corporate platform. Beyond that, usage was largely unmanaged.
The security team had some visibility through their web gateway — enough to know that staff were visiting AI platforms. What they could not see was which tools were in use beyond the obvious ones, which accounts users were logged into, what data was being submitted, or whether any of it was sensitive. Their existing DLP controls, built around document classification and pattern matching, had no meaningful coverage over free-text AI prompts.
An AI policy was in development. In the meantime, there were no controls in place to distinguish between a member of staff using an enterprise-licensed, data-protected platform and one using a free consumer tool that trains on everything submitted to it.
The gap in plain terms
The operator knew AI was being used across the business. It did not know by whom, on which platforms, under which accounts, or with what data. There was no way to enforce policy that had not yet been written, and no baseline from which to write it.
The Proof of Value
Harmonic was deployed as a browser extension via the organisation’s existing device management platform — no infrastructure changes, no endpoint agents, no user-facing disruption. An initial test group validated the deployment before a wider rollout to the full user population.
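For illustration only, and not the operator's actual configuration: on a managed Chrome estate, a device management platform can force-install a browser extension by pushing Chrome's ExtensionInstallForcelist enterprise policy. The extension ID in the sketch below is a placeholder.

```python
# Illustrative sketch only, not the operator's real configuration.
# An MDM platform can force-install a browser extension fleet-wide by
# pushing the ExtensionInstallForcelist enterprise policy. This writes
# the Linux policy file; Windows and macOS use the equivalent registry
# keys or configuration profiles.
import json

EXTENSION_ID = "abcdefghijklmnopabcdefghijklmnop"  # placeholder 32-char ID
UPDATE_URL = "https://clients2.google.com/service/update2/crx"

policy = {
    # Entries take the form "<extension id>;<update url>".
    "ExtensionInstallForcelist": [f"{EXTENSION_ID};{UPDATE_URL}"]
}

with open("extension_forceinstall.json", "w") as fh:
    json.dump(policy, fh, indent=2)
```

Because the policy channel already existed, rollout required no change to the devices themselves, which is why the initial test group could be expanded to the full user population so quickly.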
Monitoring ran in silent mode first, giving the security team a complete and unfiltered picture of AI activity before any controls were applied. This is the phase that typically produces the clearest evidence — because nothing has changed for users, the data reflects real behaviour.
11,400+ AI prompts observed
100% of users interacting with AI
72% of apps in use train on data
The 100% figure covers more than dedicated AI tools. Alongside ChatGPT and Copilot, the deployment surfaced AI usage embedded in Canva, Miro, Notion, Smartsheet, Grammarly, and others — platforms that carry AI capability within them but are not typically considered in scope for AI governance. None of this had been visible before.
The 72% reflects apps that either explicitly state they train on user data or do not specify; the latter category predominantly comprises tools hosted in jurisdictions with limited data protection frameworks. Of the tools in active use, only 28% carried any meaningful data protection assurance.
What Visibility Revealed
Once the team could see what was being submitted to AI platforms, the breadth of exposure became clear. Sensitive data was present across a wide range of categories — not concentrated in one department or one tool, but distributed across the business and across platform types. For an operator handling infrastructure, staffing, and customer data at scale, the risk profile across each of those categories was material.
The platforms
ChatGPT — in its free and personal paid tiers — accounted for the largest share of both overall usage and sensitive data events. But the distribution across other platforms was significant: design tools, productivity software, and browser-based AI assistants all appeared in the findings. Several of these would not have been flagged by a web gateway or an AI-specific access policy.
The data categories
Sensitive data appeared across source code, financial projections, employment records, legal and HR content, and cloud access credentials. For a rail business, several of these categories carry additional weight: source code and credentials touching operational systems, HR and legal content relating to a large and complex workforce, and financial data relevant to a regulated industry. The HR and legal category produced the most significant individual events — active disciplinary matters, grievance documentation, and confidential employee information submitted to consumer platforms that retain and train on that data.
An important characteristic of this category is that none of it would have been detectable by a regex-based DLP tool. It is unstructured, contextual, and carries no classification label. The only way to identify it as sensitive is to understand what it means — which is the function Harmonic’s small language models are built to perform.
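To make that concrete, here is a minimal sketch, not Harmonic's implementation: a representative set of pattern-based DLP rules runs over a fictional prompt of the kind the proof of value surfaced, and finds nothing, because nothing in it matches a structured identifier.

```python
import re

# Representative pattern-based DLP rules: they match structured
# identifiers and nothing else.
DLP_PATTERNS = {
    "uk_national_insurance": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b"),
    "payment_card":          re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key":        re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

# A fictional prompt of the kind described above: unstructured,
# carrying no classification label, and plainly confidential to a
# human reader.
prompt = (
    "Summarise this grievance: a driver alleges their manager altered "
    "shift records after the depot incident and has raised a formal "
    "complaint ahead of next week's disciplinary hearing."
)

hits = {name: rx.findall(prompt) for name, rx in DLP_PATTERNS.items()}
print(hits)  # every list is empty: pattern matching sees nothing here
```

Identifying that prompt as sensitive requires inferring, from meaning alone, that it is confidential HR and legal content; no pattern library closes that gap.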
The accounts
A significant proportion of usage — including on platforms where the operator held enterprise licences — was occurring under personal accounts. An employee using ChatGPT on a corporate licence has data protection coverage. The same employee using a personal account on the same platform does not. Without account-level visibility, the two are indistinguishable.
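Assuming the monitoring layer can observe the identity a session is signed in with, as a browser extension can, the first-pass distinction is simple to express. The sketch below is illustrative; the corporate domain is a placeholder.

```python
CORPORATE_DOMAINS = {"example-rail.co.uk"}  # placeholder corporate domain

def classify_account(signed_in_email: str | None) -> str:
    """Label a session corporate, personal, or unknown from the identity
    it is signed in with. A URL-level web gateway cannot make this
    distinction, because both sessions talk to the same platform."""
    if not signed_in_email:
        return "unknown"  # not signed in, or identity not observable
    domain = signed_in_email.rsplit("@", 1)[-1].lower()
    return "corporate" if domain in CORPORATE_DOMAINS else "personal"

# Same platform, opposite governance outcomes:
print(classify_account("a.smith@example-rail.co.uk"))  # corporate
print(classify_account("asmith99@gmail.com"))          # personal
```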
Moving from Uncontrolled to Governed
The proof of value established the baseline. The path forward was structured in three phases, agreed before the POV closed.
Phase 1: Control
Block high-risk platforms where the security posture of the vendor cannot be verified. Apply policy controls to restrict sensitive data from entering unmanaged tools, using the proof of value findings to prioritise.
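A hypothetical sketch of such a rule, in plain Python rather than any real policy syntax, with placeholder platform names standing in for the proof of value findings:

```python
# Hypothetical rule evaluation, not Harmonic's policy syntax. Platform
# names and data categories are placeholders.
BLOCKED_PLATFORMS = {"unverified-ai.example"}  # vendor posture unverifiable
UNMANAGED_TOOLS = {"consumer-chat.example"}    # no enterprise data agreement
SENSITIVE = {"source_code", "credentials", "hr_legal", "financial"}

def evaluate(platform: str, detected: set[str]) -> str:
    if platform in BLOCKED_PLATFORMS:
        return "block"             # high-risk vendor: access blocked outright
    if platform in UNMANAGED_TOOLS and detected & SENSITIVE:
        return "block_submission"  # tool stays usable; sensitive data does not enter it
    return "allow"

print(evaluate("consumer-chat.example", {"hr_legal"}))  # block_submission
```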
Phase 2: Integrate
Connect to the identity provider to enrich alerts with department and user context. Configure role-based access, single sign-on, and tailored reporting views for AI governance stakeholders. Integrate with the SIEM.
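The shape of that integration, sketched with placeholder endpoints and field names rather than any specific SIEM's API:

```python
import json
import urllib.request

def enrich_and_forward(alert: dict, directory: dict, siem_url: str) -> None:
    """Attach department and manager context from an identity-provider
    export, then forward the enriched alert to the SIEM over HTTPS.
    Every endpoint and field name here is an illustrative placeholder."""
    person = directory.get(alert.get("user_email", ""), {})
    alert["department"] = person.get("department", "unknown")
    alert["manager"] = person.get("manager", "unknown")
    req = urllib.request.Request(
        siem_url,
        data=json.dumps(alert).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the SIEM ingests it like any other source
```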
Phase 3: Enable
Deploy inline interventions that redirect users to approved platforms when sensitive data is detected, rather than simply blocking access. Add organisation-specific detection rules. Use alert patterns to identify training needs by department.
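A sketch of the intervention logic, with the coaching message invented for illustration:

```python
APPROVED_PLATFORM = "Microsoft Copilot"  # the operator's licensed platform

def intervene(is_sensitive: bool, platform_is_approved: bool) -> dict:
    """Sketch of an inline intervention: coach the user toward the
    approved platform at the moment of risk instead of silently
    blocking access."""
    if is_sensitive and not platform_is_approved:
        return {
            "action": "redirect",
            "message": (
                "This prompt appears to contain sensitive data. Please use "
                f"{APPROVED_PLATFORM}, which is covered by the organisation's "
                "data protection agreement."
            ),
        }
    return {"action": "allow"}
```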
The intent across all three phases is the same: replace an uncontrolled and unmonitored environment with one where AI adoption can continue and expand, with the governance in place to make it sustainable. Restricting access to AI tools is not the objective — establishing the visibility and controls needed to use them safely is.
The outcome
Within three weeks, the security team had a complete map of AI usage across the organisation — tools, platforms, accounts, and data types — that had not existed before. They had documented evidence to support a board-level business case, a tested control framework, and a clear path from the position they were in to the position they needed to reach.
The team's own assessment at close of the proof of value: the problem was real, the value was clear, and the path to procurement was straightforward.
To find out more or discuss how this could apply to your organisation, visit harmonic.security or get in touch with the team directly.