Over the next 18-24 months, organizations will face an increasingly complex regulatory environment as governments accelerate their oversight of artificial intelligence (AI).
Among the most impactful frameworks right now is the EU AI Act, which officially entered into force in August 2024. The first phase of its rollout begins in less than one month.
This blog shouldn’t be interpreted as legal advice; rather, it offers pointers to the topics and pain points we’re likely to see emerge throughout 2025.
P.S. Gartner has been doing some great research in this area; you can read it in “Getting Ready for the EU AI Act, Phase 1: Discover & Catalog”.
EU AI Act: A Gradual Yet Transformative Rollout
The EU AI Act is the first comprehensive framework that looks to tackle the risks associated with AI. Specifically, the regulation targets five key risks:
- Erosion of accuracy or integrity.
- Data privacy breaches and security issues.
- Public safety risks due to AI malfunctions.
- Bias, discrimination, and ethical concerns related to profiling.
- Lack of transparency and accountability.
Reading through the EU AI Act articles (https://artificialintelligenceact.eu/ai-act-explorer/), you’ll see references to both “providers” (organizations that develop AI systems and place them on the market) and “deployers” (organizations that use AI systems under their own authority).
The EU AI Act officially came into force on August 1, 2024, but its provisions take effect in phases, ultimately culminating in August 2026. Key milestones include:
- February 2, 2025: Prohibitions on social scoring and biometric categorization begin, alongside requirements for AI literacy among employees involved in AI deployment and oversight.
- August 2025: New provisions for general-purpose AI models, including generative AI, come into effect.
- August 2026: The majority of the Act’s provisions become applicable.
This gradual rollout will be familiar to data privacy professionals, as it mirrors the implementation of the GDPR. It gives organizations time to build the necessary compliance infrastructure while navigating the complexities of the regulation.
AI Literacy: What does February have in store?
While a good chunk of the EU AI Act is most pressing for AI providers, there’s plenty for “deployers” to ponder too.
For the 2nd Feb phase, we’re likely to hear a whole lot about “AI Literacy”. The EU AI Act states that “Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
I think this is going to require a bit of a shift in mentality. Most security teams are in “block” mode, hoping to limit employee use of AI out of concern about the risks it introduces. To truly improve AI literacy, organizations ought to engage with employees and encourage them to use AI securely.
Getting Ahead of the Game: Four Ways Harmonic Helps
The first phase of the EU AI Act is less than a month away. While this will be a phased rollout, here are four areas I think we should start focusing on now:
1. Creating an AI Inventory
Some GenAI usage is pretty obvious, and you can get a sense of it via gateway tools, CASBs, and the like. However, the vast majority of AI applications in use are embedded within SaaS platforms. A comprehensive inventory of these systems is essential for identifying and managing AI-related risks. Tools that automate the discovery and indexing of AI applications can significantly ease this process.
2. Risk Categorization
Different apps carry different risks. The EU AI Act defines risk tiers, including “prohibited” and “high-risk” categories, and AI apps must be assessed and categorized based on their potential risk. Certain applications, such as social scoring and some biometric categorization systems, will be prohibited outright, while others, such as those involved in critical decision-making, will be classed as high-risk.
The enforcement of these phases is still a way off, but that doesn’t mean you shouldn’t be assessing the risk of using these apps now. For example, do you know which plans of GenAI tools are in use? Do you know which apps are training on your data?
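A risk categorization exercise like this can start as a simple mapping from app attributes to tiers. The sketch below is a toy illustration only: the keyword rules are simplistic placeholders and emphatically not legal criteria, and the attribute names are assumptions about what your inventory might self-declare.

```python
def categorize(app: dict) -> str:
    """Assign a rough, illustrative risk tier from self-declared app attributes.

    Tiers loosely echo the EU AI Act's structure (prohibited / high /
    limited / minimal), but the rules here are placeholders, not legal tests.
    """
    uses = set(app.get("use_cases", []))
    if {"social_scoring", "biometric_categorization"} & uses:
        return "prohibited"
    if {"hiring_decisions", "credit_scoring"} & uses:
        return "high"
    if app.get("trains_on_customer_data"):
        return "limited"
    return "minimal"

apps = [
    {"name": "HR screener", "use_cases": ["hiring_decisions"]},
    {"name": "Chat assistant", "trains_on_customer_data": True},
]
for app in apps:
    print(app["name"], "->", categorize(app))
```

Even a rough first pass like this surfaces the questions above: which tools train on your data, and which use cases edge toward prohibited or high-risk territory.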
3. User Education
By now, most organizations have rolled out some form of an “AI usage policy” (and if you haven’t - check out our AI Policy Generator here).
We probably need to think differently here. For example, we’re working with Tines and KnowBe4 to automate the assignment of security awareness training based on risky user behavior when using GenAI.
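The core of that kind of automation is a simple rule over monitored events. The sketch below is a generic, hypothetical version (the event shape, threshold, and function name are all assumptions); the actual downstream assignment would happen via your workflow or training platform and isn’t shown.

```python
from collections import Counter

RISKY_EVENTS_THRESHOLD = 3  # illustrative; tune to your own risk appetite

def users_needing_training(events: list[dict]) -> list[str]:
    """Return users whose count of risky GenAI events meets the threshold.

    `events` is assumed to come from a GenAI monitoring tool; each event
    is a dict with at least a "user" key and a boolean "risky" flag.
    """
    counts = Counter(e["user"] for e in events if e.get("risky"))
    return sorted(u for u, n in counts.items() if n >= RISKY_EVENTS_THRESHOLD)

events = [
    {"user": "alice", "risky": True},
    {"user": "alice", "risky": True},
    {"user": "alice", "risky": True},
    {"user": "bob", "risky": True},
]
print(users_needing_training(events))  # ['alice']
```

The behavior-driven trigger is the point: training lands on the people who need it, when they need it, rather than as an annual blanket exercise.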
4. Post-Deployment Monitoring
Finally, this can’t be a case of a one-time audit. Ongoing visibility into the performance and usage of AI systems is critical. Post-deployment monitoring helps organizations mitigate risks, track system integrity, and ensure compliance with evolving regulatory requirements.
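One small, concrete piece of ongoing monitoring is enforcing a review cadence over your AI inventory. This is a minimal sketch under assumed data shapes (asset dicts with a `last_reviewed` date) and an illustrative 90-day cadence:

```python
import datetime

MAX_REVIEW_AGE_DAYS = 90  # illustrative review cadence, not a regulatory number

def overdue_for_review(assets: list[dict], today: datetime.date) -> list[str]:
    """Flag AI assets whose last compliance review is older than the cadence."""
    return [
        a["name"] for a in assets
        if (today - a["last_reviewed"]).days > MAX_REVIEW_AGE_DAYS
    ]

assets = [
    {"name": "Chat assistant", "last_reviewed": datetime.date(2024, 6, 1)},
    {"name": "HR screener", "last_reviewed": datetime.date(2024, 12, 1)},
]
print(overdue_for_review(assets, datetime.date(2025, 1, 2)))  # ['Chat assistant']
```

In a real deployment this check would run on a schedule and feed alerts into your ticketing or GRC workflow, alongside usage and integrity monitoring.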
The Broader Compliance Picture
The EU AI Act is just one piece of a larger puzzle. In the United States, the proposed SEC rules and a patchwork of state-level legislation—like Colorado’s Artificial Intelligence Act—create a fragmented compliance environment. China has iterated on its AI regulations since 2022, further complicating the global picture.
Broader frameworks, such as the Digital Operational Resilience Act (DORA) and NIS2, intersect with AI governance, adding complexity for industries like financial services. As national bodies across the EU take on the enforcement of these regulations, concerns about underfunding and resource allocation may challenge implementation.
Preparing for the Regulatory Future
Organizations must act now to prepare for the cascading regulatory changes of 2025 and beyond. Proactive steps, such as developing AI literacy programs, inventorying AI systems, and implementing risk management tools, will be critical. Those who invest early in scalable AI governance frameworks will not only avoid compliance headaches but also position themselves as leaders in responsible AI innovation.
The next two years represent a pivotal moment for organizations to align their AI strategies with the evolving regulatory landscape. By building transparency, accountability, and resilience into their operations, enterprises can navigate the growing compliance storm and emerge stronger on the other side.
Please give the Gartner research a read to learn more about discovering and cataloging AI: https://www.harmonic.security/resources/gartner-getting-ready-for-the-eu-ai-act-phase-1-discover-catalog
If you want help building an AI asset inventory or understanding the risks associated with your employees’ use of GenAI, get in touch!