The precedents are set. Is your governance demonstrable?
On 14 February 2026, the European Commission issued the first three formal enforcement decisions under the EU AI Act. Total fines amount to €18.4 million. The cases are now public — and the patterns they reveal are more instructive than the aggregate figure.
None of the three cases concerns a prohibited AI application. None concerns an AI system that demonstrably caused harm. All three concern the same thing: the absence of governance documentation for high-risk AI systems that were already in use when the compliance deadlines passed.
Case 1 — Financial services provider, Germany (€7.2M)
A large insurer was using an AI system for claims assessments that falls within the high-risk category of "essential private services." The system was operational and performance was sound — but the required conformity assessment, technical documentation and logging were absent. The regulator concluded that the organisation was aware of the compliance deadlines but had deferred action.
Case 2 — HR-tech provider, the Netherlands (€6.8M)
A software vendor was offering an AI-driven recruitment system used by dozens of client organisations. The system — which scores and ranks candidates for selection processes — falls within the high-risk category of "employment and workforce management." The vendor had not fulfilled the conformity requirements; the client organisations using it bore shared liability for deploying the non-conforming system.
Case 3 — Public sector, Belgium (€4.4M)
A public body was using an AI system to assess grant applications. The absence of demonstrable human oversight — an explicit requirement for high-risk systems — was the principal ground for the fine.
The first wave of enforcement does not target the worst offenders — it targets the worst documented. You need not have caused harm. You need not have deliberately broken the law. You need only be unable to demonstrate that you complied with the requirements.
Three conclusions apply directly:
Procured AI is no exemption
Case 2 makes it clear: if you procure and deploy a non-conforming AI system, you bear shared liability. "Our supplier should have had this in order" is not a legal defence. Demand conformity documentation from your AI suppliers — now, not at the next contract renewal.
Documentation is the evidence
In all three cases, the absence of documentation was the core of the enforcement decision. Good intentions, well-functioning systems and even good outcomes are no substitute for the legally required documentation. If it is not documented, it does not exist in the eyes of the regulator.
Human oversight is not optional
Case 3 illustrates a point overlooked in many implementations: the requirement for demonstrable human oversight is not a philosophical principle but an operational obligation. You must be able to demonstrate that a human reviews and can override AI output — and that this actually happens in practice.
Ask your Legal or Compliance team for an overview of all AI applications that may fall within the high-risk categories. For each of those applications, ask: has a conformity assessment been conducted? Is there technical documentation? Are the logging requirements implemented? Is human oversight demonstrable?
If the answer to any of those questions is "no" or "unknown," you have a priority for the coming quarterly cycle. Not to be skipped — the precedents have now been set.
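The four-question review above can be sketched as a simple triage over an application inventory. This is a minimal illustration only: the inventory structure, field names and example data are hypothetical, not a prescribed tool or a reading of the AI Act itself.

```python
from dataclasses import dataclass, field

# The four questions from the text, as named checks.
# True = yes, False = no, None = unknown.
CHECKS = (
    "conformity_assessment",
    "technical_documentation",
    "logging_implemented",
    "human_oversight_demonstrable",
)

@dataclass
class AIApplication:
    name: str                       # illustrative identifier
    high_risk_category: str         # e.g. the AI Act Annex III category
    checks: dict = field(default_factory=dict)

def open_items(app: AIApplication) -> list[str]:
    """Return every check answered 'no' or 'unknown' — each one a
    priority for the coming quarterly cycle, per the text above."""
    return [c for c in CHECKS if app.checks.get(c) is not True]

# Hypothetical inventory entry, loosely modelled on Case 1.
inventory = [
    AIApplication(
        "claims-assessment",
        "essential private services",
        {"conformity_assessment": False,
         "technical_documentation": True,
         "logging_implemented": None,
         "human_oversight_demonstrable": True},
    ),
]

for app in inventory:
    gaps = open_items(app)
    if gaps:
        print(f"{app.name}: {len(gaps)} open item(s): {', '.join(gaps)}")
```

Treating "unknown" the same as "no" mirrors the point of the text: an undocumented answer counts against you until it is demonstrable.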
Not just insight — but a plan your board can execute.