Are you compliant — or do you merely assume so?
As of 2 August 2026, the compliance requirements for high-risk AI systems under the EU AI Act apply. That was foreseeable. Yet a survey of Dutch organisations shows that fewer than a third of executive boards can confirm that their AI portfolio has been assessed against the high-risk categories. The remaining two-thirds give one of three responses: "Legal handles that," "IT handles that," or "We have no high-risk AI."
All three answers are problematic.
The EU AI Act distinguishes prohibited AI practices (banned since February 2025), high-risk AI systems (requirements applicable from August 2026) and other AI systems subject to transparency obligations.
High-risk is broader than most organisations assume. The categories include:
Critical infrastructure: AI used in energy, water, transport and financial infrastructure falls fully within scope, even where the AI forms part of a purchased system.
Education and vocational training: AI that influences access to education or professional qualifications. Think of automated selection systems for study programmes or training courses.
Employment and personnel management: AI for recruitment, selection, promotion, dismissal or task allocation. Virtually every modern HR-tech application falls within scope.
Essential private and public services: AI that influences access to credit, insurance or essential public services. Credit-scoring algorithms are the most direct example.
Law enforcement and migration: for organisations in these sectors, the most stringent requirements apply.
High-risk compliance requires board approval and a demonstrably accountable executive at C-level. Not a DPO project, not an IT project. The board is liable, which means the board must be able to demonstrate that it has assessed, approved and monitored compliance.
For high-risk AI systems the EU AI Act requires: a conformity assessment prior to deployment, technical documentation of the system and its training data, logging of AI decisions for a minimum of six months, demonstrable human oversight of AI outputs, and a risk management system that is continuously maintained.
That last point — continuously maintained — is the most frequently overlooked requirement. Compliance is not a one-off certificate. It is a continuing obligation that must be embedded in your organisation's governance.
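Of these requirements, the logging obligation is the easiest to make concrete in engineering terms. Below is a minimal sketch of what decision logging with the six-month minimum retention could look like; the class and field names are illustrative assumptions, since the Act prescribes the obligation, not a schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The Act requires logs to be kept for at least six months; this floor
# is the legal minimum, and other obligations (e.g. GDPR) still apply.
RETENTION_MINIMUM = timedelta(days=183)

@dataclass
class DecisionRecord:
    system_id: str        # which AI system produced the output
    timestamp: datetime   # when the decision was made
    input_reference: str  # pointer to the input, not the raw data
    output_summary: str   # the decision, score or recommendation
    human_reviewer: str   # who exercised oversight ("" if nobody did)

class DecisionLog:
    """Illustrative append-only log with a retention floor."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def append(self, record: DecisionRecord) -> None:
        self._records.append(record)

    def purge_eligible(self, now: datetime) -> None:
        # Only records older than the six-month minimum may be deleted;
        # everything younger must be kept.
        cutoff = now - RETENTION_MINIMUM
        self._records = [r for r in self._records if r.timestamp >= cutoff]
```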
Inventory first: compile a complete list of all AI applications in your organisation, including purchased systems, integrated tools and systems operated by suppliers. "We have no high-risk AI" is rarely true; it is more often the result of an incomplete inventory.
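As a sketch of what such an inventory could look like as a living artefact rather than a one-off spreadsheet: the structure below is a hypothetical internal registry, not a format the Act prescribes. The point is that purchased and supplier-operated systems get an entry just like in-house builds.

```python
from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    IN_HOUSE = "built in-house"
    PURCHASED = "purchased"
    SUPPLIER_OPERATED = "operated by a supplier"

@dataclass
class AISystemEntry:
    name: str
    business_owner: str     # an accountable person, not a department
    origin: Origin          # purchased and supplier-run systems count too
    purpose: str            # the decision or output the system produces
    screened: bool = False  # assessed against the high-risk categories?

# Even a model embedded in a purchased HR suite belongs on the list:
inventory = [
    AISystemEntry("CV screening module", "Head of HR",
                  Origin.PURCHASED, "ranks incoming job applicants"),
]
```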
Assess each application: does it fall within the high-risk categories? If the answer is "possibly," treat it as "yes" until you have legal advice.
Document the assessment process: even if you conclude that an application is not high-risk, document how you reached that conclusion. In a regulatory investigation, the assessment process is at least as important as the conclusion.
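The previous two steps can be combined in a single record: the screening outcome plus the reasoning behind it. A minimal sketch, assuming a three-valued outcome where "possibly" is handled as "yes" until legal advice resolves it; the names and fields are again illustrative.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Screening(Enum):
    NO = "no"
    POSSIBLY = "possibly"
    YES = "yes"

@dataclass
class Assessment:
    system_name: str
    outcome: Screening
    rationale: str    # how the conclusion was reached, kept even for "no"
    assessed_by: str
    assessed_on: date

def treat_as_high_risk(a: Assessment) -> bool:
    # A "possibly" counts as "yes" until legal advice says otherwise.
    return a.outcome is not Screening.NO

# Documenting the reasoning matters even when the answer is "no":
example = Assessment(
    system_name="CV screening module",
    outcome=Screening.POSSIBLY,
    rationale="touches recruitment, so likely within the employment category",
    assessed_by="General Counsel",
    assessed_on=date(2026, 3, 1),
)
```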
Put this to the board: this is not a delegable matter. The board must see the findings, approve them and reassess them annually.
Not just insight, but a plan your board can execute.