Praesum.ai AI Strategy & Governance — Boardroom Intelligence 2026 · Weekly

The Weekly
Presumption

What every executive needs to know this week

20 editions
Governance Week 12 · 16 Mar 2026 Latest edition

Autonomous AI agents
are now making real
business decisions.

Who is accountable?

The question is no longer whether AI is making decisions inside your organisation. The question is whether your board knows which decisions — and whether governance is in place to underpin accountability.

Read the full edition →

In the first quarter of 2026, a tipping point has been reached. Microsoft Copilot Agents, Google Gemini Deep Research and OpenAI Operator are broadly available in enterprise environments — and are being actively deployed for tasks that previously required human decision-making: contract review, supplier selection, customer escalations, budget reallocation.

The governance implications are immediate and board-level. When an autonomous AI system makes a decision that causes harm — to a client, a supplier or the organisation itself — the accountability question remains unanswered in most organisations. The EU AI Act offers no direct traction here: autonomous agents fall into grey areas that will only be resolved through case law.

Organisations that establish clear parameters now — defining which decisions AI may take autonomously — are building a liability shield that will be worth its weight in gold within 18 months.

Reading time: 7 minutes · Relevant for: Executive Board, CFO, CLO
Archive · 2026
11
9 Mar 2026 Leadership

Supervisory boards are now asking concrete AI questions. Does your executive board have the answers?

A survey of 180 supervisory board members in the Netherlands and Belgium reveals a sharp shift. Where AI was a peripheral topic in 2024, it features in 2026 as a standing agenda item — with substantive questions on governance, liability and returns. Executive boards unable to answer these questions adequately face mounting pressure on their position.

Board-level AI literacy is no longer a nice-to-have. It is a minimum qualification.
6 min Read →
10
2 Mar 2026 Risk

AI-driven fraud targets the C-suite. Deepfakes in the boardroom.

In Q1 2026, three documented cases emerged in which AI-generated deepfake videos of CEOs were used to issue fraudulent payment instructions exceeding €2 million. The technology is available for less than €50 per month. Your finance function is probably unprepared — and that is a boardroom problem.

AI fraud is now a board-level risk. Human verification protocols are governance policy, not an IT measure.
4 min Read →
09
23 Feb 2026 Regulation

First EU AI Act fines issued. The precedents are set.

The European Commission has issued the first three formal fines under the EU AI Act — two in the financial sector, one in the recruitment sector. Total: €18.4 million. In all three cases, the ruling was that governance documentation was absent — not that the AI itself was prohibited. Compliance without documentation does not exist.

The first wave does not hit the worst offenders. It hits the worst-documented. Is your AI governance demonstrable?
5 min Read →
07
9 Feb 2026 Governance

The AI Control Tower. A new governance model for the board.

Leading organisations are introducing a new governance layer in 2026: the AI Control Tower — a centralised overview of all AI activity, risks and returns, reporting directly to the executive board. Not an IT dashboard, but a board-level instrument.

Governance without visibility is theory. The AI Control Tower makes AI governable in practice.
5 min Read →
06
2 Feb 2026 Leadership

AI and the labour market. Beyond the redundancy wave — the real governance question.

The debate on AI and employment was too simplistic in 2025. The real board-level question is not how many jobs will disappear — but how the organisation manages the transition, who is responsible for reskilling, and how the board accounts to employees, works councils and society.

The labour market impact of AI is an ESG question. It belongs on the supervisory board's agenda.
5 min Read →
05
26 Jan 2026 Risk

Shadow AI in 2026. It is larger than you think — and it is now your responsibility.

New research shows that unsanctioned AI use within organisations has risen in 2026 to an average of 71% of all AI interactions. The tools are more powerful, the integration deeper, the data exposure greater. Four concrete measures a board can take now — without inhibiting innovation.

Prohibition does not work. Setting a framework does. That is a board decision, not IT policy.
4 min Read →
03
12 Jan 2026 Regulation

EU AI Act: high-risk systems fully in force. Are you in conformity?

As of 1 January 2026, the full requirements for high-risk AI systems apply: mandatory conformity assessments, technical documentation, logging of AI decisions and demonstrable human oversight. HR-AI, credit assessment and essential services all fall within scope. The list is broader than most compliance teams have accounted for.

High-risk compliance requires board approval and a named, accountable C-level owner. Not a DPO project.
5 min Read →
02
5 Jan 2026 Governance

The executive AI agenda for 2026. Ten decisions that matter this year.

Drawing on the developments of 2025 and the anticipated shifts of 2026, Praesum identifies ten board-level AI decisions every executive board must make this year. From governance structure to vendor strategy, from workforce policy to regulator communication. An agenda that is actionable — not as a checklist, but as a boardroom compass for the year ahead.

2026 is the year AI moves from experiment to institution. The board that does not steer that shift will be steered by it.
8 min Read →
01
1 Jan 2026 Strategy Opening of the year

AI is learning to understand the world. Not just predict words. World models — the shift that rewrites the rules.

While boardrooms are still debating ChatGPT, the fundamental architecture of AI is already shifting. From systems that predict the next word to systems that understand causality, plan actions and simulate the physical world. The board-level implications are greater than anything the LLM hype has produced.

The board that enters 2026 with only an LLM strategy is preparing to fight the last war.
6 min Read →

What is coming

Outlook · Forthcoming editions

Governance is foresight. The developments below have not yet made the headlines — but they are inevitable. Every board that prepares now will hold a structural advantage in 12 to 24 months. The Weekly Presumption analyses them in advance.

13
Expected: April 2026 Strategy Outlook

AI and the redefinition of competition. When your competitor becomes an AI-first organisation.

The first wave of AI-native competitors is already visible in sectors such as financial services, insurance and professional services. They operate with 60% less overhead, 3x faster decision-making and a data model that traditional players cannot structurally replicate without fundamental reorganisation. The board that treats this as "still a long way off" is missing the moment at which intervention remains meaningful. Praesum analyses the first concrete cases and what they mean for your sector.

When an AI-first competitor enters your market, the question is not how you respond. The question is whether you are still capable of responding.
~7 min Coming soon
14
Expected: April 2026 Governance Outlook

The Chief AI Officer becomes mandatory. What that means for your executive structure.

Three EU member states are preparing legislation that would require large organisations to designate a named executive accountable for AI at board level — comparable to the DPO obligation under GDPR. In the United States the discussion has already progressed further: the SEC is considering AI governance reporting as part of mandatory investor disclosures. For boards, the question is not whether a CAIO is coming, but when, with what mandate, and how it fits within the existing structure.

Organisations that designate an accountable AI executive now are not getting ahead of regulation. They are getting ahead of the market.
~6 min Coming soon
15
Expected: May 2026 Risk Outlook

AI systems fail at scale. How your board prepares for the first major AI crisis.

The likelihood of a significant AI incident affecting your organisation — or a direct competitor — increases exponentially as AI is embedded more deeply in operational processes. The first major public AI failure cases in Europe are a matter of months, not years. Organisations with an AI incident protocol in place are in a fundamentally different position from those that must build one ad hoc while the cameras are rolling. Praesum outlines the four scenarios every board must be prepared for.

An AI crisis is not a matter of if — but of when. The board that accepts that now will weather it better.
~8 min Coming soon
16
Expected: May 2026 Leadership Outlook

Work after AI. How the board leads — and does not avoid — the conversation with the organisation.

By the end of 2026, the first substantial reorganisations driven directly by AI implementation will be a reality in the Netherlands. The board that waits for that moment surrenders control of the narrative, the support base and the culture. The question is not whether AI changes jobs — it does. The question is how an executive board communicates that honestly, in good time and with authority. Praesum is developing a communication framework for boards that wish to lead this conversation rather than trail behind it.

The organisation that speaks honestly with its people about AI retains the trust needed to navigate the transition.
~6 min Coming soon
17
Expected: June 2026 Regulation Outlook

The AI liability directive approaches. What the revised Product Liability Directive means for executives.

The revised EU Product Liability Directive — which explicitly classifies AI systems as products subject to liability — is expected to enter its implementation phase in mid-2026. The implication is direct: harm caused by AI systems can give rise to liability without the need to prove intent or negligence. For executives, this represents a fundamentally new risk classification for every AI application with external impact.

AI liability is becoming strict liability. That fundamentally changes the calculus of every AI project with external impact.
~7 min Coming soon
18
Expected: June 2026 Strategy Outlook

AI and the boardroom in five years. A scenario analysis for executive boards making choices today.

The midpoint of 2026 is the moment to look further ahead. Drawing on the trajectories now visible — technological development, regulation, market consolidation and labour market dynamics — Praesum outlines three scenarios for 2030: the AI-governed organisation, the hybrid organisation and the laggard. Each scenario carries concrete implications for the decisions boards are making now. Which path does your organisation choose — deliberately, or by default?

Strategy is choosing which future scenario you wish to inhabit. AI forces that choice forward — or it is made for you.
~9 min Coming soon
19
Expected: Q3 2026 Governance Outlook

AI governance as an ESG factor. How institutional investors will vote on your AI policy.

The major institutional investors — pension funds, asset managers, ESG funds — are establishing AI governance as an explicit voting criterion at shareholder meetings in 2026 and 2027. Proxy advisers such as ISS and Glass Lewis are developing AI governance frameworks alongside existing ESG criteria. For listed companies and PE-backed businesses working towards an exit, this is a new valuation risk that must be addressed now.

AI governance is becoming an ESG voting factor. Your 2027 shareholder meeting requires preparation today.
~6 min Coming soon
20
Expected: Q3 2026 Leadership Outlook

The human as competitive advantage. Why the best AI organisations invest more in people, not less.

The organisations that deploy AI most effectively in 2026 and 2027 are — paradoxically — the organisations that invest most in human judgement, human creativity and human leadership. AI amplifies the quality of human decision-making; it does not replace it. The board that understands this and acts accordingly builds a competitive advantage that cannot be replicated with more GPUs. Praesum analyses the organisational architecture of AI leaders — and what executives can learn from it.

The winner of the AI race is not the organisation with the most AI. It is the organisation with the greatest human quality that AI amplifies.
~7 min Coming soon
The Weekly Presumption

Every Monday, direct to your inbox. No noise — only what the board needs to know that week.