Praesum.ai Insights Week 12 · 2026
Week 12 · 16 Mar 2026 · Governance

Autonomous AI agents are now making real business decisions.

Who is accountable?

Reading time: 7 minutes · Relevant for: CEO, CFO, CLO, Executive Board

In January 2026, Microsoft rolled out Copilot Agents broadly to the enterprise market. In February, Google followed with Gemini Deep Research for commercial use. In March, OpenAI made Operator available to selected enterprise clients. Together, they mark a tipping point that has passed quietly in most boardrooms — while carrying some of the most significant governance implications of 2026.

For the first time, autonomous AI systems that execute tasks independently — without direct human instruction for each action — are available as standard enterprise products. They book flights, process invoices, respond to customer emails, draft contracts, analyse legal documents and initiate payments. All autonomously. All on the basis of an instruction given at the outset — not on the basis of approval per action.

What autonomous agents can — and do — in practice

The most common use cases organisations are now deploying:

Financial: invoice processing, creditor payments within pre-set parameters, expense claim processing and, in the most advanced implementations, cash management within defined ranges.

Legal and contractual: contract review, risk flagging in agreements, clause standardisation and, more dangerously in some implementations, the drafting of binding correspondence on behalf of the organisation.

HR: CV screening, initial selection rounds, rejection emails and workforce scheduling optimisation.

Customer contact: autonomous complaint handling, refund processing and customer escalations, including making commercial commitments to customers without human approval.

The accountability question no one is answering

When an autonomous AI system makes an error — an overpayment, an incorrect commitment to a customer, a contractual clause that is legally problematic — who is accountable?

The honest answer is: no one knows for certain. The EU AI Act does not directly address the most advanced agentic systems. The revised Product Liability Directive, which brings software and AI within the definition of a product, has not yet been transposed into national law. And the contractual liability clauses of the major AI vendors sharply limit their own exposure.

0: the number of Dutch organisations Praesum spoke with in Q1 2026, among those already deploying agentic AI, that have a formally established policy on which decisions autonomous AI agents may and may not take independently.

Boardroom insight

The question is no longer whether AI is making decisions inside your organisation. The question is whether your board knows which decisions — and whether there is governance in place that ensures accountability. Every executive board deploying agentic AI without a formal decision-making framework is taking a board-level risk that is uninsurable.

Three decisions the board must make now

Establish a decision-making matrix: which categories of decision may an autonomous AI system take independently? Which require human approval? Which are always reserved for a human, regardless of the efficiency gain? This is a board decision, not an IT configuration.
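A decision-making matrix of this kind can be made machine-readable, so the policy the board sets is the policy the agent actually enforces. A minimal sketch, in which the categories, the three tiers and the EUR 5,000 value cap are illustrative assumptions, not a standard:

```python
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = "agent may act alone"
    HUMAN_APPROVAL = "agent proposes, a human approves"
    HUMAN_ONLY = "reserved for a human"

# Illustrative matrix: decision category -> (tier, optional value cap in EUR).
DECISION_MATRIX = {
    "invoice_payment":     (Tier.AUTONOMOUS, 5_000),  # autonomous only under the cap
    "customer_refund":     (Tier.HUMAN_APPROVAL, None),
    "contract_signing":    (Tier.HUMAN_ONLY, None),
    "candidate_rejection": (Tier.HUMAN_ONLY, None),
}

def required_tier(category: str, amount: float = 0.0) -> Tier:
    """Return the approval tier the matrix requires for a decision."""
    tier, cap = DECISION_MATRIX[category]
    if cap is not None and amount > cap:
        # Above the value cap, the decision escalates to human approval.
        return Tier.HUMAN_APPROVAL
    return tier
```

The point of the sketch is the escalation rule: the agent never decides for itself which tier applies; the matrix does.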

Contractually require logging and audit trails: every decision taken by an autonomous AI system must be logged in sufficient detail to be reconstructable after the fact. Demand this explicitly from your vendor; standard contracts frequently do not provide it adequately.

Define the incident protocol: when an autonomous system makes an error, what is the escalation path? Who is informed? What are the first-response procedures? Organisations that document this now are fundamentally better positioned when something goes wrong, and something will go wrong.

The broader implication

Autonomous AI agents cannot be stopped. The productivity gains are too significant, the technology too widely available and the competitive pressure too real. The question for boards is not whether they will be deployed — but under what governance they are deployed.

Organisations that establish clear frameworks now are building a liability shield and a foundation of trust that will be invaluable in 18 months, when the first major public incidents occur.

Ready for the next step?

AI demands boardroom grip.

Not just insight — but a plan your board can execute.