Three documented cases in Q1 2026. More than €2 million taken. The technology costs €50 per month. Your finance function is probably unprepared — and that is a boardroom problem.
In February 2026, the CFO of a mid-sized Dutch logistics company received a video call from his CEO. The CEO was on a business trip in Singapore. The call was urgent: a time-sensitive acquisition required an immediate transfer of €2.3 million to a Singaporean counterparty, outside the normal payment processes. The CEO was clear, calm and detailed. The CFO hesitated, but he knew his CEO. He made the transfer.
The CEO called back an hour later — from Singapore. He had never made the call. It was a deepfake: a real-time AI-generated video of his face, his voice, his manner of speaking. Assembled from publicly available video footage of conferences, interviews and company presentations.
This is not science fiction. This is Q1 2026 — and it happened twice more in Europe this quarter, with a comparable method and comparable sums.
The reflexive response is understandable: "We need to improve our IT security." But that misses the point. Deepfake fraud bypasses IT systems entirely. There is no phishing link, no malware, no data breach. The attack goes through people — through trust, through hierarchy, through the psychological pressure a CEO can exert on a CFO. That is a governance question, not a technical one.
The board that delegates this to IT is misidentifying the real risk owner. The real risk owner is the executive board itself — because this concerns the credibility of executive authority as an authentication mechanism. And that authority is now under attack.
The technology behind these attacks is not sophisticated in the sense of "accessible only to state actors." It is commercially available, increasingly user-friendly and — critically — increasingly indistinguishable from genuine video. Two years ago, a deepfake was still recognisable from lip-synchronisation problems and artificial skin texture. In 2026, the best models are, under controlled conditions, indistinguishable from the real thing.
All three cases we have been able to document display a pattern that is disturbingly consistent:
Time pressure as a weapon: the fraudsters created urgency. A time-sensitive transaction, a deal that had to close "today," an external deadline that left no room for verification through normal channels. Time pressure is the most effective way to override the judgement of competent professionals.
Authority escalation: in two of the three cases, the deepfake CEO was accompanied by an "external lawyer" or "investment banker", either also a deepfake or a real accomplice, who confirmed the legitimacy of the request. Multiple authority confirmations dramatically reduce the likelihood of resistance.
Circumvention of normal processes: the request was always framed as exceptional, outside normal approval procedures. "This goes through me" is the signal that would normally give a CFO pause. But when the CEO himself says it, and there is time pressure, and there is external confirmation, the exception becomes a mistake.
Public information as raw material: in all three cases, the deepfake was assembled from publicly available sources: press conferences, YouTube interviews, LinkedIn videos, company presentations. The more publicly visible your CEO is, which for a modern leader is almost an obligation, the more training material is available.
Deepfake fraud cannot be solved with a technical patch. It requires board-level decisions on verification protocols, payment processes and — fundamentally — an honest conversation about how your organisation handles authority and urgency.
AI fraud is now a board-level risk. Human verification protocols are not an IT measure — they are governance policy. The CFO who transfers funds on the basis of a video call with his CEO is not acting negligently. He is acting exactly as he always has. That is the problem.
The practical implications for your executive board are immediate:
Verify outside the compromised channel: every financial request received by video or telephone must be verified via a second channel, such as a separate telephone number, a pre-agreed codeword, or physical confirmation. This applies equally, and especially, to requests from the CEO. Make this policy explicit and discuss it with the entire executive board.
Set thresholds for exception processes: every payment request that falls outside normal approval procedures triggers an automatic verification protocol, regardless of who makes the request. This is not a lack of trust in the CEO. It is an acknowledgement that the CEO may themselves be the target of fraud.
Train your executive board and finance team explicitly: not once, but quarterly. Show people real deepfake examples. Discuss what time pressure does to judgement. Make "I want to verify this" socially acceptable, even when the CEO is pressing for speed.
Document and adopt the policy at board level: this is not an IT policy. It is a board decision, recorded in writing, approved by the executive board and reviewed annually. It also provides protection: when an employee makes an error despite the policy, there is a governance trail.
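The verification triggers described above can be expressed as a simple, auditable rule. The sketch below is illustrative only: the channel list, the €50,000 threshold and all field names are assumptions a board would set for itself, not a standard.

```python
from dataclasses import dataclass

# Illustrative sketch of a verification-trigger rule: a payment request that
# arrives via a voice/video channel, exceeds a board-set threshold, or bypasses
# the normal approval workflow requires out-of-band confirmation before release.
HIGH_RISK_CHANNELS = {"video_call", "phone_call", "voice_message"}
OUT_OF_BAND_THRESHOLD_EUR = 50_000  # assumed board-set limit

@dataclass
class PaymentRequest:
    amount_eur: float
    channel: str              # e.g. "video_call", "email", "erp_workflow"
    in_normal_workflow: bool  # False if framed as an exception

def needs_out_of_band_verification(req: PaymentRequest) -> bool:
    """Return True if the request must be confirmed via a second,
    pre-agreed channel (callback number, codeword, in person)."""
    return (
        req.channel in HIGH_RISK_CHANNELS
        or req.amount_eur >= OUT_OF_BAND_THRESHOLD_EUR
        or not req.in_normal_workflow
    )

# The February case: urgent video call, €2.3 million, outside normal processes.
print(needs_out_of_band_verification(
    PaymentRequest(2_300_000, "video_call", in_normal_workflow=False)
))  # → True
```

The point of encoding the rule is that it fires regardless of who appears to make the request; the CEO's face and voice never enter the decision.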
Deepfake fraud is a symptom of a deeper problem: the erosion of trust as an authentication mechanism. Organisations have been built for decades on the assumption that a recognised face and a familiar voice are reliable signals. That assumption is now invalid.
This has implications that go beyond fraud prevention. It touches on how organisations make decisions at a distance, how executives exercise their authority in hybrid working environments, and how clients and business partners will in future verify the authenticity of communications. The board that treats deepfake fraud as an isolated security problem misses the structural impact on organisational authority.
The parallel with the introduction of email is instructive. In the 1990s, email was a new communication medium with virtually no authentication barrier. It took two decades before phishing, spoofing and email fraud were recognised as serious board-level risks — and the damage over those two decades was enormous. Video and voice as communication media now stand at the same inflection point.
The CFO who made the fraudulent transfer was an experienced professional with a twenty-year track record. He did precisely what his role demands: take an urgent request from his CEO seriously and act quickly. The system failed him — not the other way around. That is what the board must understand and must fix.
The scrutiny should not stop at fraud: it must extend to the full AI landscape of your organisation.