What the revised Product Liability Directive means for executives.
The revised EU Product Liability Directive, which explicitly classifies AI systems as products to which liability attaches, must be transposed into national law by EU member states by December 2026. The implication is far-reaching: harm caused by a defective AI system can give rise to liability without any requirement to prove intent or negligence.
This is a fundamental shift in the liability landscape. Until now, the legal position of organisations using AI was relatively comfortable: the burden of proof lay with the injured party, and without demonstrable fault, liability was difficult to establish. That comfortable starting point is disappearing.
Strict liability, that is, liability without fault, means that when a defective AI system causes harm, you are liable for that harm regardless of whether you acted carefully, configured the system correctly or took reasonable precautions. The injured party must still show that the product was defective and caused the damage, but no longer that you were at fault. The path to damages is considerably shorter.
AI liability is becoming strict liability. That fundamentally changes the risk calculus of every AI project with external impact. Every application that makes decisions capable of harming customers, suppliers or third parties must be reassessed in this light.
Inventory your exposure
Which AI applications have external impact? Where are AI systems making decisions that could cause harm to customers, suppliers or third parties? That inventory is the basis of your liability assessment.
Review your insurance positions
Are your current liability policies adequate for AI-related harm? Many policies contain explicit exclusions for AI-related claims. This is a conversation with your insurance broker that must happen this quarter.
Document your due diligence
When liability is asserted against you, evidence that you took reasonable measures, such as testing, validation, human oversight and documentation, is your best defence. Begin systematically documenting those measures today.
Not just insight — but a plan your board can execute.