The coverage gap
The insurance industry has the potential to regulate AI faster than any government.
AI systems are generating outputs that create legal and financial liability for the companies deploying them: fabricated facts that trigger defamation suits, chatbot commitments that bind their operators, deepfakes that enable fraud. The first institutions to respond to this emerging risk at scale have been insurance carriers. Some of the largest names in commercial insurance are seeking permission to exclude AI from standard liability policies entirely. Others are building dedicated AI coverage products for the first time.
The correlated loss problem
AIG, WR Berkley, and Great American have each asked state regulators for permission to exclude AI-related claims from standard corporate liability policies. WR Berkley’s proposed language would bar coverage for claims tied to “any actual or alleged use” of AI, including products or services that merely incorporate the technology. AIG told the Illinois insurance regulator that generative AI is a “wide-ranging technology” whose potential for triggering claims will “likely increase over time.” The company added that it has “no plans to implement” the exclusions immediately, framing the filing as a request for future flexibility rather than an imminent coverage change.
The incidents driving the retreat carry concrete price tags. Wolf River Electric, a Minnesota solar company, is seeking more than $110 million in damages from Google after the search engine’s AI Overview feature fabricated a state attorney general lawsuit against the company. According to the complaint, the hallucination cost the business $25 million in sales during 2024, as customers canceled contracts after encountering the false claims in search results. Air Canada was ordered to honor a bereavement discount that its customer-service chatbot invented during a conversation with a passenger. UK engineering firm Arup lost £20 million after employees approved a wire transfer during a video call with deepfake replicas of their colleagues. In each case, the loss originated in AI-generated output that created legal or financial liability for the company deploying the system, even where, as at Arup, humans acted on that output in good faith.
Individual losses at this scale remain manageable for the industry. Aon’s Kevin Kalinich told the Financial Times that carriers could absorb a $400 million or $500 million hit from a single company. The scenario the industry cannot price is an upstream failure in a widely deployed foundation model that triggers thousands of simultaneous claims across every enterprise using it. Kalinich described this as “systemic, correlated, aggregated risk.” Traditional liability underwriting assumes that losses are independent events. The concentration of enterprise AI on a small number of foundation models from OpenAI, Google, and Anthropic makes that assumption untenable.
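The difference between independent and correlated losses can be made concrete with a toy simulation. The sketch below is illustrative only, not an actuarial model: the portfolio size, claim probability, claim severity, and upstream-failure probability are all invented for the example. It compares the tail of a book where each insured firm's loss is an independent event against one where every firm also depends on a single shared model whose failure triggers claims across the whole portfolio at once.

```python
import random

random.seed(0)

# Hypothetical portfolio parameters (assumptions, not market data).
N_FIRMS = 1_000          # insured firms in the book
P_CLAIM = 0.01           # per-firm annual claim probability
CLAIM_SIZE = 5_000_000   # fixed claim severity, USD
P_MODEL_FAIL = 0.01      # annual chance the shared upstream model fails

def independent_year() -> int:
    # Each firm's claim is an independent coin flip.
    claims = sum(1 for _ in range(N_FIRMS) if random.random() < P_CLAIM)
    return claims * CLAIM_SIZE

def correlated_year() -> int:
    # Same book, but one upstream failure hits every firm simultaneously.
    if random.random() < P_MODEL_FAIL:
        return N_FIRMS * CLAIM_SIZE
    return independent_year()

YEARS = 10_000
ind = sorted(independent_year() for _ in range(YEARS))
cor = sorted(correlated_year() for _ in range(YEARS))

# 99.5th-percentile annual loss, a common solvency-style benchmark.
q = int(0.995 * YEARS)
print(f"independent 99.5% annual loss: ${ind[q]:,}")
print(f"correlated  99.5% annual loss: ${cor[q]:,}")
```

Under independence, the 99.5th-percentile year is only modestly worse than the average year, because individual claims diversify away. Adding even a 1 percent chance of a shared upstream failure puts total-portfolio-loss years inside the tail the insurer must reserve against, which is the "systemic, correlated, aggregated risk" that resists conventional pricing.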
Boilerplate insulation
Verisk’s ISO unit writes the standardized policy language that most US commercial insurers use. In January, the unit published new endorsement forms allowing carriers to exclude generative AI from commercial general liability coverage with regulator-approved boilerplate. Verisk reported strong carrier interest and expects rapid adoption.
The two primary forms, CG 40 47 and CG 40 48, give carriers a choice of scope. CG 40 47 excludes generative AI from both bodily-injury and advertising-injury coverage. CG 40 48 targets advertising and personal injury claims only, preserving physical-harm coverage while carving out the content-generation risks that have driven the most prominent lawsuits. Both forms define generative AI broadly enough to capture chatbots, content generators, and automated decision-making tools.
The practical consequence reaches any company using AI-generated content in marketing, customer service, or product guidance. A GL policy renewed since January may no longer cover a defamation claim arising from a chatbot’s output or a copyright suit triggered by AI-generated advertising copy. Many companies will encounter this change embedded in the fine print of a renewed policy, because the exclusion arrives as an endorsement attached to existing coverage with no separate notification of reduced protection.
A new opportunity
Munich Re’s HSB division launched the first dedicated AI liability insurance product for small and medium businesses in mid-March. The product covers bodily injury from AI-controlled physical systems, property damage caused by AI-generated instructions, and advertising injury from AI-produced content. HSB illustrated the scope with scenarios drawn from the claims it anticipates: an AI-managed HVAC system that creates a slip hazard, a chatbot-generated appliance installation guide that causes water damage, an AI-produced marketing brochure that incorporates copyrighted material. The company’s own survey found that 91 percent of businesses plan to use AI, and the product fills gaps that general liability policies are beginning to carve out.
QBE has introduced an endorsement covering fines under the EU AI Act, the first by a major insurer to reference a specific AI regulation as a coverage criterion. Coverage under the endorsement requires that the policyholder demonstrate compliance with the Act’s governance framework, tying insurability directly to regulatory standards.
The carriers seeking exclusions and the specialty insurers building AI coverage products have reached different conclusions about the same underlying question: whether AI risk can be modeled precisely enough to price. The major carriers see insufficient data and unprecedented correlation. The specialty carriers see a risk that governance requirements and narrow coverage definitions can constrain enough to underwrite.
A form of regulation
Insurance exclusions take effect at policy renewal. They require no legislative process, no enforcement infrastructure, and no judicial interpretation to become operative. A Swiss company with American insurance and a German manufacturer with Lloyd’s coverage face the same exclusion language if their carriers adopt the same endorsements.
The EU AI Act’s high-risk rules, originally scheduled for August 2026, may be delayed to December 2027 as trilogue negotiations continue. Three US state AI laws that took effect on January 1 are entangled in federal preemption disputes following an executive order that seeks to override them. Insurance exclusions have been active on policies renewed since January. The Verisk endorsements are available to every carrier in the US market, and carrier filings suggest that adoption will accelerate throughout the year.


