Gallagher Re: AI Risks Expose Gaps in Insurance Coverage

AI blueprint rendering of a bridge that doesn't meet in the middle

March 26, 2026

Gallagher Re's report, Smart Systems, Blind Spots: Rethinking Insurance for the AI Era, developed in association with the Massachusetts Institute of Technology and Testudo, finds that the rapid adoption of artificial intelligence (AI) has outpaced the insurance industry's ability to address the risks it creates. The technology is introducing a class of AI-native liabilities that traditional insurance policies may not fully recognize or respond to.

AI deployment across enterprise functions has expanded significantly, with organizations increasingly relying on machine learning models, natural language processing systems, and autonomous decision-making tools to support operations and customer interactions, according to the report. These systems process large volumes of data and operate within complex architectures, creating vulnerabilities that traditional security and risk frameworks are not designed to manage, per the report. 

The report identifies several categories of AI-specific risks, including adversarial manipulation, data poisoning, model drift, algorithmic bias, and explainability gaps, each of which can lead to operational failures, regulatory exposure, and reputational damage. These risks can result in incorrect decisions, compromised model integrity, and compliance failures, underscoring the distinct nature of AI-related exposures, according to Gallagher Re. 

These vulnerabilities are already producing measurable legal and financial consequences, with litigation related to generative AI rising sharply in recent years, according to the report. Data cited in the report shows cumulative lawsuits in the United States exceeding 700 between 2020 and 2025, with filings increasing by 978.1 percent over that period, indicating that AI-related liability is expanding faster than insurance and regulatory frameworks can adapt, per Gallagher Re. 

Despite this growing risk, existing insurance coverage remains fragmented, with multiple policy types—such as cyber, technology errors and omissions (E&O), product liability, and commercial general liability (CGL)—only partially addressing AI-related exposures, according to the report. Many AI risks, including hallucinations, algorithmic bias, and regulatory fines, are either excluded or not triggered under traditional policies, per the report. 

Cyber insurance, for example, may respond to AI-enabled cyberattacks or phishing incidents if policy triggers are met, but it generally does not cover liabilities arising from AI outputs such as defamation, hallucinations, or intellectual property infringement, according to Gallagher Re. The report notes that this creates a distinction between AI as an attack vector, which may be covered, and AI as a source of liability, which often is not. 

Technology E&O coverage is designed to protect providers of technology services rather than organizations deploying AI systems, limiting its relevance for most enterprises adopting third-party AI tools, according to the report. As a result, many deployers lack coverage for liabilities stemming from their use of AI, including financial loss, defamation, or data disclosure caused by AI outputs, per Gallagher Re.

Product liability insurance may apply in cases where AI systems cause physical harm, particularly where software is treated as a product under applicable law, but it does not extend to non-physical harms such as discrimination or financial loss, according to the report. This limitation leaves significant exposure for AI-driven incidents that do not result in bodily injury or property damage, per the report.

Similarly, commercial general liability policies may respond to certain third-party claims, such as defamation or advertising injury, but emerging exclusions—particularly those related to generative AI—may further restrict coverage for AI-related harms, according to Gallagher Re. The report emphasizes that most AI risks, including hallucinations and algorithmic failures, fall outside the scope of traditional CGL coverage. 

The report highlights a fundamental gap between AI risk characteristics and existing insurance structures, noting that many AI-related harms arise without traditional triggers such as security breaches or physical damage. For example, generative AI risks such as prompt injection, jailbreaking, and hallucinations can create liability through harmful outputs rather than system failures, as detailed in the report. 

Non-generative AI systems also present liability risks tied to model performance and decision-making, including algorithmic discrimination and model drift, which can lead to financial losses and litigation without any corresponding coverage under traditional policies, according to Gallagher Re. Case studies referenced in the report, including healthcare claim denials and algorithmic valuation failures, illustrate how these risks materialize in practice. 

The report further notes that liability is typically assigned to AI deployers rather than developers or vendors, as organizations using AI systems are generally responsible for their outputs in legal and regulatory contexts. Contractual structures often reinforce this allocation, with vendor liability caps and limited indemnities leaving deployers exposed to third-party claims, per Gallagher Re.

To address these gaps, the report identifies the emergence of stand-alone AI insurance products designed specifically for AI-related risks, including offerings from Munich Re, Armilla, and Testudo. These products target exposures such as model underperformance, hallucinations, and third-party liability, reflecting a shift toward coverage tailored to AI failure modes rather than traditional risk categories, according to the report. 

In addition to stand-alone products, insurers are introducing endorsements to clarify coverage within existing policies, including cyber insurance extensions for AI-related risks and professional indemnity enhancements for AI-driven services, according to Gallagher Re. However, the report notes that ambiguity remains, particularly where policy language does not explicitly address AI scenarios. 

Looking ahead, the report concludes that a widening gap persists between AI-related exposures and available insurance protection, with AI risks creating a growing category of uninsured enterprise liability. As AI adoption accelerates, the insurance market's ability to adapt—through product innovation, clearer policy language, and improved risk modeling—will determine how effectively these emerging risks are managed, according to Gallagher Re. 