AI-related exclusions are starting to appear more often in commercial insurance policies, but for now, cyber insurance is largely standing firm against them. Alexandra Bretschneider, vice president and cyber practice leader at Johnson Kendall Johnson, shared insights on how the industry is responding to the growing role of artificial intelligence in risk.
While some insurance carriers have added AI-specific exclusions elsewhere, most cyber insurers are actually clarifying that they will continue to cover losses caused by AI-driven attacks. That reassurance matters as AI-fueled threats, like deepfakes and sophisticated social engineering scams, become more common. Bretschneider pointed out that AI doesn't so much create brand-new risks as amplify existing ones. The challenge remains in defining how AI-related digital events fit into coverage, especially when they lead to physical damage or injury.
The real struggle lies outside cyber insurance, particularly in areas like management liability and professional indemnity. Some insurers have introduced broad AI exclusions with vague definitions (Berkley was cited as an example), casting a wide net that excludes any reliance on AI. These exclusions are now showing up in directors and officers (D&O), errors and omissions (E&O), employment practices, fiduciary, and crime coverage. Their sweeping scope is raising concerns among industry watchers.
Despite this, Bretschneider advises companies not to panic over their current cyber insurance. Coverage generally remains intact for claims related to risks traditionally covered by these policies. She doesn’t see AI becoming its own separate insurance line anytime soon. Instead, AI risks will likely be folded into existing coverage like cyber. Some exceptions could appear for AI developers, who may need tailored protection reflecting their unique exposures.
Right now, insurance products designed specifically for AI risk are rare—probably fewer than five on the market. One of the few available options is Armilla AI, which helps cover some legal and financial harms tied to AI, but it doesn’t cover bodily injury or property damage. That leaves gaps, especially around failures or bad decisions caused by AI.
As AI exclusions become more common, clients are starting to rethink their risk strategies. Conversations now include controls on AI use, employee behavior, and access restrictions. Bretschneider sees this as a good thing, encouraging careful innovation with some guardrails.
When it comes to cyber threats, she dismisses the idea that insurers should treat AI attacks differently from those carried out by human hackers. What matters is the impact, not whether the attacker is a person or an AI. Insurers focus more on an organization's processes, such as how it verifies and authenticates contacts, than on the technology used in the attack.
Looking ahead, similar attention to process will spread to underwriting for D&O and E&O insurance. If a company relies on AI, underwriters will want to see how they validate their results and manage risks. They expect clear documentation and quality controls, just like with other fraud safeguards.
Bretschneider also noted that employee misuse of AI tools, such as uploading sensitive data to public platforms, would likely count as a breach, triggering coverage only if the policy's terms and applicable regulatory requirements are met.
Overall, the insurance industry is still adjusting to AI risks, with cyber insurance holding coverage firm while other lines tighten their language. For now, thoughtful risk management and ongoing conversations between insurers and clients remain key.