Explain This: Why Your Cyber Insurance Just Got Expensive (Or Disappeared)

Cyber insurers are treating AI as a separate risk class. That means new exclusions, doubled premiums, and coverage gaps you didn't plan for. Here's what changed and what to do this week.

Here's the part nobody wants to admit: your cyber insurance policy was written for a threat model that no longer exists.

In the last six months, cyber insurers have started treating AI usage as a separate risk class. That means new exclusions, doubled premiums, and coverage gaps you didn't budget for.

If you disclosed AI usage on your renewal application this year, you probably saw the premium jump. If you didn't disclose it, you just bought a policy that won't pay out when you need it.

Let me explain what changed and what to do this week.

What It Is

Cyber insurers are rewriting policies to separate AI risk from traditional cyber risk.

That means:
- Named-peril coverage (only the incidents the policy lists are covered) instead of all-risk coverage (everything not excluded is covered) for AI incidents.
- Higher premiums if you use AI in customer-facing applications.
- Exclusions for "AI-related incidents" unless you buy a separate rider.

Lockton Re said it plainly in February 2026: AI needs its own risk class.

Underwriters are now asking specific questions about AI usage during applications. If you use AI for fraud detection, customer service, or decision-making, expect higher premiums or coverage limits that don't match your exposure.

Why It Matters

Most companies don't know their cyber policy excludes AI incidents until they file a claim.

Here's what triggers the exclusion:
- A data breach caused by a prompt injection attack on your AI chatbot.
- A lawsuit alleging your AI hiring tool discriminated against protected classes.
- Regulatory fines because your AI recommendation engine violated privacy law.

The policy says "cyber incident." You think that covers AI. The insurer says AI is a separate exposure. You fight about it in court while the claim sits unpaid.

This isn't theoretical. Reddit threads from the last two weeks show real companies getting hit with doubled premiums after disclosing AI usage. One poster said their quote went from $15K to $30K annual premium because they checked the box that said "we use AI."

Where Teams Screw Up

The biggest mistake: treating AI like any other software.

Security teams add AI to the stack without telling legal. Legal renews the cyber policy without asking security if anything changed. Nobody reads the exclusions until the claim gets denied.

Second mistake: assuming "cyber insurance" means "anything that involves computers."

It doesn't. Cyber insurance was built for ransomware, phishing, and network intrusions. AI introduces new exposures that don't fit those categories: model theft, hallucination liability, discriminatory output, poisoned training data.

Third mistake: not documenting AI governance.

Insurers now ask: Do you have an AI usage policy? Do you track what models are in production? Do you log prompts and outputs? If the answer is no, expect coverage limits that don't match your risk.

What "Reasonable" Looks Like

If you're using AI in production, here's what underwriters expect to see:

1) An AI inventory. What models are in production? Who owns them? What data do they touch? If you can't answer this, you can't get affordable coverage.

2) Logging and monitoring. Prompt logs, output logs, usage logs. Insurers want proof you can reconstruct what happened when something goes wrong. (A minimal logging sketch follows this list.)

3) A policy that defines acceptable use. What can employees use AI for? What's off-limits? Who approves new models? If it's not written down, it doesn't exist.

4) Vendor risk management. If you use third-party AI (OpenAI, Anthropic, Google), do you have indemnification clauses? Do you know what their liability cap is? Most vendors cap liability at the fees you paid them. That won't cover a class action.

5) Separation of duties. Who can deploy a model to production? Who reviews output before it goes to customers? If the answer is "the same person," that's a control gap.
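Point 2 is the one you can start building this week. Here's a minimal sketch of per-call audit logging, assuming a JSON-lines audit file; the names here (log_ai_call, ai_audit.jsonl, the model and user IDs) are placeholders I made up, not any vendor's API:

```python
# Minimal sketch of prompt/output audit logging -- one JSON record per
# model call, so an incident can be reconstructed later: who asked what,
# when, and what came back. All names here are illustrative assumptions.
import json
import time
import uuid


def log_ai_call(log_path, model, prompt, output, user_id):
    """Append one auditable record per model call."""
    record = {
        "request_id": str(uuid.uuid4()),  # ties this record to app logs
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model,                   # which model produced the output
        "user_id": user_id,               # who triggered the call
        "prompt": prompt,
        "output": output,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Usage: wrap every production model call.
log_ai_call(
    "ai_audit.jsonl",
    model="gpt-4o",            # hypothetical model name
    prompt="Summarize this claim history...",
    output="The customer filed two claims in 2024...",
    user_id="agent-4821",
)
```

JSON lines keep each call independently parseable, which matters when a forensics team, or an insurer, asks for a specific window of activity.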

What to Do This Week

1) Read your cyber policy exclusions. Look for language about "AI," "machine learning," "automated decision-making," or "algorithmic systems." If it's excluded, you need a separate rider or a new policy. (A quick keyword scan, sketched after this list, speeds up the first pass.)

2) Inventory your AI usage. Make a spreadsheet: model name, vendor, what it does, what data it touches, who owns it. If you can't fill this out in one hour, your AI governance is not ready for an insurance application. (See the CSV sketch after this list.)

3) Document your AI governance. Write down: acceptable use, approval process, logging requirements, incident response. If it's not in a policy document, insurers assume you don't do it.

4) Talk to your broker before renewal. Don't wait until the application. Ask: Does our current policy cover AI incidents? What documentation do underwriters want to see? What exclusions are standard?

5) Plan for higher premiums. If you use AI in production, budget for 20-50% higher premiums at renewal. If you don't disclose it and file a claim later, budget for litigation.
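For step 1, if you can get a plain-text copy of the policy, a keyword scan gets you to the relevant clauses fast. A rough sketch; the filename and keyword list are assumptions you should adapt to your own documents:

```python
# Scan a plain-text copy of your policy for AI-related exclusion language.
# "policy.txt" and the keyword list below are assumptions, not standards.
import re

KEYWORDS = re.compile(
    r"artificial intelligence|\bAI\b|machine learning|"
    r"automated decision|algorithmic",
    re.IGNORECASE,
)

with open("policy.txt", encoding="utf-8") as f:
    for lineno, line in enumerate(f, start=1):
        if KEYWORDS.search(line):
            print(f"{lineno}: {line.strip()}")  # review each hit in context
```

A hit isn't automatically an exclusion; read the surrounding clause. But zero hits plus heavy AI usage is its own red flag: the policy may simply be silent, which is where coverage fights start.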
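For step 2, the spreadsheet can literally be a CSV. A sketch with made-up example values, using the same columns listed above:

```python
# Sketch of the AI inventory as a CSV you can hand to a broker.
# Column names mirror the fields in step 2; the example row is invented.
import csv

FIELDS = ["model_name", "vendor", "purpose", "data_touched", "owner"]

rows = [
    {
        "model_name": "support-chatbot-v2",   # hypothetical entry
        "vendor": "OpenAI",
        "purpose": "customer service triage",
        "data_touched": "names, emails, ticket text",
        "owner": "support-eng",
    },
]

with open("ai_inventory.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

Keep it in version control so the renewal application six months from now reflects what's actually in production, not what was in production last time someone asked.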


Subscribe if you want the legal version of risk management, not the marketing version.

What's the AI system in your stack that would hurt most if coverage excluded it?

