Why AI Alone Won't Fix Submission Intake — And What Actually Does

Written by Prakhar Mohan
Last Updated: April 2, 2026
12 min read
  • The submission intake problem has been unsolved for more than a decade, not because carriers have not tried, but because every fix addresses one link in the chain without treating the chain as a system.
  • Generic AI speeds up intake but does not fix trust. When underwriters find outputs wrong even 5 to 10 percent of the time, they re-verify everything. The speed gain disappears into verification overhead.
  • Hybrid AI-plus-human models delivered about 20 percent ROI improvement in live deployments — real progress, but the accountability gap stayed open because underwriters could not trace outputs back to source.
  • The real fix requires verticalized AI trained on commercial P&C underwriting, plus an intelligent agentic service layer that makes every output verifiable. Accuracy first. Speed follows.
  • Carriers running CURE™ report 85 percent faster submission-to-decision, 32 percent more GWP per underwriter, and 700 basis points of loss ratio improvement.

A decade of BPO, offshore, and generic AI has not solved commercial underwriting's intake problem. Here is the real reason every fix fails and what the architecture that works actually looks like.

Michaela Morrison has been in insurance longer than most AI companies have existed. She started on the agency side, watched her first MGA try to drag agents into digital workflows in the 1990s, and has spent the years since building and deploying every kind of submission intake fix the industry has offered. At InsureTech Connect Vegas 2025, she said the thing every carrier and MGA leader feels but rarely says out loud.

"We still had a problem."

Not a minor inefficiency. Not a work-in-progress. A fundamental bottleneck that had survived portals, offshore teams, generic AI, and a hybrid model she was personally convinced would solve it. The bottleneck kept relocating. It never disappeared.

This piece examines why. And what the architecture that actually fixes it looks like, based on Michaela's decade of live deployments and what we have seen across 40+ commercial P&C carriers and MGAs running Pibit.ai's CURE™ platform in production.

What the industry tried, in order
Five approaches. Five outcomes. Live deployment data.

  • Proprietary portals (1990s-2000s): agents refused to adopt them. Failed.
  • Offshore BPO teams (2000s-2010s): data errors, could not scale at peak. Failed.
  • Generic AI (2018-2022): speed without trust means rework. Failed.
  • Hybrid AI plus human QA (2022-2024): about 20 percent ROI, accountability gap remained. Partial only.
  • Verticalized AI with intelligent agentic service layer: verifiable accuracy, explicit confidence signals, adoption that holds. What works.

The decade-long pattern nobody talks about honestly

The submission intake problem is not new. Michaela traces it back further than most conversations go. In her first MGA role, leadership had built a proprietary system and tried to force agents to hand-key submissions into it. The agents in Arkansas said no. Some did not have internet. Some did not have printers. The MGA paid for both. The agents still resisted going to yet another portal.

"We're not trying to solve a new problem," she said at ITC. "We're just trying to solve the same problem that still hasn't necessarily been solved."

That early moment captures the structural challenge. The problem is not technological. It is about workflow friction, agent behavior, system fragmentation, and professional accountability. Every time the industry upgrades the tool, the friction finds a new place to live.

  • 10+ years: the problem remains unsolved despite portals, BPO, OCR, and multiple AI waves.
  • 70%: underwriter time spent on pre-decision work (extraction, re-keying, triage, broker follow-ups).
  • 53%: submissions that go to the first quoting carrier (CIAB Commercial Insurance Survey 2025).

Five stages. The same outcome every time.

Here is the sequence Michaela described at ITC Vegas, in order. Each stage was pursued with genuine conviction.

Submission intake: the attempted fixes
1990s to 2025 · Michaela Morrison
01 · 1990s-2000s · Portal era
Build your own portal. Force the agents to use it.

Every carrier built a proprietary intake system. Agents were navigating dozens of carrier relationships. Going to 15 different portals was not a workflow improvement. Adoption collapsed.

Outcome: adoption failure

02 · 2000s-2010s · Offshore era
Offshore teams would absorb the volume. Two problems appeared.

During peak months, teams still fell behind. Data integrity suffered: information keyed into wrong fields, cascading errors downstream. Underwriters ran data quality checks on top of underwriting.

Outcome: data errors plus peak-season gaps

03 · 2018-2022 · Generic AI era
Speed improved. Underwriters still checked everything.

Rather than reducing effort, AI added a verification step before real work could begin. Counterintuitively, younger underwriters were harder to convince than older ones. Until outputs are verifiable, re-checking is the rational response.

Outcome: speed gain consumed by re-verification

04 · 2022-2024 · Hybrid model era
AI plus human QA. About 20 percent ROI. Trust gap stayed open.

Michaela ran this pilot with genuine conviction. ROI improved approximately 20 percent, but underwriters still did not trust the outputs. "If I mess up, you're coming back on me." The accountability gap remained open because provenance was still missing.

Outcome: about 20 percent ROI, accountability gap persisted

05 · 2025 · The honest assessment
Loss runs improved. Web research improved. The core problem remained.

By 2025, processing had genuinely improved. But submissions where underwriters could not fully verify the output were still being manually re-checked. "Every time we solve for problem A, the bottleneck moved to problem B and to problem C."

Outcome: bottleneck relocated, not eliminated
"We still had a problem." - Michaela Morrison, VP and GM, Method Workers' Comp, InsureTech Connect Vegas 2025

Why every fix moves the bottleneck instead of removing it

The pattern is not a coincidence. Four structural forces operate simultaneously, and fixing any one without addressing the others just changes where the constraint lives.

01 · Accountability
Underwriters hold 100 percent of the professional liability

When AI outputs a risk assessment and an underwriter acts on it, the liability stays with the underwriter. If something is wrong and they did not catch it, they are on the hook. AI adds a verification step rather than removing one.

100%: accountability stays with the underwriter regardless of AI.
02 · Domain depth
Generic AI does not understand underwriting

General-purpose models do not know what an XMod is, how to classify loss run categories by line of business, or how to identify a missing location on a schedule of values. The accuracy ceiling for generic AI applied to insurance is too low for professional accountability.

0: adoption earned without provenance. Speed without accuracy fails.
03 · Architecture
Point solutions move bottlenecks, they do not eliminate them

Appetite validation, ingestion, risk scoring, triage, and prioritization are interdependent. Fix ingestion and triage becomes the new constraint. Improving any single stage in isolation does not improve the whole; the sketch after these four forces makes this concrete.

5: connected stages from submission to decision.
04 · Volume
Submission volume grew 15 to 20 percent. Headcount did not.

Agents now submit to multiple carriers simultaneously. Inbound volume has risen sharply, but bind-to-submit ratios have not. Manual workflows that were marginal at lower volumes become structurally unsustainable as the denominator changes.

15-20%: annual submission volume growth on flat headcount.
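A toy model makes the architecture point concrete: end-to-end throughput is capped by the slowest stage, so speeding up any single stage simply relocates the constraint. A minimal sketch, with stage names taken from the workflow above and rates that are illustrative, not measured:

```python
# Toy model of the pipeline constraint: end-to-end throughput is the
# minimum of the stage throughputs, so a point solution at one stage
# just relocates the bottleneck. Rates are illustrative, not measured.

stages = {
    "appetite_validation": 120,  # submissions per day each stage can clear
    "ingestion": 40,
    "risk_scoring": 90,
    "triage": 60,
    "prioritization": 150,
}

def bottleneck(rates: dict) -> tuple:
    """Return the stage that caps end-to-end throughput, and its rate."""
    name = min(rates, key=rates.get)
    return name, rates[name]

print(bottleneck(stages))   # ('ingestion', 40)

stages["ingestion"] = 200   # "fix" ingestion with a point solution
print(bottleneck(stages))   # ('triage', 60): the constraint relocated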

The Waymo analogy that explains the trust problem

At ITC Vegas, Michaela described riding in a Waymo and watching it fail at the edges: when valet workers moved cones, the car did not know how to exit the roundabout. At the Phoenix airport, Waymo vehicles park in the wrong lane because the algorithm cannot handle the human chaos of a busy drop-off zone.

The Waymo is genuinely impressive technology. It is also not trusted for edge cases. What makes it usable at all is that it communicates confidence. It tells you what it knows, what it does not, and when it needs a human to take over.

In her own words · ITC Vegas 2025

"The vision of having that technology, but with the checks and balances to highlight to the underwriter: we feel this analysis is right at 100 percent confidence. Or we feel about 70 percent confident, and these are the areas you need to look at because that is where you need to use your underwriting judgment."

This is exactly what underwriters need from AI. Provenance is the product.

CURE™ extraction accuracy by field type
  • Loss run classification accuracy: 99.9%
  • Vehicle schedule extraction: 99.9%
  • XMod and premium calculation fields: 99.9%
  • Items explicitly flagged for UW review: 100%
Every data point is traceable to its exact location in the source document. Underwriters verify, not guess.
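To make "traceable to its exact location" concrete, here is a minimal sketch of what a provenance-carrying extraction record could look like. This is illustrative only; the field names are hypothetical, not CURE™'s actual schema.

```python
from dataclasses import dataclass

@dataclass
class ExtractedField:
    """One extracted value plus the evidence needed to verify it.

    Illustrative only: these field names are hypothetical, not the
    actual CURE schema.
    """
    name: str             # e.g. "experience_mod"
    value: str            # the extracted value
    source_document: str  # file the value came from
    page: int             # page within that file
    bbox: tuple           # (x0, y0, x1, y1) location on the page
    confidence: float     # model confidence in [0, 1]

xmod = ExtractedField(
    name="experience_mod",
    value="0.87",
    source_document="loss_run_2024.pdf",
    page=3,
    bbox=(112.0, 540.5, 168.0, 552.0),
    confidence=0.99,
)

# An underwriter can jump straight to page 3 of the source PDF and
# confirm the value, instead of re-keying the whole document.
```

With a record like this, verification is a lookup rather than a re-extraction, which is the difference between checking an answer and redoing the work.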

The Steve Jobs framework that reframes the whole problem

Jobs did not make the first iPhone better by improving the hardware keyboard. He deleted it entirely because the keyboard was optimizing the wrong thing. That is not an incremental improvement. It is a rethink of the system from first principles.

Submission intake has the same interdependency: fix appetite validation and ingestion improves; fix ingestion and risk scoring gets better data; fix risk scoring and triage becomes more precise. The stages compound when connected. They undermine each other when they are not.

What the architecture that actually works looks like

Three approaches compared
Why the first two fail and what the third requires
Approach · What it solves · What breaks · ROI reality
Generic AI · Processing speed · Accuracy, trust, adoption · Near zero after re-verification overhead
Hybrid AI plus generic QA · Speed plus partial accuracy · Accountability gap, cost of both models · About 20 percent, ceiling hit fast
Verticalized AI with agentic service layer · Accuracy plus provenance plus speed · Nothing structural · 85% faster, 32% more GWP/UW, 700 bps loss ratio

Requirement 1: The AI must be verticalized for commercial P&C underwriting, not insurance-in-general. CURE™ is built exclusively for loss runs, schedules of values, class codes, endorsements, and XMods, not adapted from generic document processing.

Requirement 2: Every output must show its work. Each extracted field must be traceable to its exact location in the source document. Verifiable provenance is what earns adoption at scale.

Requirement 3: The service layer must be intelligent, not just human. Agentic AI services trained to flag inaccuracies, surface confidence levels, and identify what needs human review close the loop in a way that scales at production volume.
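A minimal sketch of the routing logic such a service layer implies, assuming the ExtractedField record sketched earlier. The threshold and function names are hypothetical, not how CURE™ is actually configured:

```python
# Assumes the ExtractedField record sketched earlier; the threshold is
# hypothetical and would be tuned per field type in practice.
REVIEW_THRESHOLD = 0.95

def route(fields):
    """Split fields into auto-accepted and flagged-for-review buckets.

    High-confidence fields flow straight through; the rest are surfaced
    to the underwriter with their source locations attached, so judgment
    is spent only where the model is uncertain.
    """
    auto_accept = [f for f in fields if f.confidence >= REVIEW_THRESHOLD]
    needs_review = [f for f in fields if f.confidence < REVIEW_THRESHOLD]
    return auto_accept, needs_review
```

The design choice this encodes is the Waymo lesson from earlier: the system does not ask for blanket trust, it tells the underwriter exactly where judgment is still required.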

The takeaway

The submission intake problem has survived a decade of attempted fixes not because carriers have not tried hard enough. The problem survived because each fix addressed one link in the chain without treating the chain as a system.

The underwriter is not the bottleneck. Pre-decision work is the bottleneck. Eliminating it requires accuracy high enough to trust, provenance explicit enough to verify, and a workflow connected enough that fixing one stage actually improves the next.

That is what "we still had a problem" is pointing at. Not a failure of effort or investment. A failure of system design and a clear signal of what the right design needs to include.

Frequently Asked Questions

Why does AI fail to fix underwriting submission intake?

Generic AI applied to insurance produces general accuracy, not the underwriting-specific accuracy professionals require. When underwriters find AI outputs are wrong even 5 to 10 percent of the time, they re-verify everything manually, consuming the speed gain the AI was supposed to deliver. Illustratively, if manual extraction takes 20 minutes and AI cuts it to two, an underwriter who still re-verifies every field spends most of the original 20 minutes anyway. The deeper issue is accountability: underwriters are professionally responsible for every decision. Until AI can show its work at 100 percent verifiable accuracy, rational professionals will re-check outputs rather than act on them directly.

What is the difference between generic AI and verticalized AI for underwriting?

Generic AI is trained on broad datasets without underwriting domain depth. It does not understand what an XMod is, how to classify loss run categories by line of business, or how to flag a missing location on a schedule of values. Verticalized AI is trained exclusively on commercial P&C underwriting documents. The accuracy ceiling is fundamentally higher because the training data and validation logic are built for the specific problem.

How long has the submission intake problem existed in commercial insurance?

More than a decade. The industry has cycled through proprietary portals, offshore BPO teams, generic AI, and hybrid AI-plus-human models. Each approach moved the bottleneck rather than eliminating it. The core structural problem — that pre-decision work consumes roughly 70 percent of underwriter capacity — has persisted because no single fix has addressed the full connected workflow that submission intake requires.

About
Prakhar Mohan

Head of Marketing and Partnerships
