Why AI Alone Won't Fix Submission Intake — And What Actually Does

- The submission intake problem has been unsolved for more than a decade, not because carriers have not tried, but because every fix addresses one link in the chain without treating the chain as a system.
- Generic AI speeds up intake but does not fix trust. When underwriters find outputs wrong even 5 to 10 percent of the time, they re-verify everything. The speed gain disappears into verification overhead.
- Hybrid AI-plus-human models delivered about 20 percent ROI improvement in live deployments — real progress, but the accountability gap stayed open because underwriters could not trace outputs back to source.
- The real fix requires verticalized AI trained on commercial P&C underwriting, plus an intelligent agentic service layer that makes every output verifiable. Accuracy first. Speed follows.
- Carriers running CURE™ report 85 percent faster submission-to-decision, 32 percent more GWP per underwriter, and 700 basis points of loss ratio improvement.
Michaela Morrison has been in insurance longer than most AI companies have existed. She started on the agency side, watched her first MGA try to drag agents into digital workflows in the 1990s, and has spent the years since building and deploying every kind of submission intake fix the industry has offered. At InsureTech Connect Vegas 2025, she said the thing every carrier and MGA leader feels but rarely says out loud.
"We still had a problem."
Not a minor inefficiency. Not a work-in-progress. A fundamental bottleneck that had survived portals, offshore teams, generic AI, and a hybrid model she was personally convinced would solve it. The bottleneck kept relocating. It never disappeared.
This piece examines why. And what the architecture that actually fixes it looks like, based on Michaela's decade of live deployments and what we have seen across 40+ commercial P&C carriers and MGAs running Pibit.ai's CURE™ platform in production.
The decade-long pattern nobody talks about honestly
The submission intake problem is not new. Michaela traces it back further than most conversations go. In her first MGA role, leadership had built a proprietary system and tried to force agents to hand-key submissions into it. The agents in Arkansas said no. Some did not have internet. Some did not have printers. The MGA paid for both. The agents still resisted going to yet another portal.
"We're not trying to solve a new problem," she said at ITC. "We're just trying to solve the same problem that still hasn't necessarily been solved."
That early moment captures the structural challenge. The problem is not technological. It is about workflow friction, agent behavior, system fragmentation, and professional accountability. Every time the industry upgrades the tool, the friction finds a new place to live.
Five stages. The same outcome every time.
Here is the sequence Michaela described at ITC Vegas, in order. Each stage was pursued with genuine conviction.
"We still had a problem." - Michaela Morrison, VP and GM, Method Workers' Comp, InsureTech Connect Vegas 2025
Why every fix moves the bottleneck instead of removing it
The pattern is not a coincidence. Four structural forces operate simultaneously, and fixing any one without addressing the others just changes where the constraint lives.
Force 1: Accountability stays with the underwriter. When AI outputs a risk assessment and an underwriter acts on it, the liability stays with the underwriter. If something is wrong and they did not catch it, they are on the hook. AI adds a verification step rather than removing one.
Force 2: Generic AI lacks underwriting domain depth. General-purpose models do not know what an XMod is, how to classify loss run categories by line of business, or how to identify a missing location on a schedule of values. The accuracy ceiling for generic AI applied to insurance is too low for professional accountability.
Force 3: The workflow stages are interdependent. Appetite validation, ingestion, risk scoring, triage, and prioritization are interdependent. Fix ingestion and triage becomes the new constraint. Improving any single stage in isolation does not improve the whole.
Force 4: Volume has outgrown manual workflows. Agents now submit to multiple carriers simultaneously. Inbound volume has risen sharply, but bind-to-submit ratios have not. Manual workflows that were marginal at lower volumes become structurally unsustainable as the denominator changes.
The Waymo analogy that explains the trust problem
At ITC Vegas, Michaela described riding in a Waymo and watching it fail at the edges: when valet workers moved cones, the car did not know how to exit the roundabout. At the Phoenix airport, Waymo vehicles park in the wrong lane because the algorithm cannot handle the human chaos of a busy drop-off zone.
The Waymo is genuinely impressive technology. It is also not trusted for edge cases. What makes it usable at all is that it communicates confidence. It tells you what it knows, what it does not, and when it needs a human to take over.
"The vision of having that technology, but with the checks and balances to highlight to the underwriter: we feel this analysis is right at 100 percent confidence. Or we feel about 70 percent confident, and these are the areas you need to look at because that is where you need to use your underwriting judgment."
This is exactly what underwriters need from AI. Provenance is the product.
The Steve Jobs framework that reframes the whole problem
Jobs did not make the first iPhone better by improving the hardware keyboard. He deleted it entirely because the keyboard was optimizing the wrong thing. That is not an incremental improvement. It is a rethink of the system from first principles.
Submission intake has the same interdependency: fix appetite validation and ingestion improves; fix ingestion and risk scoring gets better data; fix risk scoring and triage becomes more precise. The stages compound when connected. They undermine each other when they are not.
What the architecture that actually works looks like
Requirement 1: The AI must be verticalized for commercial P&C underwriting, not insurance-in-general. CURE™ is built exclusively for loss runs, schedules of values, class codes, endorsements, and XMods, not adapted from generic document processing.
Requirement 2: Every output must show its work. Each extracted field must be traceable to its exact location in the source document. Verifiable provenance is what earns adoption at scale.
Requirement 3: The service layer must be intelligent, not just human. Agentic AI services trained to flag inaccuracies, surface confidence levels, and identify what needs human review close the loop in a way that scales at production volume.
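To make the three requirements concrete, here is a minimal sketch of what a provenance-carrying extraction record and confidence-based review routing could look like. All names, fields, and the threshold below are illustrative assumptions for this article, not the CURE™ implementation.

```python
from dataclasses import dataclass

# Hypothetical schema: field names, the 0.95 threshold, and the routing
# logic are assumptions for illustration, not a vendor's actual API.

@dataclass
class ExtractedField:
    name: str           # e.g. "xmod", "total_incurred"
    value: str
    confidence: float   # model confidence, 0.0 to 1.0
    source_doc: str     # which submitted document the value came from
    source_page: int    # page number, so the underwriter can verify it

def route_for_review(fields, threshold=0.95):
    """Split fields into auto-accepted vs. human-review queues."""
    accepted = [f for f in fields if f.confidence >= threshold]
    needs_review = [f for f in fields if f.confidence < threshold]
    return accepted, needs_review

fields = [
    ExtractedField("xmod", "0.87", 0.99, "loss_run_2024.pdf", 3),
    ExtractedField("location_count", "12", 0.71, "sov.xlsx", 1),
]
accepted, needs_review = route_for_review(fields)
# The low-confidence field is flagged for underwriting judgment,
# with the exact source document and page attached for verification.
print([f.name for f in needs_review])
```

The design choice this sketches is the one the Waymo analogy points at: the system never hides uncertainty. Every value carries its source location, and anything below the confidence bar is routed to a human with the evidence needed to check it quickly.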
The takeaway
The submission intake problem has survived a decade of attempted fixes not because carriers have not tried hard enough. The problem survived because each fix addressed one link in the chain without treating the chain as a system.
The underwriter is not the bottleneck. Pre-decision work is the bottleneck. Eliminating it requires accuracy high enough to trust, provenance explicit enough to verify, and a workflow connected enough that fixing one stage actually improves the next.
That is what "we still had a problem" is pointing at. Not a failure of effort or investment. A failure of system design and a clear signal of what the right design needs to include.
Frequently Asked Questions
Why does generic AI fail to solve submission intake?
Generic AI applied to insurance produces general accuracy, not the underwriting-specific accuracy professionals require. When underwriters find AI outputs are wrong even 5 to 10 percent of the time, they re-verify everything manually, consuming the speed gain the AI was supposed to deliver. The deeper issue is accountability: underwriters are professionally responsible for every decision. Until AI can show its work at 100 percent verifiable accuracy, rational professionals will re-check outputs rather than act on them directly.
What is the difference between generic AI and verticalized AI for underwriting?
Generic AI is trained on broad datasets without underwriting domain depth. It does not understand what an XMod is, how to classify loss run categories by line of business, or how to flag a missing location on a schedule of values. Verticalized AI is trained exclusively on commercial P&C underwriting documents. The accuracy ceiling is fundamentally higher because the training data and validation logic are built for the specific problem.
How long has the submission intake problem persisted?
More than a decade. The industry has cycled through proprietary portals, offshore BPO teams, generic AI, and hybrid AI-plus-human models. Each approach moved the bottleneck rather than eliminating it. The core structural problem, that pre-decision work consumes roughly 70 percent of underwriter capacity, has persisted because no single fix has addressed the full connected workflow that submission intake requires.
