Commercial auto underwriting: why 12 years of losses point to a data problem, not a rate problem
Commercial auto underwriting has lost money for 12 straight years. That is not a rate problem. That is a data problem.
The numbers tell the story. The combined ratio for commercial auto has exceeded 103% in 12 of the past 14 years, with S&P Global projecting a 104.4% combined ratio in 2026, rising to 106.3% by 2029. Carriers have absorbed nearly $10 billion in net underwriting losses over the past two years alone. Meanwhile, rate increases have moderated to just 2.9% as of Q4 2025, according to WTW's Insurance Marketplace Realities data. The math is simple: you cannot rate your way out of a data problem.
Key takeaways
- Commercial auto combined ratios have exceeded 103% in 12 of 14 years, signaling systemic underwriting profitability issues.
- Rate increases have flattened to 2.9% (Q4 2025), evidence that traditional pricing cannot overcome poor data quality at submission.
- The root cause is not underwriting judgment but submission data accuracy. Incomplete loss runs, misclassified exposures, and buried fleet details drive blind underwriting decisions.
- 90% of carriers are testing AI, but only 22% have moved to full production, largely due to accuracy verification challenges.
- Underwriters equipped with verified, extraction-ready data see 700 basis points of loss ratio improvement and 32% higher GWP per underwriter.
The real cost of incomplete submission data
Most underwriters believe they are losing money because they are mispricing risk. The truth is more uncomfortable: they are underwriting blind.
A typical commercial auto submission arrives incomplete. The loss run is truncated. Driver details are scattered across three different forms. The fleet composition is buried in an Excel tab that nobody flagged. Classification codes are inconsistent. The broker swears the prior underwriting summary is accurate, but it was written 18 months ago and nobody has verified it since. The submission management system marks the file as "ready for underwriting," but the underwriter knows better. They spend 40 minutes assembling the actual story before they can even think about pricing.
This is not a complaint about brokers. This is a systemic problem baked into how the industry collects and structures data at submission. Underwriters have learned to live with it. They compensate with judgment, with conservative pricing, with tighter guidelines. But judgment and conservative pricing are expensive when you are trying to grow and when competitors are willing to bet they can extract the signal from the noise.
The cost shows up in loss ratios. Misclassified exposures are underpriced. Excluded or hidden loss history is not factored in. Driver information is incomplete, leading to underestimation of hazard. Fleet size and composition are underestimated. The underwriter prices the risk they can see. The losses come from the risk they cannot.
Why rates alone cannot fix this
The carrier with the worst data does not solve the problem by raising rates by 3%. They solve it by improving data quality or by walking away from the business. Most carriers do both.
Rate increases are blunt instruments. A 3% rate increase across the entire book raises the rate on the best risks and the worst risks equally. The best risks walk to a competitor with better data. The worst risks stay, because they know they are mispriced and the competitor does not. This is known as adverse selection, and it is the silent killer of underwriting profitability in commercial auto.
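To see the mechanism in numbers, here is a stylized sketch in Python. Every figure in it is invented for illustration: a book split 60/40 between well-priced and underpriced risks, a flat 3% increase, and the assumption that 30% of the well-priced premium shops away while the underpriced risks all stay.

```python
# Stylized adverse-selection illustration. All numbers are invented.

def book_loss_ratio(segments):
    """Aggregate loss ratio over (premium, expected_losses) pairs."""
    premium = sum(p for p, _ in segments)
    losses = sum(l for _, l in segments)
    return losses / premium

# (premium, expected losses) in $M.
before = [
    (60.0, 33.0),  # well-priced segment: 55% loss ratio
    (40.0, 42.0),  # underpriced segment: 105% loss ratio
]

# Flat +3% rate across the book: 30% of well-priced premium walks
# to a competitor (assumed); the underpriced risks all stay.
after = [
    (60.0 * 0.70 * 1.03, 33.0 * 0.70),
    (40.0 * 1.03, 42.0),
]

print(f"loss ratio before: {book_loss_ratio(before):.1%}")  # 75.0%
print(f"loss ratio after:  {book_loss_ratio(after):.1%}")   # ~77.1%
```

The book shrinks and the loss ratio deteriorates at the same time, which is the adverse selection spiral in miniature.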
WTW's data shows that rate increases have decelerated dramatically. Carriers have learned that rate alone does not move the needle on combined ratios when the submission data is unreliable. They have hit a ceiling. Further rate increases only accelerate the cream skimming. The next lever is data quality.
The data problem in plain terms
Commercial auto underwriting requires accuracy across three dimensions: loss history, exposure detail, and risk context.
Loss history is often incomplete. Loss runs are truncated at submission. Carriers receive data that goes back 3 years when they need 5. ALAE is sometimes included, sometimes not. The "clean" loss run the broker provided may have been compiled by an inexperienced assistant and never reviewed. Underwriters compensate by assuming higher severity and frequency than the data shows, which inflates pricing and loses good risks.
Exposure detail is scattered and inconsistent. The number of drivers varies between the certificate of insurance and the loss run and the application. The fleet size in the summary is different from the fleet detail in the spreadsheet. Class codes are applied unevenly. Some drivers are listed, others are not. Underwriters cannot classify or hazard-evaluate accurately because the data contradicts itself.
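Catching that kind of contradiction does not require a model; it requires putting the extracted values side by side. A minimal sketch, assuming the counts have already been extracted from each document into simple dicts (the field names are illustrative):

```python
# Hypothetical consistency check across submission documents.
# Assumes counts were already extracted into per-document dicts.

def flag_inconsistencies(extracted: dict[str, dict[str, int]]) -> list[str]:
    """Flag any field whose value differs across source documents."""
    flags = []
    fields = {f for doc in extracted.values() for f in doc}
    for field in sorted(fields):
        values = {doc: v[field] for doc, v in extracted.items() if field in v}
        if len(set(values.values())) > 1:
            flags.append(f"{field} disagrees across documents: {values}")
    return flags

submission = {
    "application": {"driver_count": 14, "power_units": 11},
    "loss_run":    {"driver_count": 17},
    "fleet_sheet": {"power_units": 13},
}

for flag in flag_inconsistencies(submission):
    print(flag)
# driver_count disagrees across documents: {'application': 14, 'loss_run': 17}
# power_units disagrees across documents: {'application': 11, 'fleet_sheet': 13}
```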
Risk context is implicit and often absent. Why did the account switch carriers three times in four years? What happened in that loss spike in 2022? Is the applicant hiding driver information, or did the broker simply fail to collect it? Underwriters develop instincts to read between the lines, but instinct is not data. Instinct also does not scale.
The result is a wide variance in underwriting outcomes. Two underwriters, looking at the same submission with the same data gaps, will price the risk differently because they have different thresholds for risk tolerance in the face of uncertainty. One underwriter takes the business at a lower rate and experiences a loss. Another walks away and misses the premium. Neither had good data.
The industry's AI paradox
In 2025, 90% of carriers were testing AI for underwriting, but only 22% had moved to full production deployment. The stall is not skepticism about AI. It is accuracy verification.
Underwriters understand that machine learning models are only as good as the data they train on. Feed a model submission data with known gaps and inconsistencies, and the model learns those gaps. Accuracy verification in underwriting AI requires proof that the extracted data is reliable before it informs pricing decisions. Carriers are moving slowly because moving carelessly is a faster path to underwriting losses.
The carriers that have moved to full production are not using AI to automate underwriting judgment. They are using AI to automate intelligent document extraction and data standardization. They are using AI to improve submission data quality before it reaches the underwriter. The underwriter still makes the pricing decision. The AI ensures the underwriter has reliable inputs.
What accurate data looks like in practice
A well-prepared commercial auto submission contains verified loss run data going back 5 years with clear ALAE allocation. The fleet composition is explicit and consistent across all documents. Drivers are listed once with no duplication or omission. Classifications are applied using a standard taxonomy. Loss history is reconciled and flagged for context (new driver, one-time incident, pattern of exposure). Prior underwriting summaries are dated and attributed.
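As a sketch of what that could look like as a data contract, here is an illustrative Python schema. The field names and types are assumptions, not an industry standard:

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative target schema for a standardized commercial auto
# submission. Fields are assumptions, not a published standard.

@dataclass
class LossRecord:
    accident_date: date
    paid: float
    reserved: float
    alae: float            # explicitly allocated, never implied
    context: str           # e.g. "new driver", "one-time incident"

@dataclass
class Driver:
    license_number: str    # single source of truth; no duplicates
    years_experience: int
    violations_3yr: int

@dataclass
class Submission:
    named_insured: str
    class_code: str                     # one standard taxonomy
    power_units: int                    # consistent across documents
    drivers: list[Driver] = field(default_factory=list)
    loss_history: list[LossRecord] = field(default_factory=list)  # 5 full years
    prior_summary_date: date | None = None    # dated and attributed
    prior_summary_author: str | None = None
```

The point of a contract like this is not the specific fields; it is that every downstream consumer reads the same reconciled values instead of re-deriving them from the raw documents.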
This level of accuracy requires work. It requires either broker discipline at submission time or underwriter effort to reconstruct the story after submission. The industry has largely chosen the second path, which is expensive and error-prone. The carriers moving to production AI are effectively choosing the first path with machine assistance.
Automated loss run processing with verified accuracy reduces the time for data assembly from 40 minutes to 6 minutes. More importantly, it eliminates data interpretation variance. Two underwriters looking at the same cleaned, standardized dataset may still disagree on pricing. They will not disagree on what the data says.
The math of better data quality
Carriers that have implemented template-agnostic extraction with accuracy verification report measurable improvements in underwriting economics. One major carrier using Pibit's CURE™ technology reported 85% faster underwriting cycles, 32% higher GWP per underwriter, and a 700 basis point improvement in loss ratio within the first year of full deployment.
These improvements are not coming from AI replacing underwriter judgment. They are coming from underwriters making faster, better decisions on better data. The underwriter has more time to think about hard problems because they are not spending time assembling basic facts. The underwriter approves more business because they are not walking away from opportunities they cannot see clearly due to data gaps. The underwriter prices more accurately because the underlying risk exposure is transparent.
At this scale, a 700 basis point loss ratio improvement can be the difference between an underwriting profit and an underwriting loss on the entire book. For a $500 million commercial auto carrier, that is a $35 million swing in underwriting income.
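The arithmetic behind that swing, as a quick check (the starting combined ratio of 103.5% is an assumption consistent with the industry figures above):

```python
# Back-of-envelope impact of a loss ratio improvement on
# underwriting income. Premium and starting ratio are assumptions.

premium = 500_000_000        # $500M commercial auto book
combined_before = 1.035      # 103.5% combined ratio: an underwriting loss
improvement_bps = 700

combined_after = combined_before - improvement_bps / 10_000
income_before = premium * (1 - combined_before)
income_after = premium * (1 - combined_after)

print(f"before: ${income_before / 1e6:+.1f}M")  # -$17.5M underwriting loss
print(f"after:  ${income_after / 1e6:+.1f}M")   # +$17.5M underwriting profit
print(f"swing:  ${(income_after - income_before) / 1e6:.1f}M")  # $35.0M
```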
The social inflation wildcard
The commercial auto loss trend is being accelerated by factors outside the carrier's control: social inflation, litigation funding, and nuclear verdicts. NCCI data shows medical severity growing roughly 6% in 2024. The frequency of six- and seven-figure jury awards in auto liability cases has increased. Third-party litigation funding has made both defense and settlement more expensive.
These are real cost drivers, and they are moving the needle. But they are also being used as an excuse for poor underwriting economics. A carrier with incomplete loss history will underestimate severity. A carrier with incomplete driver history will underestimate frequency. When social inflation accelerates the underlying severity trend, the carrier that was already underestimating gets hit twice.
Social inflation is real, but it does not eliminate the responsibility to underwrite with accurate data. The carriers that are managing through the social inflation trend are doing so with better data, tighter guidelines, and more selective underwriting. They are raising rates where needed, but they are backing those rate increases with improved data quality so that they are raising rates on risks they understand, not risks they are guessing on.
What better data looks like for your underwriting operation
If you are a Chief Underwriting Officer or VP of Underwriting at a carrier or MGA, the question is not whether AI can underwrite commercial auto. The question is whether you are ready to operationalize better data quality in your submission process.
Start with visibility. How much time does your average underwriter spend assembling and reconciling submission data? How often do they request clarifications that the broker cannot answer? How many submissions are marked "pending information" because a key data element is missing or contradictory? These are proxies for data quality problems.
Next, quantify the cost. If your underwriters spend an average of 35 minutes per submission on data assembly, and you process 50 submissions per week, that is 1,750 minutes of underwriter time per week. If you could reduce that to 6 minutes through intelligent extraction, that is 1,450 minutes, roughly 24 hours, freed up every week. At fully loaded underwriter cost, that is meaningful productivity.
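The arithmetic, using the figures above, with the caveat that reclaimed minutes only matter if they become underwriting capacity:

```python
# Weekly time reclaimed by faster data assembly. Per-submission
# figures mirror the example above; volumes are assumptions.

minutes_before = 35
minutes_after = 6
submissions_per_week = 50

spent_before = minutes_before * submissions_per_week  # 1,750 min/week
spent_after = minutes_after * submissions_per_week    # 300 min/week
freed = spent_before - spent_after                    # 1,450 min/week

print(f"freed per week: {freed} minutes (~{freed / 60:.0f} hours)")
# freed per week: 1450 minutes (~24 hours)
```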
Then test with a subset. AI-enhanced underwriting workflows begin with a pilot on a specific line or geography where you can measure the accuracy and the impact. You are not betting the company on a new approach. You are validating that the data quality improvement translates to faster, more accurate underwriting decisions and better loss ratios.
The underwriting advantage goes to data quality
The carriers that will escape the commercial auto profitability trap will not do so through rate increases alone. They will do so through a deliberate shift in how they handle submission data. They will move from a "let the underwriter figure it out" model to a "make the data reliable before it reaches the underwriter" model.
This does not eliminate underwriting judgment. It augments it. The underwriter with clean, verified, standardized data is a more dangerous competitor than the underwriter with good instincts and bad data.
The 12-year loss streak in commercial auto will not end with rates. It will end with data.
A practical starting point
If you are looking to move beyond pilot programs and into production AI for underwriting, the first question is not "Can the system extract data accurately?" The first question is "How will we verify that it did?" Improving your combined ratio starts with a methodical approach to data quality validation, not with deploying a system and hoping it works.
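In practice, "verify that it did" usually means something unglamorous: reconcile extracted figures against the document's own stated totals, and route mismatches to a person. A minimal sketch of such a gate, with hypothetical field names and tolerances:

```python
# Hypothetical verification gate for extracted loss run data.
# The dict layout and field names are assumptions; the point is
# the reconciliation step, not a specific vendor API.

def verify_extraction(extracted: dict, tolerance: float = 0.005) -> bool:
    """Accept an extraction only if its line items reconcile to the
    document's own stated total within tolerance."""
    summed = sum(claim["incurred"] for claim in extracted["claims"])
    stated = extracted["stated_total_incurred"]
    if stated == 0:
        return summed == 0
    return abs(summed - stated) / stated <= tolerance

extracted = {
    "claims": [{"incurred": 12_500.0}, {"incurred": 48_200.0}],
    "stated_total_incurred": 60_700.0,
}

if verify_extraction(extracted):
    print("reconciled: safe to route to the underwriter")
else:
    print("mismatch: route to human review")
```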
The carriers ahead of this curve are already moving. The question is how quickly you can follow without repeating their mistakes or wasting time on false starts. That starts with honest clarity on where your submission data problems are costing you money today.
Frequently Asked Questions
Why has commercial auto underwriting been unprofitable for so long?
Commercial auto combined ratios have exceeded 103% in 12 of the past 14 years, driven by social inflation, nuclear verdicts, litigation funding, and rising medical severity. S&P Global projects the combined ratio will reach 106.3% by 2029. Rate increases averaging 2.9% in Q4 2025 are insufficient to offset loss cost trends accelerating at 6% or more annually.
How does submission data quality affect loss ratios?
Incomplete or inaccurate submission data leads to misclassified exposures, understated fleet sizes, and missed loss history, all of which result in mispriced risks. Carriers using CURE™ for template-agnostic extraction report 700 basis point loss ratio improvements because underwriters price against verified data rather than reconstructed estimates.
What is the difference between a rate problem and a data problem?
A rate problem means premiums are too low for the known risk. A data problem means the risk itself is not accurately captured at submission. Rate increases applied across a book with poor data quality accelerate adverse selection because the best risks leave while underpriced risks stay. Accurate submission data extraction solves the underlying risk visibility gap that rate increases alone cannot fix.