Current Trends and Issues

In Search of Standards for Totaled Auto Claims Software

Tim Ryles | September 1, 2022


Basit Mian sued Progressive County Mutual Insurance, J.D. Power & Company, and Mitchell International Company, Inc., in a Texas Federal District Court, charging that Progressive's use of a computer program called Work Center Total Loss (WCTL) undervalued his totaled vehicle by $2,073.24. See Basit Mian v. Progressive Cty. Mut. Ins. Co., No. 4:20-00536, 2020 U.S. Dist. LEXIS 255774 (S.D. Tex. June 11, 2020).

Progressive is a licensed insurance company operating in Texas, while J.D. Power and Mitchell International are third-party vendors that sell claims settlement software programs to insurers and other parties.

J.D. Power, a Troy, Michigan, company founded by James David Power III in 1968, is currently owned by a private equity firm, Thoma Bravo, with offices in Chicago, San Francisco, and Miami. In addition to software programs, the company also owns the National Automobile Dealers Association Guide for Used Vehicles, a publicly available guide widely used as a reference in valuing used vehicles.

Mitchell International is a San Diego, California, software company formed in 1947. Mitchell recently joined two other companies, Genex and Coventry, to form Enlyte, a parent brand name "committed to simplifying and optimizing property, casualty, and disability claims processes and services."

Neither J.D. Power nor Mitchell is a licensed insurer, adjuster, insurance producer, or third-party administrator.

According to the Mitchell website, WCTL is a software program developed jointly by J.D. Power and Mitchell as "a statistically driven, fully automated, web-based, total loss valuation system that generates fair, market-driven values for loss vehicles." A visual representation of a four-step process followed by WCTL in calculating the total loss amount also appears on the company's website.
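The actual WCTL methodology and its four steps are proprietary, so the following is only a rough illustration of what a comparables-based, "market-driven" valuation can look like in principle. The adjustment rate, condition factor, and all figures below are hypothetical, not drawn from WCTL or any vendor's documentation.

```python
from dataclasses import dataclass

@dataclass
class Comparable:
    sale_price: float   # listed or sold price of a similar vehicle
    mileage: int        # odometer reading of the comparable

# Hypothetical per-mile depreciation rate; a real system would derive
# such factors statistically from large market datasets.
CENTS_PER_MILE = 0.05

def market_value(loss_mileage: int, comps: list[Comparable],
                 condition_factor: float = 1.0) -> float:
    """Estimate a total loss vehicle's market value from comparables.

    Each comparable's price is adjusted toward the loss vehicle's
    mileage, then the adjusted prices are averaged and scaled by a
    condition factor -- a simplified stand-in for the proprietary
    adjustments a system like WCTL applies.
    """
    adjusted = [
        c.sale_price + (c.mileage - loss_mileage) * CENTS_PER_MILE
        for c in comps
    ]
    return round(sum(adjusted) / len(adjusted) * condition_factor, 2)

comps = [Comparable(9500, 62000), Comparable(9900, 58000), Comparable(9200, 70000)]
print(market_value(60000, comps))  # averages the mileage-adjusted comparables
```

Even this toy version makes the statistical-validity question concrete: every constant in it (the per-mile rate, the condition factor, the choice of comparables) is an empirical claim that can be tested against market data.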

A Sample Valuation Report is available at Appraisal Engine, Total Loss Appraisals. Appraisal Engine is a fee-based company specializing in negotiating settlements of total losses on behalf of vehicle owners.

Mian's complaint charged that WCTL's valuation methodology lacked statistical validity and that this lack of validity supported claims for breach of contract, bad faith, civil conspiracy, and tortious interference with the contract between Mian and Progressive. The court upheld the claims for breach of contract, bad faith, and civil conspiracy against Progressive and, against defendants Mitchell and J.D. Power, the claims for tortious interference with contract and civil conspiracy.

In examining the statistical validity issue, Senior US District Court Judge Nancy F. Atlas applied primary jurisdiction rules and stayed all claims against the defendants until the Texas Department of Insurance (TDI) determined whether WCTL's methodology produced "actual cash value" in a statistically valid manner.

Primary jurisdiction is a judicially created doctrine invocable in the Fifth Circuit Court of Appeals if (a) it will promote even-handed treatment and uniformity in a highly regulated area or when sporadic action by federal courts would disrupt an agency's regulatory scheme, or (b) the agency possesses expertise in a specialized area with which the courts are relatively unfamiliar.

In summary, Judge Atlas opined the following under Texas law.

  • Determining the appropriate methodology to calculate the cash value of total loss vehicles is squarely within the regulatory authority delegated to the TDI.
  • The Texas legislature has established a detailed statutory scheme governing the insurance industry, including auto insurance.
  • The TDI is staffed with experts trained in handling complicated insurance problems that involve many more issues than contract interpretation.
  • Moreover, interests of uniformity weigh in favor of staying the case under the primary jurisdiction doctrine. Mian's claims ultimately turned on the validity of a complex method for adjusting and settling claims. In short, Judge Atlas thought it better to have one standard than run the risk of different courts developing multiple solutions since several insurers use WCTL.

She could have added that WCTL is just one of several products provided by third-party vendors to calculate total vehicle losses. The expanding use of these products reinforces a need for standards applicable to all vendors, not just Mitchell and J.D. Power.

Are Insurance Regulatory Experts Expert Enough?

That TDI is a competent party to ascertain the statistical validity of WCTL is a reasonable assumption. After all, insurance regulators have actuaries, insurance is numbers-driven, and regulators examine and either approve or disapprove complex rate filings. Moreover, there are several examples of insurance regulators engaging with statistical operations and algorithms, including the following.

  • The principal model regulation on unfair claims practices in property and casualty insurance contains a "statistical validity" requirement for settling total losses;
  • The National Association of Insurance Commissioners (NAIC) conducted a multistate market conduct examination of Allstate's use of Colossus, a software program imported from Australia, for valuing soft tissue injuries; 1
  • There has been extensive litigation over both Colossus and Xactimate, 2 the latter being a software program used in property damage claims;
  • For over 3 decades, regulators and consumer advocates have faced off over the use of credit scores as an underwriting factor (I held several days of hearings on the issue);
  • Predictive computer models are widely used in the identification and prosecution of insurance fraud; 3 and
  • Attention given big data has been at center stage for several years.

Yet, when it comes to formulating statutes, regulations, or guidelines, insurance regulators as a governing authority are latecomers to the Big Algorithmic Dance. Consider, for example, the work of the NAIC, the standard-setting entity for the insurance industry.

Confronting big data's challenges, the organization's first major attempt to address artificial intelligence (AI) issues came almost 4 months to the day after Judge Atlas's June 11, 2020, order when, on October 12, 2020, state regulators adopted "Principles of Artificial Intelligence." In this document, the insurance industry standard-setters proposed that "insurance companies and all persons or entities facilitating the business of insurance that play an active role in the AI system life cycle, including third parties such as rating, data providers, and advisory organizations … promote, consider, monitor, and uphold the following principles according to their respective roles." 4

The five main principles are that the foregoing parties should always be (1) fair and ethical, (2) accountable, (3) compliant, (4) transparent, and (5) secure, safe, and robust.

To allay any misperceptions that the statement represents any major departure from regulatory custom and practice, the document makes clear that the principles are not intended to be law or enforceable standards.

Less than 2 months later on December 8, 2020, NAIC's Casualty Actuarial and Statistical Task Force issued a white paper, Regulatory Review of Predictive Models, 5 which conceded that "predictive analytic techniques are evolving rapidly and leaving many state insurance regulators, who must review these techniques, without the necessary tools to effectively review insurers' use of predictive models in insurance applications." One prediction, of course, is the valuation of totaled vehicles. The white paper's singular topic is rate filings, not policyholder claims, but the professional skills and experiences are much the same whether one is examining a rate structure or algorithms that determine how much an insurer is willing to pay for a totaled vehicle, replace a tornado-stricken home, or seek prosecution of a policyholder for insurance fraud.

So, having committed themselves to no new laws or enforceable standards, what did the organization actually do? According to NAIC, the time has come to adopt a menu of "Best Practices" 6 while making sure that everyone understands that these practices are "not intended to create standards for filing" rates that insurers intend to charge. Neither is there any indication that regulatory best practices are designed as a foundation for new model laws or regulations.

Best practices, the committee claims, "are used to maintain quality as an alternative to mandatory legislated standards and can be based on self-assessment or benchmarking." Somewhat surprisingly, however, the question addressed by the best practices approach is, "How can regulators determine whether predictive models, as used in rate filings, are compliant with state laws and/or regulations?" Yet, the white paper admits that regulators lack the skills and other resources to evaluate the complex, rapidly advancing world of algorithms. Bottom line: regulators don't have the ability to set standards for the use of computer technology in the insurance industry.

We don't know whether the TDI is an exception to the NAIC admissions regarding competency to address big data challenges. The parties settled the case instead of going back and forth between a federal court and Texas regulators.

Courts Are Filling the Void

In the absence of enforceable standards, courts are providing different answers to the questions posed by AI, just as Judge Atlas feared. Here are a few examples.

Florida deleted references to "statistical validity" in its version of the Unfair Claims Property and Casualty Statute. As in other cases involving WCTL's validity, 7 a federal court seemed preoccupied with the absence of that concept from the statute, stating, "even assuming the WCTL is invalid, there is no requirement that it must be statistically valid to comply with the Florida statute." The court overlooked the fact that WCTL is not an insurance policy; it is a product, and when anyone uses statistical methods to build a product, standards of statistical methodology apply whether or not a law or regulation requires compliance with them. An agency that does exercise some direct authority over AI products, the Federal Trade Commission, made a similar point in a recent report to the US Congress, stating that the same companies "that benefit from the advantages and efficiencies of algorithms must bear the responsibility of (1) conducting regular audits and (2) facilitating appropriate redress for erroneous or unfair algorithmic decisions." 8

In a related case challenging the validity of WCTL's methodology, the insurance policy stipulated that the insurer "may use estimating, appraisal, or injury evaluation systems to assist in determining the amount of damages, expenses, or loss payable under the policy," adding further that "such systems may be developed by Progressive or a third party and may include computer software, data bases, and specialized technology." The court concluded that the language did not include an independent obligation on the insurer to "properly investigate and confirm the statistical validity of the methodology." Therefore, since the policy did not obligate the insurer to use statistically validated software, the insurer "could not have violated the policy by failing to investigate the validity of the WCTL valuation."

Theoretically, under this interpretation of the policy, an insurer may hire a butcher to evaluate a bodily injury claim and a faith healer to treat it.

What if a human adjuster is involved? There is some indication that if a human is involved and endowed with discretion to change an algorithm's valuation, that is a good thing for the insurer. Compare, for example, Slade v. Progressive Sec. Ins. Co., Civil Action No. 6:11-cv-2164, 2014 U.S. Dist. LEXIS 154713 (W.D. La. Oct. 30, 2014), in which a human adjuster apparently had no discretionary authority over WCTL's calculations, with Prudhomme v. Geico Ins. Co., No. 6:15-CV-00098, 2020 U.S. Dist. LEXIS 205798 (W.D. La. Nov. 3, 2020), where human adjusters not only had discretionary authority but "exercised it in a statistically significant number of cases to adjust the [CCC Information Services (CCC)] valuations." Slade was certified as a class action, and Prudhomme was dismissed. The court apparently accepted the "statistically significant" testimony without any direct examination to confirm its validity.

In the grand scheme of the big data world, WCTL and its kin do not strike me as complicated examples of software. Rather, they appear to amass volumes of data and apply commonly used statistical methods to the valuation process. Nowhere have I seen any evidence that the software learns from its simple calculations and manipulations of millions of data bits. Indeed, the operations are so low level that it is stretching things to associate these products with AI. Nevertheless, I can think of several ways that WCTL and its competitors could be tested for both validity and reliability. 9 Perhaps at some point, Judge Atlas's desire for uniform standards can be satisfied, starting with what seem to me the easiest examples regulators could select to initiate the process.
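Testing along these lines need not be elaborate. As one hypothetical approach (the metrics and figures below are illustrative, not drawn from any actual regulatory review), a regulator could compare a system's valuations against an independent benchmark, such as actual resale prices or published guide values, and measure both systematic bias and average error.

```python
def validity_metrics(software_values: list[float],
                     benchmark_values: list[float]) -> dict[str, float]:
    """Compare software valuations against an independent benchmark
    (e.g., actual resale prices or published guide values).

    Returns the mean signed error (a negative bias suggests systematic
    undervaluation) and the mean absolute percentage error.
    """
    errors = [s - b for s, b in zip(software_values, benchmark_values)]
    bias = sum(errors) / len(errors)
    mape = sum(abs(e) / b for e, b in zip(errors, benchmark_values)) / len(errors)
    return {"bias": round(bias, 2), "mape": round(mape, 4)}

# Hypothetical sample: three totaled vehicles valued by the software
# versus their benchmark market values.
software = [9200.0, 10100.0, 8800.0]
benchmark = [9500.0, 10000.0, 9200.0]
print(validity_metrics(software, benchmark))
```

A persistent negative bias across a large sample would be exactly the kind of evidence a plaintiff like Mian alleged; reliability could be checked the same way, by comparing repeated or independently produced valuations of the same vehicles.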

Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.


1 A copy of the "Multi-State Market Conduct Examination of Allstate" (2010) is available at several websites, including state insurance department sites.
2 Melissa M. D'Alelio and Taylore Karpa Schollard, "Colossus and Xactimate: A Tale of Two AI Insurance Software Programs," American Bar Association Journal, February 7, 2020.
3 For a summary and test of some of these models, see Matheus Kempa Severino and Yaohao Peng, "Machine Learning Algorithms for Fraud Prediction in Property Insurance: Empirical Evidence Using Real-World Microdata," ScienceDirect, Vol. 5, September 15, 2021.
4 The Principles are available for free at the NAIC's searchable website.
5 My research shows that the idea of a white paper may have developed in 19th century Great Britain as a way of distinguishing less-serious parliamentary undertakings from the more serious blue books. The American use of the term generally refers to a background report on a topic or suggested guidelines on a specific issue.
6 The idea of best practices apparently grew out of the scientific management work of F.W. Taylor in the early 20th century. The committee referenced a more recent academic source by two political scientists: Eugene Bardach and Eric M. Patashnik, A Practical Guide for Policy Analysis, 6th ed. (Washington, D.C.: CQ Press, 2020). A more succinct treatment of the subject, available online, is Joe Osburn, Guy Caruso, and Wolf Wolfensberger, The Concept of "Best Practice": A Brief Overview of Its Meanings, Scope, Usage, and Shortcomings, September 2011.
7 Richardson v. Progressive Am. Ins. Co., No. 2:18-cv-715-Ftm-99MRM, 2022 U.S. Dist. LEXIS 8783 (M.D. Fla. Jan. 18, 2022).
8 Combatting Online Harms through Innovation, Federal Trade Commission, June 16, 2022, pp. 50–51.
9 See, for example, the judge's discussion of the methods used by CCC Information Services in Fortson v. Garrison Prop. & Cas. Ins. Co., No. 1:19-CV-294, 2022 U.S. Dist. LEXIS 48918 (M.D.N.C. Mar. 18, 2022). I take the court at its word that the methods used conform to acceptable methodologies.