Current Trends and Issues

Tim Ryles | August 11, 2022

When the writer of Ecclesiastes reminded us long ago that "there is nothing new under the sun," he could not have foreseen today's big data and the widespread use of statistical programs in the valuation of totaled vehicles by third-party vendors who sell their services to insurance companies.

As this commentary argues, the methods and practices used in these algorithms are neither new nor particularly innovative. Instead, this "Estimatics" industry, as the Federal Trade Commission (FTC) terms it,^{1} applies commonly available statistical formulas to analyze vehicle claims. These methods may seem arcane and complex to the public, but they are easily recognized in various professions and areas of expertise.

Insurance regulatory provisions provided a gateway for third-party vendors to enter the insurance claims process. In delineating methods of settling totaled vehicle claims, the National Association of Insurance Commissioners Model Regulation on Unfair Property/Casualty Claims Practices 902, Section 8(A)(1) and (2)(a)–(c), provides that an insurer that decides against furnishing a replacement vehicle, as permitted in Section 8(A)(1), may instead elect a cash settlement based on two or more comparable automobiles or on quotes from two or more dealers. The regulation then opens the door to alternative sources in Section 8(A)(2)(d) by recognizing the following.

(d) Any source for determining statistically valid fair market values that meet all of the following criteria:

- The source shall give primary consideration to the values of vehicles in the local market area and may consider data on vehicles outside the area;
- The source's database shall produce values for at least eighty-five percent (85%) of all makes and models for the last fifteen (15) model years, taking into account the values of all major options for such vehicles; and
- The source shall produce fair market values based on current data available from the area surrounding the location where the insured vehicle was principally garaged or a necessary expansion of parameters (such as time and area) to assure statistical validity.

Then, in Section 8(A)(3), the regulation states the following.

When a first-party automobile total loss is settled on a basis which deviates from the methods described in Subsection A (1) and A (2) of this section, the deviation must be supported by documentation giving particulars of the automobile condition. Any deductions from the cost, including deductions from salvage, must be measurable, discernible, itemized and specified as to dollar amount and shall be appropriate in amount. The basis for the settlement shall be fully explained to the first-party claimant.

To offer a personal perspective, as Georgia's commissioner of insurance, when we adopted the regulation, my thoughts were, first, this regulation is designed to establish that market price values charged by licensed vehicle dealers should be the benchmark for valuing totaled cars; second, the bar is set very high for any entrepreneur who wishes to compile sales data to compete with dealers; and third, a database ought to be robust enough that reliable samples of vehicles could be chosen for comparison vehicles. The advent of artificial intelligence and big data in framing insurer decisions in several quarters makes it even more important that the data applied be statistically valid.

But what does statistically valid mean? In our regulatory adoption decisions, as Georgia's commissioner, I thought in terms of what researchers call construct validity, meaning the extent to which a vendor's measurement actually measures what it purports to measure. Additionally, to achieve validity, a researcher must have an adequate sample size and apply the correct statistical test in analyzing the data.^{ 2}

I was first introduced to this methodology long ago in an undergraduate course covering tests and measurements of how to grade student performance. Think about that: Did your schoolteachers talk about grading on "the curve" or assigning different weights to a final exam than to other tests during a semester? That is an example of what this commentary is about. I delved deeper into the subject both as a graduate student and as a professor of political science in the university system of Georgia, and I still make use of statistical methods in my personal research.

In determining values of totaled vehicles, most major insurers use independent vendors who compile millions of bits of data on car sales values, analyze the data, perform statistical manipulations, and produce a valuation of the loss vehicle. Once produced, an adjuster may or may not have the authority to change what the algorithm produces. Examples of widely used Estimatics companies include CCC Information Services, Work Center Total Loss by Mitchell International, and Audatex Solutions.

In establishing a totaled vehicle's valuation, one must first establish as precisely as possible what the object of measurement is. The state regulations, the insurance contract, or both usually identify what is being measured, with actual cash value (ACV) being the most common.

I place special value on the "actual" part of ACV. What an insurer promises in its contract is not a wild guess or whatever an adjuster or an algorithm may stipulate. It commits to pay ACV, and any departure from that promise, whether intentional or not, compromises it. In research speak, ACV is our dependent variable, the factor we are trying to establish. Variables fall into two categories: the dependent variable, which is the factor to be explained, and independent variables, which are those that influence the dependent variable.

A vendor will use several independent variables to arrive at ACV, each variable will fall into one of four levels, and the level of measurement determines the type of statistic that is appropriate for each level. Simply stated, statistical formulas are appropriate only when used with the level of measurement to which they qualify. The four levels of measurement are nominal, ordinal, interval, and ratio.

Nominal means "name" such as Ford, Chevrolet, Dodge, or a list of the top 25 college football teams. Nominal is a head count. Nominal measurements may be counted as so many Fords, Chevrolets, SUVs, and so on in a study. But, beyond yielding the count, nominal levels of measurement are of limited importance. One may not compute averages, for example.

Ordinal (order) means a ranking, like the top 10 college basketball teams. In an ordinal ranking, the distances between rankings are uncertain. For example, the distance between a team ranked number one and a team ranked number two is not necessarily the same as between a number 9 and a number 10 ranking, and so on. The intervals between rankings lack uniformity; therefore, in statistical language, the distances between ordinal measurements are not interpretable.

Ordinal rankings are widely used in determining a totaled vehicle's value by assigning numerical rankings to such features as seats, headlining, carpet, interior panels, paint, body, transmission, and tires. When consumers are asked to rank a product or a service, they are engaged in applying an ordinal measurement. For example, in auto valuation, if the software program uses a ranking of 1 (lowest) to 10 (highest), each of the abovementioned vehicle parts will receive a number, typically representing an on-the-spot judgment call by an adjuster who enters their opinions into a computer format.

An overall ranking may be computed by summing the individual rankings and dividing by the number of rankings. To illustrate, assume the following rankings of the eight categories in my example.

Auto Features | Ordinal Ranking |
---|---|
Seats | 8 |
Headlining | 6 |
Carpet | 9 |
Interior panel | 6 |
Paint | 7 |
Body | 7 |
Transmission | 7 |
Tires | 6 |
Total | 56 |

The 56 total divided by 8 equals 7. Ergo, the average of the combined ratings is 7. This is simple, but computing an average this way is wrong. As one of my old textbooks opines, "The kind of statistical operation that can be performed on ordinal scale data is limited. For example, neither a mean nor a standard deviation of data can be measured on an ordinal scale."^{3} Or, as a more recent authority states, "There are different types of levels of measurement … that determine how you can treat the measure when analyzing it. For instance, it makes sense to compute an average of an interval or ratio variable but does not for an ordinal one."^{4}
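The arithmetic above, along with the order-based alternative the textbooks point toward, can be sketched in a few lines of Python. The rankings are the hypothetical ones from the example table; this is an illustration of the computation, not any vendor's actual method.

```python
import statistics

# Hypothetical condition rankings from the example table (1 = lowest, 10 = highest)
rankings = {
    "Seats": 8, "Headlining": 6, "Carpet": 9, "Interior panel": 6,
    "Paint": 7, "Body": 7, "Transmission": 7, "Tires": 6,
}
values = list(rankings.values())

# The computation criticized above: a mean of ordinal rankings (56 / 8 = 7.0)
mean_rank = sum(values) / len(values)

# The median relies only on order, so it is defensible for ordinal data
median_rank = statistics.median(values)

print(mean_rank, median_rank)
```

Here the two figures happen to coincide at 7, but only the median rests on the ordering alone; the mean quietly assumes the gap between a 6 and a 7 equals the gap between an 8 and a 9.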

There is also an inherent problem with rankings involving vehicles. Recall that the initial ranking is typically an on-the-spot ranking by an adjuster. Insurers use many adjusters all over the country. So, an immediate concern is, to what extent would five adjusters looking at the same car come up with the same rankings? What studies have been conducted to validate the work of the adjusters to make sure inter-adjuster agreement is within acceptable margins of error? What adjustments are made to protect the insured's interests based on these studies? This kind of study is essential for validating the use of an algorithm.

Interval means that each interval represents the same increment of the thing one is measuring and the distance between numbers is consistent. Thermometers are often used to illustrate interval level measurement since the difference between 20 degrees and 50 degrees is the same as between 50 and 80. Computing averages and standard deviations, therefore, is appropriate, although we cannot say that 80 degrees is twice as warm as 40 degrees. The temperature settings of a vehicle's ventilation system are an example of an interval scale.

Ratio measurements have an absolute zero, and the distance between numbers is interpretable. Weight, age, and income are examples. We can say a person weighing 150 pounds is twice the weight of another person weighing 75 pounds, and a vehicle valued at $50,000 is twice the value of a $25,000 vehicle. One might also state that a vehicle with an odometer reading of 80,000 miles has been driven twice as far as a car with 40,000 miles. Both ratio and interval level measurements permit the use of statistics that may not be used for analyzing nominal and ordinal measurements. Averages and standard deviations may be computed. So, as a practical matter, once a variable reaches interval level status, what many consider the better performing statistical methods may be applied, including correlation and regression formulas.

Despite these standards of practice, one may find references to averages and standard deviations in descriptions of ordinal level data by Estimatics vendors. A standard deviation, even with interval and ratio levels, is difficult to interpret without knowing the range of measurement from lowest to highest, the number of vehicles, or the difference between an average figure and a median (50 percent below the median and 50 percent above). This latter point is easily illustrated. Imagine 20 persons at a bar with an average income of $50,800. Then Bill Gates walks in and sits at the bar. An average income would now be meaningless, but a median number would be a more accurate portrayal. That is why the US Census Bureau lists median family income instead of average family income.
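The bar example is easy to reproduce. In the sketch below, a round $1 billion stands in for the new arrival's income, purely for illustration:

```python
import statistics

# 20 hypothetical patrons, each earning the article's $50,800 average
incomes = [50_800] * 20

# One extreme outlier walks in and sits at the bar
incomes.append(1_000_000_000)

print(statistics.mean(incomes))    # jumps to roughly $47.7 million
print(statistics.median(incomes))  # still $50,800
```

One outlier drags the mean to a figure that describes no one in the room, while the median is unmoved. The same distortion applies to a handful of comparable vehicles when one sale price is far out of line with the rest.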

The concept of standard deviation is crucial to understanding a multipage display of an insurer's valuation of a totaled vehicle. Basically, a standard deviation is a measure of dispersion or how each vehicle is positioned in relationship to the average vehicle, either below the average or above the average.

To get a better understanding of it, think of the well-known bell curve. In a normal distribution of vehicle values, about two-thirds of the vehicles will fall within one standard deviation, plus or minus, and about 95 percent within two standard deviations, plus or minus. A low standard deviation means that the cars examined hover around the average number, whereas a high standard deviation underscores wide variation around the average number. So, if an adjuster says an insured's car's value falls within one standard deviation, the adjuster needs to add further information to explain the valuation of the insured's vehicle. Insist on knowing the number of vehicles on which any number is based.
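To make the concept concrete, the mean, the standard deviation, and the share of comparables falling within one standard deviation can be computed directly. The sale prices below are hypothetical comparable vehicles invented for illustration:

```python
import statistics

# Hypothetical sale prices of eight comparable vehicles (ratio-level data)
prices = [18_500, 19_200, 19_800, 20_100, 20_400, 20_900, 21_300, 22_600]

mu = statistics.mean(prices)
sd = statistics.stdev(prices)  # sample standard deviation

# How many comparables sit within one standard deviation of the mean?
within_one_sd = [p for p in prices if abs(p - mu) <= sd]

print(f"mean={mu:.0f}, sd={sd:.0f}, {len(within_one_sd)} of {len(prices)} within 1 SD")
```

With only eight vehicles, six of the eight land within one standard deviation, close to the two-thirds the bell curve predicts, but such a small sample is exactly why the number of vehicles behind any quoted figure matters.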

Computing averages and standard deviations from ordinal level measurements is inconsistent with research standards of practice. Interval or ratio measurements are necessary to make use of averages and standard deviations. Mileage, age of the vehicle, and price of a vehicle are all acceptable measures for averages, standard deviations, regression, correlation, and analysis of variance.

I like to think of statistical methods in the same way as I think of pharmaceutical products. If a particular drug is contaminated, it is no longer acceptable for use. Similarly, should an analyst contaminate data by introducing the wrong statistical technique, the results may be less life-threatening than the drug example, but product purity is corrupted.

To be sure, the use of improper methodology will have different impacts on different insureds, and the harm will not be the same in dollar amount for all insureds. This, however, ought not to detract from the crucial point that the methodology itself is at fault. Just as an impure drug will have different effects on its users, a defective algorithm will have varying effects. Even if a consumer insists on invoking the appraisal clause of an insurance policy, the figure the parties begin with is a fictitious one, attributable to a defective statistical operation incapable of producing ACV.

In addition, under present rules, if a data set incorporates data from wholesale transactions or from any source other than retail sales, the estimates of vehicle value are contaminated, and the insurance policy's promise to pay ACV is impaired.

Finally, I refer back to my example of how the techniques I am writing about are used by classroom teachers. Teachers often assign higher weights to certain projects and exams, some accounting for a higher percentage of a semester grade than others. That, too, is relevant to totaled vehicle valuation. For example, if engine condition, mileage, or some other factor accounts for more of a vehicle's value than other factors in the claims process, it is an error to assign equal weight to all factors; doing so may fail to produce an accurate ACV.
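The weighting point can be made concrete with a short sketch. The factors, scores, and weights below are hypothetical; the only point is that a weighted score and an equal-weight average of the same inputs diverge:

```python
# Hypothetical condition scores (1-10) paired with illustrative weights summing to 1.0
factors = {
    "engine condition": (9, 0.40),
    "mileage":          (6, 0.30),
    "body":             (7, 0.20),
    "tires":            (5, 0.10),
}

# Weighted score: each factor contributes in proportion to its importance
weighted = sum(score * weight for score, weight in factors.values())

# Equal-weight (simple) average of the same scores
equal = sum(score for score, _ in factors.values()) / len(factors)

print(round(weighted, 2), equal)  # roughly 7.3 versus 6.75
```

When the heavily weighted factors score above the rest, the two figures differ, and only the weighted version reflects which factors actually drive a vehicle's value.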

Footnotes

Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.