
Considerations on AI and Insurance

Mark Lanterman | December 20, 2023


Dubbed by many the year of artificial intelligence (AI), 2023 saw a massive boom in everyday applications of AI, and all sectors are grappling with how best to manage its inherent risks alongside its undeniable benefits. From implementing ChatGPT responsibly to staying alert to increasingly sophisticated phishing emails, AI is prompting many organizations to evaluate how they could improve, where they could be at risk, and how the growing accessibility of AI applications may affect their bottom line. The insurance industry is no exception, as AI now touches both underwriting decisions and claims handling.

Automation and self-service policies only became more popular following the COVID-19 pandemic: "Even prior to the pandemic, leading insurers were employing AI and automation to improve efficiency and accuracy in claims processing. They were ahead of the curve in customer service at a time when pandemic stress added to the urgency of fast processing and reimbursement." 1

Streamlining the claims process with tools such as mobile apps has allowed for a greater degree of convenience and efficiency; in many situations, clients can upload their own photos and documentation and keep the claims process moving forward quickly. However, as is often said, what we gain in convenience from technology, we tend to lose in security.

Cyber Security and Privacy Issues

Though the strategic use of AI technologies has been pivotal in staying ahead of the curve, these efficiency-boosting measures have also created new opportunities for fraudulent claims and cyber attacks. Fraud has long been a scourge of the insurance industry, with total losses amounting to billions of dollars yearly that harm customers and insurers alike.

Shallowfakes and Deepfakes

Though deepfakes, that is, audio or visual content synthetically generated with deep-learning technologies, are burgeoning, the insurance industry arguably faces a more pressing threat from their less sophisticated, albeit perhaps more damaging, counterpart: the shallowfake. Frequently grouped together, shallowfakes and deepfakes are distinct in several important ways.

While deepfakes require specialized software to create and are generated entirely from scratch, shallowfakes are digitally altered media sourced from "real" content. Using commonly available software, such as Photoshop, an individual can make slight edits (such as slowing down or splicing a video or changing the date on a photo) that completely change the tone, perspective, or context of digital media.
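
As a concrete, if deliberately naive, illustration of one automated screen against this kind of edit, the following Python sketch reads a photo's embedded EXIF timestamp and flags submissions whose capture date falls after the reported date of loss. It is a hypothetical example built on the open-source Pillow library, not any insurer's actual system, and the function names are invented for illustration.

```python
from datetime import datetime

from PIL import ExifTags, Image  # pip install Pillow


def exif_timestamp(path: str) -> datetime | None:
    """Return the image's EXIF DateTime tag as a datetime, if present."""
    exif = Image.open(path).getexif()
    # Translate numeric EXIF tag IDs into readable tag names.
    named = {ExifTags.TAGS.get(tag_id): value for tag_id, value in exif.items()}
    raw = named.get("DateTime")
    if raw is None:
        return None
    # EXIF stores timestamps as "YYYY:MM:DD HH:MM:SS".
    return datetime.strptime(str(raw), "%Y:%m:%d %H:%M:%S")


def photo_postdates_loss(path: str, loss_date: datetime) -> bool:
    """Flag a claim photo whose embedded capture time falls after the
    reported date of loss, or whose metadata is missing entirely."""
    captured = exif_timestamp(path)
    if captured is None:
        return True  # absent metadata also merits human review
    return captured > loss_date
```

Checks like this are trivially defeated, of course; EXIF metadata can be edited just as easily as the pixels themselves, which is precisely why shallowfakes are so difficult to catch at scale.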

The dangers of this capability become evident when considering self-service measures. Even though shallowfakes are not new and have historically been used to falsify identity documents and evidence, they are becoming more prevalent.

It's not uncommon to find the same document being reused tens or even hundreds of times with just name, account, and address altered, effectively creating as many fake identities from a single template. 2
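
One common automated countermeasure to this kind of template reuse is near-duplicate detection. The sketch below is a minimal, hypothetical example, built on the open-source Pillow and ImageHash Python libraries rather than any insurer's production tooling, that groups submitted images whose perceptual hashes differ by only a few bits; near-identical hashes arriving across unrelated claims can flag a recycled template for human review.

```python
from pathlib import Path

import imagehash  # pip install ImageHash
from PIL import Image  # pip install Pillow


def group_near_duplicates(image_dir: str, max_distance: int = 5) -> list[list[str]]:
    """Group claim images whose perceptual hashes are within
    max_distance bits of one another (Hamming distance)."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    hashes = [(str(p), imagehash.phash(Image.open(p))) for p in paths]

    groups: list[list[str]] = []
    used: set[str] = set()
    for i, (path_a, hash_a) in enumerate(hashes):
        if path_a in used:
            continue
        group = [path_a]
        for path_b, hash_b in hashes[i + 1:]:
            # Subtracting two ImageHash values yields their Hamming distance.
            if path_b not in used and hash_a - hash_b <= max_distance:
                group.append(path_b)
                used.add(path_b)
        if len(group) > 1:  # only clusters of suspected reuse are reported
            used.add(path_a)
            groups.append(group)
    return groups
```

Perceptual hashes tolerate small pixel-level edits, so a document whose name, account, and address fields were altered will typically still hash close to its template; the `max_distance` threshold trades false positives against missed reuse.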

Furthermore, the rise of self-service practices and automation "has increased dependency on customer-supplied photos for settling claims—an excellent opportunity for shallowfakes as the risk of fraud from altered, manipulated, or synthetic photos significantly increases." 3 Self-supplied claims materials, while eliminating the need for agents to make direct assessments and documentation, add to the potential for fraud.

While shallowfakes may be easier to identify than deepfakes, since they originate from real source material, investigations can be time-consuming, and many fakes may evade detection entirely. AI tools are already standard in the fight against insurance fraud, but the evolution of AI technologies and an increase in deepfakes will likely make detection an increasingly difficult task.

Though shallowfakes may be more of an immediate threat today, advancements in deepfake technology and its availability may make AI an even more severe problem moving forward. Scott Clayton, the head of claims fraud at Zurich Insurance Group, has stated, "I kind of half joke that when deepfake affects us significantly, it's probably about the time for me to get out.… Because at that point, I'm not sure that we'll be able to keep pace with it." 4

Conclusion

Deepfakes do not come with a label, and even forensic investigations may not always yield a clear identification. Additionally, no tool currently available can perfectly "spot" AI-generated content. 5 Many believe that AI is the best tool against AI; the sooner insurers develop strategic plans for using AI, the better positioned they will be to address advanced threats such as deepfakes once they become more pervasive.


Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.


Footnotes

2 Martin Rehak, "Shallowfakes Are the Real Threat to the Insurance Industry," InsurTech, July 3, 2022.
3 Rehak.
5 Tate Ryan-Mosley and Melissa Heikkilä, "Three Things To Know about the White House's Executive Order on AI," MIT Technology Review, October 30, 2023.