Yes, I am going to jump onto the bandwagon. This commentary will discuss everyone's favorite topic, artificial intelligence (AI), but in the context of how AI will affect reinsurance. In prior commentaries, I have discussed blockchain and reinsurance, so the segue to AI is not a big stretch.
Like all industries, the reinsurance industry is being, and will continue to be, affected by AI and its related technologies (i.e., chatbots with generative AI like ChatGPT, big data, etc.). Mining large data sets for information to assist in underwriting and claims is nothing new to the reinsurance industry. Historically, industry actuaries have looked at 5- and 10-year (or more) loss triangles to predict ultimate loss costs for the treaty year being underwritten. Similarly, underwriters have examined large sets of premium and policy distribution data to help price reinsurance treaties and understand trends. So, big data and predictive analytics have been used by the insurance and reinsurance markets for years.
More recently, insurance and reinsurance companies, brokers, and producers have turned to technology to enhance the insurance production and risk transfer processes. Just look at all the InsurTech companies in the reinsurance and insurance space that tout their use of technology to provide a more accurate and more efficient approach to claims handling, contract formation, and risk transfer (okay, but let's put Vesttoo to the side for purposes of this commentary—the jury is still out on that one, considering it has commenced Chapter 11 proceedings).
Like other industries, many insurance companies, brokers, and producers use chatbots on their websites to enhance the customer experience. However, these chatbots are not generative AI chatbots. Additionally, aspects of AI are being used in the hiring process by many companies, including insurance and reinsurance companies.
What Is AI?
There are literally thousands of articles and websites about AI. Simply put, AI is machine learning used to produce a result that emulates a human response and learns over time what responses are more accurate. Essentially, computers are taught to respond to questions by searching massive sets of data, which then allows the computer to "learn" on its own and produce better and better responses. Think HAL 9000 in 2001: A Space Odyssey (for those of a certain age).
The type of AI that many are embracing is generative AI. There are generative AI programs that create written documents; draw images, pictures, or art; or identify people and anomalies in machinery or the human body. For example, AI programs are being used to look for and then create or generate art based on the style of a particular artist. Think of a near perfect reproduction of the style of Leonardo da Vinci.
Other programs will search through tens of millions of photos to identify particular people (law enforcement uses programs like this). Yet other programs can compose music or create songs in the style or voice of a particular artist (scam artists are using this technology to emulate grandchildren in desperate phone calls that sound realistic to rip off grandparents). Obviously, the intellectual property issues associated with AI are numerous and beyond the scope of this commentary.
The goal of AI initiatives is to create programs that will be smart enough to solve human-level problems. OpenAI is one of the most well-known organizations to embrace and create generative AI programs; ChatGPT is an OpenAI product (more about ChatGPT below). OpenAI describes its text-based models as advanced language processing tools that can generate, classify, and summarize text with high levels of coherence and accuracy.
The AI that is making all of the headlines is the type of AI that uses a chatbot to generate content in response to questions, and ChatGPT is the most well-known of these generative AI programs. ChatGPT is based on a system called GPT-4, the latest iteration of the GPT learning systems. OpenAI describes GPT-4 as being "more creative and collaborative than ever before. It can generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style." GPT-4 is a deep learning model that leverages enormous amounts of data and computing power to create increasingly sophisticated and capable language models.
However, OpenAI recognizes the limitations of AI and GPT-4 in particular. Its website states that "GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts. We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models."
What OpenAI calls hallucinations is important for all generative AI systems. Recently, a law firm used ChatGPT to write a legal brief that was then filed in court. Unfortunately for the lawyer who filed it, several of the cases cited were not real, and the lawyer was sanctioned. ChatGPT hallucinated these cases just as it hallucinates "facts" and other information based on flaws in the data sets GPT-4 was trained on and the immaturity of the learning model.
While generative AI programs like ChatGPT are good for creating drafts, correcting writing flaws, and speeding up research, their output must be revised and checked by humans to remove the hallucinations and other flaws in the product generated. This is an issue that professors and teachers are struggling with in every school district and institution of higher learning.
Systems like GPT-4 are trained on large data sets of information from public databases and data licensed from third-party providers. Right now, if you query ChatGPT, it will tell you that its information gathering ended with the year 2021. That means the results from ChatGPT and any generative AI program may be inaccurate if events after 2021 would change the response. Additionally, some aggregators and creators of public content, particularly news organizations, are beginning to restrict open access to their content for use in training generative AI models. These restrictions may affect the accuracy of generated responses.
AI and Reinsurance Contract Wording
How does generative AI benefit the reinsurance industry? The most obvious and most likely use of generative AI in the reinsurance industry will be for document generation. Clearly, programs like ChatGPT can be used (and likely are being used) to draft reinsurance contracts. If the generative AI system is trained properly on thousands and thousands of reinsurance contracts (as well as authoritative texts, statutes, and insurance regulations) and the right questions are asked, programs like ChatGPT should be able to significantly decrease the time it takes to generate a functional reinsurance contract. Brokers and companies (not to mention lawyers) already have electronic libraries of different types of reinsurance contracts and specialty clauses. Generative AI programs should be able to refine those libraries and create more consistent and accurate reinsurance contracts.
For example, if you ask ChatGPT to draft a casualty excess-of-loss reinsurance agreement between X company as the cedent and Y company as the reinsurer, where the attachment point is $5 million and the underlying business covers commercial automobile policies written in California with limits of $5 million, it should be able to produce a workable draft contract even today. But the results may not be fully accurate because of statutory and regulatory changes that have taken place since 2021 along with advances in underwriting and the development of additional clauses addressing data and privacy, for example.
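A drafting request like the one described above can also be assembled programmatically before being submitted to a generative AI service. The following Python sketch shows one hypothetical way to build such a prompt from the treaty terms; the function name, parameter names, and prompt wording are illustrative assumptions for this commentary, not an industry-standard API, and the sketch deliberately stops short of calling any particular AI provider.

```python
# Hypothetical sketch: assembling a contract-drafting prompt from
# treaty parameters. All names and wording here are illustrative
# assumptions, not a real industry or vendor API.

def build_drafting_prompt(cedent, reinsurer, attachment_point,
                          limit, line_of_business, state):
    """Assemble a plain-language drafting instruction for a
    chat-based generative AI model from basic treaty terms."""
    return (
        f"Draft a casualty excess-of-loss reinsurance agreement between "
        f"{cedent} as the cedent and {reinsurer} as the reinsurer. "
        f"The attachment point is ${attachment_point:,}. The underlying "
        f"business covers {line_of_business} policies written in {state} "
        f"with limits of ${limit:,}."
    )

# Build the example query from the commentary above.
prompt = build_drafting_prompt(
    cedent="X Company",
    reinsurer="Y Company",
    attachment_point=5_000_000,
    limit=5_000_000,
    line_of_business="commercial automobile",
    state="California",
)
print(prompt)
```

The point of structuring the request this way is repeatability: the same treaty parameters produce the same instruction every time, which supports the consistency goals discussed below.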
The key, of course, is to train the generative AI system on a comprehensive data set of reinsurance and insurance materials, which include preexisting contracts, the relevant regulations, and the relevant statutes that affect the risk and the contract. As the system learns how reinsurance works and which clauses are necessary in which contracts, errors in template-based contract drafting will be reduced. Essentially, AI will help double-check that the right clauses are being used in the right reinsurance contract for the right purposes.
Large reinsurance intermediaries and reinsurers are in the best position to provide the data sets to accomplish the training that the generative AI systems need. Will an industry consortium be necessary to create and supply training data sets with contributions from across the industry? Is it anticompetitive to share these data sets? These are all questions that will affect how generative AI programs will be trained and used in the insurance and reinsurance industries.
I don't see generative AI as replacing contract wording professionals, but I do see these programs as a way for contract wording professionals to produce better contracts more efficiently and accurately over time. Moreover, programs like ChatGPT can help improve the readability and clarity of reinsurance contracts. Readability and clarity are important because failing to say what you mean, and mean what you say, is an issue that has plagued the industry for years.
Other Potential Applications of AI for Reinsurance
Besides reinsurance contract wording, generative AI should be able to help underwriters and actuaries in pricing reinsurance risks. Predictive models and computerized actuarial analyses have been used for years in reinsurance underwriting. Generative AI should take those models to the next level by creating pricing outputs with more accuracy and efficiency. The data sets needed to train the AI systems already exist, and the large aggregators of underwriting and claims data, like the international brokers, already use this data to assist their clients in the pricing of reinsurance contracts. In the near future, generative AI will assist reinsurance underwriters in developing more accurate pricing models.
Generative AI also may be used by reinsurers in periodic postcontract auditing and preunderwriting audits. Auditors will be able to use generative AI to produce audit reports and to facilitate communications between cedents and reinsurers on issues that may arise because of the audit.
On the claims side, cedents should be able to use generative AI to create more accurate and consistent monthly or quarterly narrative claims reporting. This is especially important for excess-of-loss contracts, where individual loss reporting may be required for certain types of losses.
For reinsurers, generative AI may be useful to predict loss outcomes on larger claims in advance of individual loss reporting by the cedent.
AI is here to stay, and all industries are embracing AI in their business processes. For the reinsurance industry, generative AI offers the most utility by helping to produce reinsurance contracts and related documentation more efficiently and more accurately.
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.