Today, insurance companies are shifting their attention from innovation for its own sake toward solving real-world problems: eliminating unnecessary delays in the claims process, improving fraud detection, reducing audit risk, and encouraging operational collaboration between brokers and underwriters. Fortunately, with the right data foundation, insurers can address these problems and related ones. The same foundation can also lend a hand with innovation. Let's dive in.
The insurance industry has always been data intensive, but the challenge is less about having enough data and more about being able to access the right data at the right time and in the right context. Data is never in short supply for insurance companies. It is massive and found in policy administration systems; claims platforms; broker portals; environmental, social, and governance (ESG) reports; disclosures; the Internet of Things (IoT); and a number of specialized databases optimized for storing data such as telemetry and metadata describing external feeds.
The Ongoing Challenge
The problem is that all of this data is siloed, to one degree or another, preventing it from being combined in real time, or even close to it. Despite recent advances in cloud computing and in data management architectures like data lakehouses, data is stored in multiple formats to serve the needs of different groups of users and to address the technical requirements of different applications, many of which can reside within the same data lakehouse. Self-service data access remains an elusive goal, as integrating the data still requires IT.
Insurance companies also struggle to take advantage of transitory data, which provides real-time, short-lived, situational information, such as a telematics alert when a car crashes, a drone image captured right after a flood, or a mobile photo of a cracked windshield. If you can't act on that data instantly, its value is gone.
The Full AI Stack
To solve these challenges, insurance companies are leaning into the full artificial intelligence (AI) stack. In addition to generative AI (GenAI) for summarizing documents or drafting responses, retrieval-augmented generation (RAG), and chatbots, they also rely on the following:
Edge AI for IoT-triggered incident detection and real-time pricing
Causal AI for explainable decisions in support of underwriting and compliance
Reinforcement learning for long-term policyholder value optimization
Computer vision to assist in damage appraisal and property inspections
Natural language processing (NLP) for documents such as claims, notes, emails, chat, and logs
Machine learning (ML) to enhance fraud detection and behavioral scoring
Deep learning to assist in analysis across all insurance domains
Image, voice, and pattern-based analytics
Agentic AI for autonomous, goal-driven actions such as auto claims triage or renewals
However, none of these tools can perform adequately without
governed, contextual, and real-time data, including transitory data. Unfortunately,
traditional data management platforms are unable to meet this need.
Logical Data Management
This is where logical data management comes in. Logical data management doesn't require you to copy your data or move it around. It provides virtualized access to all data (in the cloud, on premises, or on IoT devices; structured, unstructured, or anywhere in between), all served through a governed, unified, real-time data-access layer. With logical data management, insurers do not have to replace any existing infrastructure. They simply deploy a logical data management layer above their data lakehouses and/or supporting data sources to make all data usable, trusted, and available to people, applications, and AI alike.
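To make the idea concrete, here is a minimal, hypothetical sketch of a "logical layer" that exposes one virtual view over two physically separate stores without copying data into a warehouse. Two in-memory SQLite databases stand in for a claims system and a policy administration system; real logical data management platforms do this at scale with query pushdown, security, and governance.

```python
import sqlite3

# Illustrative stand-ins for two siloed systems: a claims store and a
# policy administration store. In production these would be separate
# platforms; here they are independent in-memory databases.
claims_db = sqlite3.connect(":memory:")
claims_db.execute("CREATE TABLE claims (policy_id TEXT, amount REAL)")
claims_db.executemany("INSERT INTO claims VALUES (?, ?)",
                      [("P1", 1200.0), ("P2", 800.0)])

policy_db = sqlite3.connect(":memory:")
policy_db.execute("CREATE TABLE policies (policy_id TEXT, holder TEXT)")
policy_db.executemany("INSERT INTO policies VALUES (?, ?)",
                      [("P1", "Ana"), ("P2", "Ben")])

def virtual_claims_view():
    """Join data across both sources at query time; nothing is replicated."""
    holders = dict(policy_db.execute(
        "SELECT policy_id, holder FROM policies"))
    return [
        {"policy_id": pid, "holder": holders.get(pid), "amount": amt}
        for pid, amt in claims_db.execute(
            "SELECT policy_id, amount FROM claims ORDER BY policy_id")
    ]

rows = virtual_claims_view()
```

Each call to the view reads the sources live, so consumers always see current data; that on-demand federation, rather than batch copying, is the core design choice of the logical approach.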
Let's look at a few use cases. First, consider claims handling with real-time AI. Imagine there's a car accident, triggering a telematics alert, and the customer uploads mobile photos from the scene. The adjuster adds notes via a field app, and NLP kicks in, parsing the notes for urgency. Computer vision analyzes the damage photos, a GenAI application drafts a summary, and an agentic AI module initiates the next steps, notifying the customer and even suggesting a nearby repair shop. This is transitory data meeting AI orchestration in real time. But this scenario only works with a logical data management layer, which can provide unified, trusted data in real time.
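The claims-handling flow above can be sketched as a small orchestration pipeline. Every AI component here is a stub; real systems would call NLP, computer vision, and GenAI services. All function names, keywords, and thresholds are illustrative assumptions, not any particular product's API.

```python
# Hypothetical triage pipeline: notes -> urgency, photos -> damage,
# then a summary and agentic next steps. Each stage is a stand-in for
# a real AI service.
URGENT_WORDS = {"injury", "airbag", "fire"}

def parse_urgency(adjuster_notes: str) -> bool:
    # NLP stand-in: a simple keyword scan for urgent language.
    return any(w in adjuster_notes.lower() for w in URGENT_WORDS)

def assess_damage(photo_ids: list) -> str:
    # Computer-vision stand-in: any submitted photo set scores "moderate".
    return "moderate" if photo_ids else "unknown"

def draft_summary(urgent: bool, damage: str) -> str:
    # GenAI stand-in: a template-based summary of the triage result.
    level = "high" if urgent else "normal"
    return f"Claim triaged: urgency={level}, damage={damage}."

def next_actions(urgent: bool) -> list:
    # Agentic stand-in: choose follow-up steps based on urgency.
    steps = ["notify_customer"]
    if urgent:
        steps.append("dispatch_adjuster")
    steps.append("suggest_repair_shop")
    return steps

def triage_claim(notes: str, photos: list):
    urgent = parse_urgency(notes)
    return draft_summary(urgent, assess_damage(photos)), next_actions(urgent)

summary, actions = triage_claim("Airbag deployed, driver shaken", ["img1"])
```

The pipeline only produces a trustworthy result if every stage sees the same current claim record, which is exactly what the unified access layer supplies.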
Similarly, consider the potential of real-time fraud detection. Suppose an insurer spots a sudden rise in water-damage claims submitted hours after a storm. With logical data management, an insurance company can quickly combine transitory weather feeds, policyholder history, and geographic risk overlays and run the data through an ML-driven anomaly detection application. The result? Suspicious claims can be flagged before payouts begin, preventing fraud without slowing down any customer-facing processes. This means genuine claims are handled without excessive manual triage or lag: just real-time, defensible decisions.
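A toy version of that fraud check can be sketched in a few lines: combine a transitory weather signal with policyholder claim history and flag a new claim whose amount is a statistical outlier, before payout. The z-score rule and the threshold are illustrative assumptions; production systems use trained anomaly-detection models over far richer features.

```python
from statistics import mean, stdev

def is_suspicious(new_amount: float, history: list,
                  storm_confirmed: bool, threshold: float = 2.0) -> bool:
    """Flag a claim whose amount is an outlier versus the policyholder's
    history, given a corroborating transitory weather signal."""
    if storm_confirmed and len(history) >= 2:
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_amount != mu
        # z-score test: how many standard deviations above the mean?
        return (new_amount - mu) / sigma > threshold
    # Without storm corroboration or enough history, route to normal review.
    return False

history = [900.0, 1100.0, 1000.0, 950.0]   # illustrative prior claims
flagged = is_suspicious(9500.0, history, storm_confirmed=True)
```

The point of the sketch is the data flow, not the model: the check is only possible because the weather feed, the claim history, and the incoming claim can be combined at decision time.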
Finally, brokers and underwriters often operate in disconnected systems, relying on outdated assumptions or duplicative data entry. With logical data management, they get a shared, real-time view and one version of the truth, which includes up-to-date quotes, transitory broker submissions, historical events, ESG ratings, risk metrics, and pricing trends. This results in faster quotes, smarter client-service decisions, and far less friction.
Logical Data Management and the Future of Insurance
Insurance is becoming intelligent, not just digital. If you want
AI that is explainable, autonomous, and accountable; decisions driven by real-world
context; and a future-ready data platform without ripping and replacing any
infrastructure, then trusted, real-time, and logically managed data—including
transitory data—is the key.
Opinions expressed in Expert Commentary articles are those of the author and are not necessarily held by the author's employer or IRMI. Expert Commentary articles and other IRMI Online content do not purport to provide legal, accounting, or other professional advice or opinion. If such advice is needed, consult with your attorney, accountant, or other qualified adviser.