Patrick Sugent. Credit: Decisive Moment Event Photojournalism/LexisNexis Risk Solutions

LexisNexis Risk Solutions Feeds Life Insurers' Hungry AIs

The new artificial intelligence systems that can chat with us — “large language models” — devour data.

LexisNexis Risk Solutions runs one of the AIs’ favorite cafeterias.

It helps life insurance and annuity issuers, and many other clients, use tens of billions of data records to verify people’s identities, underwrite applicants, screen for fraud, and detect and manage other types of risk.

The company’s corporate parent, RELX, estimated two years ago that it stores 12 petabytes of data, or enough data to fill 50,000 laptop computers.

Patrick Sugent, a vice president of insurance data science at LexisNexis Risk Solutions, has been a data science executive there since 2005. He has a bachelor’s degree in economics from the University of Chicago and a master’s degree in predictive analytics from DePaul University.

He recently answered questions, via email, about the challenges of working with “big data.” The interview has been edited.

THINKADVISOR: How has insurers’ new focus on AI, machine learning and big data affected the amount of data being collected and used?

PATRICK SUGENT: We’re finding that data continues to grow rapidly, in multiple ways.

Over the past few years, clients have invested significantly in data science and compute capabilities.

Many now see speed to market through advanced analytics as a true competitive advantage, both for new product launches and for internal learning.

We’re also seeing clients invest in a wider variety of third-party data sources to provide further segmentation, increased prediction accuracy and new risk indicators, as the number of data types collected on entities (people, cars, property, etc.) continues to grow.

The completeness of that data continues to grow, and, perhaps most significantly, new types of data are becoming available and more accessible through automated solutions such as AI and machine learning, or AI/ML.

As just one example, electronic health records are new to the industry, contain incredibly complex and detailed data, and have become dramatically more accessible in recent years.

At LexisNexis Risk Solutions, we have always worked with large data sets, but the amount and types of data we’re working with are growing.

As we work with carriers on data appends and tests, we’re seeing an increase in the size of the data sets they send us and want to work with. Files that once held thousands of records are now exponentially larger as carriers look to better understand their customers and risk in general.

When you’re working with data sets in the life and annuity sector, how big is big?

The biggest AI/ML project we work with in the life and annuity sector is a core research and benchmarking database that we use for, among other things, most of our mortality research for the life insurance industry.

This data set contains data on over 400 million individuals in the United States, both living and deceased. It aggregates a wide variety of data sources, including a death master file that very closely matches U.S. Centers for Disease Control and Prevention data; Fair Credit Reporting Act-governed behavior data, including driving behavior, public records attributes and credit-based insurance attributes; and medical data, including electronic health records, payer claims data, prescription history data and clinical lab data.

We also work with transactional data sets where the data comes from operational decisions clients make across different decision points.

This data must be collected, cleaned and summarized into attributes that can drive the next generation of predictive solutions.
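
Sugent doesn’t spell out the pipeline, but a minimal sketch of that collect-clean-summarize step, written here in pandas with entirely hypothetical field names, might look like this:

```python
import pandas as pd

# Hypothetical transactional records: one row per underwriting decision point.
transactions = pd.DataFrame({
    "applicant_id": [101, 101, 102, 102, 102],
    "decision_point": ["quote", "application", "quote", "application", "issue"],
    "days_to_decision": [0, 12, 0, 5, 9],
    "flagged_for_review": [False, True, False, False, False],
})

# Summarize the raw transactions into per-applicant attributes
# that a downstream predictive model could consume.
attributes = transactions.groupby("applicant_id").agg(
    n_touchpoints=("decision_point", "count"),
    avg_days_to_decision=("days_to_decision", "mean"),
    ever_flagged=("flagged_for_review", "any"),
)

print(attributes)
```

The hard part in practice is cleaning and matching records across sources; the aggregation itself is the easy step.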

How has the nature of the data in the life and annuity sector data sets changed?

There has been rapid adoption of new types of data over the last several years, including new types of medical and non-medical data that are FCRA-governed and predictive of mortality. Existing sources of data are expanding in use and applicability as well.

Often these data sources are entirely new to the life underwriting environment, but even when the data source itself isn’t new, the depth of the fields (attributes) it contains is significantly greater than what has been used in the past.

We also see clients ask for multiple models and large sets of attributes, both transactionally and retrospectively.

Retrospective data is used to build new solutions, and often hundreds or thousands of attributes will be analyzed, while the additional models provide benchmarking performance against new solutions.

Transactional data provides similar benchmarking capabilities against previous decision points, while the attributes allow clients to support multiple decisions.

The types and sources of data we’re working with are also changing and growing.

We find ourselves working with more text-based data, which requires new capabilities around natural language processing. This will continue to grow as we use more text-based data, including data from social media sites, to understand risk and prevent fraud.
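
The interview doesn’t name the specific techniques involved, but a common first step in turning free text into model-ready features is TF-IDF vectorization. This scikit-learn sketch, using invented note text, shows the idea:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical free-text notes of the kind an underwriter or claims
# handler might record; real inputs would be far messier.
notes = [
    "applicant reports no tobacco use in the last five years",
    "prior claim flagged for inconsistent address history",
    "clean driving record, no prescriptions on file",
]

# TF-IDF turns unstructured text into numeric features that a
# conventional ML model can score alongside structured attributes.
vectorizer = TfidfVectorizer(stop_words="english")
features = vectorizer.fit_transform(notes)

print(features.shape)                      # (3 documents, N terms)
print(vectorizer.get_feature_names_out())  # the learned vocabulary
```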

Where do life and annuity companies with AI/ML projects put the data?

In our role, we typically see carriers rely on external third parties to aggregate and normalize these data sets.

These large data sets are frequently aggregated from tens of thousands of data sources.

Organizing them into one consistent, usable data set that contains easily accessible information rather than noise is a tremendous job requiring many resources.

Therefore, carriers often use third parties specializing in this function to organize the data and provide it to them in a normalized and much more accessible and usable format.

Historically, carriers have housed their data, including big data, on their on-premises servers. Today, we see cloud solutions becoming the norm, with Microsoft Azure and Amazon AWS being the two primary cloud providers that are used.

We also see clients using more efficient data storage formats, like JSON.
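
As a rough illustration of what that looks like in practice, newline-delimited JSON (one record per line) is a common storage convention for such data; the records and field names below are hypothetical:

```python
import json

# Hypothetical normalized records; nested fields are where JSON
# has an edge over flat formats such as CSV.
records = [
    {"applicant_id": 101,
     "identity": {"verified": True, "sources": 4},
     "attributes": {"credit_based": 212, "driving": 7}},
    {"applicant_id": 102,
     "identity": {"verified": False, "sources": 1},
     "attributes": {"credit_based": 148, "driving": 0}},
]

# One JSON object per line lets cloud tools stream and split the
# file without parsing it as a single giant document.
with open("applicants.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

Unlike a flat CSV, each record can carry nested or missing fields without breaking the schema, which suits data aggregated from thousands of heterogeneous sources.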

How easy is it to find people who know how to work with the biggest data files?

Working with these giant files is a very specialized and difficult-to-find skill set.

The needed skill sets fall into two areas.

First is technical knowledge of the tools it takes to use large data sets (and there are numerous tools, so this can vary).

Second is the domain knowledge to understand the data and, therefore, how best to organize it so that it provides useful information rather than a giant, unorganized dump that is difficult to extract insights and value from.

We have typically found that people with a variety of quantitative backgrounds and degrees can be effective in the role, so it’s difficult to pinpoint just one.

But a degree that demonstrates solid programming logic, data science skills or another quantitative focus is the type we typically find successful.

Data scientists are generally able to work with large data sets, regardless of the type of data, thanks to their specialized education and knowledge.

They constantly have to learn new languages and tools to work with growing data sets and new types of data, such as imagery and text.

In addition, as cloud usage grows, they need to understand more data engineering techniques to use the technology effectively to perform their work.

It’s not just about coding and analyzing data; it’s about making sure they can use and understand it.

How do you expect life and annuity companies’ use of big data to change in the coming years?

We expect the growth in the use of data to continue, as it adds tremendous value to the carrier and the consumer across all types of data sources. One of the biggest issues will be the increased direct use of electronic health records.

Life carriers will have to become more comfortable with using AI/ML models in some capacity. That not only helps with efficiency, but also with customer satisfaction.

Carriers will have to figure out where they can use these tools to optimize their processes, both internal and customer-facing, while still adhering to regulatory guidelines.

Why should life and annuity advisors think about the carriers’ use of artificial intelligence, machine learning and big data? 

We like to highlight just how much change the life industry has gone through over the last several years.

It’s gone from an industry in which almost everyone was underwritten through an invasive process that typically took 45 to 60 days, to one in which a sizable share of applicants are underwritten without an invasive process in as little as a few minutes, and more commonly in a day or two.

In that regard, these types of tools have had considerable consumer benefits.

Not only for the reasons listed above, but because these tools have allowed carriers to reach the middle market, including historically underserved communities, at greater rates and in greater numbers than before.

This has helped close the insurance gap.

The increased use of these tools has also meant that those who use them need to understand the regulatory environment, the guardrails that regulators expect to be in place, and how these tools benefit consumers.
