If artificial intelligence (AI) systems are going to have anything to do with insurance, they ought to obey the same rules that the flesh-and-blood players follow.

A team of state insurance regulators at the National Association of Insurance Commissioners (NAIC) has put that goal in a new AI regulatory principles draft.

(Related: Regulators and Regulated Converge in Kansas City)

The NAIC’s new Artificial Intelligence Working Group has posted the draft on its section of the NAIC’s website. Comments are due Jan. 17.

Working group members agreed to seek comments on the draft Saturday, during an in-person session at the NAIC’s fall national meeting in Austin, Texas.

The group is part of the NAIC’s top-level Executive Committee. The committee formed the working group to “study the development of artificial intelligence, its use in the insurance sector, and its impact on consumer protection and privacy, marketplace dynamics, and the state-based insurance regulatory framework.”

Some consumer representatives and others have argued that poorly designed financial services AI systems, or systems drawing on poor data sources, could lead to illegally arbitrary or discriminatory business decisions, or to decisions based on reasoning that is locked inside an AI system and unavailable for review by live humans.

The working group has considered two AI regulatory principles documents, according to a meeting summary report the group has posted.

One is the Organisation for Economic Co-operation and Development’s Artificial Intelligence Principles.

The other is a draft document developed by the North Dakota Insurance Department.

In the North Dakota draft document, which is the basis for what the working group has posted on its website, the drafters start by declaring that AI systems should be fair, ethical and accountable.

“AI actors should respect the rule of law throughout the AI lifecycle,” according to the draft text. “This will include, but is not limited to, laws and regulations relating to trade practices, discrimination, promotion of fair access to insurance, underwriting and eligibility practices, ratemaking standards, advertising decisions, claims practices and solvency.”

In the section on accountability, the drafters state that, “AI actors should be accountable for the proper functioning of AI systems and compliance with all stated principles, consistent with the actors’ roles, the situational context, and evolving best practices…. Stakeholders should have access to resources which provide accurate information about their insurance data as well as a way to inquire or seek recourse for AI-driven decisions. This information should be plain, easy-to-understand and describe the factors that lead to the prediction, recommendation or decision.”

Resources

AI Working Group documents are available here, under the Meeting Materials and Exposure Drafts tabs.

—Read New York Regulators Roll Eyes at ‘Proprietary’ Accelerated Life Underwriting, on ThinkAdvisor.