
Amarnath Suggu

Artificial intelligence (AI) has become so pervasive and so integral to our lives that it is hard to imagine a new product or service that does not feature it. Insurance is no different. Today, AI is increasingly used to identify prospective customers, assess risk, determine premiums, and improve the claims experience. However, AI can have adverse effects if it exhibits bias or is misused, so insurers should be aware of the numerous legal and ethical problems they might face as a result.

Risk and Compliance Implications of AI

A model’s outcome is determined by the variables used, training data, and the extent of testing performed during its development. The risks can be broadly classified into:

Training risks: If the models are not trained comprehensively, their decisions could be biased and impact a select few based on protected attributes. As a result, the model could either reject claims or charge higher premiums incorrectly.

Security risks: Computer vision algorithms that assess claim damage, text classifiers that detect fraud in documents, and speech recognition systems in contact centers can be compromised if testing is not adequate.

Data privacy risks: Using personally identifiable information (PII) or sensitive attributes of customer data without consent violates data privacy laws. Similarly, facial recognition systems are considered a threat to individual privacy and a violation of fundamental rights.

These risks can be a legal nightmare and raise various questions about the usage of AI by insurers.
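The training risk above is often quantified with a disparate-impact check. The sketch below, with entirely hypothetical claim-decision data, applies the "four-fifths rule" commonly cited in US fairness guidance: if one group's approval rate falls below 80% of another's, the model is flagged for review.

```python
# Minimal sketch of a disparate-impact check on model decisions.
# All data below is hypothetical, for illustration only.

def approval_rate(decisions):
    """Fraction of claims approved (decision == 1)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.
    A value below 0.8 is a common red flag for bias."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical claim decisions (1 = approved, 0 = rejected) for two
# groups split on a protected attribute.
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]   # 80% approved
group_b = [1, 0, 1, 0, 1, 0, 0, 1, 0, 1]   # 50% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.3f}")  # 0.5 / 0.8 = 0.625
if ratio < 0.8:
    print("Potential bias: flag the model for review before deployment")
```

In practice, insurers would run such checks across every protected attribute and decision type (claims, pricing, underwriting) as part of the model audits that regulators are beginning to expect.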

Regulation and guidelines on AI usage

Only a few AI applications are regulated today; most are governed merely by guidelines, and the rules vary from one region to another.

Data related:

Data privacy relevant to AI has the highest number of laws. The EU's GDPR (2018), the California Consumer Privacy Act (2020), China's Personal Information Protection Law (2021), Singapore's Personal Data Protection Act (amended 2020), and Canada's Digital Charter Implementation Act (introduced 2020) are a few of the regulations that protect the privacy and personal data of citizens and consumers.

Algorithm related:

Although there has been much debate about addressing AI bias, most countries have only draft regulation proposals.

In the US, the Algorithmic Accountability Act drafted in 2019 never passed the Senate, while the Algorithmic Justice and Online Platform Transparency Act (May 2021) is still awaiting approval. The upcoming AI Bill of Rights aims to protect consumers, give them a right to transparency and explainability, and make AI-enabled systems more accountable. Unlike the federal government, many US states have enacted AI laws, such as California's Automated Decision Systems Accountability Act and Illinois's Artificial Intelligence Video Interview Act.

Industry bodies such as The National Association of Insurance Commissioners (NAIC) and the Federal Trade Commission released guidelines in August 2020 and April 2021, respectively, aiming for transparency, fairness, equity, accountability, and security in the use of AI by organizations.

Application related:

The European Union’s (EU) Artificial Intelligence Act (April 2021) is a comprehensive proposal that prohibits the use of AI for social scoring, facial recognition, manipulation, and dark patterns. The act mandates high-risk AI to conform to the EU standards for health and safety requirements. It also demands transparency and a code of conduct for low-risk categories.

In August 2021, the Cyberspace Administration of China drafted more stringent regulations for the use of recommender systems. These regulate the inner workings of the models, mandate approval prior to use, require that systems promote "positive energy," and prohibit the spread of undesirable content.

In November 2021, all 193 members of UNESCO adopted a historic agreement on the ethics and usage of AI to promote human rights and address major global challenges.

Impact of AI regulation on the insurance industry

Due to increased regulation, we can expect a few changes in the insurance industry. Carriers will need consent to create risk profiles of consumers based on protected attributes and disclose customer interactions and business processes leveraging AI to ensure transparency. Additionally, insurers’ AI models may be audited frequently and certified as compliant with algorithmic accountability and security. Carriers would expect the same from their AI technology service providers.

What needs to be addressed is the cost viability of implementing these regulations and their impact on the combined ratios of carriers. If AI usage turns out to be more expensive and comes with legal hassles, it may deter AI adoption among insurers. Another challenge for insurers is compliance with multiple AI regulations across regions. Hence, an all-encompassing, global regulatory framework would facilitate the adoption of AI in the insurance industry.   

Regulation – An advocate of AI adoption in insurance

AI can undoubtedly make insurance better, but its usage can sometimes produce undesirable outcomes that are detrimental to insurers. Regulation of AI usage will bring accountability and prevent misuse, which will in turn eliminate legal hurdles and build consumer confidence. Insurers should not consider this a deterrent to AI adoption: regulation has similarly driven the adoption of technology in areas such as road safety and healthcare. AI is still at a nascent stage in the insurance industry, and the time is ideal for fostering its growth for the greater good of society.

About the author

Amarnath Suggu
Amarnath Suggu is a senior consultant with the BFSI Technology Unit. He has over two decades of experience in the insurance industry, predominantly with P&C insurers. He is interested in emerging technologies, especially artificial intelligence, and their applications in the insurance industry.