The insurance industry has embraced Artificial Intelligence (AI) across the value chain, from identifying potential customers to assessing risks and settling claims. AI has enhanced customer experience and made business processes faster, better and cheaper. While AI is considered a game changer, it has its own shortcomings: insurers should be aware of possible bias in the decisions it makes and of its vulnerability to adversarial AI.
The Bias Factor: Need for Explainable AI
Bias in AI-made decisions has been observed in a number of implementations across industries. Recent examples include Apple's credit card [1] discriminating against women, mortgage algorithms [2] charging higher interest rates to Black and Latino borrowers, and a healthcare group's algorithm [3] favoring white patients over Black patients. Research has also shown that computer vision algorithms [4], speech recognition systems and text classifiers [5] can be compromised. These flawed decisions and inherent weaknesses have led to questions about AI's performance and its adoption in the industry.
Visible AI bias means that customers have begun to doubt the accuracy and fairness of AI-made decisions, and businesses need the means to audit those decisions. Explainable AI (XAI) can come to the rescue. While XAI cannot fully unravel the black-box nature of models, it offers a set of tools and techniques to better understand the inner workings of a model and the factors that influenced a particular decision.
Sales and marketing, underwriting and claims account for a large number of AI use cases in the insurance value chain. These business functions interact directly with customers, so it is imperative that the models used here perform as intended. Any evidence of bias or incorrect decisions can lead to lost customers, reputational damage and possible legal and regulatory action.
The Problem: AI Bias in the Insurance Industry
A rating algorithm leverages AI to assess risk features and quote a premium. Such an algorithm could be quoting a higher premium for a specific gender or race, or, worse, declining coverage based on protected attributes (race, color, gender, religion, disability). Similarly, since several carriers deploy AI to evaluate claims raised by individuals, a claims model could flag a valid claim as fraud based on these attributes and deny payment.
Bias in a model is generally introduced by the inherent bias in the data used to train it. XAI techniques such as Shapley Additive Explanations (SHAP) [6] and Local Interpretable Model-Agnostic Explanations (LIME) [7] highlight the precise input features that influenced the algorithm to make a particular decision. Data scientists and actuaries can then determine whether these features are protected attributes or contribute to bias by acting as proxies for protected attributes [8]. For example, if the model was trained on data from a specific geographical region with a higher population of one color or race and relatively few consumers, the model may have learned these attributes as proxies and induced unintended bias.
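As a minimal sketch of how such an audit might look in practice, the snippet below uses the SHAP library to break a single scoring decision into per-feature contributions. The model, the tiny dataset and the feature names (including a hypothetical postal_code_risk column that could act as a proxy attribute) are illustrative assumptions, not any insurer's actual setup.

```python
# Minimal sketch: explain one scoring decision with SHAP (illustrative data and model).
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical applicant features; postal_code_risk could act as a proxy
# for a protected attribute such as race or income.
X_train = pd.DataFrame({
    "age":              [34, 51, 27, 45, 39, 58],
    "vehicle_value":    [18000, 32000, 9000, 25000, 15000, 40000],
    "postal_code_risk": [0.2, 0.8, 0.9, 0.3, 0.7, 0.1],
    "prior_claims":     [0, 2, 1, 0, 3, 0],
})
y_train = [0, 1, 1, 0, 1, 0]  # 1 = flagged as high risk / potential fraud

model = GradientBoostingClassifier().fit(X_train, y_train)

# TreeExplainer computes exact SHAP values for tree-based models.
explainer = shap.TreeExplainer(model)
applicant = X_train.iloc[[2]]                      # the decision under review
contributions = explainer.shap_values(applicant)[0]

# Positive values pushed the score towards "high risk"; a large contribution
# from postal_code_risk would warrant a proxy-discrimination review.
for feature, value in zip(X_train.columns, contributions):
    print(f"{feature:>18}: {value:+.3f}")
```

LIME can be applied in much the same way when exact tree-based explanations are not available, producing a comparable per-feature breakdown for a single decision.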
Vulnerability to Adversarial AI
Since COVID-19, carriers have been using images and videos to carry out business, including claim inspections. Customers post pictures or videos of the damaged asset, and a model assesses the damage to determine the amount to be paid. However, criminals can trick the model into paying false claims by manipulating pixels in the image (an adversarial AI attack). Conversely, valid claims may be rejected by the model because of unwanted objects or noise in the image. XAI techniques such as saliency maps [9] and occlusion maps, used along with Generative Adversarial Networks (GANs) [10], can strengthen AI models, improve accuracy and thwart claim fraud.
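To illustrate the saliency-map idea, the sketch below backpropagates a classifier's top-class score to the input pixels and keeps the per-pixel gradient magnitude. The untrained ResNet-18 and the random tensor are placeholders for an insurer's trained damage-assessment model and a real, preprocessed claim photo.

```python
# Minimal saliency-map sketch (PyTorch); model and image are placeholders.
import torch
import torchvision.models as models

# Stand-in for the insurer's trained damage-assessment classifier.
model = models.resnet18(weights=None)
model.eval()

# Stand-in for a preprocessed claim photo (1 x 3 x 224 x 224).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the top-class score to the pixels.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# Saliency = per-pixel gradient magnitude (max over colour channels).
# Pixels with large values dominate the decision; saliency concentrated on
# irrelevant regions can flag adversarial tampering or noisy inputs.
saliency = image.grad.abs().max(dim=1).values   # shape: 1 x 224 x 224
print(saliency.shape)
```

In practice, reviewing such maps for suspicious claims helps adjusters see whether the model is reacting to the actual damage or to something that should not matter.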
The Bonus: Business Insights
XAI can also generate insights that make the insurance business more profitable. Insurers can build AI models on their vast data and apply XAI techniques such as SHAP to identify the most influential attributes [11], which can then inform business decisions. This is particularly useful when launching new products. Policy data can be used to identify the needs of potential customers, helping improve conversion ratios and reduce churn. Claims data can likewise highlight the features that contribute to heavy losses, separating good risks from bad ones; this can help the insurer price competitively or avoid the product altogether.
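A minimal sketch of this kind of portfolio-level analysis, assuming a tree-based loss model and synthetic stand-in data, is to rank features by their mean absolute SHAP value; the feature names below are purely illustrative.

```python
# Minimal sketch: global feature importance from SHAP values (illustrative data).
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for historical policy and claims data.
X, y = make_regression(n_samples=500, n_features=6, random_state=0)
feature_names = ["sum_insured", "vehicle_age", "driver_age",
                 "annual_mileage", "region_risk", "prior_claims"]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)        # shape: (n_samples, n_features)

# Global importance: average magnitude of each feature's contribution
# across the book, highlighting the attributes that drive predicted losses.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```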
Conclusion
Insurance is a heavily regulated industry. Article 22 of the GDPR, the proposed Algorithmic Accountability Act of 2019 and several US state bills mandate transparency and fairness in automated decision making. XAI offers much-needed transparency, security and auditability for the intelligence used in business processes. It can also help mitigate bias and fight the AI-enabled fraud prevalent in the industry. Above all, it can build trust, which is a must for AI to thrive and grow in the insurance business.
In essence, plain AI is no longer adequate. Insurers need to leverage XAI both to strengthen their use of AI and to safeguard themselves from fraud, which in turn will ensure a well-rounded adoption of AI.
References
[1] The New York Times, "Apple Card Investigated After Gender Discrimination Complaints," November 2019. https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html
[2] Berkeley News, "Mortgage algorithms perpetuate racial bias in lending, study finds," November 2018. https://news.berkeley.edu/story_jump/mortgage-algorithms-perpetuate-racial-bias-in-lending-study-finds/
[3] The Wall Street Journal, "New York Regulator Probes UnitedHealth Algorithm for Racial Bias," October 2019. https://www.wsj.com/articles/new-york-regulator-probes-unitedhealth-algorithm-for-racial-bias-11572087601
[4] Wired, "Researchers Fooled a Google AI Into Thinking a Rifle Was a Helicopter," December 2017. https://www.wired.com/story/researcher-fooled-a-google-ai-into-thinking-a-rifle-was-a-helicopter/
[5] TechTalks, "If AI can read, then plain text can be weaponized," April 2019. https://bdtechtalks.com/2019/04/02/ai-nlp-paraphrasing-adversarial-attacks/
[6] Towards Data Science, "SHAP values explained exactly how you wished someone explained to you," January 2020. https://towardsdatascience.com/shap-explained-the-way-i-wish-someone-explained-it-to-me-ab81cc69ef30
[7] Medium, "Explain Your ML Model Predictions With Local Interpretable Model-Agnostic Explanations (LIME)," March 2020. https://medium.com/xebia-france/explain-your-ml-model-predictions-with-local-interpretable-model-agnostic-explanations-lime-82343c5689db
[8] Towards Data Science, "How Discrimination occurs in Data Analytics and Machine Learning: Proxy Variables," February 2020. https://towardsdatascience.com/how-discrimination-occurs-in-data-analytics-and-machine-learning-proxy-variables-7c22ff20792
[9] Analytics India Magazine, "What Are Saliency Maps In Deep Learning," July 2018. https://analyticsindiamag.com/what-are-saliency-maps-in-deep-learning/
[10] Medium, "Making AI Interpretable with Generative Adversarial Networks," April 2018. https://medium.com/square-corner-blog/making-ai-interpretable-with-generative-adversarial-networks-766abc953edf
[11] Medium, "Push the limits of explainability - an ultimate guide to SHAP library," June 2020. https://medium.com/swlh/push-the-limits-of-explainability-an-ultimate-guide-to-shap-library-a110af566a02