Chatbots are coming of age in the banking, financial services, and insurance industry.
We now have chatbots that not only answer customer queries and service requests but also engage users in general conversations. This can be attributed to the increasing maturity of machine learning (ML) and natural language processing (NLP) technologies.
The current generation of chatbots has the capability to hold nuanced conversations, characteristic of humans. Persuasion is one such human-specific skill that insurers are now experimenting with while designing chatbots. In an earlier article, we examined how to design chatbots for reactive persuasion. Here, we will discuss how to design a chatbot for proactive persuasion so that it begins and leads meaningful business conversations that require user action.
Traditionally, a chatbot is designed to respond to a trigger from the user.
Proactive persuasion goes a step further—the chatbot proactively initiates a conversation (without a trigger from the user), driven by an event suitable for persuasion, to influence the user to perform a business action. Proactive initiation can occur in two ways. First, a chatbot window may pop up after the user has been active on the website for a specific time and start a persuasive conversation. Second, a user may initiate a conversation with the chatbot for a query or a service request, and the chatbot proactively switches the context to initiate a persuasive conversation. Ecommerce firms effectively deploy proactive chatbots to recommend additional products or services depending on what the consumer is searching for and send push notifications and reminders.
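The two initiation modes above can be sketched as a simple decision function. This is a minimal illustration, not a production design; the idle threshold, the event flag, and the mode names are assumptions made for the sketch.

```python
import time

# Assumed threshold for how long a user browses before a proactive
# popup is considered; a real deployment would tune this per page.
IDLE_POPUP_SECONDS = 45

def initiation_mode(session_start, user_initiated_chat, has_persuasion_event, now=None):
    """Return "popup", "context_switch", or None (do not initiate)."""
    now = time.time() if now is None else now
    if not has_persuasion_event:
        return None  # nothing worth persuading about
    if user_initiated_chat:
        # The user is already conversing: switch context within the dialogue.
        return "context_switch"
    if now - session_start >= IDLE_POPUP_SECONDS:
        # The user has been active long enough: open the chat window.
        return "popup"
    return None
```

The same function serves both entry points, which keeps the decision about *whether* to persuade separate from the dialogue logic that decides *how*.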
Proactive persuasion by chatbots in insurance may not yield the same results as in retail. This is understandable given the vast difference in the nature of the two businesses. However, though limited in scope, proactive chatbots can be used with prospective and existing insurance customers based on their browsing habits and personalized triggers. A word of caution here: users passively browsing the website without a distinct objective in mind may not respond to the persuasive gambits employed by the chatbot.
An attempt to persuade could result in three scenarios: the customer may (i) converse with the chatbot and act on the suggestion, (ii) immediately close the chat window to avoid distraction, or (iii) get annoyed at the intrusion and disengage from the chatbot. Despite this, it would still be worthwhile to proactively engage with the customer or prospect. Customer irritation can be mitigated by limiting the use of proactive chatbots to important business events. However, the success rate of proactively persuading a user already in conversation with a chatbot can be comparatively higher (see Figure 1).
In insurance, there are many events suitable for proactive persuasion by chatbots.
These could be either company events or policy events that are specific to individual customers (see Figure 2). Company events are typically generic and meant to prompt all customers, both existing and prospects. However, they can also be reasonably personalized based on insights drawn from customer data. For example, a new product prompt could be personally directed at customers lacking specific protection or having inadequate coverage.
In contrast, policy events are specific to individual customers and need higher intelligence to integrate them into the conversation flow. Depending on whether an all-inclusive, holistic view of customers is available, events suitable for persuasion could either be limited to the policy that forms the basis of the conversation or extend to other policies in the customer’s portfolio. Two scenarios could emerge when the event pertains to the same policy. In the first, the customer’s intent concerns that very event, and a normal dialogue is adequate to handle it—for example, when a customer initiates a query on reinstating a lapsed policy. In the second, the customer initiates a different query, and the chatbot must introduce the event suitable for persuasion. Taking the above example, if the customer inquires about a change of address, the chatbot will be required to proactively introduce policy reinstatement into the conversation.
The efficacy of introducing a proactive persuasive gambit will have to be reassessed when an ongoing conversation is about an event requiring reactive persuasion. The underlying implication of events suitable for persuasion—positive or negative—and the logic for pairing them must also be factored in. For example, a customer inquires about surrendering a policy for its cash value, which is a negative event necessitating reactive persuasion. Introducing a proactive persuasive element into the conversation by trying to sell a new product may be inappropriate and expose the insurer to moral hazard risk. Similarly, proactively persuading a customer to reinstate a lapsed policy or repay a long-pending loan after reactively persuading against withdrawal would be unwise.
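The pairing constraints described above lend themselves to an explicit rule table the dialogue manager can consult before introducing a proactive gambit. The event names and the specific incompatible pairs below are illustrative assumptions drawn from the examples in the text, not a production taxonomy.

```python
# Hypothetical incompatible pairings: (ongoing negative event,
# proactive pitch to avoid). Names are assumptions for the sketch.
INCOMPATIBLE_PAIRS = {
    ("policy_surrender", "new_product_offer"),
    ("partial_withdrawal", "policy_reinstatement"),
    ("partial_withdrawal", "loan_repayment"),
}

def can_pair(ongoing_event, candidate_event):
    """True if the candidate persuasive event may be introduced while
    the ongoing conversation handles the given event."""
    return (ongoing_event, candidate_event) not in INCOMPATIBLE_PAIRS
```

Keeping the rules in data rather than code lets compliance teams review and amend the pairing logic without touching the dialogue engine.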
Where a holistic customer view is available, many scenarios arise. A customer’s policies could have various events amenable to persuasion, which may or may not match the intent of the ongoing conversation. For example, a customer may have more than one lapsed policy but inquire about changing the beneficiary of an in-force policy. Alternatively, the customer may inquire about reinstating only one of several lapsed policies. The chatbot will need to be designed to handle such scenarios, as sub-optimal pairings will be inappropriate, ineffective, and may expose the insurer to conduct risk and consequent regulatory scrutiny.
Persuasion is a niche human skill and a value-laden activity.
Equipping a chatbot with soft persuasion skills is a challenging task that requires elaborate design and iterative refinement. While reactive persuasion could be technologically attempted even with rule-based chatbots, retrieval-based chatbots will be required for proactive persuasion. Besides, chatbots must integrate with the existing enterprise infrastructure and interface with business rules, back-office data, policy and customer databases, and content management systems.
The persuasive approach adopted by reactive and proactive chatbots once the conversation is initiated is similar. The design framework of a chatbot for proactive persuasion has the same components as a chatbot for reactive persuasion. An important aspect to be kept in mind is that users do not converse with chatbots to be persuaded. Chatbots should therefore be designed to be sensitive and subtle. Before building a chatbot for proactive persuasion, insurers must consider some critical aspects (see Figure 3), including the technologies to be used and their integration into the existing IT environment of an organization, to ensure seamless functioning of the chatbots.
Event identification
Employ a recommendation-cum-personalization engine to identify, sort, and prioritize events open for persuasion based on value, impact, urgency, and benefit.
Equip the chatbot with the capability to decide whether it would be appropriate to initiate a persuasive dialogue in an ongoing conversation.
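The prioritization step above can be sketched as a weighted scoring of candidate events. The scoring dimensions come from the text (value, impact, urgency, benefit); the weights, event names, and scores are illustrative assumptions, and a real recommendation engine would learn or calibrate them from data.

```python
from dataclasses import dataclass

@dataclass
class PersuasionEvent:
    name: str
    value: float    # business value of the action, 0-1 (assumed scale)
    impact: float   # impact on the customer, 0-1
    urgency: float  # time sensitivity, 0-1
    benefit: float  # benefit to the customer, 0-1

def prioritize(events, weights=(0.3, 0.2, 0.3, 0.2)):
    """Sort candidate events by weighted score, highest first."""
    wv, wi, wu, wb = weights
    return sorted(
        events,
        key=lambda e: wv * e.value + wi * e.impact + wu * e.urgency + wb * e.benefit,
        reverse=True,
    )

candidates = [
    PersuasionEvent("new_product_offer", 0.6, 0.4, 0.2, 0.5),
    PersuasionEvent("lapsed_policy_reinstatement", 0.8, 0.9, 0.7, 0.9),
]
top_event = prioritize(candidates)[0]
```

In this sketch, reinstating a lapsed policy outscores a generic product offer because it rates higher on urgency and customer benefit.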
Interception moment
Identify the most opportune moment to intercept and proactively initiate a persuasive conversation.
Avoid persuasion resistance—during user-initiated conversations, a random interjection could cause persuasion resistance.
Complete the event for which the conversation was initiated before prompting the conversation toward a different event for the same policy.
Persuasion exceptions
Equip the recommendation engine to deprioritize the persuasion notification if the situation warrants. For example, the chatbot should not initiate a policy reinstatement conversation when the customer is raising a critical illness claim.
Analyze historical transaction data to determine if a persuasive conversation is appropriate. For example, if the historical data reveals a pending grievance, then initiating a persuasive conversation to sell a new policy is not advisable.
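The two exception rules above can be expressed as a simple guard the recommendation engine runs before surfacing a persuasion prompt. The intent names and the history field are assumptions made for the sketch.

```python
# Hypothetical sensitive intents during which persuasion is suppressed.
SENSITIVE_INTENTS = {"critical_illness_claim", "death_claim"}

def persuasion_appropriate(current_intent, customer_history):
    """Deprioritize persuasion for sensitive intents or open grievances."""
    if current_intent in SENSITIVE_INTENTS:
        return False  # e.g., never pitch during a critical illness claim
    if customer_history.get("pending_grievance", False):
        return False  # unresolved complaint: selling a new policy is unwise
    return True
```

Running this check before the pairing logic keeps the exception policy in one place, which simplifies audit and regulatory review.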
Pairing of events
Design the system to consider a whole gamut of scenarios and select the appropriate intent to lead the conversation.
Define broader principles to select the appropriate intent and switch or conjoin the context depending on the intent of the ongoing conversation and the event identified for persuasion.
Initiate a combined conversation for identical events across different policies to avoid repetition. For example, if the customer has three lapsed policies, then all three must be covered in the same conversation.
Initiate conversations pertaining to different events across disparate policies only after addressing the event for which the conversation was started.
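The rule about combining identical events across policies can be sketched as a grouping step: collect every policy affected by the same event type so one conversation covers them all. Policy IDs and event names below are illustrative.

```python
from collections import defaultdict

def group_for_conversation(policy_events):
    """Map each event type to the policies it applies to, so a single
    combined dialogue covers, say, all three lapsed policies at once."""
    grouped = defaultdict(list)
    for policy_id, event_type in policy_events:
        grouped[event_type].append(policy_id)
    return dict(grouped)

events = [
    ("P1", "lapsed"), ("P2", "lapsed"), ("P3", "lapsed"),
    ("P4", "loan_overdue"),
]
combined = group_for_conversation(events)
```

Here the chatbot would raise reinstatement once for P1, P2, and P3 together, and defer the loan conversation until the original request is resolved.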
Logical closure
Conclude the conversation when the customer is disinclined to engage in a persuasive conversation—switch back to the original context, summarize the action items, and close the conversation gracefully.
As chatbots become more mature, insurers must redesign their existing chatbots to expand their scope beyond answering queries and handling service requests.
Persuasion is a function that could prove to be mutually beneficial to insurers and their customers. The journey could start with trials and proofs-of-concept for reactive persuasion before progressing toward proactive persuasion.
Proactive persuasion is currently either a human-intensive activity or a routine reminder email or message. The former is more effective but involves humongous effort and investment, while the latter is impersonal, unengaging, and less effective. In our view, insurers must strike a middle ground and employ persuasive chatbots as an essential component of their digital engagement strategy.