Business and Technology Insights

Why Integrating Performance Management with Your Chatbot Strategy Is Critical

March 11, 2019

In 2018, the world witnessed the debut of Google Duplex.  In an awe-inspiring demonstration, Google unveiled the capability of a machine using voice simulation to successfully book a table at a restaurant and schedule a visit to the hairdresser.  In the world of chatbots, this was the pinnacle of performance.  The person on the other end of each interaction was unaware that they were interacting with a machine and responded naturally, and in both instances, the Google chatbot successfully concluded the transaction.  In a machine-first world where initial interactions are frequently carried out by chatbots, how do we ensure that these interactions remain relevant, and more importantly, how do we intervene when things don’t go quite as planned?

Here are some performance management strategies that can be deployed as a part of the development as well as operational process.

Maintaining relevance

Imagine an in-house customer service chatbot. Typical interactions with such a chatbot could revolve around topics such as password management, ordering a new laptop, and so on.  When a new service becomes available within the organization – for example, the provisioning of a mobile phone – the bot will need to be retrained with all the necessary details to handle the different types of objects and intents behind each request.  Driving the timeline on this requires not only the ability to train and update the chatbot’s model, but also the flexibility to integrate with different and new backend systems.
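As a purely illustrative sketch (the article does not assume any particular platform), the Python snippet below shows how a new service such as mobile phone provisioning might be registered with a hypothetical intent catalogue, with the registry triggering a retraining step and routing fulfillment to a new backend handler. All names and interfaces here are assumptions, not a real product API:

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Intent:
    """A single intent plus the example utterances used to (re)train the model."""
    name: str
    training_examples: List[str]
    handler: Callable[[dict], str]  # backend integration that fulfills the request

@dataclass
class IntentRegistry:
    """Hypothetical catalogue of intents; registering a new one schedules retraining."""
    intents: Dict[str, Intent] = field(default_factory=dict)

    def register(self, intent: Intent) -> None:
        self.intents[intent.name] = intent
        self.retrain()  # in practice this would kick off a model-training pipeline

    def retrain(self) -> None:
        utterances = sum(len(i.training_examples) for i in self.intents.values())
        print(f"Retraining on {utterances} utterances across {len(self.intents)} intents")

def provision_mobile_phone(slots: dict) -> str:
    # Placeholder for the call into the new backend provisioning system.
    return f"Mobile phone order raised for model {slots.get('model', 'standard')}"

registry = IntentRegistry()
registry.register(Intent(
    name="provision_mobile_phone",
    training_examples=[
        "I need a new mobile phone",
        "order a company smartphone",
        "can I get a work phone",
    ],
    handler=provision_mobile_phone,
))

The point is not the specific code but the design property it illustrates: adding a new service should be a matter of registering new training examples and a backend handler, not rebuilding the bot.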

A similar challenge occurs when emergency notifications or actions need to be rolled out quickly. For example, an application may be reporting degraded performance, and the chatbot-driven help desk may need to handle this new situation quickly to prevent a hand-off from the bot to human technical support.

When launching new applications in an organization, several situations arise simply because users have no experience with the new services. FAQs and hints have to be rapidly extended and updated, and the underlying models retrained and rolled out for the chatbots to use. In such instances, a key consideration is how quickly the bot’s models can be trained on new conditions. The ability to add new and updated content, services, problems, and FAQs as appropriate should therefore be a key criterion when selecting a chatbot platform or toolkit.

Enabling interventions

The second challenge is what to do when a bot-led interaction is not going well, and how to ensure a smooth transfer to human support.  Let’s consider the following situations:

A typical service desk measure is customer satisfaction. On completion of a call, the customer or user is generally asked to provide feedback. In a traditional human-led environment, these calls can be analyzed and fed into the training plans of the staff concerned. In more sophisticated environments, these details are monitored in real time and, using machine learning techniques, calls are dynamically escalated to a team leader or supervisor in the event of customer dissatisfaction. A similar real-time capability is needed for chatbot interactions: the bot must be able to smoothly stop and escalate an interaction to human support when required.
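A minimal sketch of such a rule, assuming a per-message sentiment score is already available from some model (the scoring itself, the threshold, and the window size are all assumptions to be tuned against real satisfaction data):

from typing import List

NEGATIVE_THRESHOLD = -0.4   # assumed cut-off for a "dissatisfied" message
CONSECUTIVE_LIMIT = 2       # escalate after this many negative messages in a row

def should_escalate(sentiment_scores: List[float]) -> bool:
    # True when the most recent user messages indicate growing dissatisfaction.
    recent = sentiment_scores[-CONSECUTIVE_LIMIT:]
    return (
        len(recent) == CONSECUTIVE_LIMIT
        and all(score <= NEGATIVE_THRESHOLD for score in recent)
    )

# One score per user turn, e.g. produced by a sentiment model in real time
conversation_scores = [0.3, -0.1, -0.5, -0.7]
if should_escalate(conversation_scores):
    print("Escalating conversation to a team leader")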

Similarly, most service desks have strict service level agreements (SLAs) to benchmark the ability of support teams to achieve agreed First Call Resolution (FCR) levels.  Much like the dissatisfaction challenge, when a call is not resolved in one go, the human operator escalates to the second level or hands over to a technician. The expectation of bots will be no different. Consequently, when deploying a new bot, the capability to intervene based on different triggers should be part of the selection criteria, as should the ability to monitor performance in real time and intervene seamlessly when required.
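By way of illustration only, an SLA-style check might look like the sketch below; the turn and time limits are invented thresholds standing in for whatever FCR targets the service desk has agreed:

import time
from dataclasses import dataclass

@dataclass
class ConversationState:
    started_at: float   # epoch seconds when the conversation began
    bot_turns: int      # how many responses the bot has already given
    resolved: bool = False

MAX_BOT_TURNS = 8         # assumed limit before second-level escalation
MAX_HANDLE_SECONDS = 300  # assumed handling-time target

def breaches_sla(state: ConversationState, now: float) -> bool:
    # Flag conversations the bot has not resolved within the agreed limits.
    too_long = (now - state.started_at) > MAX_HANDLE_SECONDS
    too_many_turns = state.bot_turns > MAX_BOT_TURNS
    return not state.resolved and (too_long or too_many_turns)

state = ConversationState(started_at=time.time() - 360, bot_turns=9)
if breaches_sla(state, now=time.time()):
    print("Handing the conversation over to second-level support")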

Above all, the method of intervention must be flexible and contextual. The ability to capture data and integrate it in real time with other systems in order to drive analytics and monitoring is therefore critical. The conversation between a bot and a user must explain what is happening, and capture sufficient information to integrate with the company’s CRM system so that the human agent knows what the user has already tried.  Finally, the relevant updates must be made to the training models so that the bots can handle such situations in the future.
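As a sketch of what that hand-off context could contain (field names, the CRM interface, and the serialization format are all assumptions for illustration):

import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class HandoffContext:
    """Illustrative fields a human agent would want when taking over from the bot."""
    ticket_id: str
    user_id: str
    intent: str
    steps_already_tried: List[str]
    transcript: List[str]

def build_crm_payload(ctx: HandoffContext) -> str:
    # Serialize the context for whatever CRM integration is actually in place.
    return json.dumps(asdict(ctx), indent=2)

payload = build_crm_payload(HandoffContext(
    ticket_id="INC-0042",
    user_id="user-123",
    intent="password_reset",
    steps_already_tried=["self-service reset link", "security questions"],
    transcript=["User: I can't log in", "Bot: Have you tried the reset link?"],
))
print(payload)  # what the human agent sees alongside the escalated conversation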

Manage the performance of your chatbots as you would your human resources

Conversational automation is here to stay as organizations increasingly leverage chatbots to deliver more and more front-line services. However, the need for contextualized, flexible, and responsive customer service will always be a top priority.  Selecting and deploying your chatbot platform is simply the first step toward delivering on that promise. Continuous monitoring, evaluation, and performance management will be as applicable to chatbots as to humans.

Ged Roberts is the Global Head of Operations and Delivery Excellence for TCS’ HiTech business unit. Based out of Amsterdam, he is responsible for ensuring one global service standard across the unit and assuring clients an unmatched service experience.