May 28, 2018

Part 3: Building an NLP-driven Interface

In our last conversation, we broke down the build plan for the next generation of Immersive Analytics (IA) interfaces. When it comes to input modalities, natural language in the form of text, voice, and gestures seems to be the most obvious choice. No longer an emerging technology, Natural Language Processing (NLP) is being widely adopted by major institutions. Swedbank’s chatbot tool Nina handles as many as 40,000 conversations a month and resolves 81% of customer issues.

In its current state, NLP is essentially deployed to translate natural language into machine language. But, as the technology matures, especially the Artificial Intelligence (AI) component, the system will get better at “understanding” the query and start to deliver answers rather than just search for results.

This will bring us closer to the vision we discussed at the outset—you won’t just be asking questions in natural language, you’ll be receiving the answers in the same format. NLP expands the scope of what a Business Intelligence (BI) tool can deliver in terms of insights by making unstructured human inputs understandable to a machine.

Why NLP Matters to IA

For a truly interactive IA, the interface should accept user inputs as both pre-defined and free-format text, and as voice commands or queries. If text is the preferred modality, a typical virtual or physical QWERTY keyboard can be used, further enhanced with a ‘type ahead’ feature. Voice inputs can be captured through a standard microphone.
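The ‘type ahead’ feature mentioned above can be sketched as a simple prefix match against a catalogue of known queries. The phrase list and ranking below are illustrative assumptions, not a real product API:

```python
# Minimal 'type ahead' sketch: suggest known analytics queries that
# begin with what the user has typed so far.

def type_ahead(prefix: str, known_queries: list[str], limit: int = 3) -> list[str]:
    """Return up to `limit` known queries starting with the typed prefix."""
    p = prefix.lower().strip()
    matches = [q for q in known_queries if q.lower().startswith(p)]
    return sorted(matches)[:limit]

queries = [
    "show revenue by region",
    "show revenue by quarter",
    "show customer churn rate",
]
print(type_ahead("show revenue", queries))
# → ['show revenue by quarter', 'show revenue by region']
```

A production system would rank suggestions by usage frequency and fuzzy similarity rather than plain alphabetical order.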

With speech recognition, speaker recognition, and human-noise detection in place, nuances of pronunciation, speed, acoustics, and grammar can be translated to text. The key components for this layer will be automatic speech recognition and natural language understanding. To convert voice to text accurately, the underlying Deep Neural Network (DNN) based algorithm will need to be extensively trained.

Developing a Natural Language Translation Engine

To build such a voice-driven interface, an interaction manager will form the top-most layer of the IA interface’s translation engine and operate like a chatbot. This intent management layer will use an NLP-based knowledge synthesis framework to interpret the context and discern the action necessary for fulfilling the end user’s request. These can be classified as dialog, information, and action tasks, using knowledge graphs. By inferring the context of the interaction, the intent manager can enable the ‘type ahead’ assistance feature. Next, a state management layer will be useful for contextually linking content in an interactive and iterative mode of conversation.
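The intent-management layer described above can be illustrated with a toy classifier that sorts an utterance into dialog, information, or action tasks. The keyword rules here are purely illustrative assumptions; a real system would use an NLP model and a knowledge graph as described:

```python
# Toy sketch of the intent-management layer: classify a user utterance
# as a dialog, information, or action task via keyword rules.

INTENT_RULES = {
    "action": ("export", "schedule", "email", "refresh"),
    "information": ("show", "what", "how many", "compare"),
}

def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, keywords in INTENT_RULES.items():
        if any(k in text for k in keywords):
            return intent
    return "dialog"  # fall back to conversational handling

print(classify_intent("Show me sales by region"))       # → information
print(classify_intent("Email this report to my team"))  # → action
print(classify_intent("Hello there"))                   # → dialog
```

The state-management layer would then carry the classified intent forward so that follow-up utterances ("now break it down by quarter") inherit context from the previous turn.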

For a text-only interface, a query builder layer can manage both pre-defined and free-format text inputs to create either queries for SQL and NoSQL databases or search strings for document-based data sources. With pre-defined text, the input can be mapped to available canned reports and the corresponding analytics Application Programming Interface (API). Free-format text, however, is a little more complicated to interpret, since the system will need to understand the user’s intent while developing a search string or query. It will need to validate the text input against available dictionaries, understand the taxonomy and relationships between each element of the input, and semantically parse it. Once the query string is ready, it can be used to retrieve information from SQL, NoSQL, or document-oriented databases. The expected text output can be a summary report—optionally converted to audio and played back to the end user.
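The final step of the query-builder layer—turning a parsed request into a database query—can be sketched as below. The table and column names, and the assumption that the input has already been parsed into a metric and a dimension, are hypothetical; a real system would first validate tokens against dictionaries and a semantic parser, as described above:

```python
# Minimal sketch of the query-builder layer: map a parsed free-format
# request (metric + dimension) onto a SQL string.

KNOWN_METRICS = {"revenue": "SUM(revenue)", "orders": "COUNT(order_id)"}
KNOWN_DIMENSIONS = {"region", "quarter", "product"}

def build_query(metric: str, dimension: str, table: str = "sales") -> str:
    # Validate against the known vocabulary before emitting SQL.
    if metric not in KNOWN_METRICS or dimension not in KNOWN_DIMENSIONS:
        raise ValueError("unrecognized metric or dimension")
    return (f"SELECT {dimension}, {KNOWN_METRICS[metric]} "
            f"FROM {table} GROUP BY {dimension}")

print(build_query("revenue", "region"))
# → SELECT region, SUM(revenue) FROM sales GROUP BY region
```

Building the statement from a validated vocabulary, rather than interpolating raw user text, also guards against SQL injection.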

What the Future Holds

A step forward for IAs would be to enable them to understand not just text and speech inputs, but also images and gestures. Eye tracking technologies driven by AI and machine learning algorithms are already turning consumer-level mobile and web cameras into such powerful tools. Amazon’s cashier-less retail store is certainly a step in that direction. The final episode in this series will take on visual technologies such as Mixed Reality (MR) and Virtual Reality (VR), which are slated to revolutionize the way humans interact with machines.

In the meantime, do share your views on how an NLP-powered IA can make analytics intuitive.

Mahesh has about 24 years of experience across various technologies. He has played a key role in addressing ever-changing business demands by architecting state-of-the-art IT solutions. As a technology leader, he has successfully incubated several high-impact, high-revenue-potential IT service lines and technology centers of excellence (CoE). His expertise spans various technology and architecture domains, and while he has been associated with several industry verticals, he has worked most extensively in Banking and Financial Services.
