Part 4: Towards a More Immersive Analytics Interface
In the previous installment, we discussed how Natural Language Processing (NLP) capabilities can be integrated into the Immersive Analytics (IA) interface. But what's next?
Perhaps the answer lies with the same technologies predominantly harnessed by the gaming industry: Virtual Reality (VR) and Mixed Reality (MR). The functionality that Google Cardboard and the Oculus Rift offer can just as easily be introduced to Business Intelligence (BI) applications. Complex information is best presented and understood visually, so why not bring the same concept to data analytics and make the experience even more interactive and immersive? The necessary input hardware is already available in the form of computer web cameras and infrared sensors.
The next generation of IA interfaces should enable business leaders to simply pull up charts and graphs, and manipulate them with just a few gestures.
Reading Gestures, Executing Commands
The most common gesture input device will likely be a typical VR or MR headset. The unit's built-in camera will capture information from the user's first-person perspective. Web cameras mounted on the computer will capture a second-person view of the user making gestures and translate them into performable tasks. To take this a notch further, a third view can be integrated: a camera, possibly fitted into a projector, that understands the human operator's interaction with the application and projects it onto a whiteboard.
This will further open up the scope to interact with virtual objects on the board itself. Such a VR- or MR-powered IA environment will be able to read the user's hand gestures (flip, zoom, and click) when interacting with the application. The system's camera can capture images and analyze them, factoring in the hand's distance from and motion relative to the camera. Hundreds of thousands of such images can be used to train a Deep Neural Network (DNN), building a recognition model robust enough to ensure accuracy. For gesture-based interaction to work, the analytics application's output has to be made available for end users to interact with.
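To make the idea concrete, here is a minimal sketch of the recognition step. The `HandFrame` schema and the thresholds are hypothetical; a trained DNN would replace the hand-written rules below, but the motion features it would consume (pinch distance, lateral movement, distance from the camera) are the same ones the heuristics use.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HandFrame:
    """One camera frame of tracked hand data (hypothetical schema)."""
    x: float      # horizontal position, normalized 0..1
    pinch: float  # thumb-index distance, normalized 0..1
    depth: float  # distance from the camera, in metres

def classify_gesture(frames: List[HandFrame]) -> str:
    """Map a short window of hand frames to a chart command.

    Placeholder heuristics standing in for a trained DNN classifier.
    """
    if len(frames) < 2:
        return "none"
    dx = frames[-1].x - frames[0].x           # lateral motion
    dpinch = frames[-1].pinch - frames[0].pinch  # fingers spreading/closing
    ddepth = frames[-1].depth - frames[0].depth  # motion toward the camera
    if abs(dpinch) > 0.2:
        return "zoom_in" if dpinch > 0 else "zoom_out"
    if abs(dx) > 0.3:
        return "flip"
    if ddepth < -0.05:   # hand pushed toward the camera
        return "click"
    return "none"
```

In a production system, the same function signature would wrap the DNN's inference call, keeping the rest of the application unchanged as the model improves.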
Making Analytics Immersive
As with any VR/MR setup, the IA interface needs to be connected to a head-mounted display (HMD) and a position sensor. While the HMD will allow the user to visualize the output, the position sensor will detect the gestures, enabling the user to interact with the information they see. The key technology drivers for such a setup will be an analytics engine that draws data from various channels and an immersive visualization server that translates this data into content viewable through the HMD.
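The two drivers named above can be sketched as a simple pipeline. The class names, the channel dictionary, and the scene dictionary are illustrative assumptions, not any real product's API; a real visualization server would emit 3D geometry rather than a summary dict.

```python
class AnalyticsEngine:
    """Draws data from various channels (stubbed as an in-memory dict)."""
    def __init__(self, channels):
        self.channels = channels  # e.g. {"sales": [120, 95, 140]}

    def query(self, channel):
        return self.channels.get(channel, [])

class VisualizationServer:
    """Translates raw figures into content viewable through the HMD."""
    def render(self, series):
        # Stand-in for generating 3D chart geometry for the headset.
        return {"type": "bar_chart", "bars": series, "max": max(series, default=0)}

# Wiring the two drivers together:
engine = AnalyticsEngine({"sales": [120, 95, 140]})
scene = VisualizationServer().render(engine.query("sales"))
```

The separation matters: the engine can evolve its data channels and the server its rendering pipeline independently, with the scene description as the contract between them.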
For VR-enabled IAs, the user will need interaction devices to manipulate the data successfully, a role typically reserved for the HMD. In the case of MR-enabled IAs, the user will not only be able to interact with the displayed data but also collaborate in real time with other users wearing HMDs. To enhance usability and support additional modalities, both types of IAs should ideally accept commands from conventional input devices such as keyboards and mice.
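Supporting several modalities side by side usually means normalizing every input event into one shared command vocabulary. The event schemas and key bindings below are purely illustrative assumptions, not any real device API:

```python
def normalize_event(event: dict) -> str:
    """Fold events from different input devices into one command vocabulary.

    Hypothetical event schemas: gestures arrive pre-labelled, keyboard
    events carry a key, mouse events carry a button.
    """
    device = event.get("device")
    if device == "gesture":
        return event.get("label", "none")          # e.g. "zoom_in", "flip"
    if device == "keyboard":
        keymap = {"+": "zoom_in", "-": "zoom_out", "f": "flip"}
        return keymap.get(event.get("key"), "none")
    if device == "mouse":
        return "click" if event.get("button") == "left" else "none"
    return "none"
```

With this layer in place, the charting code only ever sees commands like `"zoom_in"`, regardless of whether they originated from a hand gesture or a keystroke.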
If we are to truly reengineer the IA, we might want to take a page from Facebook's playbook. The social media giant recently unveiled a project that used electrodes implanted directly in a human brain to translate thoughts into actions. While still in its infancy, this technology could one day allow business leaders to make decisions at the speed of thought.
Where do you think we are heading with IA in analytics? Tell us in the comments section below.