March 14, 2018

One might wonder why traditional enterprises do not deploy artificial intelligence (AI) technologies as easily or as prolifically as web economy firms. There are reasons for this, some technological and some cultural. Let me outline these challenges in three segments.

Making the Case for AI

First off, there’s always the question of the business case for deploying anything new in the enterprise. While deploying a new payroll system, for instance, one could make the argument, “We are currently paying people who are no longer with the organization; the new system will block these loopholes and save so much money.”

In principle, one could also do this for a new campaign management system that utilizes AI technologies like deep learning. Such a system could look at past data and do back-testing to see whether it can predict which customers who were given discounts actually ended up becoming loyal.

Unfortunately, the low accuracy and false positives that usually accompany such back-testing are rarely strong enough to convince the business that it would gain significantly by deploying predictive systems in the field. For example, a false positive ratio of even 15% might be very costly for an enterprise; giving out so many offers and discounts is not worthwhile when only 1% of the audience actually converts into regular customers.
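A back-of-the-envelope calculation makes the point. Every number below is hypothetical, chosen only to illustrate why indiscriminate offers at a 1% conversion rate do not pay:

```python
# Hypothetical campaign economics: all figures are invented for illustration.

def campaign_net(n_offers, conversion_rate, offer_cost, customer_value):
    """Net result of a discount campaign: conversion revenue minus offer cost."""
    revenue = n_offers * conversion_rate * customer_value  # converted customers
    cost = n_offers * offer_cost                           # every offer costs money
    return revenue - cost

# 10,000 offers at $5 each, 1% conversion, $200 lifetime value per customer
net = campaign_net(10_000, 0.01, offer_cost=5.0, customer_value=200.0)
```

Under these assumed numbers the campaign loses money outright; the conversion rate would have to exceed offer_cost / customer_value = 2.5% just to break even, which is why a back-tested model with many false positives is a hard sell.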

The catch is that AI systems need to be deployed first, so that they can continuously learn from their mistakes and get better. But the business cost of deploying learning systems merely to learn is often hard to justify. It requires a significant mindset change for the business to appreciate that deploying a system to a small set of customers and allowing it to learn is the only way for an AI system to become accurate enough to be profitable and justify scaling. So, this is the first challenge.

Whither Proof of Concept?

The next big challenge is more fundamental. Suppose one were to conceive of a system for conversational advertising. There is no historical data on which to test such a system, because no one has ever attempted conversational advertising before! One could develop a prototype and test it in-house, but conversion ratios or lift numbers from such a test typically fail to inspire confidence, as these are not 'real' tests.

This is a classic counterfactual problem: we don't know what would have happened if we had implemented something in the past, because we never actually went through with it. Again, the only way out of this trap is to actually deploy the new initiative, allow it to learn and get better, and then evaluate how it has performed. The traditional distrust between IT and business also gets in the way of such discussions, compounded by a lack of understanding on both sides of what it takes to do field experimentation in an AI context.

Finally, there are operational issues related to the nature of AI technology. There are three scenarios where one can actually deploy a predictive AI system. The first is a closed-loop system that controls its environment. For example, a system that intelligently controls the air conditioning in a building in order to maintain an optimal temperature while minimizing energy used. Such a system knows when it is wrong, since it has been given the goal of maintaining the temperature in a certain range while trying to minimize the energy consumed. In short, it knows when it’s going wrong and can self-correct.
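As a toy illustration of what "closed loop" means here, consider a bang-bang thermostat sketch: the controller knows when it is wrong (the temperature has left the target band) and self-corrects. The target band and building dynamics below are made-up assumptions:

```python
# Toy closed-loop controller: a bang-bang thermostat. Band limits and the
# building model are invented assumptions for illustration.

def thermostat_step(temp, cooling_on, low=21.0, high=24.0):
    """Decide whether to run the AC given the current temperature."""
    if temp > high:
        return True        # too warm: switch cooling on
    if temp < low:
        return False       # too cold: switch cooling off
    return cooling_on      # inside the band: keep current state

def simulate(start_temp, steps):
    """Run the loop over a toy building that drifts warm when the AC is off."""
    temp, cooling, energy = start_temp, False, 0
    for _ in range(steps):
        cooling = thermostat_step(temp, cooling)
        temp += -0.5 if cooling else 0.3   # toy thermal dynamics
        energy += 1 if cooling else 0      # count AC-on steps as energy spent
    return temp, energy
```

The system's goal (stay in the band, minimize AC-on time) doubles as its error signal, which is exactly what makes self-correction possible.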

Similarly, a next-best-offer system could be deployed to maximize the number of conversions given an inventory of discounts and offers from the past. This then becomes a closed-loop AI deployment that can self-optimize using techniques such as 'reinforcement learning'. If the enterprise is amenable to business experimentation and conducts field deployments, such systems can work well and also get better over time.
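One minimal way to sketch this, assuming an epsilon-greedy multi-armed bandit as the reinforcement-learning technique (the class, offer names, and parameters are invented for illustration):

```python
import random

# Sketch of a next-best-offer loop as an epsilon-greedy bandit: each "arm"
# is an offer, the reward is whether the customer converted. All names and
# numbers are invented for illustration.

class OfferBandit:
    def __init__(self, offers, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {o: 0 for o in offers}    # times each offer was shown
        self.values = {o: 0.0 for o in offers}  # running conversion estimate

    def choose(self):
        if random.random() < self.epsilon:             # explore occasionally
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)   # else exploit the best

    def update(self, offer, converted):
        """Feed back the field outcome (1 = converted, 0 = not)."""
        self.counts[offer] += 1
        n = self.counts[offer]
        # incremental mean of observed conversions for this offer
        self.values[offer] += (converted - self.values[offer]) / n
```

Each field outcome updates the estimates, so the system's offer mix improves with deployment; this is precisely why it has to be in the field to learn.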

The problem is that not many scenarios lend themselves to such closed-loop solutions easily. More often than not, we have open-loop AI systems. For example, consider a question-answering system: the best one can do is allow users to give feedback when they think an answer is wrong, so as to partially close the loop.

In this case, one still needs to put in place a human workflow to figure out what the right answer should be for every such negative feedback, which can then be used to create better training data so as to make the system improve over time.
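Such a workflow can be as simple as a review queue feeding a training set. The class and method names below are invented for illustration:

```python
# Sketch of the human workflow around an open-loop QA system: negative
# feedback lands in a review queue, a human supplies the correct answer,
# and the resolved pair becomes training data. Names are invented.

class FeedbackLoop:
    def __init__(self):
        self.review_queue = []    # (question, wrong_answer) awaiting a human
        self.training_data = []   # (question, correct_answer) for retraining

    def report_wrong_answer(self, question, given_answer):
        """Called when a user flags the system's answer as wrong."""
        self.review_queue.append((question, given_answer))

    def label(self, question, correct_answer):
        """A human reviewer resolves a flagged question with the right label."""
        self.review_queue = [item for item in self.review_queue
                             if item[0] != question]
        self.training_data.append((question, correct_answer))
```

The point is not the code but the staffing it implies: someone must work the queue continuously, or the system never improves.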

This requires an organizational mindset change: an AI deployment needs continuous hand-holding, with the right labels supplied for retraining. Traditional enterprises don't yet appreciate that such workflows are needed to deploy most AI systems effectively.

Need for Testing Protocols

The third case is an inherently open-loop scenario. For example, consider a document image recognition system that translates complex document images, such as invoices or even engineering drawings of various types, into spreadsheets by extracting text, relationships between pieces of text, and hand-noted markings.

While such extraction is a challenge in itself, the system will inevitably go wrong at some point as it processes a large number of documents. To test and validate its work, one would have to employ testers, which is as good as doing the conversion manually. On the other hand, if one did only spot tests, costly mistakes might slip through and propagate forever.

One really needs to follow a calibrated protocol: for example, take a reasonably sized but small corpus and manually verify that the system is doing well, and if not, provide it the right re-training data so it gets better. So, you incur this cost while preparing the system. Then you start deploying it in the field, initially spot-testing a high percentage of the output.

Gradually, as one gets more confident, the spot tests can go down in percentage terms, while the volume of data processed gets bigger and bigger. The idea that you need such a calibrated testing protocol to deploy an AI system is also new to both enterprise IT and business, and it needs to be understood.
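One hypothetical way to codify such a schedule; the thresholds here are assumptions for illustration, not recommendations:

```python
# Sketch of a calibrated spot-testing schedule (thresholds are assumed):
# ratchet the manual-check rate down only while observed errors stay low,
# and snap back to full review the moment they don't.

def next_sampling_rate(current_rate, errors, checked,
                       target_error=0.02, floor=0.01):
    """Return the fraction of output to spot-check in the next batch."""
    if checked == 0:
        return current_rate                   # no evidence: don't change
    observed = errors / checked
    if observed <= target_error:
        return max(current_rate / 2, floor)   # earn the right to check less
    return 1.0                                # regression: re-verify everything
```

The design choice worth noting is the asymmetry: confidence is earned gradually (halving) but lost instantly (back to 100%), which is what keeps costly mistakes from propagating.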

So, there are a variety of reasons, some cultural, some technological, and some operational, that have so far prevented widespread deployment of AI in traditional enterprises, in contrast to the new economy, where everything is online and easy to measure in place: A/B testing is routine, and continuous feedback is available simply by looking at the logs of everyday use.

What other challenges do you foresee? Let us know in the comments!

As the Vice-President, Chief Scientist and Head of TCS Research and a member of TCS’ Corporate Technology Council, Dr. Gautam Shroff is involved in recommending directions to existing R&D, spawning new R&D efforts, sponsoring external research, and proliferating the resulting technology and intellectual property across TCS’ units. He is also part of the AI task force set up by the Government of India.

