
Why AI Systems Need More Data to Reduce Bias and Boost Results

 

Dinanath Kholkar
Vice President & Global Head, Analytics & Insights
15 July 2019

Enterprises are adopting artificial intelligence (AI) at an amazingly rapid pace to automate both manual and knowledge work. The number of businesses implementing AI has grown 270% in the past four years, and tripled in the past year alone, according to Gartner.

To improve the accuracy and efficacy of AI systems, organizations are collecting more and more data; large volumes of data are essential to reducing machine bias. But all this data needs to be managed rigorously and thoughtfully. It also must be updated continuously, or systems risk making biased recommendations, taking incorrect actions, or (in the security sector) generating time-wasting and demotivating false alarms and skewed problem prioritization.

All this is especially important when using AI to monitor a company’s operations, the customer experience, and security. For example, the state of Michigan’s child welfare system recently made mistakes in tracking the status of neglected children due to poor data quality, including data entry errors.

Companies are quickly recognizing the importance of supplying their intelligent systems with updated, accurate data. In a recent Forrester survey, 60% of decision makers at companies adopting AI cited ensuring data quality as either challenging or very challenging, making it their top barrier to delivering AI capabilities effectively.

This challenge is critical, especially in delivering superior customer experiences. Forrester predicts that cognitive systems and robotic process automation (RPA) will eliminate 20% of all service desk interactions by the end of this year. Companies don’t want their AI-enabled service desks giving customers wrong information or bad recommendations because the data on which those recommendations are based is outdated or incorrect.

Refreshing Data for Monitoring Operations and Customer Experiences

Systems that monitor operations to determine the next-best action (whether the system performs the action itself or alerts a person to act) depend upon a range of data inputs. Oil and gas companies have automated preventative maintenance for years, monitoring the wear on their equipment through sensors. That lets them fix that equipment before it shuts down a factory or disrupts a supply chain. Corporate procurement departments use these systems to flag anomalies in purchasing behaviors to ensure compliance with company policies. And the retail sector is making the largest investments in AI systems to automate customer service and provide product recommendations, according to IDC.
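
To make this concrete, here is a minimal sketch of the kind of anomaly flagging a procurement system might run over purchase amounts. The data, the single amount feature, and the contamination rate are all illustrative assumptions, not a description of any specific product:

# Minimal sketch: flagging anomalous purchase amounts with an Isolation Forest.
# The data here is synthetic; a real system would feed in transaction records
# from an ERP or procurement platform.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulate routine purchases around $500, plus a handful of outliers.
routine = rng.normal(loc=500, scale=50, size=(980, 1))
outliers = rng.uniform(low=5_000, high=20_000, size=(20, 1))
amounts = np.vstack([routine, outliers])

# contamination is the expected share of anomalies -- a tunable assumption.
model = IsolationForest(n_estimators=100, contamination=0.02, random_state=0)
labels = model.fit_predict(amounts)  # -1 marks an anomaly, 1 marks normal

print(f"Flagged {(labels == -1).sum()} of {len(amounts)} purchases for review.")

The flagged purchases would then be routed to a compliance officer, which is the "alert a person to act" branch of the next-best-action pattern described above.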

The signals these systems use can arrive in the form of text, voice, images, video, transaction streams, and sensor data. They can also come from social media, emails or online chats. As noted, the more data a system has, and the greater its variety, the more accurate its outputs, and the less potential for bias.

Unfortunately, most companies are ignoring data that could help track the quality of the customer experience: data from digital interactions or calls, chats, and images shared on social media, according to Forrester.

This makes it essential for companies to continuously seek out new forms of relevant data to inform their systems’ analyses.

Adding Algorithms to Security Systems

At banks and other financial institutions, security professionals sift through inputs from various channels to track threats, identify risks, and spot anomalies. It is difficult, time-consuming work. Increasingly, however, AI systems are doing that work with greater speed and accuracy, freeing those professionals, in both the private and public sectors, for more deliberative, value-added work.

Yet identifying potentially illegal or sanctioned transactions, or spotting new threats in the cyber landscape, requires algorithms guided by a large volume of high-quality data. Lacking that, organizations run the risk of being led astray by a torrent of false positives and other suboptimal results.
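
As a rough illustration of that trade-off, the sketch below scores synthetic transactions and shows how the choice of alert threshold trades missed threats against false alarms. The score distributions are invented for illustration; a production system would use scores from a trained screening model:

# Minimal sketch of the false-positive trade-off in transaction screening.
import numpy as np

rng = np.random.default_rng(seed=1)

# 10,000 legitimate transactions and 50 truly suspicious ones, each with a
# model-assigned risk score between 0 and 1 (synthetic distributions).
legit_scores = rng.beta(2, 8, size=10_000)
suspect_scores = rng.beta(8, 2, size=50)

for threshold in (0.5, 0.7, 0.9):
    caught = int((suspect_scores >= threshold).sum())
    false_alarms = int((legit_scores >= threshold).sum())
    print(f"threshold={threshold}: {caught}/50 true hits, "
          f"{false_alarms} false alarms for analysts to triage")

With a weak model or poor data, no threshold works well: lowering it buries analysts in false alarms, while raising it lets real threats through. Better, fresher data shifts the whole curve, which is the point of the practices below.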

Three Steps to Optimizing Your AI Systems

To make sure a company’s AI systems are operating with the best possible data, organizations must shift their software quality assurance practices from examining lines of code for bugs to examining the quality of the data and of the algorithms intelligent systems use to make sense of that data.

Three practices are critical to doing this:

1.    Ensuring that data sets are complete and understood by implementing a mature data management capability, reviewing the AI system’s data inputs, and vetting the data sources for accuracy and completeness (a minimal sketch of such checks appears after this list).

2.    Employing experts to validate AI system outputs. For example, a customer experience expert could review a system that is producing automated responses to customer requests.

3.    Updating data to give the AI algorithm more evidence from which to draw insights. This will mean taking advantage of new technologies, such as vision-enabled systems that can interpret text and images and supply new data to the system.
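
As a sketch of the kind of automated vetting the first step implies, the example below runs basic completeness, validity, and freshness checks on a small table before it reaches an AI system. The column names, thresholds, and sample data are illustrative assumptions:

# Minimal sketch of automated data-quality checks on an AI system's inputs.
import pandas as pd

def run_quality_checks(df: pd.DataFrame, as_of: pd.Timestamp,
                       max_age_days: int = 30) -> dict:
    """Basic completeness, validity, and freshness checks on a data set."""
    return {
        # Completeness: share of missing values in each column.
        "missing_ratio": df.isna().mean().to_dict(),
        # Validity: negative amounts are likely data entry errors.
        "negative_amounts": int((df["amount"] < 0).sum()),
        # Freshness: records older than the cutoff need refreshing.
        "stale_records": int(
            ((as_of - df["recorded_at"]).dt.days > max_age_days).sum()),
    }

sample = pd.DataFrame({
    "amount": [120.0, -5.0, None, 300.0],
    "recorded_at": pd.to_datetime(
        ["2019-07-01", "2019-06-30", "2019-01-15", "2019-07-10"]),
})
print(run_quality_checks(sample, as_of=pd.Timestamp("2019-07-15")))

Checks like these can run automatically each time data is refreshed, so that stale or erroneous records are caught before they skew the system’s recommendations.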

AI has unlimited potential to improve work and create value. But that will only happen if the data informing an AI system is continually refreshed, expanded, and validated.

Dinanath Kholkar is Vice President and Global Head of Analytics at TCS. He is the author of the article “Building the Unbiased and Continually Self-Improving Machine,” in TCS Perspectives Volume 12.

About the author(s)

Dinanath (Dina) Kholkar is Vice President & Global Head, Analytics & Insights at Tata Consultancy Services (TCS). In this role, Dina guides some of the world’s best companies in their journeys to unlock the potential of their data through the power of analytics and artificial intelligence (AI) to uncover new business opportunities, drive business growth and increase revenue streams.

Dina advocates ‘data centricity’ as a strategic lever for business growth and transformation and believes that the need of the hour is ‘evangelization’ of data & analytics. His thought leadership, together with his team’s expertise and collaborative work with customer organizations, empowers those organizations to harness their data for real-time decision making and to succeed in their Business 4.0 transformation journeys.

With nearly 30 years of industry experience, Dina has held diverse leadership roles across the organization and delivered business value to customer organizations across all major industries. He was responsible for building TCS’ data warehousing and data mining expertise and laying the foundation for the organization’s Business Intelligence practice. Dina has also led TCS’ Business Process Services (BPS) & Business Analytics units and served as the CEO & Managing Director of TCS eServe.

Dina is a member of the Board of Governors of his alma mater, Veermata Jeejabai Technological Institute (VJTI), Mumbai, and is actively involved in the institute’s modernization journey.

Dina resides in Pune, India with his wife, son and parents. Outside of work, he serves as IEEE Pune Section Chair and provides leadership guidance in the areas of NextGen engineering education, agriculture and open data. He is a sports enthusiast who loves playing badminton and long-distance running. He loves to travel to explore new places and is a passionate photographer.
