
Research And Innovation

Rules of Engagement: What Enterprises Need to Teach AI Systems

February 7, 2019

With automation becoming the new mantra, AI is the one tool that enterprises seem eager to use, and the temptation is to look for every problem that might be fixed with it. The success of Siri and Alexa has raised expectations for video, speech, and text analytics and for face recognition, while Google and Facebook have upped the ante for conversational systems. There is thus an increasing demand to use these technologies for enterprise projects as well.

While there is indeed ample scope to employ such technologies for automating tasks within an enterprise – especially tasks that involve handling voice calls, emails, videos, and scanned copies of forms and invoices – companies need to address several issues beyond the accuracy of the underlying analytics algorithms before deploying them.

Accuracy is indeed an issue, but it also helps to understand that accuracy is not transferable. The performance of a well-established algorithm cannot be guaranteed to remain the same in a new scenario, since it depends on many factors – from data quality to the available resources. Moreover, it is highly unlikely that any amount of scientific labor can ensure that these technologies work with 100% accuracy in all possible circumstances.

Of course, the fundamental objective of such automation projects is not to establish the accuracy of the algorithms, but something else entirely. The objectives could be anything from achieving higher productivity or better turnaround times for support tasks to improving employee satisfaction by relieving employees of routine tasks and engaging them in work that requires more cognitive skill.

Once the objectives are established, it is time to focus on other key challenges that are very likely to come up. With many tasks now handled in an automated way, the entire process may need to be redesigned:

- How will inputs come to the new system?

- How will privacy and security of data be ensured?

- How will system performance remain steady despite changing input scenarios?

- How will erroneous cases be detected and dealt with?

- How will regulatory compliance, where applicable, be ensured?

- How will disputes be dealt with?
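Two of these questions – detecting erroneous cases and supporting dispute resolution – can be addressed mechanically in the pipeline itself. The sketch below shows one minimal way to do so, assuming a hypothetical `classify` function that returns a label with a confidence score; all names and the threshold are illustrative, not a prescribed design:

```python
# Minimal sketch: route low-confidence predictions to human review and log
# every decision so that disputes can be investigated later.
# The classify callable, threshold, and record fields are all assumptions.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class DecisionLog:
    """Keeps a record of every automated decision for later audit."""
    records: List[dict] = field(default_factory=list)

    def record(self, item_id: str, label: str,
               confidence: float, routed_to_human: bool) -> None:
        self.records.append({
            "item_id": item_id,
            "label": label,
            "confidence": confidence,
            "routed_to_human": routed_to_human,
        })

def decide(item_id: str,
           classify: Callable[[str], Tuple[str, float]],
           log: DecisionLog,
           threshold: float = 0.8) -> str:
    """Accept the model's answer only above a confidence threshold;
    otherwise flag the case for a human reviewer. Every case is logged."""
    label, confidence = classify(item_id)
    routed = confidence < threshold
    log.record(item_id, label, confidence, routed)
    return "HUMAN_REVIEW" if routed else label
```

In a real deployment the log would go to durable, tamper-evident storage, and the threshold would be tuned against the cost of a wrong automated decision versus the cost of human review.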

It is to address such concerns, and to ensure that there is some method to the madness, that the Association for Computing Machinery (ACM) US Public Policy Council has come out with the following codified principles for systems that take over decision-making from humans:

- Awareness – the public should be educated about the degree to which decision-making is automated
- Access and Redress – there should be ways to investigate and correct erroneous decisions
- Accountability – organizations cannot eschew responsibility by deflecting blame to the algorithm
- Explainability – the logic of the algorithm, however complex, must be communicable in human terms[1]
- Data Provenance – information about the provenance and trustworthiness of the data used to train the algorithms should be made available
- Auditability – logging and recordkeeping are needed for dispute resolution and regulatory compliance[2]
- Validation and Testing – these should be done on an ongoing basis, covering regression tests and the vetting of corner cases; red-teaming strategies from computer security can be used to increase confidence in automated systems

As organizations deploy complex systems for automated decision-making, it is imperative that designers build these principles into their systems. Clearly, research is still needed to ensure that principles like explainability and auditability are adhered to at all times. But there has to be a beginning.

The above principles are fairly generic in nature. Implementing them will require laying down a set of relevant standards that also incorporate regional requirements. The ACM US Public Policy Council and the ACM Europe Council Policy Committee are two of the earliest bodies to provide guidance on policies related to algorithmic transparency and accountability.

Even as these principles are being formalized, another interesting topic keeping system designers busy is trust. Consumers have so far trusted the human decision-making processes that guide their lives, though those were certainly not foolproof. There is also documented evidence that, more often than not, two humans presented with the same set of facts can disagree on the course of action. The question remains: will consumer trust be automatically transferred to automated systems? But that is a story for another day!

[1] "Toward Algorithmic Transparency and Accountability," Communications of the ACM, Sept. 2017 (accessed Jan. 2019). https://cacm.acm.org/magazines/2017/9/220423-toward-algorithmic-transparency-and-accountability/fulltext

[2] "Toward Algorithmic Transparency and Accountability," Communications of the ACM, Sept. 2017 (accessed Jan. 2019). https://cacm.acm.org/magazines/2017/9/220423-toward-algorithmic-transparency-and-accountability/fulltext

Lipika is a chief scientist at TCS Research and Innovation and heads analytics and insights practices. Lipika holds a PhD in computer science and engineering from IIT Kharagpur. Her research interests are in the areas of NLP, text and data mining, machine learning, and semantic search.