Despite ongoing progress in the field of explainable artificial intelligence (AI), most proposed techniques remain limited in their practical utility and in the nature of the explanations they provide. Given the inherent risks of incorrect explanations, some researchers have proposed abandoning post-hoc explanations of opaque machine learning (ML) models entirely and relying solely on interpretable ML. This raises an important question about the end goal of explainable AI: “Is it simply assisting the AI developer who is trying to debug an ML model?”
For explainable AI to be useful in deployment, the objective must be to provide explanations to the end user of the AI system, who is most likely a domain expert such as a doctor or an engineer.
In this webinar, held on December 16, 2021, scientists from TCS Research, along with ACM India, sought to understand how explainable models can aid human-AI collaboration by enabling the model and the domain expert to speak the same language. This requires explainable models to incorporate domain constraints, protocols, causal relationships, and concepts unique to the domain in order to generate meaningful explanations that are easily verifiable.
Prof. Ashwin Srinivasan, Department of Computer Science, BITS Goa, and Head of APPCAIR, BITS, delivered a keynote on “Two Understandability Axioms for ML with Humans-in-the-Loop”.
A fireside chat on “The Role of Domain Knowledge in Modern Understandable AI” followed. Moderated by Lovekesh Vig, the dialogue featured an enthralling exchange of perspectives on the use of AI techniques to make models more explainable, among Prof. Srinivasan, Prof. Srikanta Bedathur (IIT Delhi), Tanuja Ganu, Principal Research Engineering Manager at Microsoft Research India, and Indrajit Bhattacharya, Head of the Knowledge Representation and Reasoning Group at TCS Research.