Tune in to this TCS Research webinar, in collaboration with ACM India, where experts from industry and academia discuss the efficacy of explainable AI in assisting the end user.
Despite ongoing progress in explainable artificial intelligence (AI), most proposed techniques remain limited in their practical utility and in the nature of the explanations they provide. Given the inherent risks of incorrect explanations, some researchers now advocate relying solely on interpretable machine learning (ML). This raises an important question about the end goal of explainable AI: is it simply to assist the AI developer trying to debug an ML model?
For explainable AI to be useful in deployment, the objective must be to provide explanations to the end user of the AI system, who is most likely a domain expert such as a doctor or an engineer.
In this webinar, held on December 16, 2021, scientists from TCS Research and ACM India sought to understand how explainable AI can aid human-AI collaboration by enabling domain experts and AI systems to speak the same language. They discussed how this essentially requires explainable models to incorporate domain constraints, protocols, causal relationships, and domain-specific concepts in order to generate meaningful explanations that are easily verifiable.
Prof. Ashwin Srinivasan, Department of Computer Science, BITS Goa and Head of APPCAIR, BITS, delivered a keynote on 'Two understandability axioms for ML with humans-in-the-loop.'
A fireside chat on 'The role of domain knowledge in modern understandable AI' followed. Moderated by Lovekesh Vig, the dialogue offered perspectives on the use of AI techniques to make models more explainable, with contributions from Prof. Srinivasan, Prof. Srikanta Bedathur (IIT Delhi), Tanuja Ganu, Principal Research Engineering Manager at Microsoft Research India, and Indrajit Bhattacharya, Head of the Knowledge Representation and Reasoning Group at TCS Research.