Writing the code of AI ethics
AI for humanity’s good
AI has the potential to transform the quality of life and enable humans to achieve their highest potential.
It’s no longer just the stuff of science fiction, nor something in the distant future. It’s here now. With AI-powered breakthroughs changing the way we live and do business, designing ethical, trustworthy, and explainable AI is an urgent imperative.
How can we minimize unfair bias in areas such as healthcare, recruitment, and education? How can we design AI models that overcome the black-box conundrum of opaque algorithms, whose verdicts everyone is expected simply to accept as correct? What can we do to ensure AI does not widen global inequality?
In this TCS-sponsored WSJ article, two innovators in cloud transformation dive into these topics and more in a wide-ranging conversation on AI.
Nidhi Srivastava, TCS’ vice president and global head for Google business unit, and Madeleine Elish, head of Responsible AI for Google Cloud, explore how to ensure the AI revolution transforms business for the benefit of humanity.
“AI” on equality
What excites you and worries you most about humanity’s AI future?
Nidhi Srivastava: I’m inspired by today’s conversation around ethical AI. It’s only with consciousness of how bias can sneak into AI, and be amplified by it, that we can take action to minimize it. If we don’t get this right, the consequences range from widening inequalities to entrenching stereotypes. It’s deeply positive that we are grappling today, ahead of the transformations to come, with building a responsible AI framework.
Madeleine Elish: One of the things that excites me is that we are now very seriously considering the social ramifications of the AI revolution. One thing that worries me is overreliance on technology. AI is great at many things, but it still needs humans to make a difference.
Ensuring responsible AI
How can we create a practical roadmap for a future of responsible AI?
Nidhi Srivastava: You need a C-suite officer who is directly accountable for achieving responsible outcomes. Whether it’s a chief digital officer or a chief AI officer, it needs to be someone empowered to make sure the technology doesn’t go haywire in terms of risks. Another key factor is education and training, both across the organization and across society.
Madeleine Elish: While we’ve focused a lot on bias, it’s important to raise other dimensions of responsible AI that are just as critical. We need a mechanism for accountability when the product isn’t working, when it’s being used unfairly, or when its performance is biased.
Making AI explainable
How can we foster explainable AI to protect society from the black-box conundrum?
Nidhi Srivastava: One of the positives I’m seeing is more cloud-native development of AI and ML applications, which bakes better explainability into the algorithms. That means less hand-coding, which can obscure how a solution was built or how its decisions are made.
Madeleine Elish: What explainable AI means depends on the context in which the AI is being used. We must first ask: who is using this technology? What do they need to understand? Ultimately, explainable AI is not necessarily about what the development team originally intended, but rather what the end user needs to know.