Ged Roberts
Artificial intelligence (AI) and machine learning (ML) have increasingly embedded automated decision-making in our everyday lives, be it in devices or institutional services. The use of our data to determine outcomes is now a given. How that data is used to arrive at a decision, however, is at best guesswork, if not completely unknown.
Yet, the impact of algorithmic decision-making is significant. From the trivial recommendation of which movie to watch next, to loan approvals, interview selections, or even criminal sentencing guidelines, the uses of black box algorithms are life-changing.
If we are ready to embrace AI-based decisions, then we need to trust the companies that build them. This places the onus for AI-based outcomes on organizations and on how they deploy these systems. Therefore, it is essential that companies establish transparent guidelines on how their solutions work, substantiated by auditable development and deployment mechanisms.
We do not claim to have a solution for all ills, but in this blog, we explore strategies companies can employ to enable traceability, predictability, and accountability of decisions.
A General Approach: Start with the End in Mind
The first step in approaching the challenge of explainable AI is for enterprises to tackle potential risk areas by identifying the positive and negative consequences of outcomes, and how negative outcomes will be redressed. A negative outcome could be, for example, a decision that reflects gender or ethnic bias. Further, when looking at outcomes and consequences, firms will face the next level of challenges: How can each outcome be tested? Is each potential outcome a fair result? How can firms collect data (with due consent) to generate these outcomes?
Simply highlighting the negative consequences of an outcome prompts a different way of thinking about how results are managed.
Consider the example of recruitment prediction—a model can be developed to receive a resume and predict whether a candidate is suitable or not. The outcome could be a simple binary choice of yes or no.
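As a concrete illustration, here is a minimal sketch of what such a screening model might look like in Python with scikit-learn. The resume-derived features, the toy data, and the model choice are all hypothetical and do not describe any production system.

```python
# Hypothetical recruitment-screening sketch: features and data are invented
# purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative resume-derived features: [years_experience, num_relevant_skills, education_level]
X = np.array([
    [5, 8, 3],
    [1, 2, 1],
    [7, 6, 2],
    [0, 1, 1],
    [3, 5, 2],
    [10, 9, 3],
])
y = np.array([1, 0, 1, 0, 1, 1])  # 1 = suitable, 0 = not suitable

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(model.predict(X_test))  # the simple binary yes/no outcome described above
```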
Establishing clear criteria at the conceptualization stage drives two behaviors. First, the scope of a project now encompasses not just building AI models and putting them into production, but also the validation and verification criteria for both development and production. Second, it establishes traceability of goals and decisions, thereby providing the first level of accountability.
Testing and Monitoring Outcomes for Enhanced Transparency
When the scope of a project is defined not just as AI/ML model development, but as one where outcomes are operationally validated and verified, enterprises can develop the project along three highly interrelated streams:
Stream 1: Proceed with model development through the usual activities of clustering, classification, training, and testing, with a specific focus on removing features, and their proxies, that lead to biased outcomes (a proxy-screening sketch follows this list).
Stream 2: Develop mechanisms to demonstrate the explainability of the model. The many methods for this activity range from decision trees to model-agnostic techniques such as LIME and Shapley values. Based on the problem and the advantages and disadvantages of each method, a single technique or a combination of techniques can be used (see the Shapley-value sketch after this list).
Stream 3: Establish and test the outcomes and methods that will monitor the AI models. As noted above when defining the problem, the fairness of outcomes needs to be monitored continuously. If required, an organizational process to redress a query raised against a deployed model can be designed, developed, and institutionalized. If the number of results from the model is too large for enterprise teams to manage manually, they will need to develop effective sampling methods (a monitoring and sampling sketch also follows this list).
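For Stream 1, one simple way to screen for proxy features is to check how strongly each candidate feature correlates with a protected attribute and drop the strong correlates before training. The sketch below illustrates that approach; the column names, threshold, and data are hypothetical.

```python
# Illustrative proxy screening: drop features that correlate strongly with a
# protected attribute (here a hypothetical "gender" column) before training.
import pandas as pd

df = pd.DataFrame({
    "gender":            [0, 1, 0, 1, 0, 1, 0, 1],     # protected attribute, never a model input
    "years_experience":  [5, 4, 7, 6, 3, 2, 8, 7],
    "parental_leave_mo": [0, 10, 0, 12, 0, 9, 1, 11],  # likely proxy for gender in this toy data
    "num_skills":        [6, 7, 5, 8, 4, 6, 9, 7],
})

PROXY_THRESHOLD = 0.8  # illustrative cut-off
corr_with_protected = df.corr()["gender"].drop("gender").abs()
proxy_features = corr_with_protected[corr_with_protected > PROXY_THRESHOLD].index.tolist()

# Train only on features that are neither protected nor strong proxies.
model_features = [c for c in df.columns if c not in ["gender"] + proxy_features]
print("Dropped as proxies:", proxy_features)
print("Features used for training:", model_features)
```

Correlation checks catch only linear proxies; in practice, teams combine them with fairness-aware tooling and domain review.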
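For Stream 2, Shapley values can be computed per decision so that every prediction carries a feature-level explanation. The sketch below assumes the open-source shap package and refits the toy data from the recruitment sketch above; it is a minimal illustration, not a prescribed toolchain.

```python
# Minimal Shapley-value explanation sketch (assumes `pip install shap`).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy data as in the recruitment sketch above (illustrative only).
X = np.array([[5, 8, 3], [1, 2, 1], [7, 6, 2], [0, 1, 1], [3, 5, 2], [10, 9, 3]])
y = np.array([1, 0, 1, 0, 1, 1])  # 1 = suitable, 0 = not suitable
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # explainer specialized for tree ensembles
shap_values = explainer.shap_values(X)  # per-feature contribution for each prediction

# Each candidate's values show how much each feature pushed the prediction
# towards "suitable" or "not suitable" (output layout depends on the shap version).
print(shap_values)
```

LIME offers a comparable per-instance view via local surrogate models; which technique fits best depends on the model family and the audience for the explanation.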
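For Stream 3, monitoring can be as simple as logging every decision, sampling a fraction for human review, and comparing outcome rates across groups. The sketch below is a hypothetical illustration; the record fields, sampling rate, and groups are invented.

```python
# Hypothetical monitoring sketch: sample logged decisions for human review and
# compare positive-outcome rates across groups to flag potential bias.
import random

def sample_for_review(decisions, rate=0.05, seed=42):
    """Randomly select a fraction of logged decisions for manual review."""
    rng = random.Random(seed)
    return [d for d in decisions if rng.random() < rate]

def outcome_rate_by_group(decisions, group_key="group", outcome_key="approved"):
    """Positive-outcome rate per group; large gaps warrant investigation."""
    totals, positives = {}, {}
    for d in decisions:
        g = d[group_key]
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(d[outcome_key])
    return {g: positives[g] / totals[g] for g in totals}

decisions = [
    {"id": 1, "group": "A", "approved": True},
    {"id": 2, "group": "B", "approved": False},
    {"id": 3, "group": "A", "approved": True},
    {"id": 4, "group": "B", "approved": True},
]
print(sample_for_review(decisions, rate=0.5))
print(outcome_rate_by_group(decisions))
```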
The results of all three streams are baselined and deployed into production. Once in production, clear governance and reporting ensure auditability and visibility of outcomes. The reporting covers all aspects of the model: operational performance, explainability values, external queries, and their resolutions. Models that do not yield the planned outcomes are reconfigured or even withdrawn.
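To make that reporting concrete, a per-model governance record might capture the dimensions listed above in one auditable structure. The field names and values below are hypothetical, not a TCS standard.

```python
# Hypothetical per-model governance record; fields and values are illustrative.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelGovernanceReport:
    model_id: str
    version: str
    operational_metrics: Dict[str, float] = field(default_factory=dict)    # e.g. accuracy, latency
    explainability_summary: Dict[str, float] = field(default_factory=dict) # e.g. mean feature contribution
    external_queries: List[str] = field(default_factory=list)              # challenges raised against decisions
    query_resolutions: List[str] = field(default_factory=list)
    action: str = "retain"  # "retain", "reconfigure", or "withdraw"

report = ModelGovernanceReport(
    model_id="recruitment-screening",
    version="1.0",
    operational_metrics={"accuracy": 0.91, "p95_latency_ms": 42.0},
    explainability_summary={"years_experience": 0.31, "num_skills": 0.22},
)
print(report)
```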
Deploying Explainable AI Organization-Wide
Organizations often implicitly assume that explainable AI applies only to outcomes that directly affect people. However, if they establish a policy of making all model-based decisions explainable, they can deploy those models with rigor and discipline.
For example, an organization’s vendor management team may cancel contracts with a supplier based on predicted component failures. That supplier would need to understand how the decision was reached, which an explainable AI model can demonstrate.
When explainable AI/ML becomes a way of life for model development, enterprises internalize why it is necessary and, consequently, can optimize and automate their development processes.
Maximizing Explainable AI with Governance
Transparency is necessary irrespective of whether decisions concern people or machine components. As organizations roll out automated solutions at scale, they can ensure AI transparency by explaining what an AI model does, identifying how it achieves its outcomes, and redressing any unjustifiable results. Clear governance ensures accountability and transparency, which is key to building trust with an organization’s end customers.