

In this article, Ajay Atreya, General Manager at TCS ANZ, explores the challenges faced by financial institutions when implementing Artificial Intelligence (AI) technologies such as Machine Learning (ML) and Natural Language Processing (NLP), and how to overcome those challenges.

In the previous article, Ajay discussed how financial institutions use AI, ML, and NLP technologies to establish a data-driven approach to managing controls and delivering better risk outcomes.


What are the main questions before the C-suite when implementing these changes?

Strategic discussions amongst the C-suite and regulators include forward-looking themes such as how AI-driven technology and analytics can strengthen risk management, how AI can help the risk function move from being a cost centre to delivering business value, and how transforming risk processes can enhance customer experience, reflected in better NPS scores.

First and foremost, identification of the top risk priorities is critical. It will vary from one financial institution to another based on the sector, required regulation, customer segments and the organisation’s business priorities and objectives. Ultimately, the bank’s strategy will drive its risk appetite. Once that is determined, appropriate tools and techniques can be identified after reviewing available data and building algorithms to gauge the proper set of models. Understanding how quickly these interventions can be implemented within an organisation requires a thorough knowledge of how data gets generated, stored, and used.

Diagnostic tools can then alert users to unexpected events or metrics using anomaly detection algorithms. Dashboards incorporating machine learning model outputs can help predict risk outcomes, while NLP can generate narratives that explain the risk profile and changes in underlying exposures.
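As a minimal sketch of the kind of anomaly detection such diagnostic tools rely on, the snippet below flags metric values that deviate sharply from their recent history using a trailing-window z-score. The window size, threshold, and metric series are illustrative assumptions, not a description of any specific product.

```python
from statistics import mean, stdev

def zscore_anomalies(series, window=30, threshold=3.0):
    """Flag indices whose value deviates from the trailing window by more
    than `threshold` standard deviations (window/threshold are assumptions)."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)  # index of the unexpected metric value
    return anomalies

# Illustrative daily control-failure rate: mild noise, then one sudden spike
metric = [0.02 + 0.001 * (i % 5) for i in range(40)] + [0.5] + [0.02] * 9
print(zscore_anomalies(metric))  # flags the spike at index 40
```

In practice such a rule would feed a dashboard alert rather than a print statement, and the threshold would be tuned per metric to balance sensitivity against false alarms.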

There are several ways in which businesses can benefit from an efficient application of technology in risk management:

  • Improved capital management to optimise profits: Building, testing, and managing better risk and capital models to deploy capital more efficiently. AI can help manage model performance, calibration, validation, and analytics.
  • Automation of credit decisions: The ability to compute data from multiple sources, including social media, leads to better pricing for low-risk customers and better management of collections based on data patterns.
  • Climate Risk Assessment: APRA recently initiated an exercise to design and deliver a Climate Vulnerability Assessment (CVA) with prominent Australian Authorised Deposit-Taking Institutions (ADIs). The CVA will quantify aspects of the banks’ exposure to climate risk and provide insights into the analytical and capacity challenges facing entities engaging in climate risk scenario analysis.
  • Digital Farming Solutions: AI models use remote sensing data to derive expected crop yields based on climate monitoring, grading, and acreage.
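The credit-decision point above can be sketched as a toy additive scorecard feeding risk-based pricing. Every feature, weight, and pricing tier here is an illustrative assumption; a production system would use a trained statistical or ML model rather than hand-set rules.

```python
def credit_score(features):
    """Toy additive score from a few illustrative signals (weights are assumptions)."""
    score = 600
    score += min(features["on_time_payments"], 24) * 5  # repayment history, capped
    score -= int(features["utilisation"] * 100)         # credit utilisation, 0-1
    score += 20 if features["stable_income"] else -40   # income stability signal
    return score

def price_loan(base_rate, score):
    """Risk-based pricing: lower rates for higher scores (tiers are illustrative)."""
    if score >= 700:
        return base_rate          # low-risk customer keeps the base rate
    if score >= 620:
        return base_rate + 0.02   # moderate risk premium
    return base_rate + 0.05       # high risk premium

applicant = {"on_time_payments": 30, "utilisation": 0.25, "stable_income": True}
s = credit_score(applicant)
print(s, price_loan(0.06, s))
```

The same structure extends naturally to collections: the score would be recomputed as new payment-pattern data arrives, moving accounts between treatment strategies.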

What are the main challenges associated with using and implementing these technologies, and how can the C-suite overcome these?

Financial institutions find it challenging to deal with even existing risks, which are becoming harder to identify effectively and in a timely manner because they manifest themselves in unfamiliar ways.

It is crucial to start with a clear definition of the problem statement, specific end-state objectives, and time-bound milestones along the journey. A completely new process is not necessarily essential; instead, fine-tuning existing methods can help realise the benefit of new technology interventions. It is highly recommended to define and track KPIs linked to business imperatives to measure outcomes, such as increasing the size of control-testing samples, making better and faster credit decisions, and reducing false positives.
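One such KPI, the false positive rate of risk alerts, can be tracked with a small sketch like the one below; the alert and outcome data are illustrative, and in practice they would come from the institution's case-management records.

```python
def false_positive_rate(flags, actuals):
    """Share of raised alerts that turned out to be benign.

    `flags` and `actuals` are parallel booleans: alert raised vs. risk
    event actually confirmed on investigation.
    """
    raised = sum(flags)
    false_alarms = sum(f and not a for f, a in zip(flags, actuals))
    return false_alarms / raised if raised else 0.0

# Quarter-on-quarter KPI comparison (numbers are illustrative)
q1 = false_positive_rate([True, True, True, False], [True, False, False, False])
q2 = false_positive_rate([True, True, False, False], [True, True, False, False])
print(q1, q2)
```

Tracking this rate over time gives a concrete, business-linked measure of whether a new ML-based alerting model is actually reducing wasted investigation effort.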

Adopting an AI-driven approach also requires a change in mindset among the teams involved: comfort with experimentation, acceptance of occasional failures, adoption of the learnings that emerge from such trials, and a change in the nature of the roles and skills of the personnel involved. Considering this, we are seeing increasing numbers of data scientists recruited by risk units across financial institutions in Australia, changing the overall focus of the workforce. In some organisations, up to 60% of a risk team's workload is focused on regulatory reporting and risk operations. Using new technologies and introducing new skill sets can free up more time for data analysis, creating new value streams and a far-reaching shift for the organisation more broadly.

Finally, the importance of strong governance over the deployment of AI cannot be overstated. Because these technologies depend on continuously evolving datasets, the onus is on firms to validate and calibrate the underlying algorithms. In AI solutions that allow continuous feedback and learning, there is an increased risk that inappropriate feedback goes undetected, compromising the solution's ability to produce accurate results.
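One common way to operationalise this kind of ongoing validation is a distribution-drift check on model inputs or scores. The sketch below computes a Population Stability Index (PSI) over score bands; the distributions are illustrative, and the 0.25 trigger is a widely used rule of thumb rather than a regulatory requirement.

```python
from math import log

def psi(expected, observed, eps=1e-6):
    """Population Stability Index between two binned distributions
    (each a list of fractions summing to 1)."""
    total = 0.0
    for e, o in zip(expected, observed):
        e, o = max(e, eps), max(o, eps)  # guard against empty bins
        total += (o - e) * log(o / e)
    return total

# Score-band distributions at model build time vs. today (illustrative)
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.02, 0.10, 0.30, 0.28, 0.30]
drift = psi(baseline, current)
# rule of thumb: PSI > 0.25 signals the model needs revalidation/recalibration
print(round(drift, 3), drift > 0.25)
```

Running such a check on a schedule, and escalating breaches through the model-risk governance process, is one concrete way to keep continuously learning models within their validated operating range.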


Conclusion

AI will become an integral part of the risk strategy of many financial institutions, helping them deliver better customer service, strengthen oversight of business operations, and gain a competitive advantage. Since AI in financial services is still in its early stages, there will be a learning curve: stakeholders such as risk management teams and business and control functions will need to increase their technical understanding. Organisations that identify such cross-functional teams and incentivise them to collaborate will be better placed to exploit the benefits of AI.

The emergence of Regtech players who can curate niche solutions catering to specific industry needs is another channel for financial institutions to realise these objectives.

While adopting AI, firms will need to evaluate how to encapsulate specific interventions into the Risk Management Framework. 

They must ensure that the solutions remain fit-for-purpose and give businesses the confidence that AI can function within the boundaries set by the firm’s culture and risk appetite.