Much has already been said about Generative Artificial Intelligence (GenAI). While different forms of AI have been around for longer than many realize, the last couple of years have seen it become much more mainstream.
And with that, we see many more people commenting on it – often with very little expertise on the subject.
This article does not seek to assess or replay those warnings, nor is it designed to educate on the details of GenAI. Instead, it starts from the position that GenAI is a significant game-changer, with huge unquantified potential when in the right hands with the right intent.
In the context of risk functions within the financial services industry, the question then becomes how best to control it. How should control frameworks consider GenAI risks, and do they need to evolve from their current state to do so? This article develops the thinking across these considerations and opens with some key insights:
The industry is already making advanced use of AI tools – from credit scoring and underwriting to fraud detection; from customer onboarding to transaction monitoring, algorithmic trading, data analysis, cyber threat identification; and the chatbot/virtual assistants that many of us have encountered are becoming increasingly sophisticated and often are a first point of contact into an organization.
While the cost of inaction is high, there’s growing awareness of the unknowns. Ethical concerns, operational risks, and shifting regulatory expectations necessitate a balanced approach.
Current investments in GenAI are largely skewed toward development and deployment. However, with risk management lagging behind, there is a clear need to elevate focus on governance, control, and a deeper understanding of GenAI-driven risk. Without this recalibration, the risk governance gap will widen as adoption accelerates.
A key lesson from the 2008 financial crisis was that organizations didn’t understand the risks they were taking, whether because of a lack of data or because products had evolved to such complexity that the risks inherent in them were not understood.
Given the lessons from that crisis and the controls that followed, the likelihood of that scenario recurring is reduced, but the factors that contributed to it could recur – history doesn’t repeat, but it rhymes. A similar challenge of understanding inherent risk can be placed at the door of GenAI.
Risk functions face a dual consideration here. They will look to embrace new technologies to enhance their own oversight, while also needing to ensure that the application of those technologies across their organizations is effectively controlled within risk appetite.
We already have clear examples where AI tools can be a force for good in risk mitigation and building operational efficiency. But naturally there are also risks, and when some commentators talk in terms of existential threats, there needs to be a strong understanding of those risks. AI-related risks cut across virtually all the established non-financial risk themes – conduct, data privacy, security, models, reputation, etc.
These risks require assessment both individually and holistically, a task made more challenging by limited grasp and knowledge of the tools at many levels. The expertise needed to understand the technology is scarce, and the challenge is compounded by the pace at which GenAI is evolving and by the nature of AI models, whose internal decision-making is opaque. First- and second-line teams will face questions on how they can properly assess the risk if they don’t fully understand how the technology works.
GenAI risk identification is not straightforward when the risks are diverse and evolving, the understanding is incomplete and requires new expertise, and the regulatory position is developing
Regulation will be a key enabler, but not a complete answer. The European Union’s Artificial Intelligence Act, the first major attempt at horizontal AI regulation, lays out risk-based classifications and obligations for high-risk AI systems. While it offers a much-needed foundation, it leaves several GenAI-specific risks, such as prompt injection and the misuse of open models, only partially addressed.
In the UK, the Financial Conduct Authority (FCA) has adopted a principles-based approach, emphasizing transparency, accountability and ethical use of AI. However, its guidance remains non-binding and technology-agnostic, leaving room for interpretation and inconsistency in implementation across firms.
Most existing risk and control frameworks within financial institutions are not designed with GenAI’s scale, autonomy or unpredictability in mind. Waiting for detailed, prescriptive regulation is neither realistic nor prudent. Institutions must take the lead in integrating AI-specific controls (spanning model governance, data lineage, third-party oversight and ethical risk) into existing non-financial risk structures.
There is also an important political nuance as governments seek to maximize the competitive potential of GenAI within their own markets, at a time when geopolitical risks are heightened. That gives regulators an additional challenge as they balance this against their traditional responsibilities of financial market stability and consumer protection.
Collaboration between industry and regulators will therefore be key in shaping the future landscape – for example, through the sharing of plans and the development of regulatory sandboxes – to ensure responsible use.
As precise approaches take shape, regulators have signaled where they will focus, and their scrutiny will be intensive
Regulators will expect organizations to implement policies and practices that protect against these risks, in line with existing regulations. Organizations will need to be able to explain how they understand those risks and how they secure themselves against them.
GenAI is clearly a game-changer for all industries. It is also subject to rapid evolution, which adds to the complexities and challenges faced by risk functions as they grapple with embracing the capability to enhance oversight approaches, alongside providing the guardrails essential for organizations.
It is clear that the regulatory position will also be one of evolution, with no single silver bullet bringing the rigidity sometimes seen in other regulations. As organizations develop their GenAI strategies and implementation plans, the role of the risk function will be critical – in shaping the design, influencing risk appetite, ensuring accountabilities are understood and adapting risk frameworks to bring the appropriate level of control.
Achieving that will require nimbleness and agility across risk functions. Moving ahead, we will develop thinking and recommendations specifically aligned to the approach for leveraging the potential of GenAI.