The rise of generative artificial intelligence (GenAI) in the financial services industry has laid the foundation for the rapid creation of humanlike content and quick, intelligent decision-making.
AI agents are a natural progression from applications backed by GenAI or large language models (LLMs), which are primarily informational in nature. AI agents integrate with external tools to execute multi-step tasks and achieve goals. Using the reason-act-observe-learn paradigm, along with building blocks such as models, tools, instructions, and memory, AI agents can personalize interactions, automate complex repetitive tasks, and continuously improve over time to deliver more relevant and effective outcomes. AI agents thus have the potential to change the game, given their ability to independently perform tasks that require human judgement, without the need for explicit prompts or step-by-step human guidance.
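To make the reason-act-observe loop concrete, the sketch below shows its bare skeleton in Python. Everything here is illustrative: the `scripted_model` function is a hypothetical stand-in for a call to an LLM or SLM, and `fx_rate` is a toy tool with hardcoded values; a real agent would send the transcript to a model API and invoke production tools.

```python
# Minimal sketch of the reason-act-observe loop behind an AI agent.
# The "model" is a scripted stand-in for an LLM/SLM call (illustrative only).

def fx_rate(pair: str) -> str:
    """Toy tool: look up a hardcoded FX rate (illustrative data)."""
    rates = {"EUR/USD": "1.09", "GBP/USD": "1.27"}
    return rates.get(pair, "unknown")

TOOLS = {"fx_rate": fx_rate}

def scripted_model(transcript: list[str]) -> str:
    """Stand-in for a model: decide the next step from what was observed."""
    if not any(line.startswith("observe:") for line in transcript):
        return "act: fx_rate EUR/USD"            # reason: no data yet, call a tool
    return "answer: EUR/USD is trading at 1.09"  # reason: data observed, respond

def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):
        step = scripted_model(transcript)        # reason: pick the next action
        if step.startswith("answer:"):
            return step.removeprefix("answer: ")
        _, tool_name, arg = step.split(" ", 2)   # act: invoke the chosen tool
        observation = TOOLS[tool_name](arg)      # observe: capture the result
        transcript.append(f"observe: {tool_name}({arg}) -> {observation}")
    return "no answer within step budget"
```

The step budget (`max_steps`) is itself a simple guardrail: it bounds how long the agent can loop before a human needs to intervene.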
While the terms agentic AI and AI agents are often used interchangeably, agentic AI is a broader capability framework that enables AI agents to act with greater autonomy and adaptability, often by orchestrating multiple agents to achieve complex goals. In our view, it will take a little longer for agentic AI to become mainstream in the financial services industry, given that the technology is still evolving. Consequently, we will limit this point of view to the role of AI agents in transforming the financial services industry.
In our experience, the financial services industry abounds with opportunities for transformation using AI agents. Such agents may use LLMs such as GPT-4 where general reasoning and language understanding are essential, or small language models (SLMs) such as Phi-3 for low-compute or edge devices, or where cost, speed, and privacy are important. Additionally, industry- or domain-specific models (ILMs) such as FinGPT can be leveraged for highly regulated and accuracy-critical use cases. We use the terms LLMs, SLMs, and ILMs interchangeably, since any of these can serve as the ‘brain’ of AI agents.
Despite the tremendous transformative potential of AI agents, widespread adoption in the financial services industry is still some way off. This can be attributed to the absence of regulatory guidance on the use of AI agents and to concerns around their ability to consistently deliver trusted outcomes. Additionally, financial institutions need guidance on the areas where AI agents can add value and be deployed safely. However, to avoid being disrupted by fintechs or big tech, financial institutions must experiment with AI agents, stay abreast of developments, and nimbly pivot as they start seeing value.
Identifying the right areas in financial services to deploy AI agents is critical.
To accomplish this, it may be prudent to objectively review the entire business process for transformation opportunities using both AI and non-AI levers. Financial institutions may find that process simplification or deterministic workflows, built on traditional workflow engines, business process management (BPM) suites, or technologies such as robotic process automation (RPA), work equally well.
While agents are ubiquitous in the software landscape, AI agents use AI models as their ‘brain’ to think, reason, plan, and act, and can play a significant role for processes with:
We believe that financial services organizations will create an ecosystem of wide-ranging, purpose-built agents for different functions. While these agents will vary across dimensions including the technologies used (from ‘low code, no code’ frameworks to manually written code), agent types (goal-based agents, utility-based agents, and so on), scalability, and risk, they will coexist and work together seamlessly. Many areas will benefit tremendously from the adoption of AI agents (see Table 1).
| Area | Scope | Agent complexity | Use case complexity | Business impact |
| --- | --- | --- | --- | --- |
| Business decision-making | Real-time data analysis, monitoring, summarization; business analytics, presentations, and dashboards | Medium | Medium | Productivity gains and enhanced employee experience |
| Contact center operations | Upsell/cross-sell recommendations; personalized communications; actionable insights | Medium | High | Productivity gains and elevated customer experience |
| Investment banking | Analysis of market volatility; suggestions on trade booking and hedge positions | High | High | On-demand provision of quantitative computation and actionable insights for business decision-making |
| Customer due diligence (CDD) | Real-time customer profiling; risk assessment and scoring; verification and watchlist screening using real-time data | High | Medium | Productivity gains; human intervention limited to exception management, freeing up human resources for higher value-adding tasks |
| Risk management | Real-time tracking of competitors, news, price changes, and market buzz to generate early warning signals and briefs with upgrade or downgrade recommendations; real-time assessment of creditworthiness based on alternative data analysis; disaster management or stress testing simulation; real-time fraud detection and remediation | High | High | Productivity gains, provision of actionable insights in real time, autonomous decisions with human intervention to validate agent decisions |
| Compliance | Analysis of structured and unstructured documents and contracts; risk identification in legal documents and risk summaries | Medium | Medium | Efficiency gains and human supervision for validating results |
| Investment management | Financial research and strategy intelligence; buy and/or sell recommendations with commentary; customized investment plans and advisory for retail and institutional investors | Medium | High | Real-time provision of actionable insights for business decision-making |
| Sales and marketing | Customer lifecycle optimization; spend analytics and next-best-action recommendations | Medium | Low | Better customer experience and retention, higher-precision campaigns |

Table 1: Areas in the financial services sector where AI agents can add value
For implementing AI agents, we recommend that financial institutions adopt a phased approach.
Phase 2 represents the ideal target state (see Figure 1).
The evolution of business use cases and technology, along with the necessary guardrails, observability, and evaluation, occurs incrementally and in parallel through the phases. Since AI advances are occurring rapidly, it is extremely important to ensure that AI agents are implemented using an evolutionary architecture. An enterprise-level platform can be built to implement enterprise-approved architecture patterns and archetypes, promote reuse, implement use cases in a federated way, and assimilate evolutionary patterns as they emerge.
As financial institutions explore and experiment with AI agents, they must keep in mind that adopting AI agents comes with its own risks. Designing AI agents for the financial services industry requires balancing autonomy with compliance, precision, and control. Let us examine the key risks.
Budget: High costs and misaligned business objectives are significant issues that can result in heavy capital expenditure as well as ongoing operational expenses.
Operational: An AI agent system that leaks information, fabricates incorrect content, or makes unreliable decisions can, when scaled up, become a massive operational risk, causing systemic failure and reputational loss.
Security and compliance: AI agents that are not robust enough to withstand adversarial attacks, or that are not interpretable, can lead to regulatory violations. Regulators demand clear rationale and logic for AI-driven decisions in the financial services industry; an AI system that lacks transparency and auditability, or is insufficiently aware of regulations, can create serious compliance challenges. Critical regulatory mandates include:
This will require banks to put in place checks and balances to ensure safe, secure, explainable, privacy-enhanced, fair, accountable, and reliable AI systems.
Financial institutions will need to take the following steps to address the risks associated with incorporating AI agents into their operations.
Cost control
Implement a hybrid model comprising AI agents, robotic process automation (RPA), and LLMs to contain the operational expenses (OpEx) associated with training and maintenance. In addition, BFSI firms must exercise care when identifying use cases: only complex, multi-step use cases requiring proactive decision-making should be chosen for AI agent deployment.
Outcome assessment
Measure the quality of tasks completed by AI agents. This will entail testing fault tolerance and communication protocols, checking authentication and authorization across agent interactions, maintaining SOC 2 Type I and Type II compliance, and performing independent audits.
Impact assessment
Assess the impact of incorporating AI agents on product safety, liability, and security, among other areas. Evaluate and adopt the measures needed to ensure AI systems are accountable and capable of combating harmful bias. The assessment must be carried out by techno-functional experts able to evaluate across multiple dimensions: technical, human, socio-cultural, and legal.
Control assurance
Use control evaluations that test the safety of AI agents. AI systems can potentially cause systemic failure, resulting in loss of data, malfunctioning business systems, and stalled processes. Robust testing is therefore imperative. This can be achieved by engaging two teams: one stages a control failure while the other acts to prevent it.
Data quality and privacy
Ensure AI agents have access to accurate enterprise data. This is key as agents are only as good as the data they act upon. If banks use data from third parties, it is even more important to ensure the reliability of the data source. Additionally, banks must deploy data masking and privacy controls to avoid breach of sensitive proprietary information and to comply with privacy regulations. Tools used by AI agents should be finely permissioned based on role, task, and sensitivity.
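One way to make "finely permissioned" tools concrete is a deny-by-default check that compares each tool's sensitivity tier against a ceiling assigned to the agent's role. The role names, tool names, and tiers below are illustrative assumptions for this sketch, not a standard; a real deployment would back this with the bank's identity and access management system.

```python
# Sketch of per-role tool permissioning for agent tool calls.
# All names and tiers are hypothetical, for illustration only.

SENSITIVITY = {
    "read_market_data": "low",
    "read_customer_pii": "high",
    "initiate_payment": "critical",
}

ROLE_MAX_SENSITIVITY = {
    "research_agent": "low",       # public/market data only
    "cdd_agent": "high",           # may touch customer PII
    "payments_agent": "critical",  # may move money
}

TIER_ORDER = ["low", "high", "critical"]  # ascending sensitivity

def is_tool_allowed(role: str, tool: str) -> bool:
    """Allow a call only if the tool's tier is within the role's ceiling."""
    tool_tier = SENSITIVITY.get(tool)
    role_ceiling = ROLE_MAX_SENSITIVITY.get(role)
    if tool_tier is None or role_ceiling is None:
        return False  # deny by default: unknown tool or unknown role
    return TIER_ORDER.index(tool_tier) <= TIER_ORDER.index(role_ceiling)
```

For example, `is_tool_allowed("research_agent", "read_customer_pii")` is denied, while the same tool is permitted for the CDD agent. The deny-by-default branch matters most: an unregistered tool or role should never be callable.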
Orchestration controls
Define and set guardrails on design aspects such as hierarchy, communication protocols, and the specificity of tasks assigned to agents, as well as the mode of collaboration when multiple agents work together to achieve business outcomes.
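These design-time guardrails can be enforced mechanically by validating an orchestration plan against a policy before any agent runs. The policy fields and limits below are assumptions chosen for the sketch; the point is the pattern of checking hierarchy depth, approved protocols, and task specificity up front.

```python
# Illustrative guardrail check for a multi-agent orchestration plan.
# Field names and limits are hypothetical, for illustration only.

ORCHESTRATION_POLICY = {
    "max_delegation_depth": 2,        # how deep the agent hierarchy may go
    "allowed_protocols": {"http_json"},
    "require_task_description": True,  # every task must be specific
}

def validate_plan(plan: dict, policy: dict = ORCHESTRATION_POLICY) -> list[str]:
    """Return guardrail violations (an empty list means the plan passes)."""
    violations = []
    if plan.get("delegation_depth", 0) > policy["max_delegation_depth"]:
        violations.append("delegation hierarchy too deep")
    if plan.get("protocol") not in policy["allowed_protocols"]:
        violations.append("unapproved inter-agent protocol")
    if policy["require_task_description"]:
        for task in plan.get("tasks", []):
            if not task.get("description"):
                violations.append(f"task {task.get('id')} lacks a specific description")
    return violations
```

Running such a validator in the orchestration layer, rather than trusting each agent to self-police, keeps the guardrails auditable in one place.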
AI agents are set for astounding growth in the coming years.
AI agents can set the stage for expanding financial institutions’ portfolio of products and services in innovative ways, in turn unlocking immense value. However, given the potential flipside of large-scale adoption of AI agents in financial services, we recommend a cautious approach spanning a three-year horizon.
The way forward lies in defining a phased adoption strategy traversing the critical steps of controlled pilots, larger-scale trials, and extended trials with an ecosystem of AI agents performing a variety of activities. Only after navigating these phases should financial institutions consider adoption at scale, which will depend on the behavior of autonomous AI agents and on regulatory guidance. That said, human supervision is a must: AI agents cannot be allowed to act without oversight, and an AI agent steward should be ultimately responsible and accountable.