The advancement of Artificial Intelligence (AI) technologies holds the promise of substantial benefits, provided they are managed with due diligence.
This technological leap is a double-edged sword, potentially empowering less sophisticated actors to venture into activities once beyond their capabilities. From cyber-attacks to fraud, scams, identity theft, and the disturbing proliferation of illicit content, the risks are palpable.
For compliance professionals, the evolution of AI sparks a dynamic mix of enthusiasm and trepidation. To harness AI's potential for productive ends, a solid understanding of how the technology works is crucial.
This article explores AI's revolutionary potential in the domain of economic crime, how to navigate the challenges inherent in its ever-expanding landscape, and the regulatory efforts aimed at governing AI.
As fraudsters grow more sophisticated, financial crime compliance professionals are finding new ways to make technology work against criminals targeting financial institutions on a global scale. This also holds true for the deployment of AI.
AI has advanced to a stage where large volumes of data can be pulled, structured, consolidated, analyzed, and risk-scored so that financial crime investigators can make more accurate determinations of suspicious activities. It has the power to not only carry out AML and other economic crime prevention measures much faster than humans, but it also has the potential to do this in real-time and promptly react to help financial services organizations stay ahead of fraudsters.
There are numerous ways that AI can fight economic crime. We explore a few examples below:
Streamlined Know Your Customer (KYC) and Customer Due Diligence (CDD) checks: AI can automate the onboarding and ongoing review processes by verifying customer identities against various internal and external databases and accurately flagging discrepancies and suspicious cases for further review.
Risk-driven and dynamic Customer Risk Rating (CRR) and Enhanced Due Diligence (EDD): AI can help assess customer risk by continuously analyzing factors such as transaction history, business relationships, and adverse media flags, and by identifying patterns and combinations of customer risk attributes that require enhanced due diligence (a simple risk-scoring sketch follows this list).
Sanctions screening and adverse media review: AI can efficiently screen customer profiles against large volumes of records across watchlists, sanctions lists and negative media, and automatically classify, score and risk-rate name matches (see the fuzzy-matching sketch after this list).
Enhanced preventative capabilities: AI can learn normal transaction behavior and detect deviations that might signal fraudulent or potentially suspicious activities (illustrated in the anomaly-detection sketch below).
Improved alert handling processes: AI can process vast amounts of data from varied sources, including structured transactional data and unstructured data from messengers (WhatsApp, Signal, Telegram), social media (Facebook, Instagram or TikTok) and voice recordings, and can link seemingly unrelated parties by surfacing patterns that produce meaningful alerts. It can also score and prioritize alerts by risk, flagging higher-risk alerts so that investigators can focus on high-priority cases (see the triage sketch after this list).
Automated narrative generation: AI can create coherent, structured narratives for use in KYC, screening and alert resolutions, case investigations and Suspicious Activity/Transaction Reports (SARs/STRs), wherever explanation, summarization and justification are required (a simple template-based sketch follows this list).
Continuously optimized risk models: Combining AI with machine learning models helps recognize evolving financial crime risk patterns and enables continuous adaptation to new criminal tactics, making alerts more accurate over time.
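To make the dynamic risk-rating idea concrete, below is a minimal Python sketch of a weighted customer risk score. The factor names, weights, and EDD threshold are illustrative assumptions, not a production model.

```python
# Minimal sketch of a dynamic customer risk rating (CRR).
# Factor names, weights, and the EDD threshold are illustrative assumptions.

RISK_WEIGHTS = {
    "high_risk_jurisdiction": 0.30,
    "adverse_media_hits": 0.25,
    "unusual_transaction_volume": 0.25,
    "complex_ownership_structure": 0.20,
}

def customer_risk_score(factors: dict) -> float:
    """Blend normalized factor scores (0..1) into a weighted rating."""
    return sum(RISK_WEIGHTS[name] * factors.get(name, 0.0) for name in RISK_WEIGHTS)

def needs_edd(score: float, threshold: float = 0.6) -> bool:
    """Flag customers whose blended score crosses the EDD threshold."""
    return score >= threshold

profile = {"high_risk_jurisdiction": 1.0, "adverse_media_hits": 0.4}
score = customer_risk_score(profile)
print(f"risk score {score:.2f}, EDD required: {needs_edd(score)}")
```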
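The sanctions and watchlist screening point can be sketched with simple fuzzy name matching. Real screening engines add transliteration, alias resolution and phonetic matching; the watchlist entries and similarity threshold below are illustrative assumptions, using only the Python standard library.

```python
# Minimal sketch of fuzzy name screening against a watchlist.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading LLC", "John A. Doe"]  # illustrative

def screen_name(customer_name: str, threshold: float = 0.8):
    """Return watchlist entries whose similarity ratio exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        ratio = SequenceMatcher(None, customer_name.lower(), entry.lower()).ratio()
        if ratio >= threshold:
            hits.append((entry, round(ratio, 2)))
    return sorted(hits, key=lambda h: h[1], reverse=True)

print(screen_name("Ivan Petrov"))   # exact hit
print(screen_name("Iwan Petrow"))   # spelling variant caught by the fuzzy ratio
```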
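For the preventative capability described above, an unsupervised anomaly detector can learn a customer's normal behavior and flag deviations. The sketch below uses scikit-learn's IsolationForest; the features (amount, hour of day) and contamination rate are illustrative assumptions.

```python
# Minimal sketch: learn "normal" transaction behavior, then flag deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Historical transactions for a typical customer: [amount, hour_of_day].
normal = np.column_stack([
    rng.normal(120, 30, 500),   # everyday amounts clustered around 120
    rng.normal(14, 3, 500),     # daytime activity
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_txns = np.array([
    [130, 15],     # ordinary purchase
    [9500, 3],     # large amount at 3 a.m. -- likely flagged as anomalous
])
print(model.predict(new_txns))  # 1 = normal, -1 = anomaly
```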
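Risk-based alert prioritization can be sketched as blending a few signals into a single triage score and sorting the queue. The signals and weights below are illustrative assumptions.

```python
# Minimal sketch of risk-based alert triage: score alerts, review highest first.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    amount: float
    customer_risk: float      # 0..1, e.g. from the customer risk rating
    sanctions_linked: bool

def alert_priority(a: Alert) -> float:
    """Blend signals into one triage score (higher = review first)."""
    score = 0.5 * a.customer_risk
    score += 0.3 * min(a.amount / 100_000, 1.0)   # cap the amount contribution
    score += 0.2 * a.sanctions_linked
    return round(score, 3)

alerts = [
    Alert("A-101", 2_000, 0.2, False),
    Alert("A-102", 85_000, 0.9, True),
]
for a in sorted(alerts, key=alert_priority, reverse=True):
    print(a.alert_id, alert_priority(a))
```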
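Finally, automated narrative generation can be as simple as filling a structured template with case facts; production systems increasingly hand this step to a large language model. The template and field names below are illustrative assumptions.

```python
# Minimal sketch of template-based SAR/STR narrative generation.
SAR_TEMPLATE = (
    "On {date}, customer {customer} (risk rating: {rating}) conducted "
    "{count} transactions totalling {total:,.2f} {currency}, which deviated "
    "from the established baseline. The activity was escalated because: "
    "{reason}."
)

case = {
    "date": "2024-03-12", "customer": "C-4471", "rating": "high",
    "count": 14, "total": 212500.00, "currency": "USD",
    "reason": "structuring pattern across multiple accounts",
}
print(SAR_TEMPLATE.format(**case))
```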
In conclusion, AI enables the automation of many routine tasks requiring analysis and processing of large data resources. It helps compliance teams focus on critical items – and provides significant leverage in fighting economic crime.
AI is having a transformative impact on the way organizations do business. Adopting a risk-based approach when launching AI initiatives will help organizations gain trust at an enterprise-wide level, and from their customers and regulators.
Those that use AI to automate, scale, and improve business processes stand to benefit. However, its adoption involves risks and challenges, and managing these is vital to success. Risk professionals within organizations should help ensure that AI is used safely, securely, and resiliently. Understanding the technology’s potential to deliver value, and how to safeguard against its risks, will help shape critical business decisions.
As discussed in the previous section, AI can boost efficiency across economic crime controls and contribute to cost reduction through its data analysis and summary generation capabilities. Automating onerous and complex tasks frees up resources for more meaningful work.
More interestingly, AI can generate new growth opportunities. Fraud detection models can be customized to individual customer profiles, enhancing the personalized banking experience while reducing the scalability of criminal activities and limiting vulnerabilities in the system. AI will eventually be able to produce insights that improve product and service portfolios and bolster economic crime controls, helping organizations gain market advantage and stay one step ahead in detecting and preventing fraud.
While the benefits of AI can be great, there are wide-ranging risks and challenges associated with it, which still need to be overcome.
As the demand for AI technology continues to grow, so do criminal capabilities. Fraudsters and other criminals are also using AI technology to launch highly personalized attacks on their victims. For example, they can analyze huge amounts of publicly available information and simulate fake identities, emails, and calls.
While adopting AI, financial services organizations must consider reputational risk. Privacy concerns are another important element, especially when handling sensitive data – and organizations should actively ensure compliance with respective regulatory obligations.
The ethical risk of AI applications must also be managed, as the potential to perpetuate bias hidden in training data is a major challenge for organizations. AI has a propensity to hallucinate and generate inaccurate information, and circulating misinformation can be dangerous. Using quality data and conducting due diligence on the data used to train AI models are important steps to mitigate hallucination risk.
As AI gathers data, there is a risk that copyrighted, trademarked, patented, or otherwise legally protected materials might be used without authorization. It is critical to regularly monitor and validate the reliability and performance of AI models, and to ensure the quality of the data captured and used to train them.
Having an effective AI governance strategy, with a clear and actionable framework for AI application at the organizational level, is important. AI activities must be continually reviewed, measured, and audited. Organizations should have robust legal and regulatory frameworks in place, keep abreast of global changes, and continually explore and assess risks and ethical concerns to protect their reputation and meet their compliance duties as they continue to adopt AI.
The advent of AI presents a myriad of challenges in the regulatory landscape, ranging from potential innovation conflicts with copyright holders, to opaque automated decisions impacting individuals. Issues like biased training datasets, deepfake proliferation, and the spread of misinformation further complicate this landscape. Balancing these concerns against potential benefits becomes a critical consideration.
Governments globally are grappling with the complicated task of regulating AI to harness its benefits while safeguarding rights and ensuring safety. The European Union has taken a proactive stance, developing a comprehensive regulatory framework. The EU AI Act adopts a risk-based methodology, including an outright ban on AI applications deemed to pose unacceptable risk, such as cognitive behavioral manipulation and real-time biometric identification. Its adoption contributes to setting a global standard, given the EU's substantial market influence.
China, too, is embracing AI regulation, addressing recommendation algorithms, machine learning, and generative AI with detailed, prescriptive regulations designed to oversee information control. Most AI-powered applications require notification to multiple regulatory bodies, such as the Cyberspace Administration of China.
In the UK, the government has outlined proposals for a regulatory framework that fosters a pro-innovation environment for foundational AI companies while upholding safety and privacy. The UK government's voluntary AI White Paper and guidance from the Digital Regulation Cooperation Forum emphasize the need for robust AI governance. The UK's AI Safety Summit in November 2023 brought together politicians and industry experts to explore opportunities and concerns. Simultaneously, the UK's Financial Conduct Authority is exploring indirect AI regulation through mechanisms such as the Consumer Duty and the Senior Managers & Certification Regime.
In the United States, efforts to regulate AI focus on addressing risks without stifling innovation. The US administration recently announced an executive order reshaping the federal government's approach to AI.
Navigating the evolving landscape of generative AI regulation – while addressing the technology's potential and the issues it raises for internal controls – demands a balanced approach from financial services organizations, ensuring responsible utilization while reaping the technology’s immense benefits.
AI has enormous potential to revolutionize the fight against economic crime. The latest developments, such as GenAI, enable financial services organizations to detect and prevent economic crime more effectively. However, significant challenges are on the horizon, including the lack of transparency in algorithms, the need for skilled personnel to develop and maintain AI systems, and the evolving regulatory landscape. Financial services organizations should weigh these imperatives as they improve operational effectiveness, understanding the potential benefits and impact of AI – and, at the same time, doing their best to ensure compliance.