The pros and cons of generative AI in AML compliance


By Dr Mario Menz, 26 February 2024

In an era where the digital transformation of the financial services sector has accelerated at an unprecedented pace, generative artificial intelligence (AI) for anti-money laundering (AML) compliance has been hailed as a cure-all.

Streamlining tasks

Generative AI providers often promote their technologies as revolutionary tools that can enhance AML processes. The technology is touted as a solution to automate and streamline the labour-intensive and time-consuming tasks in AML operations – such as transaction monitoring, customer due diligence, and the generation of intelligence and suspicious activity reports – that traditionally require significant human intervention and are prone to error.

A major challenge in transaction monitoring and screening, for example, is the high number of false positive alerts generated by existing systems. Generative AI providers assert their solutions can drastically reduce these false positives by more accurately distinguishing between legitimate and suspicious transactions, thereby saving time and resources.
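
The mechanics behind such claims are usually a risk-scoring layer over the existing rules engine. Below is a minimal sketch of the idea, using a conventional supervised classifier rather than a generative model; the features, data, and escalation threshold are synthetic and hypothetical.

```python
# Minimal sketch: re-scoring transaction-monitoring alerts to suppress
# likely false positives. Features, data, and the 0.2 threshold are
# hypothetical; a real system would need careful validation and tuning.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy features per alert: amount, velocity, country risk, account age.
X = rng.normal(size=(5_000, 4))
# Historical analyst outcome: 1 = confirmed suspicious, 0 = false positive.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5_000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Only alerts scoring above the threshold are escalated to a human
# reviewer; the threshold trades missed cases against analyst workload.
scores = model.predict_proba(X_test)[:, 1]
threshold = 0.2
missed = ((scores < threshold) & (y_test == 1)).sum()
print(f"Alerts escalated for review: {(scores >= threshold).mean():.1%}")
print(f"Confirmed cases left below threshold: {missed}")
```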

By reducing the need for extensive manual reviews and investigations, generative AI is marketed as a way to cut operational costs. Providers highlight the potential for long-term cost savings, despite the initial investment in adopting generative AI solutions.

The technology is also pushed as a tool to ensure compliance with regulatory requirements. By automating the generation of intelligence reports and ensuring the system is updated with the latest regulatory changes, generative AI is presented as a means to reduce the risk of non-compliance.

Enhanced detection

Providers claim generative AI can significantly improve the detection of suspicious activities. By analysing vast amounts of data and recognising complex patterns, the systems can potentially identify risks and illicit activities that might be missed by human analysts or traditional software systems. Some providers even automate reporting to national intelligence agencies.
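
In practice, 'recognising complex patterns' is often closer to anomaly detection than to text generation. The simplified illustration below flags outlying customer behaviour with an unsupervised detector; the features and contamination rate are assumptions made for the sketch, not recommendations.

```python
# Minimal sketch: flagging unusual customer behaviour that rule-based
# systems might miss, via unsupervised anomaly detection. Features and
# the contamination rate are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Toy per-customer features: monthly volume, counterparty count,
# share of cross-border transfers.
typical = rng.normal(loc=[10_000, 15, 0.1], scale=[2_000, 5, 0.05], size=(1_000, 3))
outliers = rng.normal(loc=[95_000, 80, 0.9], scale=[5_000, 10, 0.05], size=(10, 3))
X = np.vstack([typical, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 = anomalous, 1 = normal

print(f"Customers flagged for review: {(flags == -1).sum()}")
```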

Generative AI is often presented as being highly adaptable and capable of learning from new data and evolving threats. This aspect is emphasised to illustrate how the technology can stay ahead of sophisticated money laundering techniques that continually change to evade detection.

While generative AI indeed holds the potential to transform AML operations significantly, the issues and shortcomings associated with this technology are rarely acknowledged.

Recognising risks

Generative AI presents a labyrinth of risks and regulatory complexities that institutions must navigate with caution and foresight. The adoption of AI in financial services, especially in relation to AML, is nothing short of opening Pandora’s box – a vessel of untold potential but not without its dangers.

The effectiveness of the technology is heavily dependent on the quality of the underlying data, the sophistication of the algorithms, and the continual oversight and tuning of these systems. Big data technologies and distributed data processing, for example, have yet to be widely implemented in the AML community, which limits the data foundations on which such systems depend.

Unlike traditional AI, which primarily analyses and interprets existing data, generative AI can produce new content, simulate scenarios, and predict outcomes with – at times – startling accuracy. This capability, when applied to AML, could potentially revolutionise the way financial services institutions monitor transactions, assess risks, and interact with their customers.
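
As a rough illustration of what 'simulating scenarios' could mean in an AML setting, the sketch below generates a synthetic structuring pattern – many deposits kept just under a reporting threshold – and shows why a per-transaction rule misses it while an aggregate rule does not. Both rules are deliberately naive and hypothetical.

```python
# Minimal sketch: generating a synthetic 'structuring' scenario to
# stress-test monitoring rules. The 10,000 figure mirrors the US
# currency transaction reporting threshold; the rules are hypothetical.
import random

random.seed(1)

def synthetic_structuring(n_deposits: int, threshold: float = 10_000) -> list[float]:
    """Simulate deposits deliberately kept just under the threshold."""
    return [round(threshold - random.uniform(50, 500), 2) for _ in range(n_deposits)]

def per_transaction_rule(deposits: list[float], threshold: float = 10_000) -> bool:
    """Flag any single deposit at or above the threshold."""
    return any(d >= threshold for d in deposits)

def aggregate_rule(deposits: list[float], threshold: float = 10_000) -> bool:
    """Flag when same-day deposits sum past the threshold."""
    return sum(deposits) >= threshold

scenario = synthetic_structuring(5)
print("Per-transaction rule fires:", per_transaction_rule(scenario))  # False
print("Aggregate rule fires:", aggregate_rule(scenario))              # True
```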

Data concerns

But AI systems, particularly those that learn and adapt, are only as good as the data they are fed. The issue of data privacy and security becomes paramount as these systems could inadvertently integrate sensitive customer information into broader datasets, leading to potential data breaches and privacy violations.
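
One common mitigation is to pseudonymise direct identifiers before any data reaches a training pipeline. The sketch below uses a keyed hash so records can still be joined across tables without exposing the raw identifier; the salt handling is illustrative only and assumes proper secret management in practice.

```python
# Minimal sketch: pseudonymising customer identifiers before training.
# The hard-coded salt is illustrative only; production systems need
# managed secrets and a documented re-identification risk review.
import hashlib
import hmac

SECRET_SALT = b"replace-with-managed-secret"  # hypothetical placeholder

def pseudonymise(customer_id: str) -> str:
    """Deterministic keyed hash, so records still join across tables."""
    return hmac.new(SECRET_SALT, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "CU-123456", "amount": 9_800.0, "country": "GB"}
record["customer_id"] = pseudonymise(record["customer_id"])
print(record)
```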

The training datasets themselves can be biased, reflecting societal prejudices and leading to systematic discrimination. This raises ethical, legal, and social concerns.

Similar issues have been observed [1] in law enforcement, where agencies used algorithms to predict crime hotspots and profile potential offenders. Tools like PredPol, a policing technology aimed at helping law enforcement predict and prevent crime, have faced scrutiny for the biases in their predictive algorithms. While intended to improve the allocation of police patrols, these tools do not specify how data is used, calculated, or applied, raising concerns [2] about their transparency and fairness.

Recurring bias

There is also a fear of ‘runaway feedback loops,’ where historical biases reflected in an algorithm’s results lead to ever more biased outcomes.[3] This cycle can deepen the marginalisation of minority groups that are already over-represented in the underlying data.

In a sector as sensitive as financial services, such biases could lead to unfair targeting of customers, which could further expand the debanking crisis.[4]
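
A basic safeguard is to measure model outcomes across customer groups before trusting them. The sketch below computes per-group false positive rates on synthetic data; the groups, probabilities, and the deliberately biased toy model are entirely hypothetical.

```python
# Minimal sketch: checking whether alert false positive rates differ
# across customer groups. All data is synthetic; a real fairness review
# would use actual outcomes, more metrics, and legal guidance.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.8, 0.2])
truly_suspicious = rng.random(n) < 0.02

# A deliberately biased toy model: more likely to flag group B
# regardless of actual behaviour.
flag_prob = np.where(truly_suspicious, 0.9, np.where(group == "B", 0.15, 0.05))
flagged = rng.random(n) < flag_prob

for g in ("A", "B"):
    legitimate = (group == g) & ~truly_suspicious
    fpr = flagged[legitimate].mean()
    print(f"Group {g}: false positive rate {fpr:.1%}")
```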

Explainability is another issue that has not been fully addressed. Generative AI models, like other advanced AI systems, often operate as ‘black boxes,’ in which the processes that lead to a particular decision or prediction are opaque and not easily understood.

The opacity of generative AI systems makes it difficult to ascertain whether decisions are made fairly, without bias, and in compliance with regulatory standards. This lack of transparency can pose significant challenges, especially in regulated industries where regulators expect decision-making to be explainable and auditable.

Generative AI systems use a wide variety of data types and algorithmic models. The nature of the explanation for a decision or prediction by a system can vary greatly, depending on the specifics of the data and the algorithms used. For example, a model trained on financial transactions might use different criteria and processes compared to one trained on textual data, leading to different types and levels of explainability.
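
Post-hoc techniques can narrow, though not close, this gap. The sketch below uses permutation importance to produce a rough, global ranking of which features drive a model’s alerts; the feature names and data are synthetic, and such a summary falls well short of the per-decision explanations regulators may expect.

```python
# Minimal sketch: a global, post-hoc view into a 'black box' model via
# permutation importance. Feature names and data are synthetic; this
# ranks what drives alerts overall, not why one alert fired.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
features = ["amount", "velocity", "country_risk", "account_age"]
X = rng.normal(size=(3_000, 4))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=3_000) > 1.0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>12}: {score:.3f}")
```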

AI hallucinations

Performance risk is another area of concern. Generative AI models, though sophisticated, are not immune to generating inaccurate or misleading information – a phenomenon often referred to as ‘hallucinations.’ Last year, for example, two New York lawyers were fined after submitting a legal brief with ‘hallucinated’ case citations generated by ChatGPT. These issues are more widespread [5] than people might think.

For financial institutions, reliance on AI-generated content or decisions that later turn out to be false – in transaction analysis, say, or in the articulation of suspicion of money laundering or terrorist financing – could easily lead to legal repercussions and regulatory censure.
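
One practical control is to refuse to file anything whose factual anchors cannot be verified. In the hypothetical sketch below, an AI-drafted suspicious activity narrative is accepted only if every transaction reference it cites actually exists in the case file.

```python
# Minimal sketch of a guardrail against 'hallucinated' facts: a draft
# narrative is rejected unless every transaction reference it cites
# exists in the case file. Reference format and data are hypothetical.
import re

case_file_txns = {"TXN-0001", "TXN-0002", "TXN-0003"}

draft_narrative = (
    "Customer moved funds via TXN-0002 and TXN-0009 in a pattern "
    "consistent with layering."
)

cited = set(re.findall(r"TXN-\d{4}", draft_narrative))
unverified = cited - case_file_txns

if unverified:
    print(f"Blocked: unverifiable references {sorted(unverified)}")
else:
    print("Narrative passes reference check; route to human review.")
```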

The regulation of AI is still in a state of flux, with new guidelines and standards emerging as the technology evolves. Financial institutions must remain vigilant and adaptable to ensure compliance with these changing regulations while harnessing the power of AI in AML.

Despite these challenges, the potential of AI in transforming AML processes is undeniable. It offers the promise of greater efficiency, accuracy, and speed in identifying and preventing financial crimes. The key lies in striking a balance: leveraging AI’s capabilities while developing robust frameworks to manage its risks and valuing the input of trained and knowledgeable staff.

Taming the AI beast is not about stifling innovation but about steering it in a direction that maximises benefits while minimising potential harms. In doing so, we can not only comply with regulatory standards but also lead the way in ethical, responsible AI use that sets a precedent for other industries to follow.

About the author 

Dr Mario Menz has nearly 20 years of experience working in financial services compliance.
He is currently global head of risk and compliance for BVNK and an advisory board member at the Institute of Money Laundering Prevention Officers.


This article has been republished with permission from Compliance Week, a US-based information service on corporate governance, risk, and compliance. Compliance Week is a sister company to the International Compliance Association. Both organisations are under the umbrella of Wilmington plc. To read more visit www.complianceweek.com