By Neil Hodge, 21 October 2024
The proliferation of artificial intelligence (AI) – and the promised business cases promoting its use – has led companies around the world to invest quickly in the technology. Companies say they are testing and deploying systems ranging from off-the-shelf chatbots to bespoke databases and writing aids. Executives hope these AI tools will improve efficiency, reduce costs, and help them stay competitive. But the rush could achieve just the opposite.
Industry watchers say that as a broader range of companies buy in, a growing number of organisations do not know whether the AI technologies they have deployed are legally safe, where the data those systems are trained on has come from, or whether they as users – rather than the firms that developed the technology – may find themselves in the firing line with regulators for any instance of non-compliance.
Added to that, many companies are unaware that AI misuse can result in penalties from several different regulators at the same time – and in more than one jurisdiction.
Legal landscape
‘The legal landscape is constantly evolving,’ said Jason Raeburn, Intellectual Property and Technology Litigation Partner at law firm Paul Hastings. And the potential legal threats appear to be as varied as the different AI applications companies are creating. ‘It is crucial that compliance is a top priority for businesses looking to avoid multi-jurisdictional investigations, litigation, and severe sanctions,’ he added.
Imagine that a large UK bank with a European presence implements an AI-driven credit scoring system to streamline loan approvals and personalise financial product offerings. The AI system is designed to assess whether an applicant is creditworthy, predict future financial behaviours, and tailor marketing campaigns for different customer segments. So far, this is all routine stuff, and perhaps the bank implementing it comes to rely on the AI more and more.
Now, consider the moment when the AI fails because, after all, machines are as fallible as their creators. Perhaps it inadvertently introduces bias, discriminating against demographic groups whose past mistreatment – countered in some instances by law – persists in the very training data used to teach the AI the basics of its job.
Risk of breaches
Or maybe the AI breaches privacy rules, such as the EU and UK General Data Protection Regulation (GDPR), by using extensive customer data without obtaining explicit consent for some data processing activities.
There is also the risk of breaching UK Financial Conduct Authority (FCA) regulations if AI profiling leads to mis-selling and unfair customer treatment (for example, targeting high-risk customers with complex financial products unsuitable for their needs, leading to financial losses and complaints).
Other potential pitfalls – involving other regulators – exist in such scenarios, too. For example, misleading claims about the capabilities of AI in marketing communications could fall foul of broadcasting, advertising, and marketing watchdogs, while competition and consumer protection authorities may examine how firms use algorithms and AI systems to set prices, target consumers, or make personalised offers.
Regardless of the reason for failure, the result would ‘lead to several regulatory breaches and draw scrutiny from multiple agencies,’ said Steve Neat, Chief Revenue Officer at data solutions provider Solidatus.
The dangers of adding AI to the mix are not merely theoretical. At least one bank has already been caught out for relying too heavily on an AI algorithm to conduct ‘routine’ customer checks. In May 2023, the Berlin Data Protection Authority fined an unnamed Berlin-based bank €300,000 [1] (then US $325,000) under the EU GDPR for failing to transparently inform an applicant of the reasons behind the automated rejection of their online credit card application. Without that specific information, the applicant could not meaningfully challenge the decision.
In August 2024, the EU’s AI Act entered into force, allowing regulators to impose fines of up to €35 million (US $39 million) or 7% of worldwide group revenue, whichever is higher – raising the stakes for companies found responsible for AI misuse and abuse.
Tangible, concurrent risk
The legal framework surrounding AI is still in its infancy and has not been standardised, adding to the many unknowns surrounding the technology.
This threat from multiple regulators is ‘tangible,’ said Alexander Roussanov, Partner at law firm Arnold & Porter. Worse, he said, the enforcement mechanisms associated with these regulations may apply in parallel, leading to concurrent enforcement actions.
Lawmakers so far have attempted to keep the AI Act from being overly punitive, but Roussanov said there would need to be further alignment at EU and national level to agree on regulatory guidance, effective enforcement guidelines, and possibly even revisions of existing legislation.
‘The experience at EU level to date suggests this would be rather challenging to achieve in practice,’ he added.
Problematic overlap
Other experts agree that regulatory and legislative overlap around AI could prove problematic for companies. Steve Marshall, Director of Advisory Services at AML compliance technology provider FinScan, said that while ‘being hit by different regulators is not something new,’ the key concern for companies is how each regulator will interpret and determine non-compliance, especially when any breach of the rules could be multi-jurisdictional.
The EU approach is to classify AI systems based on the levels of risk and harm. The UK approach, however, is to identify high-level AI risks (rather than AI systems) and then give industry regulators autonomy to frame AI rules themselves. Both regulatory paths rely on companies to self-assess their compliance.
AI oversight and enforcement are still at an early stage, even though preventing data misuse has long been a cornerstone of privacy legislation such as the GDPR. Nevertheless, said Robert Grosvenor, Managing Director at professional services firm Alvarez & Marsal, companies that adopt AI as a service – through platforms and plug-ins provided by a myriad of vendors and service providers – ‘will still find themselves responsible for ensuring that any AI-enabled activities impacting on their customer and employee base meet the rules appropriate for their organisation.’
[1] https://www.datenschutz-berlin.de/pressemitteilung/computer-sagt-nein
This article has been republished with permission from Compliance Week, a US-based information service on corporate governance, risk, and compliance. Compliance Week is a sister company to the International Compliance Association. Both organisations are under the umbrella of Wilmington plc. To read more visit www.complianceweek.com