By Judith Hawkins, 13 January 2025
A recent survey by the Bank of England (BoE) and Financial Conduct Authority (FCA) has indicated that depth of understanding around artificial intelligence (AI) is not keeping pace with adoption of AI within UK financial services.
The BoE and FCA’s third joint survey on artificial intelligence and machine learning in UK financial services found that while 75% of respondent firms have adopted AI, nearly half (46%) reported having only ‘partial understanding’ of the AI technologies they use. Indeed, only 34% of firms replied that they have ‘complete understanding’, signalling a pressing need for additional training and education in AI.[1]
This need has become more urgent given that the findings, published in November, also showed a notable increase in AI adoption: in the previous survey, conducted in 2022, 58% of firms said they were using the technology, compared with three quarters of firms now.
The recent uptake aligns with the trend seen by McKinsey in its global survey on AI, which found that while in the past six years AI adoption by respondents’ firms had hovered at around 50%, there was a sizeable jump to 72% in its 2024 survey.[2]
Use cases
As identified by the BoE and FCA in their research, a key factor behind lower levels of understanding of AI tools is that a third of all use cases are third-party implementations, rather than models developed internally. Inevitably, this raises questions over increased third-party exposure risks, particularly with a further 10% of respondent firms planning to adopt AI over the next three years.
The survey also detailed the areas in which AI is currently being used the most, with the highest percentage of respondents using AI to optimise internal processes (41%), followed by use of AI within cybersecurity (37%) and fraud detection (33%).
Over the next three years, an additional 36% of respondents expect to use AI for customer support (including chatbots), 32% for regulatory compliance and reporting, 31% for fraud detection, and 31% to optimise internal processes.
Benefits and risks
Respondents shared their thoughts on what they see as the biggest benefits of using AI, as well as the top perceived risks. Identified benefits mainly related to data and gaining analytical insights, alongside the use of AI within anti-money laundering (AML), to combat fraud, and for cybersecurity.
The areas with the largest expected increase in benefits over the next three years are operational efficiency, productivity, and cost base.
Of the top five perceived current risks, four are related to data: data privacy and protection, data quality, data security, and data bias and representativeness.
In addition to the BoE and FCA’s findings, ICA’s sister company Compliance Week has conducted a survey with GAN Integrity which revealed a danger that governance procedures are also not keeping up with the rapid rate of AI adoption. Their ‘AI Governance Benchmarking Survey’ was conducted between October and November 2024, with 50% of respondents working in compliance and risk. When asked whether their organisation used AI tools, 95% of respondents replied affirmatively, but fewer than half (38%) stated that they had any AI governance in place.[3]
How ICA can help
Conscious of this increasing need to provide additional training and support on AI and its associated risks, at ICA we have developed a variety of courses and resources on this exact topic.
Our ICA Specialist Certificate in AI for Compliance Professionals delves into all you need to know about this emerging technology and how it is being applied within RegTech, providing a mix of case studies and practical knowledge.
Informed by industry expertise, the course also considers the ethical dilemmas around AI and how to mitigate them, as well as exploring ongoing developments in this space to help you prepare for the future.
Robin Lee, General Manager, APAC, at Hawk, and Adam Khan, Head of Ethical Technology at Xperientia, worked with ICA to develop this specialist qualification, and shared their thoughts on how these latest industry findings further highlight the importance of governance and ethics when it comes to AI.
‘AI adoption without governance is like sailing into uncharted waters without a compass. We must balance innovation with accountability to ensure technology serves both the business and society responsibly,’ said Robin.
He added: ‘Ethical AI isn’t just a regulatory requirement; it’s a business imperative. Companies that prioritise transparency, fairness, and accountability will lead the way in fostering trust and long-term value.’
Reacting to the recent surveys, Adam added: ‘AI governance needs to evolve at the speed of AI itself. Lagging behind in this area is not just risky; it’s a disservice to customers, stakeholders, and society at large.
‘AI is redefining what’s possible, but without strong governance, it risks redefining what’s ethical. Our role as leaders is to ensure that the technology empowers rather than exploits.’
Ongoing insights
Delving into the issues around the rapid rollout of AI tools, at ICA we have also collaborated with thought leaders to produce two dedicated ICA Perspectives Reports on AI.
Compliance and AI: Balancing the risks and opportunities provides insights into AI from those with an intimate knowledge of both its potential and its drawbacks. Wider ethical questions surrounding the technology are also explored in another ICA report focused on this area: AI and ethics: Why does it matter for compliance?
Don’t forget to keep an eye on our ICA Insight page, where we share weekly articles on key industry topics, including regular pieces considering the impact of AI. These include Amii Barnard-Bahn’s article, Are you keeping up with adoption of AI in compliance?, which provides advice on key actions compliance professionals can take to stay up to date with developments. You can also find a helpful summary of our ICA Member webinar, Decoding the EU AI Act: A framework for compliance?
ICA members also have access to our digital bi-monthly magazine inCOMPLIANCE, where industry experts regularly write on the subject of AI and its implementation within compliance and financial crime prevention.
For instance, in our December 2024 issue, Markus Hornburg, Head of Compliance at Basware, explores how firms should consider using AI to quell the invoice fraud threat. You can also find out more from Kay Chand (Partner, Browne Jacobson) on how to approach procurement and onboarding of AI systems (October 2024, Issue 73), as well as guidance from Will Reddie and Bob Haken (Partners, HFW) on the best ways to keep on top of emerging regulation in this space (June 2024, Issue 71).
On top of all this, numerous resources to help you further your understanding of AI are available to members on our ICA Learning Hub.
Find out more about ICA member benefits and how to sign up.