Why human behaviour is irreplaceable in compliance

This article is a free excerpt from inCOMPLIANCE, ICA's bi-monthly, member exclusive magazine. To gain access to more articles like this, sign in to the Learning Hub or become a member of ICA.

The ability to understand and influence ethical behaviour within organisations is an increasingly important attribute for the compliance professional, writes Roz Dixon-Burnett.

There is a question that keeps surfacing in conversations across the compliance profession, in corridors, in team meetings, and increasingly in those quiet moments of self-reflection.

‘What exactly is our purpose, now that the machines are so much faster than us?’

It’s not a comfortable question and it cannot be ignored.

A recent Moody’s survey found that 96% of compliance professionals expect AI to have an impact on their role. What might this impact mean in practice? EY, for one, has observed that AI deployment will create compliance professionals who look less like rule-keepers and more like strategic advisors: people who set up AI systems, monitor their performance, and apply human judgement in the cases that matter most.

The prevailing response to all of this has been a call to be ‘more human’. And that instinct is right, as far as it goes. But I think it does not go far enough, and it risks framing the future of compliance as a managed retreat rather than a confident advance. Because the more precise and more powerful question is not ‘How do we become more human alongside AI?’ It is: ‘What can we do that AI cannot?’

This distinction matters. ‘More human’ implies that compliance professionals and AI are doing roughly the same thing, with the person perhaps doing it with more warmth and context. But the capabilities that define our professional value are not just harder for AI to replicate; they are categorically different. They require judgement, lived experience, moral agency, and the kind of social and emotional intelligence that automated systems are simply not designed to deliver. Together, these capabilities are known as ‘behavioural intelligence’. And developing them in the age of AI is, I believe, a professional imperative.

The human factor

AI can identify applicable regulations faster than most of us can locate the relevant handbook. RegTech platforms monitor transactions at a scale and speed no human team could match. Automated risk-scoring tools process thousands of data points before we have finished our morning coffee. The efficiency argument for technology is, frankly, unanswerable. And if our professional value rests only on our ability to know things, to process information, and to apply rules, then yes, we are entirely replaceable.

However, these technical abilities do not define us when we are operating at our most effective.

Compliance professionals who make a genuine difference are readers of people, of cultures, of unspoken dynamics. We are the person in the room who notices that questions are being deflected rather than answered. We are the ones who understand why a team that scores well on every audit metric is somehow producing more near misses than it should. We are, as it turns out, expert practitioners of behavioural intelligence, even if most of us have never realised it.


What is behavioural intelligence?

Behavioural intelligence, in a compliance context, is the capacity to understand, assess, and influence the human factors that drive ethical and unethical conduct in organisations. It draws on social psychology, behavioural economics, and organisational research into why people behave as they do.

To be clear, behavioural intelligence is not soft skills by another name, and it is not an assertion that all humans are the same, or that behaviour is entirely predictable. It is a rigorous, evidence-based body of knowledge, grounded in experimental research, replicated across cultures and contexts, and applied with measurable effect in fields from organisational psychology to public policy to financial regulation. When we talk about the mechanisms that cause good people to cross ethical lines, that prevent colleagues from speaking up even when they know something is wrong, or that make whole teams defer to authority in ways that no individual would endorse alone, we are drawing on some of the most robustly validated findings in the social sciences. This stuff is real. And it’s entirely possible to learn, develop and apply these capabilities with the same discipline and precision that we bring to our technical and regulatory expertise.

Think about misconduct. The standard assumption in compliance is that people break rules because they do not know what the rules are, or because the consequences are not sufficiently deterrent. The research tells a different story. Most misconduct is not perpetrated by bad people who set out to cause harm. It is committed by ordinary, ‘decent’ people who have been exposed to conditions that gradually erode their ethical decision-making.

Two mechanisms are particularly well-evidenced and relevant. The first is moral disengagement. This is the psychological process where people justify their own behaviour so that it no longer feels unethical to them. They do this by reframing (‘this is just how business works’), displacing responsibility (‘I was following instructions’), or minimising the harm (‘nobody was really hurt’). These are not the rationalisations of bad people. Rather they are the entirely predictable responses of normal human cognition under pressure, responses that we as compliance professionals are uniquely positioned to identify and disrupt.

The second mechanism is normalisation of deviance. This is the gradual process whereby minor steps away from expected standards come to be seen as acceptable because they haven’t (yet) produced negative consequences. Collectively these variations can move the cultural baseline of what is considered normal and allow firms simply to drift into an unacceptable position, even though the warning signs were there all along, visible to anyone who knew what to look for.

Understanding these mechanisms, and being able to identify them in action, is precisely the kind of human intelligence that compliance professionals can bring to their work, and it delivers meaningful value to the organisations we support.

The signals that technology cannot read

One of the most powerful capabilities that behavioural intelligence develops is the ability to conduct a cultural risk assessment.

Data tells you what happened; culture tells you what is likely to happen next. The signs of escalating misconduct risk are often visible long before an incident occurs. This can look like, for example, a leadership team in which challenge is visibly unwelcome, a culture in which success is celebrated without curiosity about how it was achieved, middle managers who have learned to manage upwards rather than lead downwards, or, as described earlier, a creeping normalisation of small shortcuts that nobody explicitly approves and nobody explicitly stops.

This is where the bystander effect becomes critical: the gap between what people believe they would do and what they actually do. Ask almost anyone in a room whether they would speak up if they witnessed serious misconduct in their organisation and almost all of them will say yes. Yet research tells a very different story.

In a now-classic series of experiments by Darley and Latané, participants who believed they were alone when they witnessed an emergency helped 85% of the time. When they believed others were also present, that figure dropped to 31%. This lack of response was not because the participants stopped caring, but because the presence of others diffuses the sense of personal responsibility. Each person assumes that someone else will act, or already has, or is better placed to do so; the result is collective inaction.

In businesses, this dynamic can play out at scale and with consequences that you may recognise immediately. Research consistently shows that workplaces are particularly vulnerable to what one study called the ‘open secret’ phenomenon. This is where issues are widely known and widely observed, and yet not reported, because everyone is waiting for someone else to take responsibility. The larger the organisation, and the more diffuse the accountability, the stronger the effect can be.

We don’t need to look to psychology laboratories for evidence. We need only look at the headlines, at firms such as Boeing, Volkswagen, and Wirecard, major examples of misconduct that was internally well known and yet took years to surface and address.

These are not primarily stories of governance failure, or regulatory inadequacy, or even individual moral corruption. They are stories of organisations where the bystander effect (along with potential whistleblowing failures) operated at industrial scale. Compliance professionals who understand this dynamic are better equipped to recognise it while it is still developing, long before the data confirms what the culture already knew.

Roz Dixon-Burnett discusses driving behavioural change in others with Paul Asare-Archer, in our Head of Compliance inDEPTH series. Watch the full video here

Influence without the luxury of authority

Another dimension to behavioural intelligence that deserves particular attention – and is something that all successful compliance professionals begin to develop early on in their careers – is the ability to drive ethical behaviour and compliance in environments where you have no formal power over the people whose conduct you are trying to shape.

This is, of course, the reality of most compliance roles. We advise. We challenge. We recommend. We do not (or should not…) instruct. And yet we are often held accountable for outcomes that depend entirely on the decisions of people who outrank us, outearn us, and sometimes outmanoeuvre us.

The temptation is to meet this challenge with better data, more evidence, a cleaner risk quantification. We might think that if we can just make the facts compelling enough, resistance will disappear and the right decision will follow.

The problem is that this is not how decisions are made. Nobel laureate Daniel Kahneman’s research established that human decision-making operates across two very different systems: System 1 is fast, intuitive, and emotionally driven, while System 2 is slower and analytical. We like to believe that our judgements are the product of careful, rational analysis, but the evidence is unequivocal. It’s System 1 that does the heavy lifting, shaping our perceptions and often delivering a conclusion before the analytical mind has even been consulted.

This is not a character flaw or even a flaw at all. It’s how we’re all built. For compliance professionals, understanding this can be transformative. It means that the question ‘how do I present the facts more clearly?’ is often completely the wrong question. The better question is ‘what does this person care about, and how does that connect to what I need them to care about?’

An algorithm can produce a risk score. It can surface a data point, generate a report, flag an anomaly. But it cannot sit with a resistant executive and decipher in real time whether the hesitation is about resource, reputation, fear of scrutiny, or something else entirely. It cannot adjust its approach in the moment, find the framing that lands, build the relationship that makes the conversation possible in the first place, or exercise the judgement to know when to push and when to wait. These capabilities are not supplements to technical expertise. In an AI-augmented compliance function, they are the core of it.


So, there’s still a place for us?

Where does this leave compliance professionals, as AI continues to reshape the compliance landscape?

It is worth being honest about the trajectory. AI will keep developing. The tools will become more sophisticated, more autonomous, more embedded in the fabric of how compliance functions operate. Some of what we do today will unquestionably be done better, faster, and more cheaply by technology tomorrow. It’s not a possibility sometime in the future, it’s a reality to work with in the here and now.

And there is an equally important truth on the other side of that coin. The more that routine, repetitive tasks such as monitoring, risk scoring, and regulatory reporting can be automated, the more visible and the more valuable our distinctly human capabilities become. Behavioural intelligence does not compete with AI. It occupies a different plane entirely, a plane that technology, however sophisticated, is not designed to reach.

For organisations, this has a commercial dimension that is easy to understate. A compliance function staffed by professionals who can read cultural risk before it becomes conduct risk, who can influence behaviour in environments where data alone produces no change, and who have the courage and the capability to stand up for what’s right when it matters, is not merely a cost centre. It is a genuine risk management asset, and in a landscape where behavioural and cultural failures continue to generate the most damaging and most expensive regulatory outcomes, it is an increasingly consequential one. The organisations that understand this and invest accordingly will be better governed, more resilient, and better placed to demonstrate to regulators that their compliance culture is real rather than performed.

Behavioural intelligence is not a response to AI. It is the answer we have implicitly known all along to the question that AI now brings to the fore, a question that requires a clear and direct response:

‘What is the compliance professional’s unique and enduring value?’

My belief is that the answer is simple: ‘We are human’.


About the author

Rosalind Dixon-Burnett

Roz Dixon-Burnett is ICA Course Director, Governance, Risk and Compliance