Written by Paul Eccleson on Tuesday 19 January, 2021
‘The psychology of compliance’ blog series:
Governance, Risk and Compliance (GRC) is, at its heart, about managing human behaviours. In our quest to create risk-aware and compliant cultures, we need to design GRC frameworks that encourage integrity and ethical choices in our people. However, insights from the science of human behaviour – psychology – seem to play, at most, only a minor role in the design of such frameworks.
In this series of articles, Paul Eccleson will aim to show how psychology can offer solutions to major challenges that compliance professionals face in their careers. He will explore a range of important topics: from how to challenge people who rationalise their non-compliance, to the perils of AI repeating human biases, to dealing with human error. The series will describe perspectives that compliance professionals can use to understand the behaviours they encounter, and so be better armed to deliver the ethical and controlled culture that is their ultimate goal.
Compliance, bias and illogical decision-making
Most people understand and accept that human decision-making is tainted by bias. What is less well understood is how these human biases can creep into technology that is intended to streamline decision-making, improve it, or make it fairer. Amazon’s 2017 machine-led recruitment fiasco is an instructive example.
Bias and machine learning
In 2017, Amazon was forced to abandon plans to apply artificial intelligence (AI) to the recruitment of new engineers after its computers picked up on the sexist biases of the company’s recruitment processes.
The recruitment project had promised to deliver a step-change in Amazon’s talent acquisition, applying cold reasoning to the CVs it received and selecting those candidates who, based on historical analysis, would mature into the world-beating employees of the future. The team looked at the successful engineers it had recruited in the past and fed their CVs into the algorithm. The machine learning software would then extract the parameters that mattered from those successful CVs and look for the same attributes in the CVs of new applicants. There was, however, one crucial oversight: 75% of engineering management at Amazon was male. The decision tree that the software had extracted from Amazon’s past recruitment practices thus penalised any mention of the word ‘women’.
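To make the mechanism concrete, here is a minimal sketch – not Amazon’s actual system – of how a classifier trained on biased hiring history learns to penalise a gendered term. The CVs, the hiring labels and the use of a simple scikit-learn pipeline are all invented purely for illustration:

```python
# A toy sketch (invented data, not Amazon's system) showing how a model
# trained on biased historical hiring decisions learns a negative weight
# for a gendered word.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical CVs: past hires were overwhelmingly male, so
# the word "women's" appears mostly in CVs that were rejected.
cvs = [
    "java engineer chess club captain",          # hired
    "python developer robotics team lead",       # hired
    "java engineer women's chess club captain",  # rejected
    "python developer women's coding society",   # rejected
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the gendered token. It comes out
# negative: mentioning "women" lowers the predicted chance of hiring.
idx = vectoriser.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```

The model is never told about gender; it simply finds whatever feature best separates past hires from past rejections, and in biased data that feature can be a word like ‘women’.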
Such biases lead human beings to make decisions that are irrational, whilst having no self-awareness of that irrationality. It is a vulnerability that can affect the most innocuous areas of decision-making. As an example, let’s say you have the choice between two beers – a lower-quality one for £2 and a medium-quality one at £2.30. About 20% of people choose the lower-quality beer – the rest splash out the extra 30p on the better one. If, however, we introduce a premium-quality beer at £2.60 into the choice, no-one buys the lower-quality beer. The choices shift upwards in this ‘decoy effect’, with pretty much everyone now buying the medium-quality beer.
How does this affect compliance?
From a compliance officer’s perspective, this has clear customer outcome implications. Consider the add-on insurance market. Having answered numerous questions about their needs for a primary insurance product, the customer is presented with a quote. In the case of motor insurance, this could run into hundreds of pounds. At that point, adding an extra protection product at around £20 seems, in comparison to the main product, good value. Such purchases can occur without the customer really understanding what they have bought. In extremis, this can lead to major mis-selling, like that which occurred in the payment protection insurance (PPI) scandal.
To complicate things further, humans also select information – and look for patterns in data – that confirms their existing beliefs, a tendency known as confirmation bias. Let’s say that I have a rule that I am using to generate a series of numbers – 2:4:6 – and I ask you to guess this rule. Your guesses must be another three-number series, and I will tell you whether that series agrees with the rule. What tends to happen in this test is that people come up with a theory of what the rule is, then generate number sequences to prove that theory (8:10:12, for example). Very few people discover the rule – it being any sequence of ascending numbers (e.g. 1:132:1035). The reason? People tend to generate guesses that prove their theory – they don’t try sequences that would disprove it.
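A few lines of code make the trap visible. This is a toy sketch of the task just described (the function names and test sequences are invented for illustration): every sequence that confirms the narrow ‘add 2’ theory also satisfies the true ‘any ascending numbers’ rule, so only a test designed to break the theory can tell the two apart.

```python
def hidden_rule(seq):
    """The true rule: any strictly ascending three-number sequence."""
    a, b, c = seq
    return a < b < c

def my_theory(seq):
    """A typical guessed rule: the numbers increase by 2 each time."""
    a, b, c = seq
    return b - a == 2 and c - b == 2

# Guesses chosen to confirm the theory: both rules say yes, so these
# tests cannot distinguish the theory from the hidden rule.
for seq in [(8, 10, 12), (20, 22, 24)]:
    print(seq, "rule:", hidden_rule(seq), "theory:", my_theory(seq))

# A guess that deliberately breaks the theory: the hidden rule still
# says yes, revealing that the theory is too narrow.
seq = (1, 132, 1035)
print(seq, "rule:", hidden_rule(seq), "theory:", my_theory(seq))
```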
Organisations need to take this irrationality and bias into account when considering their tone from the top. In a recent survey of ethics amongst Swedish companies,[1] 67% of senior managers agreed with the statement ‘unethical behaviour is disciplined in my organisation’. Only 39% of the employees below those senior managers agreed. If senior managers feel themselves responsible for the ethical culture below them, it is in their interests to believe that it is being enforced, even when it isn’t.
Confirmation bias will result in our executive teams looking only for data that supports what they already believe. At best, this leads to an incomplete survey of alternatives – a flaw that, as the Amazon example reveals, can be replicated in AI systems. At worst, it results in a culture where challenge is deemed unacceptable and those who question received wisdom are considered heretics.
Challenging the narrative
As knowledge of the biases inherent in decision-making grows, many firms are taking concrete steps to counteract them within their organisations. One effective way is to create a ‘risk and compliance function mandate’, placing an explicit expectation on the second-line functions to play a ‘critical friend’ role. A mandate, agreed by the board, can confer rights such as ‘unconditional access to information’, ‘right of veto’ and ‘the right to appoint external expert review’. Such a clear and unambiguous statement – that the second line is there to challenge – can act as a counterbalance to bias. This can be particularly powerful when combined with a remit to collect information and data in a way that is free from the objectives and remuneration of the rest of the executive team.
As risk and compliance professionals, it falls upon us, as individuals, to hold firm to our independent roles. This can mean challenging accepted norms and offering alternative views in the face of prized, and widely held, beliefs. It requires bravery in the face of opposition, a willingness to walk towards issues when others are walking away and the courage to hold our ground with CEOs and boards. Counteracting bias is a part of our job description, and, as the Amazon example shows, it will be up to us to point out when we spot it in technology. Only then will the decisions our firms make be fair, equitable and truly free of bias.
About the author:
Paul Eccleson, MSc, studied Psychology and Artificial Intelligence at Manchester University. He began his career in the manufacturing industry, delivering AI solutions first with Lucas Industries research, and latterly within Hewlett-Packard’s European Research Laboratories. Whilst at HP Labs, Paul delivered one of the UK’s first electronic commerce sites and an award-winning e-commerce solution for the BBC. Paul has held board-level risk and compliance positions with AXA, Royal London and Munich Re Group. In his current role, Paul has overseen the governance and compliance regeneration of Munich Re’s legal expenses insurer, DAS, and been a key witness in one of the UK’s largest private prosecutions for corporate fraud. He lectures at postgraduate level in The Psychology of Financial Crime for the International Compliance Association, of which he is a Fellow. Paul is also a Trustee of National Museums Liverpool and the Natural Theatre Company, Bath. He lives in Bristol with his wife Gail and labradoodle Islay, and is a life-long Tranmere Rovers supporter.
_________________________________________________________________
References:
[1] Nordic Business Ethics Survey 2020