By Dr Martina Dove, 23 October 2023
In recent years, there has been an explosion of AI-powered products and services, some of which are amazing. Generative AI has helped us with content generation, code writing and review, data analysis, generating professional headshots from a small sample of existing photos, and so much more. Everyone seems to be in love with what generative AI can do. There is certainly no denying that generative AI has made some things easier, but that convenience comes at a hefty price, because AI use is still largely unregulated.[1] As a result, we are facing novel legal challenges [2] and harms.[3] One of these harms is AI-powered fraud.
We are not the only ones benefitting from the convenience of generative AI: scammers are too. Bad phishing emails used to be easy to identify by their poor spelling and grammar. Now, however, scammers can use ChatGPT to generate eloquently written content, which undeniably makes them appear more credible. In fact, my research into sextortion scams [4] found that many scammers offered an excuse for their poor grammar and spelling, aware that it would serve as a warning that the correspondence was a scam. Recent sextortion emails, by contrast, appear far more sophisticated, with no spelling or grammar mistakes, making them feel much more believable, and much more sinister.
This is only the tip of the iceberg. Generative AI has also made it extremely easy to fake a real human, enabling fraudsters to create very clever scams that are almost impossible to distinguish from real situations. There has been an explosion of scams in which fraudsters clone a child’s voice to trick parents into believing their child is in distress. Some parents have been told that their child has been kidnapped [5] and will be harmed if they don’t pay a ransom.
Such situations are incredibly harmful, evoking intense fear, which encourages quick compliance with requests. Fear impairs our rational thinking and triggers the fight-or-flight response. Those with a low tolerance for fear will likely want to comply immediately, hoping the issue will be resolved. This primal process is intensified even further because hearing your child in distress is gut-wrenching for any parent.
Even if we could think rationally, telling deep fakes from reality can be difficult. Whilst the use of deep fakes for fraud is not a novel concept,[6] it has certainly become more mainstream with easy, free access to generative AI. Romance scammers now routinely rely on it to establish trust and to support elaborate scenarios [7] designed to encourage victims to part with very large sums of money.
Fraud is an ever-growing problem, but with deep fakes, fraudsters stand to profit even further, because they are tapping into our very sense of reality. Reality is defined by what we see and hear; deep fakes blur the line between what is real and what is not, playing straight into the hands of bad actors.
Deep fakes have been connected to fake news,[8] the creation of harmful synthetic media [9] used for revenge or extortion,[10] and bank account takeovers.[11] They are also becoming more advanced and harder to tell apart [12] from genuine content without the use of detection tools. As such, they have the potential to irreparably damage people’s lives and reputations.
In his book Tools and Weapons: The Promise and the Peril of the Digital Age, Brad Smith wrote: ‘The time has come to recognize a basic but vital tenet: when your technology changes the world, you bear a responsibility to help address the world that you have helped create.’
However, governments have been slow to address AI harms, possibly because the effects of AI are not immediately apparent or easily recognisable. Whilst big companies talk about responsible AI use and acknowledge the dangers, many don’t follow [13] best practices, and laws lag behind the technology by several years. Even with new legislation coming in, it is unclear what recourse victims may have, since companies are not always transparent about their AI algorithms.
Generative AI needs regulation because of the harm it can cause to unsuspecting individuals. This is especially true of deep fakes, which have the potential to harm individuals as well as society, through the erosion of trust and privacy and the distortion of reality. As AI technology moves forward, it is imperative that education on potential and existing harms follows swiftly, together with effective legislation to safeguard the integrity of information and to punish the misuse of biometric data accordingly.
About the author
Dr Martina Dove is a researcher with a fervent passion for fraud prevention. Her research concentrates on the individual characteristics that make people susceptible to fraud, the scam and persuasion techniques used by fraudsters, and the human factors that make us vulnerable to fraud. Martina is especially passionate about de-stigmatising fraud victimisation and fighting fraud by teaching people how to spot scammers’ cunning techniques. She currently lives in Seattle and works in the cybersecurity/observability space.
Martina has also recently published a book on the psychology of fraud, which examines scam techniques and the human factors that make people susceptible to social engineering attacks.
[1] Foreign Affairs, ‘The AI power paradox’, August 2023: https://www.foreignaffairs.com/world/artificial-intelligence-power-paradox – accessed September 2023
[2] MIT Sloan School of Management, ‘The legal issues presented by generative AI’, August 2023: https://mitsloan.mit.edu/ideas-made-to-matter/legal-issues-presented-generative-ai – accessed September 2023
[3] 404 Media, ‘“Life or death”: AI-generated mushroom foraging books are all over Amazon’, August 2023: https://www.404media.co/ai-generated-mushroom-foraging-books-amazon/ – accessed September 2023
[4] SSRN, ‘Persuasive elements in (s)extortion correspondence demanding cryptocurrency’, June 2020: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3616205 – accessed September 2023
[5] The Guardian, ‘US mother gets call from “kidnapped daughter” – but it’s really an AI scam’, June 2023: https://www.theguardian.com/us-news/2023/jun/14/ai-kidnapping-scam-senate-hearing-jennifer-destefano – accessed September 2023
[6] The Wall Street Journal, ‘Fraudsters used AI to mimic CEO’s voice in unusual cybercrime case’, August 2019: https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402 – accessed September 2023
[7] I News, ‘AI deepfake images increasingly used by romance scammers to trick victims, expert warns’, March 2023: https://inews.co.uk/news/ai-deepfake-images-romance-scammers-trick-victims-warning-2235080 – accessed September 2023
[8] I News, ‘AI image of Pope Francis in a puffer jacket fooled the internet and experts fear there’s worse to come’, March 2023: https://inews.co.uk/news/technology/ai-image-pope-francis-puffer-jacket-coat-fooled-internet-experts-fear-theres-worse-come-2234247 – accessed September 2023
[9] MIT Technology Review, ‘A horrifying new AI app swaps women into porn videos with a click’, September 2021: https://www.technologyreview.com/2021/09/13/1035449/ai-deepfake-app-face-swaps-women-into-porn/ – accessed September 2023
[10] MIT Technology Review, ‘Deepfake porn is ruining women’s lives. Now the law may finally ban it’, February 2021: https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/ – accessed September 2023
[11] The New York Times, ‘Voice deepfakes are coming for your bank balance’, August 2023: https://www.nytimes.com/2023/08/30/business/voice-deepfakes-bank-scams.html – accessed September 2023
[12] Congressional Research Service, ‘Deep fakes and national security’, April 2023: https://crsreports.congress.gov/product/pdf/IF/IF11333 – accessed September 2023
[13] Vice, ‘OpenAI and Microsoft sued for $3 billion over alleged ChatGPT “privacy violations”’, June 2023: https://www.vice.com/en/article/wxjxgx/openai-and-microsoft-sued-for-dollar3-billion-over-alleged-chatgpt-privacy-violations – accessed September 2023