Published by Shameela Gonzalez, FSI Industry Lead, CyberCX on 18 December 2024
Will Artificial Intelligence (AI) do more to mitigate or enable economic crime?
Recently at the AusPayNet Summit’s Big Debate I was asked to argue in favour of ‘enable’. Against a formidable opponent in James Roberts, Commonwealth Bank of Australia’s General Manager of Group Fraud Management Services, I looked at how criminals are harnessing AI to create synthetic identities, automated phishing campaigns, and deepfakes to target financial organisations and hurt their customers.
Now a quick disclaimer – the Big Debate is the fun part of the Summit, pitting a group of experts against each other on the broad question of whether AI presents more threats than opportunities in payments. While we're locked into one side for the purposes of the debate, reality is obviously much murkier!
AI has enormous potential to transform how we use technology in our daily lives. And while a lot of that has already been realised, we're only scratching the surface. There's no putting this toothpaste back in the tube, nor should we want to – from healthcare to education to financial services, the way AI is being used to streamline business processes and improve our daily lives is already plain to see.
Critically, however, AI is not without its risks, and the financial services sector needs to be alive to this. Here are the points I made in the Big Debate which remind us that, in the wrong hands, AI can enable economic crime.
Synthetic identities
AI has opened a Pandora's box of technology-enabled, sophisticated impersonations. Cybercriminals are using generative AI – like deepfakes – to create fake identities so lifelike that they can pass for real people. These identities are being used to open fraudulent bank accounts, apply for loans, and commit large-scale fraud.
Consider this: according to TransUnion's H2 2024 update of its State of Omnichannel Fraud report, US lender exposure to suspected synthetic identities stands at US$3.2 billion.
As that figure shows, the scale of this issue is simply staggering, and AI is giving criminals the tools to commit this kind of fraud faster and more effectively than ever before.
Automated phishing campaigns
We’re all familiar with dodgy phishing emails or text messages urgently prompting us to click on an obviously malicious link. Most people are actually pretty good at spotting these now, but AI is introducing a new – and frankly terrifying – layer of risk.
Scammers are using AI tools like ChatGPT to craft more believable and personalised emails at unprecedented scale. These tools mean that the common telltale signs of a phishing email – riddled with spelling errors and other basic mistakes – are no longer there.
AI tools also help criminals scrape your social media for unique details that can be included in sophisticated, personalised phishing attacks – your birthday, your workplace, your hobbies, even your pet’s name.
This means that AI phishing scams are not only harder to spot, they are also more likely to include convincing, individualised details. We have spent more than a decade educating people on how to spot phishing emails and texts, and this has largely been a success. But with AI, criminals have found a tool that means the work of teaching people to spot a scam must start all over again.
- Read more: CyberCX – Scammers in the age of AI
Deepfakes
In February this year, Hong Kong police alleged that a finance worker at a multinational firm in the city was deceived by scammers using deepfake technology to impersonate the company’s CFO.
The worker would dial into video calls and see what looked and sounded like several of their colleagues. They, too, were deepfakes. Convinced they were on regular calls with their colleagues, the staff member made 15 transfers to five bank accounts, totalling more than US$25 million.
AI can now create hyper-realistic videos and voice recordings that mimic anyone – especially high-profile individuals. We've even seen this in Australia, where criminals have impersonated bank CEOs in elaborate phishing campaigns targeting customers.
AI scams-as-a-service
While these may sound like the complicated tools of sophisticated criminals, the reality is much scarier.
AI-powered tools like FraudGPT are available on the dark web for as little as $200 a month. With these tools, even low-skilled criminals can create phishing emails, malware and scam websites. These tools have no guardrails and no ethical considerations – just pure, unregulated criminal potential.
What can we do?
Now let’s be clear – none of this is to say I am anti-AI. Far from it. These are the points and examples I turned to when asked to argue that AI will do more to enable economic crime.
As I told James during the debate – I have extraordinary empathy for him and his colleagues in the banking and financial services sector. They face tight budgets, complex regulations, and the relentless pace of technology. Meanwhile, cybercriminals have none of these restrictions. They use AI with complete freedom – no laws to obey, no budgets to balance, no ethics committees to consult.
Legitimate organisations are scrambling to keep up. Detection tools lag behind the sophistication of these AI-driven attacks – criminals simply do not have to play by the same rules.
They innovate faster than we can respond, exploiting every loophole our systems leave open. Added to this, organisations are in a never-ending juggle of regulatory compliance, upskilling their people, and budget cuts. It's like trying to fix your car while it's still driving at 100 km/h.
The reality is AI is a double-edged sword. On one side, it’s a transformative force for good, driving progress in healthcare, business, and education. But on the other side, it’s a tool of unprecedented criminal potential, enabling fraud, money laundering, and the erosion of trust in financial systems.
And AI is not going away. As with any other new and transformative technology, we need to understand how it is being misused in order to minimise the risks.
If you have any questions about the risks associated with AI, and how to protect yourself and your organisation from them, I encourage you to reach out to an expert at CyberCX.