New research suggests Aussie consumers and businesses are losing tens of millions of dollars to deepfake scams.
As technology develops to a point many are struggling to comprehend, bad actors are taking advantage, luring in consumers and business owners with deepfakes. The technology can manipulate video and audio to defame individuals or trick them into handing over money and personal information.
Recent research by Mastercard shows that one-fifth (20 per cent) of Australian businesses and more than one-third (36 per cent) of Aussies have been targeted by deepfake scams over the past 12 months, with estimated losses currently sitting at tens of millions of dollars.
“However, given many victims of these scams are not aware that they have been targeted, this is potentially only the tip of the iceberg,” said Mallika Sathi, vice president of security solutions for Australasia at Mastercard.
Sathi’s tip-of-the-iceberg claim is concerning given the amount of money that has already been stolen. Of the 36 per cent of Aussies targeted by deepfake scams, 22 per cent fell victim to the manipulated content and lost money.
Of those who were scammed, nearly half (48 per cent) admitted to not reporting it. Over a third of those targeted (36 per cent) reported attempts to trick them into non-financial losses, such as identity theft or the handover of personal data.
“Generative AI technology, while offering incredible potential, can be harnessed in both beneficial and concerning ways. Increasingly, we see it is being used to manipulate consumers and businesses out of money in the form of scams involving deepfakes,” Sathi said.
“As deepfakes can be utilised in many different types of scams, including video, images and audio, we encourage Australians to remain informed, vigilant and educated as the threat increases with the development of AI technology.”
The widespread fear of AI being used for harmful practices like this is warranted, yet one-fifth (19 per cent) of Aussies admit to taking no measures to protect themselves and their families against deepfake attacks.
Most regular social media users are aware of the growing threat of deepfakes, as AI-generated videos of prominent figures often circulate through online forums. However, many get caught up in the comedy of these videos and fail to see the danger they pose or the ways they can be weaponised. Scamming for financial gain is only one of many dangers.
While the younger generation is somewhat familiar with how the technology can be used, Aussies consider older generations the most vulnerable to deepfake scams. A quarter of those surveyed (25 per cent), for example, believe their grandparents are susceptible to falling for this manipulation, followed by their mums (19 per cent).
For these reasons, it’s crucial that education about these threats be better communicated, especially to vulnerable groups. Only 12 per cent of respondents were truly confident they could detect a deepfake scam, while 34 per cent of Aussies admit they’re not confident in their detection abilities.
Along with individuals, the research highlighted that at least 20 per cent of Aussie businesses have been targeted by scams in the past 12 months. Of those, 12 per cent have fallen for the manipulated content.
To manipulate businesses into divulging information or money, scammers reportedly used deepfakes to impersonate customer service representatives (44 per cent), clients (38 per cent) and suppliers/vendors (34 per cent). Employees, chief executives, board members and law enforcement were also impersonated.
Some businesses have proactively deployed measures to prevent deepfake scams. Among Australian businesses:
- Forty-three per cent have implemented identification verification to access sensitive information.
- Forty-five per cent provide their team with cybersecurity training.
- Thirty-four per cent have conducted financial transaction training.
- Twenty-nine per cent have implemented identification protocols for payment requests.
- Sixteen per cent report not taking any measures to protect against deepfake scams.
Despite these measures, 19 per cent of business decision-makers lack confidence that their staff can detect deepfake scams, underlining an ongoing need for improved digital literacy.
“Never give out your personal information or account data without verifying the identity of who you are talking to. You should monitor your accounts and statements for transactions you don’t recognise, and if you suspect there has been fraudulent activity, contact the financial institution that issued your card immediately,” Sathi said.
“Scammers have become more brazen and sophisticated, taking advantage of the latest technologies like AI and other means to deceive consumers. Mastercard is committed to providing Aussies with the insights and tools to help protect themselves and their loved ones from scams.”
Kace O'Neill
Kace O'Neill is a Graduate Journalist for HR Leader. Kace studied Media Communications and Maori Studies at the University of Otago and has a passion for sports and storytelling.