Artificial Intelligence: A New Face of Corporate Fraud

Phil Muncaster, guest writer at ESET, explains how the malicious use of AI is reshaping the fraud landscape and creating major new risks for businesses.

Artificial intelligence (AI) is doing wonderful things for many businesses. It’s helping to automate repetitive tasks for efficiency and cost savings. It’s supercharging customer service and coding. And it’s helping to unearth insights that drive improved business decision-making. Way back in October 2023, Gartner estimated that 55% of organizations were in pilot or production mode with generative AI (GenAI). That figure will surely be higher today.

Yet criminal enterprises are also innovating with the technology, and that spells bad news for IT and business leaders everywhere. To tackle this mounting fraud threat, you need a layered response that focuses on people, process and technology.

What are the latest AI and deepfake threats?
Cybercriminals are harnessing the power of AI and deepfakes in several ways. They include:

  • Fake employees: Hundreds of companies have reportedly been infiltrated by North Koreans posing as remote IT freelancers. They use AI tools to compile fake resumes and forged documents, including AI-manipulated images, in order to pass background checks. The end goal is to earn salaries to send back to the North Korean regime, as well as to steal data, conduct espionage and even deploy ransomware.

  • A new breed of BEC scams: Deepfake audio and video clips are being used to amplify business email compromise (BEC)-type fraud, where finance workers are tricked into transferring corporate funds to accounts under the scammer’s control. In one infamous recent case, a finance worker was persuaded to transfer $25 million to fraudsters who leveraged deepfakes to pose as the company’s CFO and other members of staff in a video conference call. This is by no means new, however – as far back as 2019, a UK energy executive was tricked into wiring £200,000 to scammers after speaking to a deepfake version of his boss on the phone.
  • Authentication bypass: Deepfakes are also being used to help fraudsters impersonate legitimate customers, create new personas and bypass authentication checks for account creation and log-ins. One particularly sophisticated piece of malware, GoldPickaxe, is designed to harvest facial recognition data, which is then used to create deepfake videos. According to one report, 13.5% of all global digital account openings were suspected of fraudulent activity last year.
  • Deepfake scams: Cybercriminals can also use deepfakes in less targeted ways, such as impersonating company CEOs and other high-profile figures on social media, to further investment and other scams. As ESET’s Jake Moore has demonstrated, theoretically any corporate leader could be victimized in the same way. On a similar note, as ESET’s latest Threat Report describes, cybercriminals are leveraging deepfakes and company-branded social media posts to lure victims as part of a new type of investment fraud called Nomani.
  • Password cracking: AI algorithms can be set to work cracking the passwords of customers and employees, enabling data theft, ransomware and mass identity fraud. One such example, PassGAN, can reportedly crack passwords in less than half a minute (a short sketch after this list shows why weak passwords fall so quickly).
  • Document forgeries: AI-generated or altered documents are another way to bypass know your customer (KYC) checks at banks and other companies. They can also be used for insurance fraud: nearly all (94%) of claims handlers suspect that at least 5% of claims are being manipulated with AI, especially lower-value claims.
  • Phishing and reconnaissance: The UK’s National Cyber Security Centre (NCSC) has warned of the uplift cybercriminals are getting from generative and other AI types. It claimed in early 2024 that the technology will “almost certainly increase the volume and heighten the impact of cyber-attacks over the next two years.” It will have a particularly high impact on improving the effectiveness of social engineering and reconnaissance of targets. This will fuel ransomware and data theft, as well as wide-ranging phishing attacks on customers.
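To illustrate why AI-assisted cracking is so effective against weak passwords, here is a minimal sketch of a charset-based entropy estimate. It is an illustrative upper bound only: the character-pool model, the guess rate and the helper names are assumptions for this example, and model-based tools such as PassGAN exploit human patterns to do far better than brute force.

```python
import math
import string

# Hypothetical guess rate for a well-resourced attacker (illustrative).
GUESSES_PER_SECOND = 1e10

def estimate_entropy_bits(password: str) -> float:
    """Rough upper-bound entropy based on the character sets used."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0

for pw in ["summer24", "Tr0ub4dor!", "long-random-phrase-9$Qz"]:
    bits = estimate_entropy_bits(pw)
    seconds = 2 ** bits / GUESSES_PER_SECOND
    print(f"{pw!r}: ~{bits:.0f} bits, brute force in ~{seconds:,.0f}s")
```

Even this naive model shows short, single-charset passwords falling within minutes; AI-driven guessing narrows the gap further for anything based on real words or predictable patterns.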

What’s the impact of AI threats?
The impact of AI-enabled fraud ultimately amounts to financial and reputational damage of varying degrees. One report estimates that 38% of revenue lost to fraud over the past year was due to AI-driven fraud. Consider how:

  • KYC bypass allows fraudsters to run up credit and drain legitimate customer accounts of funds.
  • Fake employees could steal sensitive IP and regulated customer information, creating financial, reputational and compliance headaches.
  • BEC scams can generate huge one-off losses. The category earned cybercriminals over $2.9 billion in 2023 alone.
  • Impersonation scams threaten customer loyalty. A third of customers say they’ll walk away from a brand they love after just one bad experience.

Pushing back against AI-enabled fraud
Fighting this surge in AI-enabled fraud requires a multi-layered response, focusing on people, process and technology. This should include:

  • Frequent fraud risk assessments
  • Updated anti-fraud policies that account for AI-enabled threats
  • Comprehensive training and awareness programs for staff (e.g., in how to spot phishing and deepfakes)
  • Education and awareness programs for customers
  • Multifactor authentication (MFA) switched on for all sensitive corporate accounts and for customers
  • Improved background checks for employees, such as scanning resumes for career inconsistencies (see the sketch after this list)
  • Video interviews for all candidates before hiring
  • Closer collaboration between HR and cybersecurity teams
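As a minimal sketch of the resume-screening idea above: assuming employment history has already been extracted into structured records, a simple check can flag overlapping stints and unexplained gaps for a human reviewer. The record format, field names and 180-day threshold are illustrative assumptions, not part of any specific screening product.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Stint:
    """One entry in a candidate's employment history."""
    employer: str
    start: date
    end: date

def find_inconsistencies(stints: list[Stint], max_gap_days: int = 180) -> list[str]:
    """Flag overlapping jobs and long unexplained gaps for manual review."""
    issues = []
    ordered = sorted(stints, key=lambda s: s.start)
    for prev, cur in zip(ordered, ordered[1:]):
        if cur.start < prev.end:
            issues.append(f"Overlap: {prev.employer} and {cur.employer}")
        elif (cur.start - prev.end).days > max_gap_days:
            gap = (cur.start - prev.end).days
            issues.append(f"{gap}-day gap before {cur.employer}")
    return issues

history = [
    Stint("Acme Corp", date(2018, 1, 1), date(2020, 6, 30)),
    Stint("Globex", date(2020, 3, 1), date(2022, 12, 31)),   # overlaps Acme
    Stint("Initech", date(2024, 5, 1), date(2025, 1, 31)),   # long gap
]
for issue in find_inconsistencies(history):
    print(issue)
```

A check like this does not prove fraud on its own; it simply surfaces inconsistencies that deserve follow-up questions in an interview.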

AI tech can also be used in this fight, for example:

  • AI-powered tools to detect deepfakes (e.g., in KYC checks).
  • Machine learning algorithms to detect patterns of suspicious behavior in staff and customer data.
  • GenAI to generate synthetic data, with which new fraud models can be developed, tested and trained (a sketch combining these last two ideas follows this list).
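As a minimal sketch of those last two ideas together: generate synthetic transaction data, then fit an unsupervised anomaly detector to flag suspicious behavior. The feature set, distributions and contamination rate are illustrative assumptions; a real deployment would use production data pipelines and far richer features.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: amount, hour of day, txns in last 24h.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.5, size=1000),  # typical amounts
    rng.normal(loc=14, scale=3, size=1000),         # daytime activity
    rng.poisson(lam=2, size=1000),                  # low velocity
])

# Synthetic suspicious transactions: large, late-night, high-velocity.
suspicious = np.column_stack([
    rng.lognormal(mean=6.0, sigma=0.5, size=20),
    rng.normal(loc=3, scale=1, size=20),
    rng.poisson(lam=15, size=20),
])

# Train on (mostly) legitimate behavior, then score the new activity.
model = IsolationForest(contamination=0.02, random_state=42)
model.fit(normal)

flags = model.predict(suspicious)  # -1 = anomalous, 1 = normal
print(f"{(flags == -1).sum()} of {len(suspicious)} flagged as suspicious")
```

The same synthetic-data approach lets fraud teams develop and test detection logic before any sensitive customer records are involved.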

As the battle between malicious and benevolent AI enters an intense new phase, organizations must update their cybersecurity and anti-fraud policies to ensure they keep pace with the evolving threat landscape. With so much at stake, failure to do so could erode long-term customer loyalty, damage brand value and even derail important digital transformation initiatives.

AI has the potential to change the game for our adversaries. But it can also do so for corporate security and risk teams.
