Education - How scammers use AI to lure more victims


Emerging AI Threats: How Deepfakes Are Fueling the Next Wave of Online Fraud

As artificial intelligence advances at breakneck speed, so do the tools available to cybercriminals. Among the most alarming developments is the rise of deepfakes—hyper-realistic audio, video, and image manipulations powered by generative AI.

Deepfakes are now a weapon of choice in sophisticated online scams, enabling fraudsters to impersonate trusted figures with startling accuracy.

The FBI’s Internet Crime Complaint Center (IC3) reported a 400% surge in deepfake-related complaints, spanning investment schemes, corporate extortion, and personal blackmail. Losses from these scams surpassed one billion dollars in 2025.

This article explores how deepfakes are supercharging online fraud, real-world cases that expose the danger, and actionable steps to protect yourself and your organization.

What Are Deepfakes, and How Do They Work?

A deepfake is built with deep learning algorithms. Typically, one AI model generates the content (for example, a fake video of a CEO speaking), while a second model critiques it for realism. These models run through thousands of iterations until the output (video, audio, or images) becomes nearly indistinguishable from reality.
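The generate-and-critique loop above can be illustrated with a deliberately simplified sketch. This is not a real deepfake pipeline; the "generator" proposes a single number, a hypothetical "critic" scores how close it is to a stand-in target, and only proposals the critic rates as more realistic are kept. Real systems do the same thing with neural networks and millions of parameters.

```python
import random

# Stand-in for the statistics of "real" media the forger is imitating.
TARGET = 0.75

def critic(sample):
    """Score realism in [0, 1]; 1.0 means indistinguishable from the target."""
    return max(0.0, 1.0 - abs(sample - TARGET))

def train(iterations=5000, step=0.01, seed=42):
    """Iteratively refine a guess, keeping only changes the critic prefers."""
    random.seed(seed)
    guess = 0.0
    for _ in range(iterations):
        # Generator proposes a slightly perturbed candidate.
        candidate = guess + random.uniform(-step, step)
        # The critic's feedback decides whether the candidate survives.
        if critic(candidate) > critic(guess):
            guess = candidate
    return guess

print(train())  # ends up close to TARGET after many iterations
```

The key takeaway mirrors the text: neither side needs to be smart on its own. Thousands of cheap propose-and-critique rounds are enough to drive the output arbitrarily close to "real."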

Here’s the scariest part: to create a deepfake, scammers need only three things:

1. Publicly available media, such as YouTube videos of their subject, to train the AI model.

2. AI tools like DeepFaceLab, FaceSwap, or commercial platforms (some available for under $10/month). Cheap, highly capable technology is exactly what gives scammers their edge.

3. Time to test and refine the model until the desired output is achieved.

A Real-Life Deepfake Scam Example:

The $25 Million Corporate Heist (Hong Kong, 2024)

In one of the most high-profile deepfake scams to date, a finance employee at a multinational firm in Hong Kong received an urgent video call from what appeared to be the Chief Financial Officer (CFO) and several colleagues.

The call, arranged through a phishing email, used AI-cloned faces and voices of real executives, reconstructed from public earnings calls and internal training videos. The impersonated “CFO” instructed the employee to authorize 15 transactions totaling $25.6 million USD to a new supplier.

The employee complied. Only later, during a routine audit, was the fraud uncovered. The real CFO had never made the call.

Source – South China Morning Post 

The Most Common Deepfake Scams Appearing Today

  1. Real-Time Deepfake Video Calls. Tools like HeyGen and Synthesia (when misused) now enable live deepfake interactions with minimal latency.
  2. Micro-Targeted Political & Celebrity Scams. Fake endorsements from politicians or influencers push crypto schemes.
  3. Deepfake KYC Fraud. Scammers in fintech and banking look for ways to bypass video-based identity verification for bank accounts or crypto exchanges. A fraudster can sample your voice and use it in a deepfake call to your bank to cause havoc.
  4. Ransomware + Deepfake Extortion. Victims receive fake “leaked” videos of themselves in compromising situations, with payment demanded to prevent their release. We’ll discuss this in more depth in our defamation series as well.

In our next article on this topic, we’ll discuss ways you can protect yourself from being deepfaked.

Hint: always stay vigilant when it comes to your finances!

Have you already lost money in an online scam, and you aren’t sure how to find the people who are running it? 

Contact us for help. We recommend deep source intelligence technology that often finds subjects in less than 48 hours.

Get in touch today and receive a free phone consultation.


Michael Turner