Picture this: Your CFO calls you on Zoom. She’s asking you to process an urgent wire transfer for a confidential acquisition. You can see her face. You recognize her voice. Other executives are on the call too, nodding along. Everything looks and sounds exactly right.

So you authorize the transfer. And $25 million disappears into a criminal’s bank account.

This isn’t a hypothetical. This happened to engineering firm Arup in early 2024. Every single person on that video call was fake—AI-generated deepfakes created from publicly available conference footage and interviews. The finance employee didn’t stand a chance. And honestly? Neither would most of us.

Welcome to 2026, where seeing is no longer believing, hearing definitely isn’t either, and “I’ll believe it when I see it” has officially retired as a useful phrase.

The Numbers Are Staggering (And Not in a Fun Way)

Deepfake fraud drained $1.1 billion from U.S. corporate accounts in 2025—triple the $360 million lost the year before. By midyear, documented incidents had already quadrupled the entire 2024 total. And here’s the stat that should keep every business owner up at night: 72% of business leaders now identify AI-enabled fraud and deepfakes as their top operational challenge for 2026.

“But I run a small business,” you’re thinking. “Nobody’s making deepfakes of ME.”

Here’s the thing: small and mid-sized businesses accounted for 70.5% of all data breaches in 2025. Why? Because large corporations have entire security teams hunting for threats. You have Dave from IT, who’s also Dave from “can you fix my printer” and Dave from “the WiFi is slow again.” The attackers know this. They’re banking on it. Literally.

How Voice Cloning Actually Works (And Why It’s Terrifying)

Here’s the part that makes this different from every scam that came before: voice cloning now requires just three seconds of clear audio. Three seconds. That’s less time than it takes to say “Hi, this is [your name] from [your company], how can I help you?”

That podcast interview your owner did? Training data. The company YouTube video introducing the leadership team? Training data. That earnings call recording? You guessed it. Every public audio clip featuring your voice is now a potential weapon against your own business.

Modern AI tools analyze vocal patterns, pitch, and cadence from minimal audio samples and generate completely convincing replicas. The technology has crossed what researchers call the “indistinguishable threshold”—which is a fancy way of saying human ears can no longer tell the difference. Your mom couldn’t pick your real voice out of a lineup anymore. (To be fair, she still calls you by your sibling’s name sometimes, but that’s different.)

And it’s not just voice anymore. Deepfake video has evolved from obvious fakes to real-time interactive avatars. The Arup attack proved that multiple AI-generated executives can participate in a live video call, complete with synchronized facial movements, natural body language, and realistic voices. The technology doesn’t flicker or glitch like it used to. It just… works. The uncanny valley has been paved over and turned into a shopping mall.

The Anatomy of an Attack

The criminals targeting businesses have gotten sophisticated. Here’s how a typical attack unfolds:

It starts with reconnaissance. Attackers gather information from LinkedIn, company websites, press releases, and social media. They identify who has authority to move money and who they report to. They study communication patterns and learn the internal lingo. They’re basically doing more research on your company than that job candidate who asked really good questions in the interview.

Then comes the setup. A finance director receives an email from what appears to be the CFO regarding a “confidential transaction.” The employee is suspicious—they’ve heard about phishing. So the attackers do something clever: they proactively suggest a video call to discuss the details. “Let’s hop on Zoom to walk through this.” The apparent willingness to verify through video creates false confidence. It’s the digital equivalent of a con artist saying “here, let me show you my ID.”

The call happens. Multiple senior executives appear on screen. They look right. They sound right. The request seems reasonable given current business activities. The urgency feels appropriate.

The money moves. By the time anyone realizes something is wrong—usually when the employee checks with headquarters through a separate channel—the funds have vanished into a maze of international accounts. The criminals are sipping cocktails on a beach somewhere while you’re explaining to the board why $25 million is missing.

Why Your Team Will Fall For It

Here’s the uncomfortable truth: your smartest employees are the most vulnerable. Not because they’re gullible, but because they’re responsive, helpful, and trained to trust their leadership. You know, all those qualities you specifically hired them for.

The Arup employee who transferred $25 million wasn’t careless. They were doing exactly what a good employee does—responding promptly to a request from senior leadership. The attackers exploited trust, authority, and urgency simultaneously. When your brain sees a familiar face and hears a familiar voice asking for something that falls within normal business operations, every instinct says to comply.

Finance teams face the greatest risk. Unlike other departments, they can move money directly. They handle urgent transactions regularly. A successful attack against IT might compromise systems. A successful attack against finance immediately transfers cash to criminal accounts. Attackers know the shortest path to money runs through your accounting department. (Sorry, accounting department. We love you. Please don’t leave.)

The “But I’d Never Fall For That” Problem

You’re probably thinking you’d catch it. That you’d notice something off. That your team is too smart.

We thought the same thing right before some of our staff clicked on a fake Starbucks gift card email. (If you missed that Tech Tuesday, let’s just say our IT department had some fun with us during the holidays.)

Consider this: In March 2025, a finance director at a Singapore multinational authorized a $499,000 transfer after a video call with multiple executives. The attackers had learned from previous cases that finance professionals now verify unusual requests. So they proactively suggested the video call themselves. The apparent transparency made the scam more convincing, not less. They essentially said “don’t trust me—let’s do a video call so you can see it’s really me” and it STILL worked because the video call was also fake. It’s fraud inception.

Or consider the advertising executive who received a WhatsApp message from what appeared to be their CEO, followed by a Teams call with an AI-cloned voice trained on YouTube footage. The voice asked them to fund a new business venture. That employee refused—but only because something felt slightly off. They got lucky. Most people don’t.

The number of deepfakes increased from 500,000 in 2023 to more than 8 million in 2025. The criminals are getting more practice than your team is getting training. They’re putting in their 10,000 hours, and it’s not to get good at the violin.

What Actually Works: The Callback Protocol

Here’s the good news: while detection is failing, verification still works. You just need to implement it consistently.

The single most effective defense is absurdly simple: never authorize high-stakes financial transactions based solely on a video or voice call. Always verify through a separate channel you initiate yourself.

The Callback Protocol:

  • Receive request for wire transfer, payment change, or sensitive action
  • Hang up (yes, even if it feels rude—your CFO will forgive you, the fake CFO will not)
  • Call the requester back using a phone number you already have on file—not one provided in the suspicious communication
  • Confirm the request is legitimate before proceeding

This works because attackers can fake the inbound call, but they can’t intercept your outbound call to a known good number. It’s old-school verification for a new-school threat. Sometimes the best technology is the one from 1990.
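To make the idea concrete, here’s a minimal sketch of the callback protocol as a payment-approval gate. The email address, phone number, and function names are illustrative placeholders, not a real system—in practice the directory of known-good numbers lives in your HR or ERP records, set up in person, never in source code or in the suspicious message itself:

```python
# Known-good callback numbers, established in advance through a trusted
# channel (illustrative placeholder data).
KNOWN_NUMBERS = {
    "cfo@example.com": "+1-555-0100",
}

def approve_wire(requester_email: str, callback_confirmed: bool) -> bool:
    """Approve a transfer only after an outbound call to the number on
    file. The inbound call or video itself never counts as verification."""
    number_on_file = KNOWN_NUMBERS.get(requester_email)
    if number_on_file is None:
        return False  # no known-good channel: escalate, don't pay
    return callback_confirmed  # True only if YOU dialed number_on_file

# A convincing video call alone is never enough:
print(approve_wire("cfo@example.com", callback_confirmed=False))  # False
print(approve_wire("cfo@example.com", callback_confirmed=True))   # True
```

The key property is that the outcome depends only on a channel the attacker cannot control: a call you place to a number you already had.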

Building Your Deepfake Defense

Beyond the callback protocol, here’s what businesses should implement:

Establish code words. Create verification phrases that only your team knows—never shared in email, never mentioned on social media. When someone claims to be the CEO requesting an urgent transfer, the first question is: “What’s our verification word?” No code word, no action. Yes, it feels a little like you’re running a speakeasy. That’s fine. Speakeasies had excellent security.
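If you want to formalize the code-word check, a rough sketch might look like the following—the phrase shown is a made-up placeholder, and the point is that systems storing the word keep only a hash and compare in constant time, so a leaked database doesn’t leak the phrase:

```python
import hashlib
import hmac

# Store only a hash of the verification phrase, never the phrase itself.
# "blue-harvest-42" is a placeholder for illustration only.
STORED_HASH = hashlib.sha256("blue-harvest-42".encode()).hexdigest()

def code_word_matches(spoken_phrase: str) -> bool:
    """Compare the spoken phrase against the stored hash in constant time."""
    candidate = hashlib.sha256(spoken_phrase.encode()).hexdigest()
    return hmac.compare_digest(candidate, STORED_HASH)

print(code_word_matches("blue-harvest-42"))  # True
print(code_word_matches("urgent-wire-now"))  # False
```

For most small businesses, of course, the code word lives in people’s heads rather than in software—and that’s fine, as long as it never lives in email.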

Limit authorization authority. Reduce the number of employees who can approve payments unilaterally. Require dual authorization for transactions above a certain threshold. The more people involved in a decision, the more chances to catch a fake. It’s like having a buddy system, except instead of not drowning at summer camp, you’re not losing $25 million.
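The dual-authorization rule reduces to a simple policy check. This sketch assumes an illustrative $10,000 threshold—pick whatever number fits your risk appetite—and requires two distinct approvers above it:

```python
# Payments above the threshold need two distinct approvers.
# The threshold is illustrative; tune it to your own risk appetite.
DUAL_AUTH_THRESHOLD = 10_000  # dollars

def payment_allowed(amount: float, approvers: set) -> bool:
    """One approver suffices below the threshold; two distinct people
    are required above it. Using a set means the same person approving
    twice still counts once."""
    required = 2 if amount > DUAL_AUTH_THRESHOLD else 1
    return len(approvers) >= required

print(payment_allowed(5_000, {"alice"}))           # True
print(payment_allowed(250_000, {"alice"}))         # False
print(payment_allowed(250_000, {"alice", "bob"}))  # True
```

A deepfake now has to fool two people through two separate channels instead of one person on one call.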

Train specifically for this threat. Generic security awareness training isn’t enough anymore. Your team needs to know that video calls can be faked, that voice can be cloned, and that seeing someone’s face is no longer proof of identity. Run simulations if you can—the companies that practice spotting deepfakes are the ones that catch them.

Audit your digital footprint. Every podcast appearance, conference video, earnings call recording, and social media clip featuring your leadership team is potential training data for attackers. You don’t need to disappear from the internet, but you should know what’s out there and factor it into your risk assessment. That keynote speech you were so proud of? A criminal might be proud of it too, for different reasons.

Slow down urgent requests. Criminals rely on time pressure to prevent verification. Any request that demands immediate action without time for normal approval processes should automatically trigger extra scrutiny—especially if it comes from someone who “can’t be reached” through normal channels. Real emergencies can wait five minutes for a callback. Fake emergencies cannot.

Tax Season Makes It Worse

If you think this is just a Fortune 500 problem, consider what happens every January through April. Tax season creates the perfect storm for deepfake-enabled fraud—and your business is a target. Because nothing says “opportunity” to a criminal like millions of Americans stressed about deadlines and expecting official-looking communications.

The IRS identified $4.49 billion in tax fraud in 2025 alone. AI has supercharged every traditional tax scam: polished phishing emails that look exactly like IRS notices, voice-cloned “accountants” calling to request documents, and fake tax preparers using stolen credentials to file fraudulent returns.

Here’s what’s hitting businesses right now:

The W-2 Phishing Attack: An “executive” urgently emails your HR or payroll department requesting a list of all employee W-2 forms for an audit, insurance requirement, or payroll verification. The email looks legitimate. Maybe it even sounds legitimate when they follow up by phone with a cloned voice. The unsuspecting employee compiles the data and sends it—names, addresses, Social Security numbers, income information for every employee. Within hours, fraudulent tax returns are filed for your entire workforce. By the time employees try to file their legitimate returns, their refunds have already been claimed. Congratulations, you’ve just given criminals a company-wide bonus they didn’t earn.

The Fake CPA Call: Scammers are using AI-generated audio to impersonate tax preparers and accountants, calling clients to request sensitive documents or “verify” information. They’ve harvested enough data from prior breaches to sound credible. “Hi, this is [name] from [your actual accounting firm], I just need to verify a few details on your return.” The technology doesn’t have to be perfect—it just needs to sound real long enough to create urgency. And during tax season, everyone’s urgent.

The Payroll Redirect: Attackers compromise an employee’s email, call your help desk pretending to be that employee locked out of their account, get their password reset, then log into your payroll system and redirect their direct deposit to a criminal’s account. Microsoft documented exactly this attack pattern hitting universities in 2025, with phishing emails sent to nearly 6,000 accounts across 25 schools. The attackers didn’t hack anything—they just asked nicely while pretending to be someone else.

For you or your clients, the threat is personal too. Scammers harvest voice samples from social media, then call elderly parents pretending to be their adult child crying about being in jail for tax fraud, demanding gift cards to pay the IRS immediately. The technology only needs three seconds of audio. That TikTok your client’s kid posted singing in the car? It’s training data now. Suddenly “going viral” has a very different meaning.

It’s Not Just Wire Transfers

While the big-dollar heists grab headlines, deepfake technology is enabling fraud across the board. Attackers are using AI-cloned voices to:

  • Reset passwords by calling IT help desks and impersonating employees
  • Authorize access to systems by fooling voice-based authentication
  • Extract sensitive information by pretending to be vendors, auditors, or partners
  • Manipulate stock prices by creating fake executive statements

The FBI issued a warning in 2025 about AI-powered phishing campaigns using cloned voices of high-ranking officials. Voice cloning for fraud jumped over 400% in 2025 alone. This isn’t a future threat—it’s a current reality. The future is here, and it’s calling from a spoofed number.

The Bottom Line

The same AI technology that’s making your business more efficient is being weaponized by criminals at an unprecedented scale. Video calls can be faked. Voices can be cloned. The verification methods we’ve relied on for decades—recognizing a face, knowing a voice—no longer work. Everything your parents taught you about trust has been technologically invalidated.

But here’s what still works: picking up the phone and calling someone back. Requiring a code word before moving money. Slowing down when someone pressures you to act fast. Building processes that don’t depend on your ability to spot a fake, because that’s a battle humans are losing.

The technology will keep improving. Your defense can’t be “get better at detecting deepfakes”—it has to be “verify through channels attackers can’t compromise.” Trust, but verify. Then verify again. Then maybe verify one more time if it involves more than four figures.

Worried about how your finance processes would hold up against a deepfake attack? With tax season in full swing, it’s worth thinking about who has access to employee W-2s, how payroll changes get approved, and whether your team knows that their boss’s face on a video call doesn’t actually prove anything anymore. We help clients think through internal controls that protect against increasingly sophisticated fraud. Sometimes the best security investment isn’t technology—it’s a better process. Let’s talk.