Deepfaked calls hit 44% of businesses in last year: Gartner • The Register

Deepfaked calls hit 44% of businesses in last year: Gartner • The Register

09/23/2025


A survey of cybersecurity bosses has shown that 62 percent reported attacks on their staff using AI over the last year, either by the use of prompt injection attacks or faking out their systems using phony audio or video generated by AI.

The most common attack vector is deepfake audio calls against staff, with 44 percent of businesses reporting at least one instance of this happening, six percent of which resulted in business interruption, financial loss, or intellectual property loss. Those loss rates drop to two percent when an audio screening service is used.

For video deepfakes, the figure was slightly lower, 36 percent, but still five percent of those also caused a serious problem.

The problem is that deepfake audio is getting too convincing and cheap, Chester Wisniewski, global field CISO of security biz Sophos, told The Register.

“With audio you can kind of generate these calls in real time at this point,” he said. “If it was your spouse they could tell, but if it’s just a co-worker you talk to occasionally, you pretty much can do that in real time now without much pause, which is a real challenge.”

He believes the audio deepfake figures could be underestimating the problem, but said the results were higher than he expected for video. Doing a real-time video fake of a specific individual is incredibly expensive, he said, running into millions of dollars in costs.

However, Sophos has seen cases in which a scammer briefly runs a CEO or CFO video deepfake on a WhatsApp call before claiming connectivity issues, deleting the video feed, and moving to text communication to carry on a social-engineering attack.

More common are generic video fakes used to conceal a person’s identity, not steal it. North Korea, for example, is earning millions by farming out its staff to Western companies using AI fakery, and they can be very convincing, even for professionals.

The other type of AI-generated attack on the rise is the prompt-injection attack, where attackers embed malicious instructions into content that an AI system processes, tricking it into revealing sensitive information or misusing connected tools, potentially leading to code execution if integrations allow it. According to the Gartner survey, 32 percent of respondents said they’d had prompt injection attacks against their applications.

We’ve already seen Google’s Gemini chatbot being used to target individual users’ email and even “smart” home systems. Anthropic’s Claude has also had prompt injection problems, and ChatGPT has been shown by researchers to be tricked into solving CAPTCHAs that are supposed to spot a machine, rather than a human operator, or into behavior that could be abused to generate denial-of-service-style traffic against websites. ®

You May Also Like…

0 Comments