AI Voice Cloning Apps Should Terrify You - Here's Why

The arrival of AI software like ChatGPT has opened the door to new ways for scammers to trick people. Scammers may use AI chatbots like TerifAI, which supports voice cloning, to target unsuspecting victims. They'll pretend to be loved ones in an emergency, superiors requesting financial transactions, or well-known celebrities endorsing specific investments. The goal is to convince the victim to act, and AI voice cloning apps are easy to find and use. They need only a few seconds of audio to clone someone's voice, and some of them may not have enough guardrails in place to prevent abuse. You should take the threat of voice cloning seriously and second-guess any suspicious voice call you receive.

A Consumer Reports investigation from March 2025 highlighted the risks associated with voice cloning apps. The non-profit looked at six apps that offer AI voice cloning capabilities and found that four of them don't do enough to ensure that the person doing the cloning has the speaker's consent. The report names ElevenLabs, Speechify, PlayAI, and Lovo as the four apps that had no technical mechanism to confirm speaker consent at the time of the report. Descript and Resemble AI are the two apps that made abuse harder, but Consumer Reports still found ways to bypass their protections.

To test these apps, Consumer Reports tried to create voice clones from publicly available audio, a scenario a scammer might employ. A malicious actor targeting someone may look for voice samples on that person's social media, then feed the audio to a voice cloning service to generate the desired recordings. Some of the companies Consumer Reports studied said that they have protections in place, like watermarking recordings or maintaining a database of deepfakes, but those protections may be insufficient.

How do scammers use voice cloning tools?

Consumer Reports detailed a few specific attacks involving voice cloning. For example, the "grandparent scam" targets seniors. A scammer uses a voice cloning tool to call a senior and pose as a loved one, claiming to need money for an emergency and urging the victim to pay immediately via cash, a wire transfer, or gift cards. Another scam involves cloning the voices of celebrities to generate content that endorses a particular type of investment. Other impostors go for corporate financial scams, targeting employees at large companies. In 2024, a worker at a Hong Kong company sent $25 million to impersonators who used a voice cloning tool to pose as the firm's CFO.

While these attacks share the same motive of obtaining money from the victim, scammers may have other nefarious goals in mind. Consumer Reports cites reports of deepfake schemes from 2023 targeting Taiwan's presidential election. Similarly, a 2024 robocall in New Hampshire used a voice clone of former President Joe Biden to urge people not to vote.

It's not just Consumer Reports warning against AI voice cloning tools. OpenAI CEO Sam Altman addressed fraud and AI in an interview at the Federal Reserve last July. Specifically, he mentioned how some institutions still use a person's voice to authenticate them. "A thing that terrifies me is apparently there are still some financial institutions that will accept a voice print as authentication for you to move a lot of money or do something else," he said, adding that "AI has fully defeated that. [...] I am very nervous that we have a significant impending fraud crisis because of this."

What you can do to protect yourself

A key way to protect yourself against scams is to stay informed about what's possible with AI voice tools and what attackers do in the real world. Armed with that knowledge, you'll know you have to verify the caller. If someone calls claiming to be your child and asking for money, you can hang up and call your child's actual number to verify the story. If a superior asks for an urgent wire transfer, you can delay the action until you've been able to confirm that the call was genuine. As for celebrities asking you to invest in a company or crypto coin, or politicians urging you not to vote, you can search the web for additional information. What you can't do is rely on your ears to detect cloned voices. Consumer Reports cited a Berkeley study in which people identified fake voices only 60% of the time, and the AI software is improving continuously while attacks grow more sophisticated.

OpenAI's ChatGPT supports voice chats, but the company doesn't offer AI voice cloning software to users, something Consumer Reports highlights. Altman also cited some of the scams the non-profit mentioned, including ransom attacks "where people have the voice of your kid or your parent, and they make this urgent call." Altman predicted that, in the future, some scammers may switch to convincing AI-generated FaceTime calls that may be "indistinguishable from reality."

A Time report last July mentioned an FBI warning about "vishing" campaigns, saying that "malicious actors are more frequently exploiting AI-generated audio to impersonate well-known, public figures or personal relations to increase the believability of their schemes." Consumer Reports also noted that attacks are becoming increasingly multimodal, involving email, text, and images/videos in addition to cloning someone's voice.
