AI Impersonating Humans Terrifies Sam Altman, But OpenAI Doesn't Want More Regulation

OpenAI CEO Sam Altman is in Washington, D.C., this week to lobby the U.S. government for friendly regulation around the development and use of AI. Reports before Altman's visit claimed the CEO would make it clear to politicians that ChatGPT is already an incredibly important tool for users in the U.S. and around the world. Altman would acknowledge AI's threat to jobs but focus on the long-term benefits of democratizing AI, taking a middle-of-the-road approach.

OpenAI's agenda is obvious. The company needs support from the government to create even better versions of ChatGPT on the road to superintelligence. That means anything from incentives to build AI infrastructure in the U.S. to laxer AI regulation that would support faster progress in the field. The latter point actually contradicts Altman's remarks at the Federal Reserve on Tuesday.

The CEO admitted he's terrified of AI impersonating humans, warning of an AI "fraud crisis." He's also worried about malicious actors developing and misusing AI superintelligence before the world can build systems to protect itself.

What's an AI fraud crisis?

"A thing that terrifies me is apparently there are still some financial institutions that will accept a voice print as authentication for you to move a lot of money or do something else — you say a challenge phrase, and they just do it," Altman said, per CNN. "That is a crazy thing to still be doing ... AI has fully defeated most of the ways that people authenticate currently, other than passwords."

While ChatGPT offers an Advanced Voice Mode that lets you talk to the AI via voice, it can't be used to replicate a person's voice. However, other AI services might support such functionality, which could enable abuse. There are also open-source AI models that malicious actors can run on their own devices and potentially coax into cloning the voices of real people. Such AI-based scams already exist, with attackers using them to extract information and money from unsuspecting victims.

As Altman warned, it's not just our voices that bad actors might be able to clone with AI tools. "I am very nervous that we have an impending, significant, impending fraud crisis," Altman said. "Right now, it's a voice call; soon it's going to be a video or FaceTime that's indistinguishable from reality."

Again, ChatGPT doesn't offer such functionality, nor do other chatbots. But AI technology to create lifelike images and videos already exists. Some tools even let people create videos from a still image — Google's Veo 3 in Gemini is one example. But these products have built-in protections to prevent scenarios like the ones Altman is describing.

What does OpenAI want?

It's worth appreciating that Altman is vocal about the dangers of AI rather than painting a rosy picture in which AI is the solution to everything. AI can be misused for nefarious purposes, and Altman repeated that the prospect of malicious actors abusing AI superintelligence before the world can protect itself is one thing that keeps him up at night.

These concerns echo Altman's remarks at the end of OpenAI's ChatGPT Agent livestream last week. He explained that the new AI agent opens the door to abuse. While OpenAI has built protections into the feature to prevent bad actors from tricking the AI into revealing personal information about the user, there's always a chance that sophisticated attacks might break through in the future.

Despite those nightmare-inducing fears, OpenAI isn't advocating for stronger oversight from the government. OpenAI's proposals for the U.S. AI Action Plan call for laxer regulation so U.S. AI firms can compete with foreign rivals, especially Chinese companies. "We propose a holistic approach that enables voluntary partnership between the federal government and the private sector, and neutralizes potential PRC benefit from American AI companies having to comply with overly burdensome state laws," the company wrote.

For example, OpenAI wants the U.S. government to allow AI firms to train frontier models on copyrighted material while securing the rights of content creators. OpenAI has also called for tighter export regulation of AI tech, support for improved infrastructure, and the U.S. government actively using AI products.

How to protect yourself

Stricter AI laws might prevent some of the abuse Altman mentioned during his interview at the Federal Reserve. With better AI laws in place to protect users, the world might not be on the precipice of a "fraud crisis." On the other hand, laxer AI development laws in other jurisdictions would still enable the creation of tools bad actors can use to impersonate people.

Internet users should be aware of the threats AI poses, whether they use ChatGPT or other chatbots. They should avoid sending personal data or money to third parties without verifying the authenticity of their claims. Don't send money via PayPal or Venmo to someone claiming over the phone to be a friend or family member without first confirming the caller is who they say they are; scammers can use AI to spoof a person's voice or appearance. On that note, PayPal is already using AI to prevent scams via PayPal and Venmo payments.

Also, do not give AI chatbots personal information, especially more advanced tools like ChatGPT Agent. Always stay involved in the process of making purchases online, even if a chatbot adds the products to your basket. Finally, keep yourself informed. In the future, we might have tools to prove to financial and health institutions that you're human. As CNN points out, Altman is backing one such tool: the Orb, which is meant to offer proof of humanity in a world of AI.
