California's Latest AI Law Will Require AI Chatbots To Confirm They Aren't Human

In what is being called a "first-in-the-nation" safeguard for AI, California Governor Gavin Newsom has signed a new AI law that requires AI chatbots to explicitly inform users that they are "artificially generated and not human." The measure, Senate Bill 243, should help cut down on how often people are misled about the true nature of the "companion" AI chatbots they interact with.

For starters, the capabilities of AI chatbots continue to advance at a rapid pace as the models running them improve, making it harder for some users to tell AI from humans. Under the new law, though, the developers behind these chatbots will need to provide new safeguards. More specifically, the bill states that "if a reasonable person interacting with a companion chatbot would be misled to believe that the person is interacting with a human," then the chatbot developer must provide a clear notification that the chatbot is not human.

Now, it is important to note that this requirement does not apply to customer service chatbots or voice assistants where the AI does not maintain a clear and consistent relationship with the user. It's clear that AI chatbots such as ChatGPT, Gemini, and Claude are the primary targets.

Why Governor Newsom pushed this bill forward

Of course, the arrival of Senate Bill 243 is not unexpected. Over the past several months, we've seen a bizarre trend with AI chatbots as more people have turned to them for everything from research to friendship to romance. AI companies like OpenAI have even found themselves caught up in lawsuits, such as one filed after a teen died by suicide, allegedly after consulting with ChatGPT. Those lawsuits led OpenAI to add its own safety guardrails to ChatGPT, as well as to release new parental controls and other features for monitoring ChatGPT usage.

But those safeguards don't fully solve the problem that so many other AI companion chatbots have created. With an increasing number of "AI girlfriend" apps appearing online, a clear-cut way for developers to make sure users know what they're getting into should help keep people from falling prey to dangerous or misleading AI responses.