Following Latest Lawsuit, OpenAI Is Bringing Parental Controls To ChatGPT

OpenAI continues to make headlines for a wide variety of reasons. One of the most notable, though, is tied to a recent lawsuit against the company which alleges that the online chatbot contributed to the suicide of a 16-year-old boy. The lawsuit, which has been brought against the company by the teen's parents, appears to be making waves at the company, as OpenAI has announced plans to approach what it deems "sensitive content" in a different way. Additionally, the company says that it will be implementing parental control systems within ChatGPT sometime in the next 120 days.

"Our work to make ChatGPT as helpful as possible is constant and ongoing," the company wrote in an announcement post today. "We've seen people turn to it in the most difficult of moments. That's why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input."

The post then continues by highlighting some of the ways OpenAI plans to address these concerns, noting that all of these plans are being put into effect within the next 120 days or so. No exact release dates were given, but it at least seems like the company is taking its CEO's comments about improving and providing a safer system seriously.

Growing concerns about AI safety and its uses

This isn't the first time that OpenAI and other AI companies have found themselves in the spotlight for this particular issue. Last year, Character.AI came under fire when a teen took his own life after becoming obsessed with his Character.AI companion.

Then, just last month, another parent penned a guest essay for The New York Times describing how her daughter confided in ChatGPT like a friend when she began having suicidal thoughts. Her daughter eventually acted on those thoughts, and while the chatbot didn't suggest taking her own life in that particular case, the essay showed how heavily some people are starting to lean on these "AI assistants," and how many are starting to see them as friends rather than just machines. While that case didn't involve a teenager like the most recent lawsuit, it still points to a troubling pattern of AI usage, one that isn't tied to a single age group, either.

These concerns are only going to grow as more people start to use AI on a regular basis. That's why it's important for OpenAI and other companies building AI systems to think ahead and build much-needed guardrails and safety systems directly into their AI models.

How OpenAI is addressing safety concerns

People use ChatGPT for solving math problems, getting help with coding, brainstorming projects, and even as a trusted confidant. But even Sam Altman, the CEO of OpenAI, has warned users not to be too open or personal, a sentiment echoed by mental health experts. That's because using ChatGPT as your therapist is dangerous: not only is there no legal privacy protection for what you share with ChatGPT, but AI also has a tendency to make false statements, or hallucinate. Despite those comments, the company is still taking steps to ensure it can approach these situations as safely as it can.

OpenAI says it will start implementing parental controls within the month, with features including the option to link a parent's account with their teen's account, as well as the ability to control how ChatGPT responds using "age-appropriate" behavior rules. Parents will also be able to disable chat history and ChatGPT's memory, and to receive notifications whenever the system detects signs of "acute distress." OpenAI says that this last feature will be heavily guided by experts to help "support trust between parents and teens."

Further, the company notes that it is working with a global network of physicians to improve how its models respond to inquiries related to mental health. OpenAI will continue adding members to this global network over time to ensure it has as much research and input as possible from experts to help direct the development of newer, better safety measures. One of those new measures is a smart router system, which launched with GPT-5 and directs certain queries to more advanced reasoning models when necessary.
