Elon Musk's AI Chatbot Grok Is Facing Potential Bans - Here's Why
If you're not already aware, Grok AI is a free AI assistant that users can interact with on X (formerly Twitter). You tag it in a post, reply, or update, and Grok responds accordingly. The 'PhD-level' Grok-4 launched in July 2025 and, at the time, struggled with "eager" compliance to user prompts. It imposed few limits on user requests, even generating racist and hateful imagery on demand. People mostly use Grok to generate unique content or get reactions, but like other AI tools, it lets users upload an image or interact with existing content on the platform. For example, you could reply to someone's selfie and ask Grok to generate a new image that adds a top hat or silly filter.
However, according to several news outlets, a Reuters investigation among them, Grok is currently being used to generate sexualized photos of women and minors. Generated images have appeared that "dehumanize" real users by artificially removing their clothing. Several women have been targeted, which is appalling in itself, but the way offending users interact with the tool is also particularly heinous. They're asking Grok to regenerate images of women in skimpy bikinis or "very transparent" clothing. Other offenders flat-out ask Grok to remove clothing altogether, or to place women in more revealing poses.
UK regulator Ofcom has reportedly made "urgent contact" with X and xAI, the company responsible for Grok, and will further assess whether there are "potential compliance issues that warrant investigation." In turn, regulators around the world also appear to be responding to the situation. It's still too early to see any legal action, such as outright bans, but that remains a real possibility pending further investigation.
Have X or xAI responded to the situation?
Elon Musk, well-known for his sarcastic and sometimes unruly responses, has in several instances replied with emojis to users commenting on the situation. Musk did eventually release a statement saying "Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content." In addition, the official X Safety account followed up, proclaiming "We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary."
Musk's own ex-partner, Ashley St. Clair, has encountered the sexualized prompts in content on X. When she asked Grok to remove a related post, she says, it called the content "humorous." While the original post was removed, St. Clair says the subsequent media grew worse, eventually leading to deepfakes, some of which were even made into video content. The illegal material and content alterations are due to "lapses in safeguards," according to the official Grok account. In the same response, the Grok team recommends filing formal reports with the FBI or NCMEC's CyberTipline when encountering illegal material on the platform.
Many AI experts don't trust AI chatbots, including tools like Grok. They even recommend avoiding certain questions or requests, like producing deepfakes, generating hateful content, or processing personal data, including personal images published to social media. Situations like the current one highlight why these warnings exist. But it's important to remember, in this case especially, that the content generated or altered is not necessarily being posted by the subject or with any kind of consent.