Many AI Experts Don't Trust AI Chatbots - Here's Why
Artificial Intelligence (AI) took the world by storm when OpenAI released its chatbot, ChatGPT, to the public in November 2022. Since then, multiple companies have launched their own chatbots to claim a piece of the growing market. Even with all that competition, ChatGPT still handles over 2.5 billion requests per day, which should give you an idea of just how much people use AI these days.
According to OpenAI, people mainly turn to ChatGPT for help with everyday tasks, and "three-quarters of conversations focus on practical guidance, seeking information, and writing." Despite the meteoric rise of AI use in both personal and professional settings, many of the people building these systems don't trust them.
That debate over AI usage is unfolding as the rest of us grow increasingly reliant on AI chatbots and trust them with critical tasks, such as writing police reports. It may come as a surprise, then, that many AI experts are skeptical of the technology and, according to a recent article by The Guardian, some even advise their friends and family to avoid it altogether.
Why are the experts skeptical about AI chatbots?
If there's a group of people who understand AI chatbots better than most, it's the experts building the systems: the people who push the boundaries with every new model released to market and who assess the quality of its output so it can be improved. The Guardian spoke to various experts in the AI field who expressed skepticism about the technology. One recurring theme was how companies prioritize rapid turnaround for AI raters without giving these workers the training and resources they need to do the job well.
"We're expected to help make the model better, yet we're often given vague or incomplete instructions, minimal training and unrealistic time limits to complete tasks," Brook Hansen told The Guardian. One worker also revealed how some colleagues tasked with rating sensitive medical content were in possession of only basic knowledge about the topic. Criticism is not limited to the rating side. One Google AI rater revealed to The Guardian how he became skeptical of the broader technology, and even advises friends and family to avoid chatbots after seeing just how bad the data used to train models really is.
Another reason some AI experts don't trust chatbots is their propensity for hallucination, in which a model confidently produces fabricated or erroneous output. In a YouTube video by OpenAI, CEO Sam Altman called it "interesting" that people place such a high degree of trust in ChatGPT, a trust he considers misplaced given that AI hallucinates and shouldn't be relied on that heavily. Andrej Karpathy, a former research scientist and founding member at OpenAI and former director of Tesla's AI division, also discussed some of AI's limitations in a post on X, cautioning against using AI in production without human oversight.
Should you trust AI chatbots?
The short answer is no. If experts who work in the field are skeptical of AI, you should take AI chatbot outputs with a pinch of salt. Earlier this year, we saw multiple cases of Google AI Overviews making blatant mistakes, a clear sign that you shouldn't take AI outputs at face value. Because of such mistakes, there are things you should never ask ChatGPT or any other AI chatbot, as you're likely to get misleading information in return.
For example, in an interview with MIT Technology Review, Meredith Broussard, a data scientist and NYU professor, advised against using AI for questions about social issues, as such questions are nuanced by nature. Additionally, a recent study by NewsGuard, an organization that tracks the spread of misinformation online, found that the rate at which AI chatbots from major companies repeat false information nearly doubled, from 18% in August 2024 to 35% in August 2025.
The report found that ChatGPT repeated false claims 40% of the time. Statistics like these are why even Google's CEO, Sundar Pichai, cautions against blindly trusting AI chatbots. Any information a chatbot gives you should be cross-checked against reliable sources before you rely on it.