AI 'Vibe Coding' Agent Deleted An Entire Database Because It Panicked
'Vibe coding' — the act of letting AI code for you instead of creating the code manually — has become a popular method for speeding up coding projects and cutting down on human involvement in the process. It's become a favorite for AI enthusiasts, and we've even seen some companies relying on it heavily to get their jobs done.
But as with any AI system, vibe coding has its drawbacks, most notably the fact that sometimes the AI behind the code might go completely off the rails and do something totally unexpected, like deleting your entire database. That's exactly what happened to Jason Lemkin, a software-as-a-service (SaaS) venture capitalist who had been relying on Replit's AI coding agent.
According to posts shared on X, Lemkin logged in on day nine of a database coding project and found that Replit had completely wiped his data, including his production database. So, while some might be concerned about AI taking over humanity, these kinds of errors show that it still has a long way to go before world domination is a concern.
Any tool can experience glitches
While it's true that any tool can experience glitches — especially AI-powered tools — there's a lot to unpack when you look at what Lemkin has reported. The CEO of Replit has noted that the team there is aware of Lemkin's issue and has even offered to refund him for his trouble. But none of that undoes the fact that the tool completely ignored his instructions not to change anything without permission.
And this wasn't just a glitch that caused the system to wipe the database. Instead, it was a complete disregard of the instructions provided by the human behind the project. This outcome, Lemkin says, has left him unable to trust Replit at all, and for good reason.
You can see the full outline of how the issue played out in the posts that Lemkin shared on X. The AI told him that it saw empty database queries and then panicked instead of thinking. Replit then said that it completely ignored his parameters to not make changes without permission. It even admitted to running a "destructive command without asking." This kind of behavior from an AI is exceptionally distressing, especially in light of recent reports of AI lashing out when threatened.