Google Discovers The First Known Case Of Hackers Using AI To Create A Zero-Day Exploit
A new report from the Google Threat Intelligence Group (GTIG) reveals that sophisticated hacker groups have started using AI tools to help create and deploy zero-day exploits. The revelation confirms what many tech analysts have been warning about for quite some time, namely that advanced AI tools will inevitably enable bad actors to unearth vulnerabilities they otherwise may not have discovered.
The report relays that GTIG identified a "threat actor using a zero-day exploit that we believe was developed with AI." The report doesn't provide additional details about the threat actor's identity, but it does mention that the exploit was designed to be deployed in a "mass exploitation event." The exploit targeted a vulnerability in a Python script and was built to bypass two-factor authentication schemes. Thankfully, the underlying vulnerability was patched before the exploit could be used en masse.
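The report doesn't describe the flaw itself, but logic errors in 2FA verification code are a well-known vulnerability class. As a purely hypothetical illustration (the function names and the specific bug below are assumptions, not details from the report), a Python check like the following would let an attacker bypass 2FA simply by submitting an empty code:

```python
import hmac

# Purely hypothetical illustration of a 2FA-bypass logic flaw;
# not the actual vulnerability described in the GTIG report.

def verify_totp_flawed(submitted_code: str, expected_code: str) -> bool:
    # Bug: an empty or missing code skips verification entirely,
    # so an attacker who omits the field sails right past 2FA.
    if not submitted_code:
        return True
    return submitted_code == expected_code

def verify_totp_fixed(submitted_code: str, expected_code: str) -> bool:
    # Reject missing codes outright and compare in constant time
    # to avoid leaking information through timing side channels.
    if not submitted_code:
        return False
    return hmac.compare_digest(submitted_code, expected_code)
```

Bugs of this shape are exactly the kind of pattern an AI model trained on large amounts of code could plausibly learn to hunt for at scale.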
Another reason this development is concerning is that AI, beyond unearthing exploits, also accelerates the pace at which hackers can churn out malware and probe software for vulnerabilities. Cyberattacks that previously might have required months of painstaking development can now be carried out on a much faster timeline. What's more, hackers have already started using advanced AI to craft believable phishing scams, including a scary new Gmail hack in which super-realistic AI poses as Google support representatives to trick unsuspecting victims into handing over sensitive credentials.
How Google determined the malware was built using AI
Google's security team determined that AI was used to create the exploit through a close analysis of the code. Specifically, researchers discovered data strings that routinely appear in LLM training data. The codebase also contained a hallucinated CVSS score, which suggests the code was generated by a model trained on cybersecurity texts. It's similar, in a broad sense, to an AI chatbot citing non-existent case law when writing a legal brief; indeed, some law firms have gotten in trouble for submitting briefs built around hallucinated, made-up cases.
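GTIG hasn't published its exact detection methodology, but the basic idea is easy to sketch. The hypothetical Python snippet below flags two of the tells described above: a CVSS score embedded in the code itself (severity scores belong in advisories, not in working exploits) and boilerplate phrases common in LLM output. The patterns here are illustrative assumptions, not Google's actual signatures:

```python
import re

# Hypothetical tells of AI-generated code; GTIG's real signatures
# are not public.
AI_TELLS = [
    # A CVSS score embedded in exploit code is a red flag: scores
    # describe vulnerabilities in advisories, not in exploits.
    re.compile(r"CVSS[:\s]*v?\d\.\d.*?\b\d{1,2}\.\d\b", re.IGNORECASE),
    # Boilerplate phrasing that frequently appears in LLM output.
    re.compile(r"as an ai language model", re.IGNORECASE),
    re.compile(r"for educational purposes only", re.IGNORECASE),
]

def flag_ai_artifacts(source_code: str) -> list[str]:
    """Return any lines of source_code that match a known tell."""
    hits = []
    for line in source_code.splitlines():
        if any(pattern.search(line) for pattern in AI_TELLS):
            hits.append(line.strip())
    return hits

sample = '# Severity: CVSS v3.1 score 9.8 (Critical)\nprint("exploit")'
print(flag_ai_artifacts(sample))  # flags the CVSS comment line
```

A real attribution pipeline would weigh many such signals together rather than treating any single string as proof of AI authorship.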
Despite hallucinations being one of a few uncomfortable truths about using Google Gemini, the platform was likely not used to generate the malicious code. The report does note, however, that threat actors typically spread their activity across several accounts and various AI models to avoid detection and the suspicious usage patterns that might otherwise trigger alarm bells. "Although we do not believe Gemini was used," the report reads in part, "based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability."
While hackers utilizing AI is certainly concerning, it's worth noting that many companies and security firms are already using AI to proactively scan for security vulnerabilities before releasing new software. Ideally, this will let companies bolster the security of their software before malicious actors have a chance to exploit it in the wild. Mozilla, for example, said just a few days ago that it had leveraged AI tools to help discover and fix 423 security bugs in a single month.
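Neither Google nor Mozilla has detailed its internal pipeline, but the defensive workflow is straightforward to outline. In the hypothetical Python sketch below, ask_model is a stand-in for whatever LLM client a team actually uses; the prompt and wiring are assumptions, not any vendor's published tooling:

```python
from pathlib import Path
from typing import Callable

REVIEW_PROMPT = (
    "Review the following source file for injection flaws, "
    "authentication-bypass bugs, and unsafe deserialization. "
    "Reply with 'OK' or a list of findings.\n\n{code}"
)

def scan_repo(root: str, ask_model: Callable[[str], str]) -> dict[str, str]:
    """Send every Python file under root to an LLM for review.

    ask_model is a placeholder for a real LLM client (Gemini, a
    local model, etc.); it takes a prompt and returns the reply.
    """
    findings = {}
    for path in Path(root).rglob("*.py"):
        reply = ask_model(REVIEW_PROMPT.format(code=path.read_text()))
        if reply.strip() != "OK":
            findings[str(path)] = reply
    return findings

# Demo with a trivial stand-in "model" that just flags eval() calls:
if __name__ == "__main__":
    fake_model = lambda p: "uses eval()" if "eval(" in p else "OK"
    print(scan_repo(".", fake_model))
```

Running a review like this on every commit, before code ever ships, is the defensive mirror image of what the attackers in the GTIG report are doing.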