Google Just Exposed How Hackers Are Turning AI Into A Super Weapon
Artificial intelligence (AI) programs can automate tedious tasks, speed up research, and streamline communication, but they are only as good as the prompts they're given and the intent behind them. While you're busy refining prompts to make ChatGPT more efficient, hackers have been using AI to steal passwords and bank information.
Recently, the Google Threat Intelligence Group (GTIG) published a blog post detailing how malicious actors are abusing various AI programs, including Google's own Gemini, to attack individuals and either make off with crucial information or trick victims into handing it over. According to GTIG, AI is being used for intellectual property theft, surveillance, and the creation of new kinds of malware, leading the group to compile a list of "threat actors" who tried to use Gemini for malicious ends. Google stepped in and shut these individuals down, but it still wanted to document what they were doing, since their techniques could well shape the future of cybersecurity.
AI lets hackers quickly find targets and tweak approaches
One of AI's scariest powers is its ability to quickly scan the internet for whatever information a prompt specifies. If an AI can speed up the search for parts to build a gaming PC, it can just as easily build lists of victims for hackers to use in future attacks. According to GTIG, AI can rapidly profile potential targets and tell bad actors everything from their industries to their roles and where they sit in an organization. This gives hackers a plan of attack faster than old-fashioned reconnaissance would, and it can suggest avenues they wouldn't normally consider. One example was the hacker "UNC6418," who used Gemini to dig up sensitive information on members of Ukraine's defense sector ahead of a phishing attempt.
Another way AI can be misused is to make scam messages sound more convincing. After an AI produces a list of potential targets, malicious actors can use the same programs to generate content for phishing scams. Normally, you can distinguish a phishing attempt from a legitimate email by telltale signs such as poor grammar and misspellings, but AI can craft phishing emails that look far more legitimate. Even worse, according to GTIG, AI programs can mimic human communication while conversing with targets, building trust with would-be victims.
The hacker "UNC2970" (who has been linked to the North Korean government) used AI to target cybersecurity experts while posing as a recruiter. One phishing kit GTIG uncovered was COINBAIT, which was designed to phish cryptocurrency investors for their credentials. According to the organization, COINBAIT was built on the public Lovable AI app. Imagine what could have happened had the hackers used a more powerful API.
AI is being used to code malware
AI coding tools are designed to make programming easier, and while it isn't normally possible to make them produce malicious code, hackers have discovered a loophole. According to GTIG, users can trick AI software by exploiting "agentic AI capabilities," autonomous AI systems that can carry out complex, multi-step tasks with minimal human interaction. Take the threat actor "UNC795," who was caught making Gemini produce "an AI-integrated code auditing capability." It's not clear what their end goal was, but the attempt points to an interest in more autonomous, multi-step tooling. And that is only one example of the many coders trying to use Gemini for evil.
Many of the examples in GTIG's report are framed as proofs of concept. None has resulted in a significant cyberattack, but they have still produced what the organization calls "novel capabilities in malware families."
Just look at HONESTCUE as an example. This malware sample, uncovered by GTIG, acted as a backdoor trojan built around a "multi-layered approach to obfuscation," and its secret sauce was how it functioned. Once downloaded, HONESTCUE would call Gemini to receive malicious code and fetch a second piece of malware, all without leaving any trace of activity or payloads on the hard drive. While HONESTCUE hasn't been linked to any acts of cyberterrorism, GTIG's analysis suggests the program was developed by amateur coders, which raises the question of what an expert hacker could do with the Gemini API.