Hackers Used An Infected Calendar Invite To Hack Gemini And Take Control Of A Smart Home
Earlier this year, a group of security researchers used an infected Google Calendar invite to hijack Gemini and introduce real-world consequences to an AI attack. The researchers, who shared their work with Google earlier this year, used the calendar invite to pass off instructions to Gemini to turn on smart home products in an apartment in Tel Aviv.
The instructions were designed to be delivered at a later time, and when the researchers were ready to activate them, they asked Gemini to summarize their upcoming calendar events for the week, which activated the instructions. The researchers say they believe this might be the first time that a hacked generative AI system has had real-world, physical consequences.
According to a report by Wired, the three attacks against the smart home were part of a much larger 14-part research project designed to test indirect prompt-injection attacks against Gemini. The project is titled Invitation Is All You Need, and the results are free to read online.
Accelerating Google's security advances
A Google representative told Wired that the project and the subsequent research that the security researchers shared has helped accelerate Google's work on making prompt injection attacks like this harder to pull off. It has led directly to an uptick in Google's rollout for defenses against these kind of attacks.
That's important, because these kinds of attacks make the danger that comes along with AI clear, especially as it becomes more widespread. As AI agents continue to be released, indirect prompt injections will become a more common issue, so highlighting the problems that surround them as quickly as possible is going to be key to developing security measures to protect from them.
Over the past several years, we've seen some intriguing methods that researchers have employed in their attempts to break AI. From trying to make AI feel pain to using one AI to break another AI, researchers have been taking drastic steps to find out just how much AI can be exploited. Considering that some vocal individuals are increasingly concerned about the dangers AI poses to humanity, having a clearer picture of what can be done to exploit these systems is key for developing security measures that actually work.