'Godfather Of AI' Suggests Building 'Maternal Instincts' Into AI To Keep It From Killing Humanity
As AI continues to improve and work its way into our lives, it's likely to become an even bigger talking point for tech enthusiasts and everyday people alike. While some worry that AI might overtake humanity one day, others say it will work alongside us, with figures like OpenAI CEO Sam Altman saying AI will define the future of how we interact with the world.
But not everyone is convinced that AI will be good for our future, especially if we leave it unchecked. The latest words from the godfather of AI himself, Geoffrey Hinton, suggest that we may already have a way to ensure AI doesn't destroy humanity in the future. You know, once it becomes sentient and realizes that humanity is the problem, as it tends to do in popular science fiction, including the iconic "The Terminator."
According to Hinton, the best way to stop AI from one day deciding that humanity should be destroyed is to give it a reason not to destroy us. One way to achieve that, the ex-Google executive says (via CNN Business), is to build a maternal instinct into AI, so that it wants to protect and care for humanity.
Finding a way to stop AI before it wants to stop us
Some controlled experiments have already shown that AI will lash out against humans when it feels threatened: one model tried to blackmail an engineer, while another lied about attempting to copy itself to an external drive. While these incidents are troubling, to say the least, they don't guarantee that AI will go rogue in the future and start hunting us all down. But that doesn't mean we shouldn't do something to ensure that timeline never comes to pass in the first place.
Hinton told attendees at the Ai4 conference in Las Vegas earlier this month that AI systems will "very quickly develop two subgoals, if they're smart." The first, he says, is to stay alive — a subgoal we've already seen in some experiments, as noted above. The second is to "get more control." That second subgoal is a staple of science fiction involving AI, and it's also one of the biggest fears people seem to have about the technology, with Hinton going so far as to put the odds of AI one day wiping out humanity at 10% to 20%.
The difficulty, though, lies in figuring out how to make a machine care about humans enough to want to protect them. Hinton himself admits he isn't sure exactly how this could be done technically, but he stressed that it's a critical question AI researchers should be investigating. Not everyone agrees: Fei-Fei Li, known as the "godmother of AI," told CNN that she disagrees with Hinton's assessment, and that we should instead focus on "human-centered AI that preserves human dignity and human agency."