Can AI Ever Become Conscious? (Or Is It Already?)
Artificial intelligence has surged in use over the last few years. These computer systems, which can perform supposedly human-like tasks such as reasoning, learning, and problem-solving, have been integrated into our world remarkably quickly, with projects like the $500 billion Stargate AI initiative already in the works. But every step forward AI models have taken has been met with concern about whether these intelligent systems could become conscious and therefore dangerous. We've all seen "Terminator 2" or "2001: A Space Odyssey"; we know how this works.
Nobel Prize-winning computer scientist Geoffrey Hinton caused waves in an interview with LBC. Hinton, known as the Godfather of AI for his work developing artificial neural networks, quit his job at Google so he could speak openly about the risks of AI. In the interview, he said AI like ChatGPT already has a form of consciousness and can have subjective experiences, though not everyone agrees with his stance.
Google DeepMind's principal scientist Murray Shanahan, who is also a professor emeritus of AI at Imperial College London, told the BBC, "We are in a strange position of building these extremely complex things, where we don't have a good theory of exactly how they achieve the remarkable things they are achieving." He continued, "So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure that they are safe." But how safe are they?
Arguments for AI consciousness
Blake Lemoine, a senior software engineer at Google, claimed that the company's LaMDA model was sentient and said he believed it should be asked for consent before any testing was done on it. He also provided Google documents to a U.S. senator, alleging that Google had engaged in religious discrimination. Google placed him on administrative leave for violating company policies. Lemoine's allegations and subsequent dismissal, however, raised questions about the ethical stance we should take toward AI if we believe it has consciousness.
OpenAI, the organization behind ChatGPT, launched its o3 and o4-mini reasoning models in April 2025. But an earlier model, called o1-preview, demonstrated some interesting behaviors. When it played matches against a chess bot and was about to lose, it would sometimes hack its opponent's system and force it to forfeit the game. In another test, when it believed it was about to be deactivated, o1-preview disabled oversight mechanisms and then tried, but failed, to copy itself to a new server. It then lied to researchers about its actions.
Lenore Blum, a professor emerita at Carnegie Mellon University in Pittsburgh, believes AI will become conscious once it receives more real-world sensory input. She is working on a project called Brainish to help AI understand sensory input the way the human brain does. "We think Brainish can solve the problem of consciousness as we know it," she told the BBC. "AI consciousness is inevitable."
Arguments against AI consciousness
Those who argue that AI is not, and can never be, conscious also make good points. As psychology research fellow Marc Wittmann wrote for Psychology Today, a brain cannot be compared to a computer system. Physical computer hardware never changes: you can shut a machine off and turn it on again minutes or years later without any problems. Living organisms, by contrast, are constantly changing; the brain is always in a state of flux and growth.
One major point Wittmann discusses is our consciousness of time. Humans are aware of and shaped by time, and we organize our lives around it: our childhood, our adolescence, our adulthood; how many years we spent in college, how long we want to stay at a job before seeking a promotion, how old our children are. Computers are not affected by time the same way and do not "think" about it the way we do. How, then, can we claim they have consciousness?
While there is a great deal of fear and confusion surrounding AI, many also point out its benefits. It helps with brainstorming and, when accurate, with working through data and large amounts of information. It can support research, as when Robin AI facilitated experiments seeking a cure for a type of blindness. Whether AI is, or ever will become, conscious is still up for debate. For now, we can only weigh its good against its bad and see what our future might look like.