"AI Godfather" Warns Superintelligent Machines May Eliminate Humans Through Biowarfare

2025-06-17

Geoffrey Hinton, widely known as the "Godfather of AI," issued his most severe warning to date in a recent interview, arguing that artificial intelligence threatens not only jobs but the survival of humanity itself as the world races toward superintelligent machines.

Speaking on "The Diary of a CEO" podcast, Hinton painted a grim picture of the future, suggesting that AI might eventually deem humans obsolete.

"If it decides to get rid of us, there’s nothing we can do about it," Hinton said. "We are not used to thinking about something smarter than us. If you want to know what life is like when you're not the smartest thing around, just ask a chicken."

Hinton explained that the threats would emerge in two forms: one stemming from human misuse, such as cyberattacks, misinformation campaigns, and the creation of autonomous weapons; the other arising from fully autonomous AI systems that cannot be controlled.

"They can now build lethal autonomous weapons, and I think all major defense departments are busy creating them," he stated. "Even if they’re not smarter than humans, they are still highly dangerous and frightening."

In May 2023, Hinton, a pioneer in neural networks and a professor emeritus at the University of Toronto, left Google after more than a decade there so he could speak freely about the dangers of the technology.

Hinton's warnings come amid a surge in AI's military applications. Recent developments highlight the rapid integration of AI into defense operations, with the United States leading in funding and industry collaboration.

In its fiscal 2025 budget proposal to Congress, the U.S. Department of Defense requested $143 billion for research and development, with $1.8 billion specifically allocated to AI, partly to advance AI and autonomous weapons for the military. Software developer Palantir secured a $175 million contract to develop AI-driven targeting systems for the U.S. Army, and in March the Pentagon partnered with Scale AI to launch Thunderforge, an AI-powered battlefield simulation program.

Hinton compared the current era to the advent of nuclear weapons, except that AI is harder to control and useful across many more domains.

"The atomic bomb was only good for one thing, and how it worked was pretty obvious," he said. "But AI is useful for many, many things."

Hinton argued that the combination of profit motives and international competition makes it all but certain that AI development will not slow down. He pointed to engagement-driven algorithms as an example of the profit motive at work.

"The profit motive says: show them anything that makes them click, and the things that make them click are increasingly extreme, confirming their existing biases," he said. "So your biases are constantly being reinforced."

How might AI eliminate humanity? Hinton suggested that a superintelligent AI could design novel biological threats to wipe humans out.

"An obvious way would be to create a nasty virus—one that’s highly contagious, highly lethal, and very slow-acting—so everyone gets infected before they even realize it," he said. "If superintelligence wants to get rid of us, it might choose some biological means that don’t affect it."

Despite the bleak outlook, Hinton remains cautiously hopeful.

"We have no idea whether we can make them not want to take over or hurt us. It’s not clear that we can achieve that, so it may seem hopeless," Hinton said. "But I also think we might succeed, and if humanity goes extinct because we couldn’t be bothered to try, that would be kind of insane."