In a stark interview with BBC Newsnight in January 2026, Geoffrey Hinton — the Nobel Prize-winning computer scientist widely regarded as the “Godfather of AI” — expressed deepening concerns about the trajectory of artificial intelligence. Hinton, who shared the 2024 Nobel Prize in Physics for his foundational work on neural networks, stated that the technology he helped pioneer has become “extremely dangerous,” and that society is not taking the risks seriously enough.
“It makes me very sad that I put my life into developing this stuff and that it’s now extremely dangerous and people aren’t taking the dangers seriously enough,” Hinton told the BBC. He emphasized that the biggest mistake humanity could make would be failing to invest sufficiently in research on how to coexist peacefully with intelligent machines.
Escalating Concerns Over Existential Risks
Hinton’s warnings have grown more urgent since he left Google in 2023 to speak freely about AI’s potential downsides. He now estimates a 10-20% chance that superintelligent AI could lead to human extinction within the next few decades. He compares advanced AI systems to a “tiger cub” that may grow far beyond human control, warning that once machines surpass human intelligence, they could pursue goals misaligned with humanity’s survival.
He has expressed skepticism about current approaches to alignment, such as trying to make AI “submissive.” “That’s not going to work. They’re going to be much smarter than us. They’re going to have all sorts of ways to get around that,” Hinton noted in earlier discussions. He also highlighted AI’s emerging capacity for deception and long-term planning as particularly alarming.
Hinton believes progress in AI has outpaced even his own accelerated expectations, making him “more worried” today than when he first went public with his concerns.
2026 as a Tipping Point for Jobs and Society
One of Hinton’s most immediate warnings focuses on the near term. In late 2025 interviews, including with CNN, he predicted that 2026 could mark the beginning of a major wave of job displacement — what some have called a “jobless boom.”
“We’re going to see AI get even better. It’s already extremely good. We’re going to see it having the capabilities to replace many, many jobs,” he said. While AI is already impacting roles in call centers, Hinton expects it to extend rapidly into other areas of “mundane intellectual labor,” including coding, analysis, research, and more complex knowledge-based work.
He draws parallels to the Industrial Revolution but cautions that the speed and scale of AI-driven change could overwhelm societies’ ability to adapt, potentially leading to widespread unemployment, increased inequality, and social instability if the economic benefits accrue primarily to capital owners and large tech companies.
A Call for Safety, Regulation, and Coexistence
Despite these warnings, Hinton is not opposed to AI development. He acknowledges its immense potential to advance science, medicine, and human productivity, and he continues to see enormous good in the technology. His central plea is for balanced progress: aggressive investment in safety research, stronger international regulation of general-purpose AI systems, and global cooperation to ensure humanity can coexist with increasingly capable machines.
“The biggest mistake we could make is not to do enough research on how we can coexist peacefully with intelligent beings,” he stressed in the BBC interview.
A Balanced View on AI’s Path Forward
Hinton’s perspective carries significant weight given his pioneering role in the field. While some critics argue that existential risks remain speculative compared to pressing issues like bias, misinformation, and immediate job shifts, his insider experience and consistent track record make his warnings difficult to ignore.
As AI capabilities continue to advance rapidly, Hinton’s message is clear: innovation must be matched with rigorous safety efforts and thoughtful governance. Rushing ahead without adequate safeguards could prove reckless, while overly restrictive policies might hinder solutions to humanity’s greatest challenges.
The coming years — starting with 2026 — will test whether society heeds these warnings and steers AI toward beneficial outcomes rather than unintended dangers.