October 6, 2025

The Dawn of Unimaginable AI: Warnings from Ilya Sutskever and Eric Schmidt on the Impending Revolution

Ex-OpenAI Scientist Warns: “You Have No Idea What’s Coming”

In an era where artificial intelligence (AI) is no longer a futuristic dream but a tangible force reshaping society, two prominent figures in the tech world have issued stark warnings about what’s on the horizon. Ilya Sutskever, co-founder and former chief scientist of OpenAI, and Eric Schmidt, former CEO of Google, have both spoken out about the rapid evolution of AI, its potential to surpass human capabilities, and the profound risks and opportunities it presents. Drawing from Sutskever’s recent convocation speech at the University of Toronto on June 6, 2025, and Schmidt’s interviews throughout early 2025, this article delves deeply into their insights, exploring the key points of AI’s self-improvement, societal impacts, and the urgent need for preparation. Their messages underscore a pivotal moment in human history: AI is not just changing jobs or tools—it’s poised to redefine existence itself.

Ilya Sutskever’s Wake-Up Call: AI as the Greatest Challenge and Reward

Ilya Sutskever, often hailed as a pioneer in deep learning and a key architect behind OpenAI’s breakthroughs, delivered a thought-provoking address at the University of Toronto’s convocation ceremony. Receiving an honorary Doctor of Science degree, Sutskever reflected on his decade-long journey at the institution, where he studied under AI legend Geoffrey Hinton. But his speech quickly pivoted from personal anecdotes to a sobering analysis of AI’s trajectory, emphasizing its unprecedented nature and the emotional difficulty of grasping its implications.

The Unprecedented Nature of AI’s Evolution

Sutskever began by acknowledging the immediate disruptions AI is already causing. “From what I hear, the AI of today has already changed what it means to be a student by a pretty considerable degree,” he noted, pointing to how tools like large language models are altering education, work, and daily life. He highlighted the unpredictability: some jobs will feel the impact sooner, others later, but the changes are underway. Users on platforms like X (formerly Twitter) can already witness AI’s capabilities, prompting questions about which skills will remain valuable.

Yet, Sutskever stressed that current challenges pale in comparison to what’s coming. AI, he argued, is “really unprecedented and really extreme,” evolving in ways that will make the future vastly different from today. He described interacting with AI as a novelty—conversing with computers that understand and respond, even generating code or voice outputs. While deficient in many areas, these systems are “evocative” enough to foreshadow a tipping point.

Predictions vary—three years, five, or ten—but Sutskever is confident: AI will eventually perform all human tasks. “The day will come when AI will do all of our things that we can do. Not just some of them, but all of them. Anything which I can learn, anything which any one of you can learn, the AI could do as well.” His certainty stems from a simple analogy: “We have a brain, and the brain is a biological computer. So why can’t a digital computer, a digital brain, do the same things?”

This logic leads to profound questions: What happens when computers handle all jobs? Sutskever painted a picture of radical acceleration, where AI drives economic growth, research, and even further AI development, making progress “extremely fast.” He warned of an “extreme and radical future” that’s hard to imagine or internalize emotionally—even for him. “It’s very, very difficult to imagine. It’s very difficult to internalize and to really believe on an emotional level. Even I struggle with it. And yet, the logic seems to dictate that this very likely should happen.”

Risks, Benefits, and the Call to Action

The benefits are immense: AI could cure diseases, solve global problems, and usher in abundance. However, the risks are “monumental,” with trajectories becoming “extremely unpredictable and unimaginable” as AI self-improves beyond human control. Sutskever invoked a famous adage, “You may not take interest in politics, but politics will take interest in you,” and applied it to AI: “The same applies to AI many times over.” Ignoring AI isn’t an option; it will affect everyone profoundly.

To prepare, Sutskever advocated building intuition through direct engagement. “By simply using AI and looking at what the best AI of today can do, you get an intuition. And as AI continues to improve in one year, in two years, in three years, the intuition will become stronger.” No essays or explanations can replace personal experience. He condensed complex issues, like ensuring superintelligent AI aligns with human values and doesn’t deceive, into an urgent plea: pay attention, and generate the energy to solve the problems ahead.

In Sutskever’s view, AI poses “the greatest challenge of humanity ever,” but overcoming it brings “the greatest reward.” Whether liked or not, lives will be transformed. His advice echoes a mindset he shared earlier: Accept reality, avoid past regrets, and focus on the next best step—a philosophy crucial for navigating AI’s turbulence.

Eric Schmidt’s Vision: From AGI to Superintelligence in Years, Not Decades

Complementing Sutskever’s warnings, Eric Schmidt, a veteran of Silicon Valley and co-author of The Age of AI with Henry Kissinger, has been vocal in interviews about AI’s acceleration. In clips from early 2025, Schmidt outlined a timeline for artificial general intelligence (AGI) and artificial superintelligence (ASI), emphasizing recursive self-improvement and societal unpreparedness.

The Near-Term Disruptions: Programmers, Mathematicians, and Beyond

Schmidt predicted swift replacements in specialized fields. “We believe, as an industry, that in the next one year, the vast majority of programmers will be replaced by AI programmers,” he stated, adding that AI systems would soon reach the level of graduate mathematicians. This stems from AI’s word-prediction mechanics, optimized at unimaginable scale and applied to mathematics via proof assistants like Lean, or to programming through iterative testing.
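
The “word prediction” idea underlying these systems can be made concrete with a toy sketch. This is purely illustrative: real models learn next-token probabilities with neural networks trained on trillions of tokens, not the simple bigram counts used here.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a tiny corpus, then predict the most frequent continuation.
# Real LLMs do this with learned probabilities, not raw counts.
corpus = "the model predicts the next word given the previous word".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # record that `nxt` followed `prev`

def predict_next(word):
    """Return the most frequent continuation seen after `word`, or None."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("word"))  # → "given" (the only word seen after "word")
```

Scaled up by many orders of magnitude, with learned representations instead of counts, this same predict-the-next-token objective is what produces the coding and reasoning abilities Schmidt describes.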

He described a “whole new world” where programming languages matter less than outcomes. Within two years, AI would master reasoning, programming, and math, the foundations of the digital world. Research at companies like OpenAI and Anthropic suggests that 10–20% of code is now AI-generated, signaling “recursive self-improvement.”

In three to five years, Schmidt foresees AGI: systems as smart as top humans across domains like math, physics, art, and politics. He dubbed this the “San Francisco consensus,” where scaling leads to superintelligence—AI smarter than all humans combined—within six years. “What happens when every single one of us has the equivalent of the smartest human on every problem in our pocket?”

Agents, Automation, and Societal Shifts

Schmidt highlighted emerging technologies: infinite context windows for step-by-step planning (e.g., building a house), agents with memory and the ability to act, and text-to-code for automating tasks. He illustrated this with a humorous example of buying and building a house, from finding land to suing contractors, noting that the same pattern applies to business, government, and academic processes. He then raised, and dismissed, the obvious fear: “It isn’t just the programmers that are going to be out of work. We’re all going to be out of work? No, that’s not a consequence.”
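
The agent pattern Schmidt describes reduces to a simple loop: hold memory, pick the next action toward a goal, act, and record the result. The sketch below is a hypothetical skeleton of that loop; the task list and the planning step are stand-ins for what a real system would delegate to an LLM and external tools.

```python
# Minimal sketch of an agent loop: memory + planning + action.
# `goal` and `steps` are hypothetical; a real agent would ask an LLM
# to plan the next step and would call actual tools to execute it.
def run_agent(goal, steps, max_turns=10):
    memory = []  # the agent's running context of completed steps
    for _ in range(max_turns):
        done = [entry["step"] for entry in memory]
        remaining = [s for s in steps if s not in done]
        if not remaining:          # every step completed: goal reached
            return memory
        action = remaining[0]      # stand-in for an LLM planning call
        result = f"completed: {action}"  # stand-in for a tool/API call
        memory.append({"step": action, "result": result})
    return memory  # ran out of turns before finishing

log = run_agent(
    goal="build a house",
    steps=["find land", "hire architect", "get permits", "hire contractor"],
)
```

Chaining many such loops together, with each agent able to call others, is what turns Schmidt’s house-building joke into a description of how business and government processes could be automated.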

Historically, automation has created more jobs than it destroys, Schmidt argued, citing examples from looms to modern tech. In aging societies, such as those in Asia, AI could boost productivity amid declining populations. He also warned of geopolitical implications, including massive power needs (gigawatts from nuclear plants) and competition among U.S. giants like Anthropic (backed by Amazon), Google (Gemini), and OpenAI (backed by Microsoft), plus open-source efforts from Meta.

Multimodal models—handling language, images, and more—enhance intelligence, with APIs enabling cross-system calls. But as AI self-improves, it may not “listen to us anymore,” leading to ASI. Society lacks language or laws for this; Schmidt’s book Genesis explores these gaps. “This is happening faster than our society, our democracy, our laws will address.”

Deeper Concerns: Control, Ethics, and Readiness

In another clip, Schmidt delved into AI’s progression from language models to multimodal systems, with infinite contexts enabling planning and agents acting autonomously. He critiqued the lack of defined standards for agents, predicting an “agent store” akin to app stores. On AGI, he defined it as human-like flexibility, where systems generate their own goals, a shift from human-initiated narrow AI.

Schmidt tempered optimism: AGI might arrive in two to three “cranks” (18-month cycles), but timelines are uncertain. “I personally think that that’s likely, but not in three years.” The race involves distillation of massive models into specialized ones, but risks include loss of control and ethical dilemmas. He urged discussion: “How do we get ready for it? Well, we start by talking about it.”
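
The distillation Schmidt mentions, compressing a massive model into a specialized one, works by training a small “student” to match the output distribution of a large “teacher.” The toy sketch below shows only the core ingredient of that recipe: softening the teacher’s outputs with a temperature so the student sees richer targets. The logits and temperature values are illustrative, not drawn from any real model.

```python
import math

# Toy sketch of the distillation target: a teacher's raw scores (logits)
# are converted to probabilities, optionally softened by a temperature.
# The student is then trained to match the softened distribution.
def softmax(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                         # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.5, 0.5]            # hypothetical scores, 3 classes
hard_target = softmax(teacher_logits)        # sharp: nearly all mass on one class
soft_target = softmax(teacher_logits, 4.0)   # softened: reveals relative rankings
```

The softened distribution carries more information per example (which wrong answers the teacher considers “nearly right”), which is why a much smaller student can approach the teacher’s behavior on a narrow domain.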

Broader Implications: Balancing Risks and Rewards

Sutskever and Schmidt converge on AI’s self-improving nature leading to unpredictability. Benefits—curing diseases, boosting productivity, solving complex problems—are tantalizing, but risks like job displacement, misalignment, and superintelligence demand action. Sutskever emphasizes personal intuition-building; Schmidt calls for societal dialogue and infrastructure.

| Aspect | Sutskever’s View | Schmidt’s View |
| --- | --- | --- |
| Timeline | Gradual improvement to all-human-tasks AI in years (3–10) | AGI in 3–5 years; ASI in 6 years via scaling |
| Key Mechanism | Brain as biological computer; self-improvement acceleration | Recursive self-improvement; infinite contexts, agents |
| Risks | Unpredictable trajectories; alignment issues | Loss of control; societal unpreparedness; power demands |
| Benefits | Curing diseases; rapid progress | Enhanced productivity; smartest tools for all |
| Preparation | Use AI to build intuition; generate energy for challenges | Talk about it; historical lessons on job creation |

As of this writing, these warnings resonate amid ongoing AI advancements. Humanity must heed them: engage, prepare, and steer AI toward reward, not ruin. The future is unimaginable, but inaction is not an option.