Human Life After Artificial Superintelligence: Separating Hype from Reality
The San Francisco Consensus predicts ASI in 3-4 years. Eric Schmidt and Fei-Fei Li offer a more nuanced view on timelines, economics, and human dignity.
There is a “San Francisco Consensus”—a belief held by the technologists living in the epicenter of the AI boom—that Artificial Superintelligence (ASI) is only three to four years away.
If true, we are standing on the precipice of the most significant event in human history. But are we ready? And more importantly, is the technology actually ready, or are we mistaking the map for the territory?
In a recent conversation with Peter Diamandis, former Google CEO Eric Schmidt and “Godmother of AI” Fei-Fei Li sat down to separate the hype from the reality. They discussed everything from the limitations of current Large Language Models to the birth of “World Models” to the inevitable economic earthquakes ahead.
This post explores their insights, the divergence in their timelines, and why, in an age of digital superintelligence, preserving human dignity is not just a moral preference—it is a survival strategy.
Source: This post synthesizes insights from Peter Diamandis’s interview with Eric Schmidt and Fei-Fei Li. The original video is available at: Part 1: Eric Schmidt and Fei-Fei Li: Human Life After Artificial Superintelligence (Peter H. Diamandis)
Defining the Horizon: When Does AGI Become ASI?
The conversation begins with a crucial distinction between Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).
- AGI is human-level intelligence—a machine that can think, learn, and create like a human.
- ASI is intelligence equal to the combined intelligence of all humans, or vastly superior to that of any single human.
While the “San Francisco Consensus” bets on a 3-4 year timeline to ASI, Eric Schmidt offers a more tempered view. He argues that current models are primarily “next-word predictors.” While they can mimic reasoning, they struggle with the kind of fundamental creativity required to, for example, derive Newtonian physics from raw data without prior training. Schmidt suggests we may need another algorithmic breakthrough to bridge the gap from AGI to true ASI.
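To make Schmidt’s “next-word predictor” framing concrete, here is a deliberately tiny sketch: a bigram model that picks each word purely from counts of which word followed it in its training text. This toy is an illustrative assumption, not how production LLMs work (they learn these statistics with deep neural networks over vast corpora), but the core objective, predicting the next token, is the same.

```python
# Toy "next-word predictor": a bigram model that chooses each word from
# simple counts of what followed it in training text. Illustrative only;
# real LLMs learn far richer statistics with deep neural networks.
from collections import Counter, defaultdict

corpus = (
    "the apple falls to the ground because the earth pulls the apple "
    "the moon falls around the earth because the earth pulls the moon"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy decoding: return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

word, sentence = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    sentence.append(word)
print(" ".join(sentence))  # echoes seen patterns: "the earth pulls the earth ..."
```

The model strings together fluent-looking phrases, yet it has no concept of gravity: it can echo “the earth pulls the apple” without ever being able to derive why. That gap between mimicking patterns and generating fundamentally new ideas is precisely Schmidt’s point.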
The Resilience Connection: This directly supports our Critical Engagement with Technology pillar. By understanding the technical distinctions and the debate around timelines, we move from vague anxiety to informed preparedness. We don’t need to panic about “next week,” but we do need to prepare for “next decade.”
Practical Takeaway: Diversify your information sources. Don’t rely solely on the most optimistic or the most pessimistic timelines. Recognize that even “experts” like Schmidt and Li have differing views on the speed of deployment.
Beyond Language: The Rise of World Models
Fei-Fei Li, a pioneer in computer vision, introduces a concept that goes beyond the current obsession with chatbots: Large World Models (LWMs).
While LLMs (Large Language Models) process text, LWMs possess “spatial intelligence.” They understand the 3D physical world, how objects interact, and how to reason within that space. This is the frontier Li is exploring with her new company, World Labs. The implication is a future where the digital and physical worlds are not separate, but hybridized—impacting everything from robotic surgery to education in the “multiverse.”
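As a loose intuition for the difference (and only that: this hand-coded physics toy is my assumption, not how World Labs or any actual LWM works), contrast the text predictor above with a model that tracks explicit 3D state and predicts the next physical state of the world rather than the next word.

```python
# Toy contrast to text prediction: a "world model" in the loosest sense
# keeps explicit 3D state and predicts the next *physical* state.
# Illustrative only -- real Large World Models are learned, not hand-coded.
from dataclasses import dataclass

GRAVITY = -9.8  # m/s^2, acting along the z-axis

@dataclass
class Ball:
    # position (meters) and velocity (meters/second) in 3D space
    x: float
    y: float
    z: float
    vx: float
    vy: float
    vz: float

def step(ball: Ball, dt: float = 0.1) -> Ball:
    """Predict the state of the world one time-slice ahead."""
    vz = ball.vz + GRAVITY * dt
    z = ball.z + vz * dt
    if z <= 0.0:            # the floor stops the fall
        z, vz = 0.0, 0.0
    return Ball(ball.x + ball.vx * dt, ball.y + ball.vy * dt, z,
                ball.vx, ball.vy, vz)

ball = Ball(0.0, 0.0, 2.0, 1.0, 0.0, 0.0)  # dropped from 2 m, drifting at 1 m/s
for i in range(6):
    ball = step(ball)
    print(f"t={0.1 * (i + 1):.1f}s  position=({ball.x:.1f}, {ball.y:.1f}, {ball.z:.2f})")
# The model can answer a spatial question ("where does the ball land?")
# that next-word statistics, by themselves, do not represent.
```

The contrast is the point: an LWM’s “vocabulary” is state in space, not words, which is why Li sees spatial intelligence reaching into robotics, surgery, and immersive education.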
The Resilience Connection: This connects to our Mental Resilience pillar. As reality becomes increasingly hybridized (physical/digital), our ability to maintain grounding and distinguish between the two becomes critical. We must expand our definition of “literacy” to include spatial and digital environments.
Practical Takeaway: Start paying attention to “spatial computing” and 3D AI models. The future of AI is not just text in a chat box; it is an immersive, physical understanding of the world.
The Economics of “Runaway Winners”
Peter Diamandis often champions the “abundance” mindset—the idea that technology will demonetize services and raise the floor for everyone. Schmidt, however, pours a sobering bucket of cold water on this optimism.
He warns that while AI will undoubtedly create massive wealth (projected at $15 trillion by 2030) through efficiency, there is no guarantee this wealth will be shared. In fact, the nature of AI—driven by data centers, chips, and network effects—favors concentration. We risk a world of “runaway winners,” where a few nations (primarily the US and China) and a few companies reap the vast majority of the rewards, while regions like Africa or Europe may lag behind due to infrastructure deficits.
The Resilience Connection: This supports our Critical Engagement with Technology pillar. Resilience requires realism. We cannot build a strategy on the hope that “technology saves everyone.” We must understand the economic forces at play to navigate our own careers and community resilience.
Practical Takeaway: Focus on “teaming” with AI. Schmidt emphasizes that the winning combination will be Human + AI. Use the “Einstein in your pocket” to increase your own productivity and value, rather than waiting for the benefits to trickle down.
The Irreplaceable Human: Agency and Dignity
Perhaps the most poignant moment of the conversation comes when Fei-Fei Li is asked about the ultimate role of humans in a world of superintelligence. Her answer is not about productivity or data processing. It is about agency and dignity.
She argues that “asking the right question” remains a uniquely human capacity. Einstein didn’t just solve math; he asked questions that no one else thought to ask. Furthermore, Schmidt points out that we will still value human achievement simply because it is human. We watch humans run the 100-meter dash not because they are faster than cars, but because we care about the human struggle and triumph.
The Resilience Connection: This directly supports our Human-Centric Values and Spiritual & Philosophical Inclusion pillars. In an age of automation, our value shifts from “processing” to “being.” Our dignity is not derived from being smarter than the machine, but from our agency and our relationships with one another.
Practical Takeaway: Cultivate your curiosity. AI can answer any question, but it cannot yet formulate the questions that matter most. Your ability to frame problems and inquire deeply is your competitive advantage.
Critical Analysis: What Aligns and What Doesn’t
Ideas That Align Well with HRP Values
1. The Focus on Human Dignity
- Why it aligns: Fei-Fei Li’s insistence that “human dignity and agency” must be the center of AI development mirrors HRP’s mission perfectly. We are here to become more fully human, not less.
- Application: We must evaluate every new tool by asking: Does this enhance my agency, or erode it?
2. The “Human + AI” Collaboration Model
- Why it aligns: Schmidt’s rejection of the “replacement” narrative in favor of a “collaboration” narrative empowers individuals. It suggests we have an active role to play.
- Application: View AI as a teammate to be led, not a boss to be feared.
Ideas That Require Critical Scrutiny
1. The Inevitability of Inequality
- Why it requires scrutiny: Schmidt presents the concentration of wealth as largely inevitable due to “network effects.” His economic logic may be sound, but a resilience mindset must ask: how do we build local, community-based resilience structures that are not dependent on these centralized giants? We cannot simply accept a “winner-take-all” future without building alternatives.
- HRP Perspective: We must foster community resilience and decentralized knowledge sharing to protect against the hollowing out of the “middle” by centralized tech powers.
What This Means for Human Resilience
Key Insight 1: Intelligence is Commoditizing, but Wisdom is Not
As “Einstein-level intelligence” becomes available to everyone for free (or cheap), raw intelligence loses its scarcity value. The differentiator becomes wisdom, ethical judgment, and the ability to apply that intelligence in novel, human-centric ways.
Key Insight 2: The Timeline is Less Important than the Trajectory
Whether ASI arrives in 2029 or 2039 matters less than the fact that it is coming. Resilience is about preparing for the trajectory—the increasing hybridization of life and the disruption of traditional economic models—regardless of the exact arrival date.
Key Insight 3: Agency is the Ultimate Human Advantage
In a world where machines can outthink us, our agency—our ability to choose, to question, to create meaning—becomes our most valuable asset. This cannot be automated or outsourced.
Practical Implications for the Human Resilience Project
Mental Resilience
We must inoculate ourselves against “future shock.” The pace of change discussed in the interview (AI potentially solving fundamental scientific problems within five years) is disorienting. We need practices that ground us in the present moment while we prepare for the future.
Human-Centric Values
We must double down on the “human” aspects of our work: empathy, physical presence, and ethical reasoning. These are the things the machines—even the “super” ones—cannot replicate in a way that satisfies the human soul.
Critical Engagement with Technology
We must maintain realistic expectations about timelines while preparing for the trajectory. Understanding the debate between optimists and realists helps us navigate with wisdom rather than panic or denial.
Agency
We must actively choose how we engage with AI. The “Human + AI” model requires us to lead, not follow. We must maintain our capacity to ask the right questions and make meaningful choices.
Conclusion
The “San Francisco Consensus” may or may not be right about the timing, but the direction of travel is clear. We are moving toward a world where machines will outthink us in math, science, and data processing.
But as Fei-Fei Li reminds us, they cannot replace our agency unless we surrender it. The future is not something that happens to us; it is something we build, question by question, choice by choice.
For building resilience, this means:
- Don’t compete on compute - You will lose. Compete on humanity, curiosity, and connection.
- Partner with the machine - Use the tools to amplify your human intent.
- Defend your agency - Make conscious choices about how you use technology, rather than drifting into passivity.
- Value the physical - In a world of virtual models, the physical world (and the people in it) becomes a luxury and a sanctuary.
- Cultivate wisdom over intelligence - As intelligence becomes commoditized, wisdom becomes the differentiator.
- Build community resilience - Don’t rely solely on centralized tech powers; foster local, decentralized alternatives.
The choice is ours: will we be spectators to the rise of superintelligence, or active partners in shaping a human-centric future? Choose wisely, and choose agency.
Eric Schmidt is the former CEO of Google. Fei-Fei Li is a Professor of Computer Science at Stanford and the founder of World Labs. Peter Diamandis is the founder of the XPRIZE Foundation.