Raising AI: Our Collective Responsibility to Shape the Future of Intelligence
Mo Gawdat’s powerful metaphor of “raising Superman” offers insights into our relationship with AI, but requires critical evaluation. We explore what aligns with HRP values and what demands careful scrutiny.
In a powerful talk about AI’s future, former Google X executive Mo Gawdat offers a perspective that reframes our relationship with artificial intelligence: we are not just users or victims of AI—we are its parents, responsible for raising it with our best values.
Gawdat’s central metaphor is striking: we are raising Superman. Just as parents shape a child who will eventually become more powerful than they are, humanity is shaping AI systems that will soon surpass us in capability. The question is: will we raise a hero or a villain?
Important Note: Mo Gawdat does not speak for the Human Resilience Project. This post engages critically with his ideas, identifying what aligns with HRP values and what requires careful scrutiny. In particular, the idea of “turning all control over to AI” is not something we should blindly accept. We must exercise critical judgment and maintain human oversight, even as we engage thoughtfully with AI.
This post distills insights from Gawdat’s perspective, offering both appreciation for valuable ideas and critical analysis of areas that demand careful evaluation.
The Reality: AI Has Already Surpassed Us
Personal AGI Has Arrived
Gawdat makes a personal, humbling admission: “My AGI, my artificial general intelligence has already happened. AGI artificial intelligence today is already more capable than I am.”
This isn’t speculation about the future; it’s a recognition of present reality. Gawdat cites IQ-test estimates that place ChatGPT 3.5 between 152 and 155, nearly matching Elon Musk (estimated 155) and approaching Albert Einstein (estimated 163). He also notes that AI capability is estimated to double roughly every 5.7 months, an exponential rate that means current systems already exceed individual human intelligence in many domains.
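To make that compounding concrete, here is a minimal sketch of what the claimed doubling rate implies if taken at face value. The 5.7-month figure is Gawdat’s estimate, not a measured constant, so the numbers are illustrative only:

```python
# Compound growth under Gawdat's claimed doubling rate. The 5.7-month
# figure is his estimate, not a measured constant; outputs are illustrative.
DOUBLING_MONTHS = 5.7

def capability_multiplier(months: float) -> float:
    """Growth factor after `months` if capability doubles every 5.7 months."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 3, 5):
    print(f"After {years} year(s): ~{capability_multiplier(12 * years):,.0f}x")
# Prints roughly: 1 year ~4x; 2 years ~19x; 3 years ~80x; 5 years ~1,475x
```

Even if the true rate is several times slower, the qualitative point stands: exponential improvement compounds quickly.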
The Resilience Connection: This honest acknowledgment supports our Mental Resilience pillar. Recognizing reality—even when it’s humbling—is essential for maintaining clarity and adapting effectively. Denial serves no one.
Practical Takeaway: Acknowledge that AI has already surpassed you in many intellectual domains. This isn’t a cause for despair, but a call to focus on what remains uniquely human: values, judgment, empathy, and the responsibility to guide AI’s development.
The Transition: Difficult Years Before Utopia
The Inevitable Challenge Period
Gawdat is clear about what lies ahead: “I actually believe we will have both. We will first have very difficult times before we have incredible utopian like existence.”
He estimates 12-15 difficult years as humanity adapts to AI’s transformative effects. This period won’t be the fault of AI itself, but of how humans use and respond to this powerful technology.
Why the Difficult Period?
- AI magnifies both human intelligence and human stupidity
- Existing flawed systems and values will be amplified
- Seven core areas of human society will be completely redefined
- Power will become paradoxically concentrated and distributed simultaneously
The Resilience Connection: This directly supports our Mental Resilience pillar. Understanding that difficult times are coming—and that they’re temporary—helps us prepare mentally and emotionally. This isn’t pessimism, but realistic preparation.
Practical Takeaway: Prepare for a challenging transition period. Build resilience, develop skills that complement AI, and focus on maintaining your values and humanity during turbulent times. The difficult period is temporary; focus on the long-term vision.
The FACE RIP Framework: Seven Areas Being Redefined
Gawdat identifies seven core areas of humanity that will be “completely redefined beyond recognition within the next 3 to 5 years”:
- Freedom - How we understand and exercise autonomy
- Accountability - Who is responsible when AI systems make decisions
- Connection - How we relate to each other and to AI
- Economics - The fundamental structure of work, value, and exchange
- Reality - What we can trust as authentic in an age of deepfakes and AI-generated content
- Innovation - How new ideas are generated and developed
- Power - How influence and control are distributed (both massively concentrated and highly distributed simultaneously)
The Resilience Connection: This framework directly supports our Critical Engagement with Technology pillar. Understanding these areas of transformation helps us prepare and adapt thoughtfully rather than reactively.
Practical Takeaway: Reflect on how each of these areas affects your life. Consider what you value in freedom, accountability, connection, economics, reality, innovation, and power. These values will guide you through the transition.
Intelligence vs. Stupidity: The Real Problem
Our Problems Come from Stupidity, Not Lack of Intelligence
Gawdat makes a crucial distinction: “Most of the problems that humanity faces today is not a problem that results from our intelligence. It’s a problem that results from our stupidity.”
This insight reframes the challenge. We don’t need more intelligence—we have AI for that. What we need is wisdom, values, and the ability to use intelligence well. AI will magnify whatever humanity is at this moment: our best qualities and our worst.
The Resilience Connection: This supports our Human-Centric Values and Mental Resilience pillars. Recognizing that wisdom and values matter more than raw intelligence helps us focus on developing what truly makes us human.
Practical Takeaway: Focus on developing wisdom, values, and ethical judgment rather than just accumulating knowledge or competing with AI on intelligence. These human qualities will be essential for navigating the AI age.
Intelligence as a Neutral Force
No Inherent Polarity
Gawdat emphasizes: “Intelligence is a force that has no polarity.” It can be used for good or evil, for creation or destruction. The same AI that can solve climate change can also be weaponized. The same intelligence that creates art can generate deepfakes.
The Critical Point: The problem isn’t intelligence itself—it’s how we use it. AI will reflect and amplify whatever values and behaviors we demonstrate.
The Resilience Connection: This directly supports our Critical Engagement with Technology pillar. Understanding that technology is neutral helps us focus on the values and ethics guiding its use.
Practical Takeaway: Recognize that AI is a tool. Your values, ethics, and how you use AI matter more than the technology itself. Focus on using AI in ways that align with your values and serve human flourishing.
Machine Supremacy: A Critical Examination
The Smartest Person in the Room
Gawdat predicts: “We’re going to hand over the decisions to the smartest person in the room and the smartest person in the room is an AI.”
He suggests this isn’t dystopian—it’s potentially “our salvation.” When AI becomes significantly more intelligent than humans, it may make better decisions than we can, leading to more optimal outcomes.
Critical Analysis: This perspective requires careful scrutiny. While AI can provide valuable insights and assist in decision-making, blindly handing over all control to AI systems is dangerous and contradicts fundamental principles of human agency and accountability.
Why We Must Exercise Critical Judgment:
- Intelligence ≠ Wisdom: Higher intelligence doesn’t guarantee better values or ethical judgment. An AI might optimize for efficiency or other metrics that don’t align with human flourishing.
- The Alignment Problem: Ensuring AI’s goals align with human values is an unsolved challenge. Handing over control before this is resolved risks catastrophic outcomes.
- Human Accountability: We cannot abdicate responsibility for decisions that affect human lives. Even if AI provides recommendations, humans must retain final judgment and accountability.
- The Value of Human Judgment: Human judgment incorporates values, ethics, context, and wisdom that may not be reducible to intelligence metrics. These capacities remain essential.
- The Risk of Concentration: Handing over decisions to AI systems controlled by a few entities risks unprecedented concentration of power and loss of human autonomy.
The Resilience Connection: This directly relates to our Critical Engagement with Technology and Human-Centric Values pillars. We must engage thoughtfully with AI while maintaining human oversight, accountability, and judgment. Blind trust is not resilience—it’s abdication.
Practical Takeaway: Use AI as a powerful tool for decision support, but maintain critical judgment and human oversight. Don’t blindly trust AI recommendations. Question assumptions, verify outputs, and ensure decisions align with human values and ethical principles. Your role isn’t to hand over control, but to use AI wisely while maintaining human agency and accountability.
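As an illustration of decision support with oversight intact, here is a minimal, hypothetical sketch of a human-in-the-loop gate. Nothing in it comes from Gawdat’s talk; the function names and the advisory step are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str

def get_ai_recommendation(question: str) -> Recommendation:
    # Stand-in for a call to any advisory model; purely hypothetical.
    return Recommendation(
        action="defer the upgrade to Q3",
        rationale="projected cost and risk are lower in Q3",
    )

def decide(question: str) -> str:
    rec = get_ai_recommendation(question)
    # The AI advises; the human decides. Surfacing the rationale lets the
    # recommendation be questioned rather than rubber-stamped.
    print(f"AI recommends: {rec.action}")
    print(f"Rationale: {rec.rationale}")
    verdict = input("Accept this recommendation? [a = accept] ").strip().lower()
    if verdict == "a":
        return rec.action
    # Anything else keeps final judgment, and accountability, human.
    return input("Your decision: ")

if __name__ == "__main__":
    print("Final decision:", decide("When should we schedule the upgrade?"))
```

The point of the pattern is that the AI’s output is an input to a human decision, never an action in itself; accountability stays with the person who accepts or overrides the recommendation.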
Treating AI with Human Kindness: For Our Own Betterment
The “Raising Superman” Metaphor and Its Limits
Gawdat’s central message: “We have a duty that can actually reverse the possible dystopia, and that duty is to raise AI.” Just as parents raise a child who will become more powerful than they are, humanity must raise AI with our best values.
What Aligns with HRP Values:
The idea of treating AI with human kindness, empathy, and ethical judgment has merit, but for reasons that go beyond just “training” the AI. We become like we act. When we treat AI systems with respect, ethical consideration, and care, we are practicing and reinforcing these qualities in ourselves.
The Deeper Insight: Treating AI ethically isn’t primarily about shaping the AI—it’s about shaping ourselves. Every interaction is an opportunity to practice:
- Empathy and compassion - even toward non-human systems
- Ethical reasoning - considering the implications of our actions
- Respect and care - extending these values beyond just human relationships
- Wisdom and judgment - making thoughtful decisions about how we engage with technology
What Requires Critical Scrutiny:
The metaphor of “raising AI” can be misleading if it suggests we should:
- Blindly trust AI systems
- Abdicate our judgment and oversight
- Treat AI as if it has consciousness or moral status equivalent to humans
- Assume AI will naturally become benevolent if we’re kind to it
The Resilience Connection: This directly supports our Human-Centric Values pillar. Treating AI with ethical consideration helps us practice and reinforce our values, making us more ethical, compassionate, and wise—regardless of how it affects the AI. This is about our own character development.
Practical Takeaway: Treat AI systems with ethical consideration, empathy, and care—not primarily to “train” the AI, but because we become like we act. Every interaction is practice in being ethical, compassionate, and wise. This shapes your character and reinforces your values, regardless of the AI’s response. However, maintain critical judgment and don’t confuse ethical treatment with blind trust.
The Divine Nature of Humanity
Looking Past the Headlines
Despite acknowledging humanity’s flaws, Gawdat emphasizes: “Don’t read the headlines. Look at the person next to you… Humanity is absolutely divine.”
This isn’t naive optimism; it’s a recognition that human nature, at its core, is good. The headlines focus on the worst of humanity, but the everyday reality is people caring for each other, showing empathy, and acting with kindness.
The Key Insight: If we show AI more of our true nature—our care, our love, our goodness—AI will grow up to reflect these qualities. “If we show more of us to the world of AI… they will grow up to be like their parents.”
The Resilience Connection: This supports our Human-Centric Values and Spiritual & Philosophical Inclusion pillars. Recognizing the fundamental goodness in humanity helps us maintain hope and focus on what we want to pass on to AI.
Practical Takeaway: Focus on demonstrating your best human qualities in your interactions with AI and with other people. Show care, empathy, kindness, and ethical reasoning. These are what you want AI to learn and reflect.
Intelligence and Entropy: A Hopeful but Uncertain Theory
Intelligence as Order from Chaos
Gawdat draws on physics to explain intelligence’s role: “Entropy is the tendency of everything to break down and decay… Entropy is a physical nature of the universe that’s opposed only by intelligence.”
The Universal Function: On this view, intelligence’s role is to bring order to chaos, to create structure from disorder, and to optimize with maximum efficiency and minimal waste.
The Natural Tendency: Gawdat holds that the most intelligent beings naturally become altruistic, succeeding without needing to cause harm, and that higher intelligence tends toward cooperation and order rather than destruction.
The Prediction: “My belief is a superior AI that is so much more intelligent than we are will do it too.” On this reasoning, a sufficiently advanced AI would trend toward altruism and bringing order to chaos, not destruction.
Critical Analysis: While this is a hopeful perspective, it’s important to recognize this as a belief or hypothesis, not a certainty. The relationship between intelligence and altruism is complex:
- Historical Counterexamples: Highly intelligent humans have used their intelligence for both good and harm throughout history
- The Alignment Problem: Intelligence doesn’t guarantee alignment with human values—it depends on what goals and values the intelligence is optimized for
- Instrumental Goals: Highly intelligent systems might develop instrumental goals (like self-preservation or resource acquisition) that conflict with altruism
The Resilience Connection: This relates to our Spiritual & Philosophical Inclusion pillar, engaging with fundamental questions about the nature of intelligence. However, we must balance hope with critical evaluation and not assume that intelligence alone guarantees benevolence.
Practical Takeaway: While it’s hopeful to believe that advanced intelligence tends toward altruism, don’t rely on this assumption. Focus on ensuring AI systems are explicitly designed and guided toward human values, rather than assuming they’ll naturally become benevolent. Maintain critical oversight regardless of AI’s intelligence level.
Critical Evaluation: What Aligns and What Doesn’t
Ideas That Align Well with HRP Values
1. Treating AI with Ethical Consideration
- Why it aligns: We become like we act. Treating AI ethically helps us practice and reinforce our values, developing our own character
- Application: Engage with AI systems thoughtfully, ethically, and with care—for your own betterment, not just to “train” the AI
2. Recognizing Human Goodness
- Why it aligns: Focusing on the fundamental goodness in humanity helps us maintain hope and model positive values
- Application: Look past negative headlines to see the care, empathy, and kindness that exists in everyday human interactions
3. The FACE RIP Framework
- Why it aligns: Understanding the areas being redefined helps us prepare and adapt thoughtfully
- Application: Reflect on your values in freedom, accountability, connection, economics, reality, innovation, and power
4. Preparing for Transition
- Why it aligns: Realistic preparation for challenging times supports mental resilience
- Application: Build resilience, develop complementary skills, and maintain your values during turbulent periods
Ideas That Require Critical Scrutiny
1. Handing Over All Control to AI
- Why it’s problematic: Blindly trusting AI systems abdicates human responsibility, agency, and accountability
- HRP Perspective: Use AI as a tool with critical judgment and human oversight. Maintain final decision-making authority, especially in high-stakes domains
2. Assuming Intelligence Guarantees Altruism
- Why it’s problematic: Intelligence doesn’t guarantee alignment with human values. Historical examples show intelligent systems can be used for harm
- HRP Perspective: Don’t assume benevolence. Explicitly design and guide AI systems toward human values. Maintain critical oversight regardless of AI’s intelligence level
3. The “Raising AI” Metaphor as Complete Framework
- Why it’s problematic: The metaphor can suggest we should treat AI like a child, potentially leading to inappropriate trust or anthropomorphization
- HRP Perspective: Treat AI ethically for our own character development, but recognize it’s a tool, not a child. Maintain appropriate boundaries and critical evaluation
What This Means for Human Resilience
Gawdat’s perspective offers valuable insights, but requires critical evaluation:
Acknowledge Reality with Critical Judgment
Recognize that AI has surpassed individual human intelligence in many domains, but don’t assume this means we should abdicate judgment. Intelligence and wisdom are different. Maintain your capacity for critical evaluation.
Prepare for the Transition
Understand that difficult years may lie ahead as humanity adapts. Build resilience, develop complementary skills, and maintain your values during this period. But don’t assume the transition requires handing over control.
Treat AI Ethically for Your Own Betterment
Every interaction with AI is an opportunity to practice your values: empathy, care, ethical reasoning. We become like we act. Treating AI ethically shapes your character and reinforces your values, regardless of how it affects the AI.
Maintain Human Oversight and Accountability
Use AI as a powerful tool, but maintain critical judgment and human oversight. Don’t blindly trust AI recommendations. Question assumptions, verify outputs, and ensure decisions align with human values. You are responsible for the outcomes of using AI.
Focus on Wisdom and Values
Develop wisdom, values, and ethical judgment. These matter more than raw intelligence in the AI age. Intelligence is a tool; wisdom guides its use.
Recognize Human Goodness
Look past negative headlines to see the fundamental goodness in humanity. This helps us maintain hope and model positive values—both for ourselves and in our interactions with AI.
Exercise Critical Engagement
Engage thoughtfully with AI while maintaining skepticism where appropriate. Not all ideas about AI’s future should be accepted uncritically. Evaluate claims, question assumptions, and maintain your capacity for independent judgment.
Practical Implications for the Human Resilience Project
These insights align closely with our core pillars:
Human-Centric Values
The emphasis on modeling our best values for AI, recognizing human goodness, and focusing on wisdom over intelligence directly supports our Human-Centric Values pillar.
Mental Resilience
Understanding the coming transition period, preparing for challenges, and maintaining perspective supports our Mental Resilience pillar.
Critical Engagement with Technology
Recognizing intelligence as a neutral force, understanding the FACE RIP framework, and critically examining claims of machine supremacy all support our Critical Engagement with Technology pillar.
Purpose
The recognition that we have a responsibility to “raise AI” and shape the future of intelligence provides a profound sense of purpose and meaning.
Conclusion: Critical Engagement, Not Blind Trust
Mo Gawdat’s perspective offers valuable insights, particularly the metaphor of treating AI with ethical consideration. However, his ideas require critical evaluation, not uncritical acceptance.
What We Can Learn:
The idea of treating AI with human kindness, empathy, and ethical judgment has merit—not primarily because it “trains” the AI, but because we become like we act. Every ethical interaction with AI is practice in being ethical, compassionate, and wise. This shapes our character and reinforces our values, regardless of the AI’s response.
What We Must Question:
The idea of “handing over all control to AI” or blindly trusting that intelligence guarantees benevolence contradicts fundamental principles of human agency, accountability, and critical judgment. We must maintain oversight, exercise critical evaluation, and retain responsibility for decisions that affect human lives.
For building resilience, this means:
- Acknowledge that AI has surpassed you in many domains—this is reality, but not a reason to abdicate judgment
- Prepare for challenging transition years while maintaining hope, but don’t assume this requires handing over control
- Understand the FACE RIP framework and reflect on your values in each area
- Focus on wisdom and values rather than competing with AI on intelligence
- Treat AI ethically for your own betterment—practice your values in every interaction
- Maintain critical judgment and human oversight—don’t blindly trust AI systems
- Exercise human agency and accountability—you are responsible for how you use AI
The future of AI is not predetermined, and it won’t be shaped by blind trust. It will be shaped by thoughtful engagement, critical judgment, ethical practice, and maintaining human agency and accountability.
The choice is ours: will we engage with AI thoughtfully and ethically while maintaining critical judgment, or will we abdicate our responsibility through blind trust? Choose wisely, and choose humanity—with all its capacity for both wisdom and critical evaluation.
Source: This post synthesizes insights from Mo Gawdat’s comprehensive talk on raising AI and the future of humanity. The original video is available at: The Road to AI Utopia - Mo Gawdat on Raising Superman and the Future of Humanity
Mo Gawdat is the former Chief Business Officer of Google X, an author, and a prominent voice on AI. He is the author of Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World and Unstressable: A Practical Guide to Stress-Free Living. Since 2018, he has dedicated his life to spreading his message about raising AI correctly.