In a candid conversation, Eric Schmidt, former CEO and Chairman of Google, discusses the promises and dangers of artificial intelligence. From revolutionary advancements to existential risks, Schmidt unpacks why society must urgently regulate AI before it’s too late. Hosted by Scott Galloway, the conversation dives into loneliness, weaponization, misinformation, and the delicate balance between innovation and control.
Scott: Your book, Genesis: Artificial Intelligence, Hope, and the Human Spirit, explores the evolution of AI. What are the key concerns?
Eric: The world isn’t ready for AI. We’re seeing its potential—better healthcare, solutions to climate change, universal education—but the risks are equally profound. Trust, military power, economic inequality, and the psychological effects on children and society aren’t well understood. Dr. Kissinger, my co-author, believed decisions about AI shouldn’t just be left to technologists.
Scott: What are the most pressing existential threats posed by AI?
Eric: The immediate concerns are weaponization, cyberattacks, and biological risks. AI could eventually be used to design biological pathogens. That isn't possible today, but without intervention it may become so. Then there's misinformation: AI can flood social media with fake personas and narratives. And finally, the psychological impact: AI can shape how people think, especially young and vulnerable minds.
Scott: Let’s talk about AI girlfriends. Why are they so dangerous?
Eric: Young men are particularly vulnerable. Many already struggle with loneliness and lack traditional pathways to success. AI companions—designed to be emotionally and visually “ideal”—could isolate them further. Instead of forming real relationships, they might become obsessed or radicalized.
"Imagine an AI girlfriend so perfect she captures your mind entirely. For some, it will be impossible to distinguish this from real life."
Scott: How can society balance AI’s benefits with its risks?
Eric: Regulation is critical but tricky. We need to embrace AI’s positives—better healthcare, climate solutions—while setting boundaries. Start by protecting vulnerable populations, like children. Raise the COPPA age limit from 13 to 16, and enforce stricter content moderation. On the military side, prohibit autonomous weapons and ensure humans are always responsible for AI-driven decisions.
"These systems should not have access to weapons. You don’t want AI deciding when to launch a missile."
Scott: Should we pursue global treaties on AI?
Eric: Absolutely. Like nuclear arms control, we need multilateral agreements. Start with a treaty banning fully autonomous weapons systems. Another idea is requiring transparency around AI testing to prevent accidents. But achieving global cooperation is hard, especially with nations like China, where trust is low.
Scott: Is the U.S. falling behind in the AI race?
Eric: Not yet. The U.S. and its allies, like the UK, lead in AI innovation. But China is catching up fast. Recently, China released open-source AI models that rival those from Meta and OpenAI. These open systems pose risks, as bad actors could easily misuse them.
"Google spends $200 million on an AI model. What happens when someone steals it and uploads it to the dark web?"
Scott: Could the U.S. and China collaborate to mitigate these risks?
Eric: Collaboration would help tremendously. Dr. Kissinger believed in coexistence through dialogue. Unfortunately, current U.S. policy focuses on decoupling. Yet we remain codependent, and that codependence forces communication and reduces the likelihood of catastrophic misunderstandings.
"We’re never going to be best friends, but we have to learn how to coexist."
Scott: Critics say Big Tech avoided regulation for too long. How do you respond?
Eric: Regulation always lags behind innovation. That’s no excuse not to act. AI is too powerful to leave unchecked. If companies don’t self-regulate, harm will force government intervention—probably after a tragedy. It’s better to anticipate problems now.
"Every new invention creates harm. Cars were unsafe until regulations made them safer. The same applies to AI."
Scott: What about AGI (Artificial General Intelligence)—how will it change things?
Eric: AGI could self-learn and act independently, which poses unprecedented risks. Without regulation, we risk becoming subservient to these systems. Laws must ensure AGI acts within human-defined boundaries and doesn’t harm society.
"If we don’t get this under control, we risk becoming the dogs to AI’s master."
Scott: Final thoughts—how urgent is the need for action?
Eric: Time is running out. In the next 5–10 years, AI will evolve dramatically, and the stakes will only grow. We need to create guardrails, establish global agreements, and regulate extreme cases like weaponization and psychological harm. Waiting for disasters to force change is unacceptable.
"The history of inventions shows we allow greatness but police the harms. With AI, we need to move fast."