Existential Risk and the Future of Humanity: Lessons from AI, Pandemics, and Nuclear Threats | Toby Ord (Author of "The Precipice")

Toby Ord on the 1-in-6 odds of extinction, four key types of AI threats, and the policies we need now to safeguard future generations.

Mario Gabriele
Jun 24
 
 

A quick note before today's podcast: Last Thursday, we launched Part 2 of our four-part series on Founders Fund. If you haven't caught up yet, you can read Part 1 here and Part 2 here. For everyone following along, Part 3 drops this Thursday, June 26th.

In the meantime, we hope you enjoy today's podcast episode below.

YouTube

Spotify

Apple

This episode is brought to you by Brex: The banking solution for startups.


How close are we to the end of humanity? Toby Ord, Senior Researcher at Oxford University’s AI Governance Initiative and author of The Precipice, argues that the odds of a civilization-ending catastrophe this century are roughly one in six. In this wide-ranging conversation, we unpack the risks that could end humanity’s story and explore why protecting future generations may be our greatest moral duty.

We explore:

  • Why existential risk matters and what we owe the 10,000-plus generations who came before us

  • Why Toby believes we face a one-in-six chance of civilizational collapse this century

  • The four key types of AI risk: alignment failures, gradual disempowerment, AI-fueled coups, and AI-enabled weapons of mass destruction

  • Why racing dynamics between companies and nations amplify those risks, and how an AI treaty might help

  • How short-term incentives in democracies blind us to century-scale dangers, and the policy ideas that could fix this

  • The lessons COVID should have taught us (but didn’t)

  • The hidden ways the nuclear threat has intensified as treaties lapse and geopolitical tensions rise

  • Concrete steps each of us can take today to steer humanity away from the brink


Explore the episode

Timestamps

(00:00) Intro

(02:20) An explanation of existential risk, and the study of it

(06:20) How Toby’s interest in global poverty sparked his founding of Giving What We Can

(11:18) Why Toby chose to study under Derek Parfit at Oxford

(14:40) Population ethics, and how Parfit’s philosophy looked ahead to future generations

(19:05) An introduction to existential risk

(22:40) Why we should care about the continued existence of humans

(28:53) How fatherhood sparked Toby’s gratitude to his parents and previous generations

(31:57) An explanation of how LLMs and agents work

(40:10) The four types of AI risks

(46:58) How humans justify bad choices: lessons from the Manhattan Project

(51:29) A breakdown of the “unilateralist’s curse” and a case for an AI treaty

(1:02:15) COVID’s impact on our understanding of pandemic risk

(1:08:51) The shortcomings of our democracies and ways to combat our short-term focus

(1:14:50) Final meditations


Follow Toby Ord

Website: https://www.tobyord.com/

LinkedIn: https://www.linkedin.com/in/tobyord

X: https://x.com/tobyordoxford?lang=en

Giving What We Can: https://www.givingwhatwecan.org/


Resources and episode mentions

Books

  • The Precipice: Existential Risk and the Future of Humanity: https://www.amazon.com/dp/0316484911

  • Reasons and Persons: https://www.amazon.com/Reasons-Persons-Derek-Parfit/dp/019824908X

  • Practical Ethics: https://www.amazon.com/Practical-Ethics-Peter-Singer/dp/052143971X

People

  • Derek Parfit: https://en.wikipedia.org/wiki/Derek_Parfit

  • Carl Sagan: https://en.wikipedia.org/wiki/Carl_Sagan

  • Stuart Russell: https://en.wikipedia.org/wiki/Stuart_J._Russell

Other resources

  • DeepMind: https://deepmind.google/

  • OpenAI: https://openai.com/

  • Manhattan Project: https://en.wikipedia.org/wiki/Manhattan_Project

  • The Unilateralist’s Curse and the Case for a Principle of Conformity: https://nickbostrom.com/papers/unilateralist.pdf

  • The Nuclear Non-Proliferation Treaty (NPT), 1968: https://history.state.gov/milestones/1961-1968/npt

  • The Blitz: https://en.wikipedia.org/wiki/The_Blitz

  • Operation Warp Speed: https://en.wikipedia.org/wiki/Operation_Warp_Speed


Subscribe to the show

I’d love it if you’d subscribe and share the show. Your support makes all the difference as we try to bring more curious minds into the conversation.

YouTube

Spotify

Apple


Production and marketing by penname.co. For inquiries about sponsoring the podcast, email [email protected].
