I’m a writer, researcher, and philosopher. I work as a senior research analyst at Open Philanthropy, where I focus on existential risk from advanced artificial intelligence. I also write independently about various topics in philosophy and futurism, and I have a doctorate in philosophy from the University of Oxford.
Much of my work is about trying to help us orient wisely towards humanity's long-term future. I see this work as fitting a loose logical structure, with questions about meta-ethics and rationality at the foundation, feeding into questions about ethics (and especially about effective altruism), which motivate concern for the long-term future, which in turn motivates attention to advanced artificial intelligence in particular. A further strand of my work (what I'm calling the "crazy train" strand) grapples with some of the stranger places where philosophical reflection of this broad sort can lead. I also sometimes write about topics in the philosophy of mind, and about other aspects of my overall existential/spiritual orientation (death, evil, clinging, sincerity).
I recently completed a series of essays called "Otherness and control in the age of AGI" (PDF here), which mixes a lot of these topics together. You can listen to me discuss this series on the Dwarkesh Podcast here.
You can learn more about me here.
My CV is here.
My Twitter is @jkcarlsmith.
My email is joseph [dot] k [dot] carlsmith @ gmail [dot] com.
RSS feed for this site here.
See here for audio versions of the essays, or search “Joe Carlsmith Audio” on your podcast app.
What can we learn from recent empirical demonstrations of scheming in frontier models?
An attempt to distill the whole "Otherness and control" series into a single talk.
Extra content includes: AI collusion; the nature of intelligence; concrete takeover scenarios; flawed training signals; tribalism and mistake theory; more on what good outcomes look like.
Report for Open Philanthropy examining what I see as the core argument for concern about existential risk from misaligned artificial intelligence.
Introduction and summary for a series of essays about how agents with different values should relate to one another, and about the ethics of seeking and sharing power.
There are oceans we have barely dipped a toe into. There are drums and symphonies we can barely hear. There are suns whose heat we can barely feel on our skin.
About
Before doing my doctorate at Oxford, I was a PhD student in philosophy at NYU from 2016 to 2018. I also have a BPhil in philosophy from Oxford, and a BA in philosophy from Yale.
I spent the academic year of 2017-18 helping Toby Ord write The Precipice: Existential Risk and the Future of Humanity, which is a clear statement of a number of ideas that matter to me: notably, that human history might be just beginning; that the future could be incomprehensibly vast and extraordinary; and that it is extremely important that we do not mess up in a way that destroys this possibility. This New Yorker piece about Toby and the book is also a good summary.
I am also interested in meditation. I've spent over a year of my life on silent meditation retreat, in stretches ranging from a few days to three months.
Opinions on this site are my own; I am not speaking on behalf of my employer.
Audio:
- Extended audio from my conversation with Dwarkesh Patel. Transcript here.
- "Joe Carlsmith on Scheming AI," on Hear This Idea with Fin Moorhouse. Transcript here.
- "Navigating serious philosophical confusions" on the 80,000 Podcast with Rob Wiblin. Transcript here.
- "Utopia on earth and morality without guilt" on Clearer Thinking with Spencer Greenberg.
- "Creating Utopia" on the Utilitarian Podcast with Gus Docker. Transcript here.
Video:
- "Otherness and control in the age of AGI," (Lecture at Stanford University, October 2024).
- "Otherness and control in the age of AGI," (The Dwarkesh Podcast, August 2024).
- "Sharing Reality with Walt Whitman" (Lovely short video created by Michel Justen, inspired by my essay "On future people, looking back at 21st century longtermism," April 2024)
- "Scheming AIs" (EA Global Global Catastrophic Risks, February 2024)
- "On Infinite Ethics, Utopia, and AI" (The Foresight Institute Existential Hope Podcast, September 2023)
- "The Dangers of Advanced AI: An Existential Risk?" (The Flares Podcast, July 2023).
- "How We Change Our Minds About AI Risk" (Future of Life Institute Podcast with Gus Docker, June 2023)
- "Utopia, AI, and Infinite Ethics" (Lunar Society Podcast with Dwarkesh Patel, August 2022)
- "Existential Risk from Power-seeking AI" (Harvard Effective Altruism Agathon Lecture Series)
- "Orienting towards the long-term future" (Effective Altruism Global 2017)
The writing on this site published prior to October 2022 was originally hosted on the blog Hands and Cities, which now links to this site. You can view the old comments on that blog here.
The logo is a reference to the "true city," which is a term I once heard used to refer to Utopia. The other images are from Marilynne Robinson’s Housekeeping, and from Walt Whitman’s "Crossing Brooklyn Ferry."
"Something happened, something so memorable that when I think back to the crossing of the bridge, one moment bulges like the belly of a lens and all the others are at the peripheries and diminished. Was it only that the wind rose suddenly, so that we had to cower and lean against it like blind women groping their way along a wall? or did we really hear some sound too loud to be heard, some word so true we did not understand it, but merely felt it pour through our nerves like darkness or water?"
— Marilynne Robinson
"It avails not, time nor place—distance avails not,
I am with you, you men and women of a generation, or ever so many generations hence,
Just as you feel when you look on the river and sky, so I felt,
Just as any of you is one of a living crowd, I was one of a crowd,
Just as you are refresh’d by the gladness of the river and the bright flow, I was refresh’d,
Just as you stand and lean on the rail, yet hurry with the swift current, I stood yet was hurried..."
— Walt Whitman