Superforecasting the premises in “Is power-seeking AI an existential risk?”
Good Judgment has solicited reviews and forecasts from superforecasters regarding my report “Is power-seeking AI an existential risk?”, along with forecasts on three additional questions regarding timelines to AGI, and on one regarding the probability of existential catastrophe from out-of-control AGI.
A summary of the results is available on the Good Judgment website here, as are links to the individual reviews. Good Judgment has also prepared more detailed summaries of superforecaster comments and forecasts here (re: my report) and here (re: the other timelines and X-risk questions). I’ve copied key graphics below, along with a screenshot of a public spreadsheet of the probabilities from each forecaster.1
This project was funded by Open Philanthropy, my employer.2 The superforecasters completed a survey very similar to the one completed by other reviewers of my report (see here for links), except with an additional question (see footnote) about the “multiple stage fallacy.”3
Relative to my original report, the May 2023 superforecaster aggregation places higher probabilities on the first three premises – Timelines (80% relative to my 65%), Incentives (90% relative to my 80%), and Alignment Difficulty (58% relative to my 40%) – but substantially lower probabilities on the last three premises – High-Impact Failures (25% relative to my 65%), Disempowerment (5% relative to my 40%), and Catastrophe (40% relative to my 95%). And their overall probability on all the premises being true – that is, roughly, on existential catastrophe from power-seeking AI by 2070 – is 1%, compared to my 5% in the report.4 (Though in the supplemental questions included in the second part of the project, they give a 6% probability to existential catastrophe from out-of-control AGI by 2200, conditional on AGI by 2070; and a 40% chance of AGI by 2070.5)
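For concreteness, here is a minimal sketch (in Python, using the premise medians listed above) of how a multi-stage estimate of this kind composes: the headline number is roughly the product of the six premise probabilities. Note that the superforecasters' overall 1% was elicited directly rather than computed this way, so it need not match the naive product of their medians (see the footnotes on this point).

```python
# Premise probabilities, in the order listed above: Timelines, Incentives,
# Alignment Difficulty, High-Impact Failures, Disempowerment, Catastrophe.
report_medians = [0.65, 0.80, 0.40, 0.65, 0.40, 0.95]
superforecaster_medians = [0.80, 0.90, 0.58, 0.25, 0.05, 0.40]

def multiply_through(probabilities):
    """Multiply the per-premise probabilities to get the conjunctive estimate."""
    result = 1.0
    for p in probabilities:
        result *= p
    return result

print(f"Report, multiplied through: {multiply_through(report_medians):.1%}")
# -> 5.1% (consistent with the ~5% headline number in the report)
print(f"Superforecasters, multiplied through: {multiply_through(superforecaster_medians):.1%}")
# -> 0.2% (vs. their directly-elicited overall probability of 1%)
```

The gap between the naive product and the directly-elicited aggregate illustrates why the survey asked separately about the “multiple stage fallacy”: decomposing an estimate into many conjunctive premises can push the multiplied-through number lower than a forecaster's holistic judgment.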
To the extent the superforecasters and I disagree, especially re: the overall probability of existential risk from power-seeking AI, I haven’t updated heavily in their direction, at least thus far (though I have updated somewhat).6 This is centrally because:
- My engagement thus far with the written arguments in the reviews (which I encourage folks to check out – see links in the spreadsheet) hasn’t moved me much.7
- I remain unsure how much to defer to raw superforecaster numbers (especially for longer-term questions where their track-record is less proven) absent object-level arguments I find persuasive.8
- I had priced in some amount of “I think that AI risk is higher than do many other thoughtful people who’ve thought about it at least somewhat” already.
In this sense, perhaps, I am similar to some of the domain experts in the Existential Risk Persuasion Tournament, who continued to disagree significantly with superforecasters about the probability of various extinction events even after arguing about it. However, I think it’s an interesting question how to update in the face of disagreement of this type (see e.g. Alexander here for some reflections), and I’d like to think more about it.9
Thanks to Good Judgment for conducting this project, and to the superforecaster reviewers for their participation. If you’re interested in more examples of superforecasters weighing in on existential risks, I encourage you to check out the Existential Risk Persuasion Tournament (conducted by the Forecasting Research Institute) as well.
After discussion with Good Judgment, I’ve made a few small adjustments, in the public spreadsheet, to the presentation of the data they originally sent to me. In particular, Superforecaster AI Expert #10 gave two different forecasts ba ...
It was initially instigated by the FTX Future Fund back in 2022, but Open Philanthropy took over funding after FTX collapsed.
The new question (included in the final section of the survey) was: "One concern about the estimation method in the report is that the multi-premise structure biases towards lower numbers (this i ...
Note that this overall superforecaster probability differs somewhat from what you get if you just multiply through the superforecaster median for each premise. If you do that, the superforecaster numbers imply a ~10% probability that m ...
Technically, the 40% on AGI by 2070 and the 80% on APS-AI systems by 2070 are compatible, given that the two thresholds are defined differently. Though whether the differences warrant this large of a gap is a further question.
More specifically: partly as a result of this project, and partly as a result of other projects like the ...
The final premise – whether permanent disempowerment of ~all humans at the hands of power-seeking AIs constitutes an existential catastrophe (the aggregated superforecaster median here is 40%) – also strikes me as substantially a matter of philosophy/ethics rather ...
Though I do think that a lot of the game is in “how much weight do you give to various considerations” rather than “what considerations are in play.” The former can be harder to argue about, but it may be the place where successful for ...
I think of it as similar to the question of how much to update on the fact that markets (and also: academic economists) generally do not seem to be expecting extreme, AI-driven economic growth with the same probability I do.