My work at Open Philanthropy focuses on making sure that the development of advanced AI systems does not lead to existential catastrophe. I've written a long report ("Is Power-Seeking AI an Existential Risk?") on what I see as the most important risk here: namely, that misaligned AI systems end up disempowering humanity. There's also a video summary of that report available here; slides here; and a set of reviews here. Before that, I wrote a report on the computational capacity of the human brain (blog post summary here), as part of a broader investigation at Open Philanthropy into when advanced AI systems might be developed (see Holden Karnofsky's "Most Important Century" series for a summary of that investigation).

Predictable updating about AI risk

How worried about AI risk will we be when we can see advanced machine intelligence up close? We should worry accordingly now.

Existential Risk from Power-Seeking AI (shorter version)

Building a second advanced species is playing with fire.

Is Power-Seeking AI an Existential Risk?

Report for Open Philanthropy examining what I see as the core argument for concern about existential risk from misaligned artificial intelligence.

Video and Transcript of Presentation on Existential Risk from Power-Seeking AI

Video and transcript of a presentation I gave summarizing my report on existential risk from power-seeking AI.

How much computational power does it take to match the human brain?

Report for Open Philanthropy on the computational power sufficient to match the human brain's task-performance. I examine four different methods of generating such estimates.
