My work at Open Philanthropy focuses on making sure that the development of advanced AI systems does not lead to existential catastrophe. I've written a long report ("Is Power-Seeking AI an Existential Risk?") on what I see as the most important risk here -- namely, that misaligned AI systems end up disempowering humanity. There's also a video summary of that report here; slides here; and a set of reviews here. Before that, I wrote a report on the computational capacity of the human brain (blog post summary here), as part of a broader investigation at Open Philanthropy into when advanced AI systems might be developed (see Holden Karnofsky's "Most Important Century" series for a summary of that investigation).
How worried about AI risk will we be when we can see advanced machine intelligence up close? If we can predict that we'll be worried then, we should worry accordingly now.
Building a second advanced species is playing with fire.
Report for Open Philanthropy examining what I see as the core argument for concern about existential risk from misaligned artificial intelligence.
Video and transcript of a presentation I gave on existential risk from power-seeking AI, summarizing my report on the topic.
Report for Open Philanthropy on the computational power sufficient to match the human brain's task-performance. I examine four different methods for generating such estimates.