I'm interested in a certain kind of quantitatively-oriented, Bayesian, scope-sensitive philosophy -- one that takes seriously just how big and strange the world can be, and how much could be at stake in our impact on it. This is the sort of philosophy that most naturally prompts interest in longtermism (though it's not actually necessary for reaching many longtermist conclusions -- see here and here for discussion), and in my experience, some people assume that longtermism is what it ultimately implies. But I'm not so sure. In particular, I think that this sort of philosophy leads quickly into some very strange territory -- stranger territory, indeed, than mainstream presentations of longtermism tend to focus on. What I'm calling the "Crazy train" strand of my work tries to explore this territory and its implications (the name "Crazy train" is inspired by Ajeya Cotra's discussion here).
- In "Can you control the past?", I examine the case for non-causal decision theories. I think this case is quite strong, and that if we accept it, it might well shift our moral attention away from our causal impact on future people in our lightcone, and towards our non-causal impact elsewhere in the universe/multiverse (see Oesterheld (2017) and MacAskill et al (2021) for more on this).
- In "On infinite ethics," I examine the ways in which infinities break lots of ethical principles we like (even as they often prompt obsession with infinite outcomes) -- and in particular, the way that they puncture the dream of a simple, bullet-biting utilitarianism.
- In "Simulation arguments," I examine various arguments for assigning significant credence to living in a computer simulation.
- In "SIA vs. SSA," I examine a number of tricky issues in anthropic reasoning, and I argue that one approach to them (the "Self-Indication Assumption") is better than another (the "Self-Sampling Assumption"). I then extend the discussion, in "On the Universal Distribution" and "Anthropics and the Universal Distribution," to the question of whether a certain kind of fundamental prior can help resolve some of the trickiness (I'm skeptical).
These topics also interact in a number of rich ways (for example, SIA plausibly updates in favor of being in a simulation, and towards certainty that the universe is infinite; acausal decision theories make it easier to have infinite influence even if your causal influence is finite; and so on). I'm hoping to explore some of these interactions in more detail in the future. And I'm hoping to cover a few more "crazy train" topics that I haven't yet dug into as well.
Overall (and as I often emphasize at the end of these essays), I think that we're at a relatively early stage in understanding these sorts of considerations. I think we should tread carefully in trying to orient towards them wisely, and avoid doing brittle and stupid stuff with them in mind. But I don't think we should ignore them, either.
Examination of various arguments that we should assign significant probability to living in a computer simulation.
Infinities puncture the dream of a simple, bullet-biting utilitarianism. But they’re everyone’s problem.
Essay on whether a certain sort of prior helps resolve tough questions in anthropics. My answer: maybe a few, but at high cost.
Essay examining a certain sort of fundamental prior, called the “Universal Distribution.”
I think that you can “control” events you have no causal interaction with, including events in the past, and that this is a wild and disorienting fact, with uncertain but possibly significant implications. This essay attempts to impart such disorientation.