A lot of the writing on this site is about rationality in some broad sense (e.g., about how to make choices and form beliefs), but I sometimes focus directly on certain foundational questions about it. In particular (and in parallel with my approach to ethics), I'm interested in ways of understanding rationality that do not treat it as (a) an external set of norms that your beliefs and choices must conform to, on pain of being subject to some sort of abstract insult; or as (b) a fetishization of "coherence" or "consistency" (or some such) as important in its own right. Rather, I see an interest in rationality as a natural extension of an aspiration towards agency -- that is, towards being a force for something in the world. On this picture, neither agency nor rationality is obligatory -- if you want, you can simply let yourself be "passive" with respect to the world, and allow what happens to be determined entirely by factors other than your decision-making. And conformity to "constraints" on rationality has no value in itself. But if you start to care about what happens, and to want some things to happen rather than others, then rationality is the art of making a difference; of not burning what you care about for no reason; of actively moving the world in directions that matter to you.

My most substantive writing on rationality basics is in the four-part series "On expected utility," which lays out what I currently see as the strongest argument for expected utility maximization as an ideal of decision-making. I've also written a different four-part series ("SIA > SSA") on anthropic reasoning -- an especially tricky aspect of Bayesianism, concerned with assigning credences to different hypotheses about "who you are" in a given world, particularly in cases where the possible worlds at stake involve different numbers of observers.

03.24.2022
On expected utility / Part 4:
Dutch books, Cox, and Complete Class

Final essay in a four-part series on expected utility maximization (EUM). Examines some theorems that aim to justify the subjective probability aspect of EUM, namely: Dutch Book theorems, Cox’s Theorem, and the Complete Class Theorem.
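(To give a quick sense of the flavor of a Dutch Book argument -- a standard textbook illustration, not drawn from the essay itself: suppose you assign credence 0.6 to “rain tomorrow” and 0.6 to “no rain tomorrow,” violating the probability axioms. If you accept any bet whose price is at most your credence times the payout, you’ll pay $0.60 for a ticket worth $1 if it rains, and another $0.60 for a ticket worth $1 if it doesn’t. You’ve spent $1.20 and will collect exactly $1 however the weather turns out -- a guaranteed loss of $0.20. Dutch Book theorems generalize this: credences that violate the probability axioms can always be exploited in this way.)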

Continue reading
03.21.2022
On expected utility / Part 3:
VNM, separability, and more

Third essay in a four-part series on expected utility maximization (EUM). Examines three axiomatic arguments for acting like an EUM-er: the von Neumann-Morgenstern theorem; an argument based on a very general connection between “separability” and “additivity”; and a related “direct” axiomatization of EUM in Peterson (2017).

Continue reading
03.18.2022
On expected utility / Part 2:
Why it can be OK to predictably lose

This is the second essay in a four-part series on expected utility maximization (EUM). This part focuses on why it can make sense, in cases like “save one life for certain, or 1000 lives with a 1% chance,” to choose the risky option, and hence to “predictably lose.” The answer is unsurprising: sometimes the upside is worth it. I offer three arguments to clarify this with respect to life-saving in particular. Still, while EUM imposes consistency constraints on our choices, it does not provide guidance about what’s “worth it” or not -- and in some cases, the world doesn’t either. Ultimately, you have to decide.
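(To make the arithmetic explicit -- a back-of-the-envelope gloss, not a quote from the essay: if all that matters is lives saved, the certain option has an expected value of 1 life, while the risky option has an expected value of 0.01 × 1000 = 10 lives. An expected-value calculation thus favors the risky option by a factor of ten, even though you lose 99% of the time -- which is exactly the sense in which choosing it means “predictably losing.”)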

Continue reading
03.16.2022
On expected utility / Part 1:
Skyscrapers and madmen

This is the first essay in a four-part series on expected utility maximization (EUM). It presents a way of visualizing EUM that I call the “skyscraper model.” It also discusses (1) the failure of the “quick argument,” (2) reasons not to let your epistemic relationship to EUM stop at “apparently the math says blah,” and (3) whether being “representable” as an EUM-er is trivial (in the relevant sense, I don’t think so).

Continue reading
09.30.2021
SIA > SSA / Part 4:
In defense of the presumptuous philosopher

Final essay in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part discusses some prominent objections to SIA.

Continue reading
09.30.2021
SIA > SSA / Part 3:
An aside on betting in anthropics

This essay is the third in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part briefly discusses betting in anthropics. In particular: why it’s so gnarly, why I’m not focusing on it, and why I don’t think it’s the only desideratum.

Continue reading
09.30.2021
SIA > SSA / Part 2:
Telekinesis, reference classes, and other scandals

This essay is the second in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part focuses on objections to SSA.

Continue reading
09.30.2021
SIA > SSA / Part 1:
Learning from the fact that you exist

This is the first essay in a four-part series explaining why I think one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part lays out the basics of the debate.

Continue reading