Rationality
A lot of the writing on this site is about rationality in some broad sense (e.g., about how to make choices and form beliefs), but I sometimes focus directly on certain foundational questions about it. In particular (and in parallel with my approach to ethics), I'm interested in ways of understanding rationality that do not treat it as (a) an external set of norms that your beliefs and choices must conform to, on pain of being subject to some sort of abstract insult; or (b) some kind of fetishization of "coherence" or "consistency" or some such as important in its own right. Rather, I see an interest in rationality as a natural extension of an aspiration towards agency -- that is, towards being a force for something in the world. On this picture, neither agency nor rationality is obligatory -- if you want, you can simply let yourself be "passive" with respect to the world, and allow what happens to be determined entirely by factors other than your decision-making. And conformity to "constraints" on rationality has no value in itself. But if you start to care about what happens, and to want some things to happen rather than others, then rationality is the art of making a difference; of not burning what you care about for no reason; of actively moving the world in directions that matter to you.
My most substantive writing on rationality basics is in the four-part series "On expected utility," which lays out what I currently see as the strongest argument for expected utility maximization as an ideal of decision-making. I've also written a different four-part series ("SIA > SSA") on anthropic reasoning -- an especially tricky aspect of Bayesianism, which focuses on assigning credences to different hypotheses about "who you are" in a given world, particularly in cases where the possible worlds at stake involve different numbers of observers.
Final essay in a four-part series on expected utility maximization (EUM). Examines some theorems that aim to justify the subjective-probability aspect of EUM, namely: Dutch Book theorems; Cox’s Theorem; and the Complete Class Theorem.
Third essay in a four-part series on expected utility maximization (EUM). Examines three axiomatic arguments for acting like an EUM-er: the von Neumann-Morgenstern theorem; an argument based on a very general connection between “separability” and “additivity”; and a related “direct” axiomatization of EUM in Peterson (2017).
This is the second essay in a four-part series on expected utility maximization (EUM). This part focuses on why it can make sense, in cases like “save one life for certain, or 1000 with 1% chance,” to choose the risky option, and hence to “predictably lose.” The answer is unsurprising: sometimes the upside is worth it. I offer three arguments to clarify this with respect to life-saving in particular. Though while EUM imposes consistency constraints on our choices, it does not provide guidance about what’s “worth it” or not — and in some cases, the world doesn’t either. Ultimately, you have to decide.
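As a quick worked illustration of the example above (my numbers, not taken from the essay): if you value each life saved equally and linearly, the expected values come apart sharply:

```latex
\begin{align*}
\mathbb{E}[\text{lives saved} \mid \text{safe option}]  &= 1 \times 1 = 1 \\
\mathbb{E}[\text{lives saved} \mid \text{risky option}] &= 0.01 \times 1000 = 10
\end{align*}
```

So an expected utility maximizer with those values takes the risky option, even though there is a 99% chance it saves no one — i.e., even though it “predictably loses.”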
This is the first essay in a four-part series on expected utility maximization (EUM). It presents a way of visualizing EUM that I call the “skyscraper model.” It also discusses (1) the failure of the “quick argument,” (2) reasons not to let your epistemic relationship to EUM stop at “apparently the math says blah,” and (3) whether being “representable” as an EUM-er is trivial (in the relevant sense, I don’t think so).
Final essay in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part discusses some prominent objections to SIA.
This essay is the third in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part briefly discusses betting in anthropics. In particular: why it’s so gnarly, why I’m not focusing on it, and why I don’t think it’s the only desideratum.
This essay is the second in a four-part series, explaining why I think that one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part focuses on objections to SSA.
This is the first essay in a four-part series explaining why I think one prominent approach to anthropic reasoning (the “Self-Indication Assumption” or “SIA”) is better than another (the “Self-Sampling Assumption” or “SSA”). This part lays out the basics of the debate.