Last updated: 03.26.2024
Published: 03.24.2022
Series: On expected utility, Part 4

Dutch books, Cox, and Complete Class

Previously in sequence: Skyscrapers and madmen; Why it can be OK to predictably lose; VNM, separability, and more. PDF of full sequence here.

This is the final essay in a four-part series on expected utility maximization (EUM). This part focuses on theorems that aim to justify the subjective probability aspect of EUM, namely: Dutch Book theorems; Cox’s Theorem (this one is still a bit of a black box to me); and the Complete Class Theorem (this one also supports EUM more broadly). I also briefly discuss Savage, Jeffrey-Bolker, and a certain very general argument for making consistent trade-offs on the margin – both across goods, and across worlds.

I. Comparing with the urns

So we’ve seen three ways of arguing for EUM – an argument from the vNM axioms, an argument from the general connection between separability and additivity, and Peterson’s “direct argument.” In all of these cases, though, we had to assume some probability assignment. Let’s look at that assumption more directly.

The “hanging out with a coin-flipping, urn-pulling God” set-up made the assumption of a probability assignment relatively innocuous, in virtue of the fact that basically everyone wants to be a standard probabilist about things like coins, urns, and spinning wheels. For other types of propositions, though (e.g., “what’s the chance that some human walks on Mars before 2100?”), some people, and some theories of probability (see here), start saying: “no, you can’t put probabilities on things like that.”

Still, fans of EUM often do. Indeed, they start putting probabilities on basically any kind of proposition you want — probabilities often understood to express some subjective level of confidence, and hence called “subjective probabilities.” This section briefly describes a way of thinking about this practice that I often use in my own life (I also gestured at this in part 2). Then I turn to some prominent theorems that fans of subjective probability often look to for support.

Suppose that God has already created a world. He’s told you some stuff about it, but you don’t know everything. Indeed, maybe God lets you descend into the world, live a particular life for a while, and then return to heaven, equipped with whatever knowledge of the world that life gave you.

And suppose that once you’re back in heaven, no matter what world God has created, you always love heavenly ice-cream with the same level of passion, and strongly prefer it over nothing (feel free to pick a more meaningful prize if you’d like). Now God starts asking you questions of the following format: “would you rather I give you a heavenly ice-cream cone if (a) X is true of the world I created, or if (b) I pull a red ball out of an urn, where the fraction of red balls is p?” (see Critch (2016) for more on this sort of technique).

Here, if we assume that you always want the ice-cream cone in the scenario you find more likely, we can then say that if you choose (a), then you think X more than p likely; if you choose (b), less; and if you’re indifferent, then p is your probability on X.
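
To make this procedure concrete: here’s a minimal sketch in Python of the (a)-vs-(b) elicitation, which just binary-searches on the urn’s fraction of red balls until the two options feel the same. The `prefers_x_over_urn` function is a hypothetical stand-in for your own brute “which is more likely?” judgment – nothing here goes beyond the procedure described above.

```python
def elicit_probability(prefers_x_over_urn, tolerance=0.01):
    """Approximate your subjective probability for X via (a)-vs-(b) comparisons.

    prefers_x_over_urn(p) should return True if you'd rather get the prize
    if X is true than if a red ball is drawn from an urn with fraction p red.
    """
    low, high = 0.0, 1.0
    while high - low > tolerance:
        p = (low + high) / 2
        if prefers_x_over_urn(p):
            low = p   # you treat X as more likely than p, so search higher
        else:
            high = p  # you treat X as less likely than p, so search lower
    return (low + high) / 2

# Example: someone whose unarticulated confidence in "humans on Mars by 2100" is 0.3.
print(elicit_probability(lambda p: 0.3 > p))  # converges to roughly 0.3
```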

What’s more, if you’re a standard probabilist about urns, I’m optimistic about making sure that you’re a standard probabilist about “humans on Mars”-ish things as well, on pain of various inconsistencies, guaranteed losses of probability on stuff you like, and other forms of silliness. I haven’t worked through all the details here, but see the footnote for examples.1

Now, this whole procedure can feel like a backwards way of assigning probabilities. Don’t I need to know my probability on X, in order to choose between (a) and (b)? And fair enough: really, faced with (a) and (b), you still need to do some kind of brute “which is more likely” mental maneuver in relationship to X and p – a maneuver that we haven’t really elucidated.

Still, though, even if this isn’t a good definition of probability, I find thinking about subjective probabilities in terms of (a) vs. (b)-like choices a helpful tool – and one that makes “but you can’t assign a probability to X” seem, to me, quite an unattractive form of protest. If you’re going to make sensible (a)-vs.-(b)-like choices, then you effectively have to assign such probabilities – or at least, to act like you have.

And I prefer this way of thinking about subjective probability to salient alternatives like “fair betting odds” (see next section). In particular, thinking in terms of betting odds often requires thinking about paying as well as receiving money, and/or about different amounts of money, in different circumstances. I find this more cognitively burdensome than just asking questions like: “would I rather win {blah thing I definitely want} if X, or if a coin flip came up heads?” And such questions skip over complications to do with risk aversion and the diminishing utility of money: there’s no downside to “betting,” here, and the only thing that matters is that you want {blah thing} at all (and the same amount in both cases).

That said, note that this sort of set-up depends importantly on your always loving {blah thing} with the same level of passion, such that the only variable we’re changing is the likelihood of getting it. As ever with these hanging-out-with-God scenarios, this isn’t always realistic: in the real world, you might well prefer <ice-cream if the sandwich I just ate was normal> over <ice-cream if the sandwich was poisoned>, even if you think that sandwich very likely to have been poisoned, because you like ice-cream more if you’re alive. In practice, though, even if you’re an “enough with these unrealistic thought experiments” type, you can generally just check whether your preference for a given prize (e.g., $10k) actually varies in the relevant way. Often it won’t, and the method will work fine even outside of thought-experimental conditions.

At least in the form I’ve presented it, though, this method isn’t a formal argument or theorem. I’ll turn to some of those now.

II. Dutch books

Let’s start with what is maybe the most influential argument for probabilism: namely, the so-called “Dutch book theorem.” This isn’t my favorite argument, but it’s sufficiently common that it’s worth mentioning and understanding.

Consider a proposition X, and a ticket that says: “I’ll pay you $1 if X is true.” The basic set-up of the Dutch book argument is to require you to specify a “fair price” for any ticket like this – i.e., a price where you’re equally happy to buy a ticket, at that price (and hence get some chance of $1), or to sell one (and hence incur some chance of owing $1).

Perhaps you’re thinking: “wait, specifying a fair price on everything sounds like a pretty EUM-ish thing to do. Are you already assuming that I’m some sort of expected utility maximizer? And in particular, are you assuming that my utility in money is linear?”

That is, indeed, often the backdrop picture (we make the price of the ticket small, so as to make it plausible that your u($) is linear, on the margin, for amounts that small) — and it’s one reason I like this approach less than e.g. the one in the previous section. But strictly, we don’t need a backdrop like this. Rather, we can just prove stuff about fair prices like these, and leave their connection with “probability” and “utility” open.

In particular: we can prove that if your ticket prices fail to satisfy the probability axioms, you are vulnerable to accepting a series of trades that leave you with a guaranteed loss (we also prove the converse: if you satisfy the probability axioms, you’re immune to such losses — but I won’t focus on that here, and a glance at the SEP suggests that some further conditions are required). We imagine these trades being made with a “Dutch Bookie,” who doesn’t know anything more than you about X, but who takes advantage of the inconsistencies in your fair prices.

I’m not going to go through the full proof (see here), but as an example: let’s show that if you violate the third probability axiom (i.e., p(A or B) = p(A) + p(B), if A and B are mutually exclusive), you can get Dutch booked. And let’s think of your prices in terms of the fraction of $1 they represent (e.g. 30 cents = .3).

Suppose that A is “Trump is the next US President,” and B is “Biden is the next US president.” And suppose your fair price for A is .3, and your fair price for B is .3 (see current Predictit odds here), but your fair price for “Trump or Biden is the next US president” is .7. Thus, the bookie gives you .3 for a Trump ticket, .3 for a Biden ticket, and he sells you a “Trump or Biden” ticket for .7. So you’re at -.1 prior to the election; you’ll pay a dollar if Trump wins, a dollar if Biden wins, and you’ll get a dollar if Trump or Biden wins. Oops, though: now, no matter who wins, you’ll be at -.1 at the end of the night, too (if it’s Trump or Biden, you and the bookie will swap a dollar for a dollar, leaving you where you started; and if neither wins, no money will change hands). A guaranteed loss.

Similarly, if you had .3 on Trump, .3 on Biden, and .5 on Trump-or-Biden, then the bookie will sell you a Trump ticket and a Biden ticket, and then buy a Trump-or-Biden ticket, such that, again, you’ll be down -.1, with $1 coming if Trump, and $1 if Biden, but paying out $1 if Trump-or-Biden. Bad news.
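
Here’s a small sketch of the arithmetic in both cases (my own variable names, not anything official): it tracks your net position, given your three ticket prices, under each way the election could go.

```python
def net_for_you(price_a, price_b, price_a_or_b):
    """Your end-of-night position when a bookie exploits non-additive fair prices."""
    if price_a_or_b > price_a + price_b:
        # Bookie buys the A ticket and the B ticket from you, and sells you the A-or-B ticket.
        upfront = price_a + price_b - price_a_or_b
        payoffs = {"A wins": -1 + 1, "B wins": -1 + 1, "neither": 0}
    else:
        # Bookie sells you the A ticket and the B ticket, and buys the A-or-B ticket from you.
        upfront = -price_a - price_b + price_a_or_b
        payoffs = {"A wins": 1 - 1, "B wins": 1 - 1, "neither": 0}
    return {outcome: round(upfront + p, 2) for outcome, p in payoffs.items()}

print(net_for_you(0.3, 0.3, 0.7))  # {'A wins': -0.1, 'B wins': -0.1, 'neither': -0.1}
print(net_for_you(0.3, 0.3, 0.5))  # {'A wins': -0.1, 'B wins': -0.1, 'neither': -0.1}
```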

There’s a large literature on this sort of argument. On one interpretation, its force is pragmatic: obey the probability axioms (at least with your fair prices), or you’ll lose money. This interpretation has to deal with various objections to the effect of: “what if I just refuse to post fair prices?” Or: “what if there are no Dutch bookies around?” Or “what if I can foresee what I’m in for, and refuse the sequences of trades?” (Or maybe even: “what if there are ‘Czech bookies’ around, who are going around buying and selling in a way that would guarantee me profit, if my prices violate the probability axioms?”). The vibes here are similar to some that come up in the context of money-pumps, but I haven’t tried to dive in.

On another interpretation, the argument “dramatizes an inconsistency in your attitudes.” For example: even granted that “Trump” and “Biden” are mutually exclusive, you’ll pay a different price if I offer you “Trump” and separately “Biden,” instead of “Trump or Biden,” and this seems pretty silly. If that’s the argument, though, did we need all the faff about bookies? (And as Hajek discusses, we’ll have trouble proving the reinterpreted “converse” theorem – i.e., maybe it’s true that if your fair prices violate the probability axioms, you necessarily have inconsistent attitudes, but is it true that if you have inconsistent attitudes, you necessarily violate the probability axioms?)

My main worry about Dutch book arguments, though, is that I feel like they leap into a very EUM-ish mind-set, without explaining how we got there. In particular, they operate by making “what fraction of blah utility would you pay, to get blah utility if X?” a proxy for your probability on X, by treating money as a proxy for utility. But if I’m just starting out on my road to EUM, I haven’t necessarily constructed some utility function that would allow me to answer this sort of question in a way that I understand (and saying: “well, your utility per dollar is linear on the margin for small amounts of money, right?” won’t dispel my confusion). Indeed, some of the most common mechanisms for constructing my utility function (e.g., vNM) require some antecedent probabilism to get going.

III. Cox’s theorem

Let’s turn, then, to a different argument for probabilism, which doesn’t assume such EUM-ish vibes: namely, Cox’s theorem (original paper here).

Cox’s theorem assumes that you want to have some notion of a “degree of belief” or “plausibility,” which can be represented by a real number assigned to a proposition (this has a little bit of a “hey wait, why do that?” vibe, but I’ll set that aside for now). It then shows that if you want your “plausibility” assignment — call this plaus(x) — to obey basic logic, along with a few other plausible conditions (see here for a summary), then your plausibilities have to be “isomorphic” to standard probabilities: i.e., there has to be some reversible mapping from your plausibilities to standard probabilities, such that we can construct a Bayesian version of you, whose probabilities track your plausibilities perfectly.

(Or at least: maybe we can? There are apparently problems with various statements of Cox’s theorem – including the original, and the statement in Jaynes (1979) — but maybe they can be fixed, though maybe doing so requires a non-obvious additional principle? I definitely haven’t dug into the weeds, here – but see e.g. Van Horn (2003) on “R4”.)

What are these extra “plausible conditions”? There are three:

  1. Given some background information A, the plausibility of B and C is some function of the plausibility of B given A, and the plausibility of C given (A and B). That is: plaus(B∧C|A) = F(plaus(B|A), plaus(C|A∧B)), for some function F.

Jaynes mentions an “argument from elimination” for this, but I’m pretty happy with it right off the bat (though I’m admittedly influenced by some kind of background probabilism). That is, it does just seem that given our current background evidence, the plausibility of, e.g., “Trump wins the next election and Melania is vice president” should be some function of (I) the plausibility of “Trump wins the next election,” and (II) the plausibility of “Melania is vice president” given “Trump wins the next election.”

  2. The plausibility of not-B|A is some function of the plausibility of B|A. That is: plaus(not-B|A) = G(plaus(B|A)).

Again, sounds great. The plausibility of “Trump doesn’t win the next election” (conditional on our current background evidence) seems very much a function of the plausibility of “Trump wins the next election” (conditional on that evidence).

  3. F and G are both “monotonic” – that is, they are either always non-decreasing, or non-increasing, as their inputs grow.

Also looks good. In particular: intuitively, the plausibility of B and C should go up as the plausibility of B goes up, and as the plausibility of C|B goes up. And the plausibility of not-B should go down, as the plausibility of B goes up.
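
As a quick sanity check (this isn’t part of Cox’s proof), ordinary probability does satisfy these conditions, with F(x, y) = x*y and G(x) = 1 - x. A minimal sketch:

```python
import random

F = lambda x, y: x * y   # plaus(B∧C|A) = F(plaus(B|A), plaus(C|A∧B)), i.e. the product rule
G = lambda x: 1 - x      # plaus(not-B|A) = G(plaus(B|A)), i.e. the negation rule

for _ in range(1000):
    x, y = random.random(), random.random()
    # Condition 3: F is non-decreasing in each argument, and G is non-increasing.
    assert F(x, y) <= F(min(x + 0.1, 1.0), y)
    assert F(x, y) <= F(x, min(y + 0.1, 1.0))
    assert G(x) >= G(min(x + 0.1, 1.0))
print("the product and negation rules satisfy conditions 1-3")
```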

From this, though (modulo the complications/problems mentioned above), we can get to the “isomorphic to Bayesianism” thing. I’m not going to try to go through the proof (which I haven’t really understood), but I’ll gesture at one part of it: namely, the derivation of the “product rule” p(B∧C|A) = p(B|A)*p(C|A∧B).

As I understand it, the basic driver here is that because ∧ is associative (that is, plaus(A∧(B∧C)) = plaus((A∧B)∧C)), we can show that F needs to be such that F(x, F(y, z)) = F(F(x, y), z) (see here). And then we can prove, using a lemma from Aczél (see p. 16 here), that this means there is some (reversible?) function W such that W(F(x, y)) = W(x)*W(y), such that if we plug in x = plaus(B|A) and y = plaus(C|A∧B), we get the product rule we wanted.
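
Here’s a toy illustration of that step (my example, not one from Cox or Aczél): suppose your “plausibilities” are log-probabilities, so that the natural combination rule is F(x, y) = x + y rather than a product. That F still satisfies the associativity equation, and W(x) = exp(x) is the reversible map that turns it into the product rule – which is the sense in which a non-product plausibility calculus can still be “isomorphic” to standard probability.

```python
import math
import random

F = lambda x, y: x + y      # combination rule on log-plausibilities
W = lambda x: math.exp(x)   # the (reversible) map back to ordinary probabilities

for _ in range(1000):
    x, y, z = (math.log(1 - random.random()) for _ in range(3))  # logs of numbers in (0, 1]
    assert math.isclose(F(x, F(y, z)), F(F(x, y), z))            # the associativity equation
    assert math.isclose(W(F(x, y)), W(x) * W(y))                 # the product rule, via W
print("F(x, y) = x + y is isomorphic to the product rule, with W = exp")
```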

I wish I understood the “lemma” part of this better. Talking about it with someone, I was able to get at least some flavor of what (I’m told) is the underlying dynamic, but I’m sufficiently hazy about it that I’m going to relegate it to a footnote.2

Overall: I’m putting this one down as “still a pretty black box,” and I’d love it if someone were to write a nice, intuitive explanation of what makes Cox’s theorem work.

Assuming it does work, though, I think it’s a substantive point in probabilism’s favor. In particular, it’s getting to probabilism from assumptions about the dynamics of the “plausibility” that seem to me weaker than – or at least, different from – the probability axioms. Thus, (1) and (2) above are centrally about what the plausibility of different propositions depends on (rather than how to calculate it), and (3) is about a very high-level and common-sense qualitative dynamic that plausibility calculations need to satisfy (e.g., things like “as a proposition gets more plausible, its negation gets less plausible”). And because of its more purely formal (as opposed to pragmatic) flavor, Cox’s theorem suffers less from objections of the form “what if I just don’t do deals with the Dutch bookie?” and “what if there aren’t any Dutch bookies around?” (though this also renders it more vulnerable to: “who cares about these constraints?”).

IV. The Complete Class Theorem

Let’s look at one final way of trying to justify probabilism (as well as something like EUM): namely, the Complete Class Theorem (here I’m drawing heavily on a simplified version of Abram Demski’s explanation here; see also these notes, and the original paper from Wald).

The set-up here is something like the following. God presents you with a set of worlds, W, and tells you that you’re going to be born into one of them (W1, W2, etc). Further, each of these worlds comes with a set of observations (O1, O2…) that you’d make, if you were in that world. However, multiple worlds are compatible with the same set of observations. Your job is to choose a policy that outputs actions in response to observations (you’re also allowed to choose “mixed policies” that output actions with some probability). And when you take an action in a world, this yields a given amount of utility (the Complete Class Theorem starts with a utility function already available). And for simplicity, let’s assume that the set of worlds, observations, and actions are all finite.

Thus, for example, suppose that you get one util per apple, and carrots are nothing to you. And your possible worlds are a “normal better farm” world, where planting apples yields ten apples, and planting carrots yields ten carrots; and a “weird worse farm” world, where planting apples yields five carrots, and planting carrots yields five apples. In both worlds, your observations will be: an empty field, ready for planting. And your possible actions are: <plant apples, plant carrots, do nothing>.

The complete class theorem says that your policy is pareto optimal (that is, there’s no policy that does better in at least one world, and at least as well in all the others) iff there is some non-dogmatic probability distribution over worlds (i.e., a distribution that gives non-zero probability to all worlds) such that you’re maximizing expected utility, relative to that distribution. That is, not throwing away utility for free, in some worlds, is equivalent to being representable as doing (non-dogmatic) EUM.

The proof from “representable as doing EUM” to “pareto optimal” is easy. Consider the probability distribution relative to which your policy is doing EUM. If your policy isn’t pareto-optimal, there’s some alternative policy that does better in some world, and worse nowhere. And because your probability distribution is non-dogmatic, it’s got some probability on the “does better” world. Thus: there’s some alternative policy that would get more expected utility, relative to your probability distribution. But we said that your policy was maximizing expected utility relative to that distribution – so, contradiction.

The proof from “pareto optimal” to “representable as doing EUM” is a bit more complicated, but it comes with a nice geometric visualization. Basically, each policy is going to be equivalent to a vector of real numbers, one for each world, representing that policy’s utility (or, for mixed policies, the expected utility) in that world. If there are n worlds, we can represent the set of available policies as points in an n-dimensional space. And because you can pursue mixed policies, this set will be convex – i.e., for any two points in the set, the set also contains all the points on the line between them (the points you get by mixing between those two policies with probability p, for each p between 0 and 1).

To illustrate, here is the space of available policies for the farming example above:

[Figure: the convex set of available policies for the farming example, with utility in the weird-farm world on the x-axis and utility in the normal-farm world on the y-axis.]

What’s more, for any given point/policy in this space, we can define a region where a pareto-improvement on that policy would have to live — a region that would generalize the notion of “up and/or to the right,” in two dimensions, to n dimensions. Thus, for a policy that yields two apples, in expectation, in each world, this region would be the grey box below:

[Figure: the grey region of pareto improvements on a policy that yields two utils, in expectation, in each world.]

Thus, my policy being pareto optimal is equivalent to the “pareto improvements on my policy” space being disjoint from the space of available policies. And this means that there will be some hyperplane that separates the two (this follows from the “hyperplane separation theorem,” but it’s also quite intuitive geometrically). For example, if my policy is (2.5, 5) – i.e., 50% on planting apples, 50% on planting carrots – then the hyper-plane separating the available policies from the pareto improvements on my policy is the line y = -2x + 10:

[Figure: the line y = -2x + 10 separating the set of available policies from the pareto improvements on the 50/50 policy.]

But we can use the vector normal to this hyperplane to construct a probability distribution, relative to which the policy in question is maximizing expected utility. In particular (here my understanding gets a little hazier): hyperplanes are defined as a set of points (x1, x2, …, xn) that satisfy some equation of the form a1*x1 + a2*x2 + … + an*xn = c, where the vector (a1, a2, …, an) is normal to the hyperplane. And the hyperplane separation theorem says that there will be some hyperplane, defined by vector v and some constant c, such that for any point x in the available policy space, and any point y in the pareto-improvement space, v · x ≤ c, and v · y ≥ c. What’s more, because we chose the hyperplane to intersect with my policy, my policy is an x such that v · x actually just equals c. Thus, no other point in the available policy space yields a higher constant, when dot-product-ed with v.

But we can “shrink” or “stretch” a normal vector defining a hyperplane, without changing the hyperplane in question, provided that we adjust the relevant constant c as well. So if we scale v and c such that all the elements of v are between 0 and 1, and add up to 1 (I think we can ensure that they’re non-negative, due to the ‘up and to the right’-ness of the pareto-improvements region?), then we can treat the re-scaled vector as a probability distribution (here I think we need to assume that all the worlds are mutually exclusive). And thus, since the points x in policy space represented utilities in each world, the dot product v · x will represent the expected utility of that policy overall, and the constant c (the expected utility of my policy) will be the highest expected utility that any available policy can achieve, relative to that probability distribution.

For example: in our weird farm example, the relevant hyper-plane for (2.5, 5) – and indeed, the relevant hyperplane for all pareto-optimal points – is defined by the line y = -2x + 10, which we can rewrite as 2x + y = 10: i.e., the set of points w such that (2, 1) · w = 10, where (2, 1) is the vector normal to the line in question. Dividing both sides by 3, then, we get (2/3, 1/3) · w = 10/3. Thus, the relevant probability distribution here is 2/3rds on a “weird farm” world, and 1/3rd on a “normal farm” world, and the expected utility of a policy that yields 2.5 given weird, and 5 given normal, is, indeed, 10/3 (2/3*2.5 + 1/3*5).3
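
Here’s a brute-force sketch of those numbers (my reconstruction, writing utility vectors as (weird-farm utility, normal-farm utility)): it enumerates a grid of mixed policies, finds the pareto-optimal ones, and checks that they all sit on the hyperplane 2x + y = 10 and maximize expected utility under the (2/3, 1/3) distribution.

```python
import itertools

PURE = {"apples": (0.0, 10.0), "carrots": (5.0, 0.0), "nothing": (0.0, 0.0)}

def mix(weights):
    """Utility vector of a mixed policy, where weights maps actions to probabilities."""
    return tuple(sum(w * PURE[a][k] for a, w in weights.items()) for k in range(2))

# Enumerate mixed policies on a grid, in steps of 1/20 over the three pure actions.
policies = [mix({"apples": i / 20, "carrots": j / 20, "nothing": (20 - i - j) / 20})
            for i, j in itertools.product(range(21), range(21)) if i + j <= 20]

def is_pareto_optimal(x, pool):
    """No policy in the pool does at least as well in both worlds and better in one."""
    return not any(all(y[k] >= x[k] for k in range(2)) and any(y[k] > x[k] for k in range(2))
                   for y in pool)

frontier = [x for x in policies if is_pareto_optimal(x, policies)]

# Every frontier point should maximize expected utility under (2/3 weird, 1/3 normal),
# with EU = 10/3 -- i.e., it should lie on the hyperplane 2x + y = 10.
prob = (2 / 3, 1 / 3)
eu = lambda v: prob[0] * v[0] + prob[1] * v[1]
best = max(eu(v) for v in policies)
assert all(abs(eu(v) - best) < 1e-9 for v in frontier)
print(min(frontier), max(frontier), round(best, 3))  # (0.0, 10.0) (5.0, 0.0) 3.333
```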

As ever in this series, this isn’t a rigorous statement (indeed, jessicata brings up a possible counterexample here that I haven’t tried to dig into, related to policies that are pareto-optimal, but only EU-maximizing relative to a dogmatic prior). Assuming that the basic gist isn’t wildly off, though, to me it gives some non-black-box flavor for how we might get from “pareto optimal” to “representable as doing EUM,” and thus to “pareto optimal” iff “representable as doing EUM.”

How cool is this “iff”, if we can get it? Philosophically, I’m not sure (though geometrically, I find the theorem kind of satisfying). Here are a few weaknesses salient to me.

First, the theorem (at least as I’ve presented it) assumes various EUM-ish vibes up front. Centrally, it assumes a real-valued utility function. But also, because we’re doing mixed strategies (which themselves involve a bit of probability already), the utility of a policy in a given world is actually its expected utility right off the bat, such that the theorem is really about not throwing away expected utility for free, rather than utility proper. There may be ways of weakening these assumptions (though to me, getting around the latter one seems hard, given the centrality of convexity to the proof), but absent weakening, we’re not exactly starting from scratch, EUM-wise.

Second, the theorem doesn’t say a thing that you might casually (but quite wrongly) come away from it thinking: namely, that given a probability distribution and a utility function, if you don’t maximize expected utility, you’re doing something pareto-dominated. That’s false. Suppose, in the example above, you’re 99% that you’re in a normal farm world, and 1% that you’re in a weird farm world. The EV-maximizing thing to do, here, is to plant apples with 100% (EV: 9.9 utils). But planting carrots with 100% (EV: .05 utils) is pareto-optimal: nothing else does as well in the weird farm world. Put another way (see jessicata’s comment here), CCT says that you can rationalize pareto-optimal policies as EU-maximizing, by pretending that you had a certain (non-dogmatic) probability over worlds (and that if you can’t do this, then you’re throwing away value for free). But once you actually have your utility function and probabilities over worlds, considerations about pareto-optimality don’t tell you to do EUM with them. (And come to think of it: once we’re pretending to have probability distributions, why not pretend that our probability distributions are dogmatic?)
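
In numbers (a tiny sketch reusing the farm example’s utilities):

```python
UTILS = {"apples": {"weird": 0.0, "normal": 10.0}, "carrots": {"weird": 5.0, "normal": 0.0}}
prior = {"weird": 0.01, "normal": 0.99}

# Expected utility of each pure policy under the 99%/1% prior.
ev = {a: round(sum(prior[w] * u for w, u in worlds.items()), 2) for a, worlds in UTILS.items()}
print(ev)  # {'apples': 9.9, 'carrots': 0.05}

# Planting carrots is still pareto-optimal: nothing matches its 5 utils in the weird-farm world.
assert all(UTILS[a]["weird"] <= UTILS["carrots"]["weird"] for a in UTILS)
```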

Finally: how hard is it to be pareto optimal/representable as doing EUM, anyway? Consider any set of observations, O, like the observation of sitting at your desk at home. And consider a random inadvisable action A, like stabbing the nearest pencil into your eye. And now consider a world W where you see your desk, and then, if you stab your pencil into your eye, the onlooking Gods will reward you maximally, such that any policy that puts less than 100% on eye-stabbing in response to desk-seeing does worse, in W, than 100% on eye-stabbing (thanks to Katja Grace for suggesting this sort of example). Boom: stabbing your pencil into your eye, next time you sit down at your desk, is pareto optimal, and representable as doing EUM. But it looks pretty dumb. And what’s more, this seems like the type of counterexample we could generate for any action (i.e., some particular pattern of random twitching) in response to any observations, if the space of possible worlds that contain those observations is sufficiently rich (i.e., rich enough to contain worlds where the Gods reward that particular action maximally). That is, if you’ve got sufficiently many, sufficiently funky worlds, Pareto-optimality might come, basically (or entirely), for free.

Even granted these worries, though, I still count the complete class theorem as another point in favor of EUM. In particular, in toy examples at least, pareto-optimality does in fact rule out significant portions of policy space, and it’s at least somewhat interesting that all and only the pareto-optimal points can be interpreted as maximizing (non-dogmatic) EU. Pareto-optimality, after all, is an unusually clear yes, rationality-wise. So even a loose sort of “pareto-optimal iff EUM” feels like it gives EUM some additional glow.

V. Other theorems, arguments, and questions

OK, these last two essays have been a long list of various EUM-ish, theorem-ish results. There are lots more where that came from, offering pros and cons distinct from those I’ve discussed here. For example:

  • Savage seems to me a particularly powerful representation theorem, as it does not require assuming probabilities or utilities up front. I haven’t been able to find a nice short explanation of Savage’s proof, but my hazy and maybe-wrong understanding is that you do something like: construct the probability assignment via some procedure of the form “if you prefer to have the good prize given X over the good prize given Y, then you think X more likely than Y”; and then you get to EU-maximizing out of something like separability. The big disadvantage to Savage is that his set-up requires that any state of the world can be paired with any prize, which is especially awkward/impossible when “worlds” are the prizes. Thus, for example, for Savage there will be some action such that, if it’s raining all day everywhere, you have a nice sunny day at the park; if the universe is only big enough to fit one person, you create a zillion people; and so on. (In a “hanging out with God” ontology, this would correspond to a scenario where God creates one world on the left, but doesn’t tell you what it is, and then he offers you choices where he creates an entirely distinct world on the right, depending on which world he created on the left – where the assumption is that you don’t care about the world on the left). This gets kind of awkward.
  • My understanding is that Jeffrey avoids this problem by defining overall expected utilities for “propositions,” and treating an action as a proposition (e.g., “I do X”). One problem with Jeffrey, though, is that he relies on a not-especially-intuitive “impartiality” axiom (Jeffrey remarks: “The axiom is there because we need it, and it is justified by our antecedent belief in the plausibility of the result we mean to deduce from it”) – but maybe this is OK (I can kind of see the appeal of the axiom if I squint). Another issue is that his framework fails to assign you a unique probability distribution — though some people (at least according to the SEP) seem to think that it can be made to do this, and others (see here and here) might think that failing to determine a unique probability distribution isn’t a problem. Overall, I’m intrigued by Jeffrey (especially because he avoids Savage’s “sunny day given rain” problems) and would like to understand the proof better, but I haven’t yet been able to easily find a nice short explanation, and this series is long enough.
  • Yudkowsky, here, gestures at a very general argument for making consistent trade-offs on the margin, on pain of pareto-domination (thanks to John Wentworth for discussion). E.g., if you’re trading apples and oranges at a ratio of 1:2 at booth A (i.e., you’ll trade away one apple for more than two oranges, or an orange for more than half an apple), but a ratio of 1:5 at booth B, then you’re at risk of trading away an apple for three oranges at booth A, then four oranges for an apple at booth B, and thus giving away an orange for free (see the sketch after this list). It’s a further step from “your ratios on the margin need to be consistent” to “you have to have a well-behaved real-valued utility function overall” (and note that the relevant ratios can change depending on how many apples vs. oranges you already have, without risk of Pareto-domination – a fact that I think Yudkowsky’s presentation could be clearer about). But overall, we’re getting well into the utility-function ballpark with this argument alone. And what’s more, you can make similar arguments with respect to trade-offs across worlds. E.g., if you’re trading tickets for “apples given rain,” “oranges given rain,” “apples given sun,” and “oranges given sun,” the same sort of “your ratios need to be consistent” argument will apply, such that in some sense you’ll need to be giving consistent marginal “weights” to all four tickets — weights that can be treated as representing the EU gradient for an additional piece of fruit in a given world (though which won’t, of themselves, determine a unique probability-utility decomposition).
  • Other EUM-relevant arguments and theorems I’m not discussing include: Easwaran’s (2014) attempt to derive something EUM-ish without representation theorems (though it looks like his approach yields an incomplete ordering, and I tend to like completeness); the “accuracy” arguments for Bayesianism (see e.g. Joyce (1998) and Greaves and Wallace (2006)); and much else.
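
Here’s the booth arbitrage from the Yudkowsky bullet above, as a small sketch (my encoding of the numbers in that bullet):

```python
holdings = {"apples": 10, "oranges": 10}

def trade(holdings, give, get, amounts):
    """Swap `amounts[give]` of one good for `amounts[get]` of the other."""
    holdings[give] -= amounts[give]
    holdings[get] += amounts[get]

# At booth A you value an apple at 2 oranges, so you'll give an apple for 3 oranges.
trade(holdings, give="apples", get="oranges", amounts={"apples": 1, "oranges": 3})
# At booth B you value an apple at 5 oranges, so you'll give 4 oranges for an apple.
trade(holdings, give="oranges", get="apples", amounts={"oranges": 4, "apples": 1})

print(holdings)  # {'apples': 10, 'oranges': 9} -- an orange given away for free
```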

There are also lots of other questions we can raise about the overall force of these arguments, and about EUM more broadly. For example:

  • What about infinities, fanaticism, and other gnarly problem cases? (My current answer: yeah, tough stuff, definitely haven’t covered that here.)
  • Are there alternatives to EUM that are comparably attractive? (My current answer: not that I’m aware of, though I haven’t looked closely at the possibilities on offer (though I did at some point spend some time with Buchak (2014)). We know, though, that any alternative will have to violate various of the axioms I’ve discussed.)
  • How useful is EUM in the real world? Obviously, we can’t actually go around computing explicit probabilities and utilities all the time. So even if EUM has nice properties in theory, is there any evidence that it’s an actively helpful mental technology in practice? (My current answer: it’s an open empirical question how much thinking in explicitly EUM-ish ways actually helps you in the real world, but I think it’s pretty clearly useful in at least some contexts – e.g., risk assessment – and worth experimenting with quite broadly; see Karnofsky (2021) for more.)

Clearly, there’s a lot more to say. My main hope, in this series, has been to give some non-black-box flavor for the types of dynamics that cause EUM to show up as an ideal of decision-making from a variety of different angles – a flavor of the type my younger self was looking for, and which I hope can support emotional and philosophical clarity when making EUM-ish decisions. There is, in my opinion, something quite elegant here – and in a sense, quite deep. Few abstractions are so structurally relevant to our thought and action. So even if you continue to hold EUM at a distance, I think it’s worth understanding.

1

Suppose that you try to give two inconsistent probabilities to a single proposition. For example, you say that you’re indifferent between <ice cream if humans on Mars by 2100> and <ice cream if red ball, where 10% of balls are red>; but you’re also

2

Suppose we have some function F(x1, x2 … xi). And suppose that this function is associative, in the sense that the order of the inputs doesn’t matter, and that the inputs can only take on a finite number of values. (Maybe it ...

3

And we can see why this is the probability distribution that rationalizes any point on the pareto frontier: if you’ve got 2/3rds on weird farm, and 1/3rd on normal farm, then the expected number of apples you get from planting apples vs. carrots is the same: 1/ ...