Last updated: 03.23.2022
Published: 03.18.2022
On expected utility / Part 2

Why it can be OK to predictably lose

Previously in sequence: Skyscrapers and madmen. PDF of the full series here.

This is the second essay in a four-part series on expected utility maximization (EUM). This part focuses on why it can make sense, in cases like “save one life for certain, or 1000 with 1% chance,” to choose the risky option, and hence to “predictably lose.” The answer is unsurprising: sometimes the upside is worth it. I offer three arguments to clarify this with respect to life-saving in particular. Ultimately, though, while EUM imposes consistency constraints on our choices, it does not provide guidance about what’s “worth it” or not — and in some cases, the world doesn’t either. Ultimately, you have to decide. 

I. Sometimes it’s worth it

When I first started looking into expected utility maximization (EUM), the question of “why, exactly, should I be OK with predictably losing in one-shot cases?” was especially important to me. In particular, it felt to me like the fact that you’ll “predictably lose” was the key problem with fanaticism (though some fanaticism cases, like Pascal’s Mugging, introduce further elements, like an adversarial dynamic, and/or a very “made-up” tiny probability as opposed to e.g. a draw from a ludicrously large urn). But it also applies (to a far milder degree) to choices that EUM fans endorse, like passing up (A) saving one life for certain, in favor of (B) a 1% chance of saving 1000 lives.

Now, I’m not going to tackle the topic of fanaticism here. But I think the EUM fans are right about B over A – indeed, importantly so. What’s more, I think it’s useful to get a more direct, visceral flavor for why they’re right – one that assists in navigating the emotional challenge of passing up “safer,” more moderate prizes for the sake of riskier, larger ones. This essay draws on moves from the theorems I’ll discuss in parts 3 and 4 to try to bring out such a flavor with respect to life-saving in particular.

I’ll flag up front, though, that here (and throughout this series) I’m going to be focusing on highly stylized and oversimplified cases. Attempts to apply EUM-ish reasoning in the real world are rarely so straightforward (see e.g. here for some discussion), and I think that hesitations about such attempts sometimes track important considerations that naive estimates leave out (for example, considerations related to how much one expects the relevant probability estimates to change as one learns more). My aim here is to bring out the sense in which B is better than A in theory. Particular instances of practice, as ever, are further questions (though I think that EUM is generally under-used in practice, too).

Also: even if predictably losing is OK, that doesn’t make losing, itself, any less of a loss. Even if you’re aiming for lower-probability, higher upside wins, this doesn’t make it reasonable to give up, in the back of your mind, on actually winning, or to stop trying to drive the probability of losing down. EUM is not an excuse, after you lose, to say: “well, it was always a long shot, and we never really expected it to pay off.” Having acted “reasonably” is little comfort: it’s the actual outcome that matters. Indeed, that’s why the predictable loss bites so hard.

In that case, though: why is predictably losing OK? My answer is flat-footed and unsurprising: sometimes, what happens if you win is worth it. B is better than A, because saving a thousand lives is that important. A career spent working to prevent a 1% existential risk can be worth it, because the future is that precious, and the cost of catastrophe that high.

Ultimately, that’s the main thing. But sometimes, you can bring it into clearer focus. Here I’ll offer a few tools for doing so.

II. Small probabilities are just bigger conditional probabilities in disguise

One approach is to remember that there’s nothing special about small probabilities – they’re just bigger conditional probabilities in disguise. Consistency about how you compare prizes X and Y, vs. how you compare X and Y if e.g. a coin comes up heads, or if many coins all come up heads, thus leads to tolerance of small-probability wins (this is a principle that some of the theorems I discuss draw on).

To see this, consider:

A: Certainty of saving 1 life.

C: Heads you save 5, tails you save 0.

Do you prefer C over A? I do – and I don’t have some “but I’ll predictably lose” reaction to C.

Here’s the comparison in skyscraper terms (for simplicity, I’ll just do a two-dimensional version):

[Figure: skyscraper diagram comparing A and C]

But now consider:

D: Certainty of saving 5.

E: Heads you save 15, tails you save 0.

Do you prefer E over D? I do. And again: nice, hefty probabilities either way.

[Figure: skyscraper diagram comparing D and E]

So what about:

A: Certainty of saving 1.

F: Two coin flips. Double heads you save 15, anything else you save 0.

[Figure: skyscraper diagram comparing A and F]

Suppose that here you say: “hmm… a 75% chance of losing is starting to feel a bit too predictable.” But here’s an argument that if you prefer C to A, and E to D, you should prefer F to A. Suppose you start with a ticket for A. I offer you a chance to pay a dollar to switch to C, and you say yes. We flip, the coin comes up heads, and you’re about to cash in your C ticket for five lives saved. But first, I offer you a chance to trade your C ticket, plus a dollar, to play another coin-flip, with pay-outs like E: if the second coin is heads, you’ll save fifteen, and no one otherwise. Since, given the first heads, your C-ticket has been converted into a “five lives with certainty” ticket, this is a choice between D and E, so you go for it. But now, in sequence, you’ve actually just chosen F.

That is: you like C better than A (in virtue of C’s win condition), you like E better than D, and F just is a version of C with E as the “win condition” instead of D. Looking at the skyscrapers makes this clear:

[Figure: skyscraper diagram showing F as C with E substituted for the win condition]

To bring out the need for consistency here, suppose that when I first offer you C, I also tell you ahead of time that I’m going to offer you E if the first coin comes up heads – and I give you a chance to choose, prior to the coin flip, what answer to give later. It seems strange if, before flipping, you don’t want to switch, given heads; but then, once it actually comes up heads, you do. And if that’s your pattern of preference, I can start you off with an “E-given-first-coin-heads” ticket, which is actually just an F ticket. Then you’ll pay to switch to a C ticket, then we’ll flip, and if it comes up heads, you’ll pay to switch back – thereby wasting two dollars with 50% probability.

We can run longer versions of an argument like this, to get to a preference for B over A. Suppose, for simplicity, that you’re always indifferent between saving x lives, and a coin flip between saving 2x vs. no one (the argument also works if you require >2x, as long as you don’t demand too many additional lives saved at each step). Then we can string seven coin-flips together, to get indifference between A, and a lottery A’ with a ~.8% (.5^7) chance of saving 128 (2^7) lives, and no one otherwise. But .8% is less than 1%, and 128 is less than 1000. So if you prefer saving more lives with higher probability, with no downside, B is better than A’. So, it’s better than A.

(In “skyscraper” terms, what we’re doing here is bunching the same volume of housing into progressively taller and thinner buildings, until we have a building, equivalent to A’s flat and short situation, that is both shorter and thinner than the building in B.)
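
If it helps to see the arithmetic laid out, here is a minimal Python sketch (my own illustration, not part of the argument itself) that chains the seven coin flips together and compares the result to A and B, using “expected lives saved” as a convenient, simplifying summary statistic:

```python
# A toy illustration (not part of the argument itself): chain seven
# double-or-nothing coin flips together and compare the resulting lottery A'
# to A and B. "Expected lives saved" is used as a convenient summary
# statistic, on the simplifying assumption that each life counts equally.

NUM_FLIPS = 7

# A: save 1 life for certain.
prob_A, lives_A = 1.0, 1

# A': win only if all seven flips land heads, doubling the prize each flip.
prob_A_prime = 0.5 ** NUM_FLIPS    # ~0.0078, i.e. ~0.8%
lives_A_prime = 2 ** NUM_FLIPS     # 128

# B: a 1% chance of saving 1000.
prob_B, lives_B = 0.01, 1000

for name, p, lives in [("A", prob_A, lives_A),
                       ("A'", prob_A_prime, lives_A_prime),
                       ("B", prob_B, lives_B)]:
    print(f"{name:>2}: P(win) = {p:.4f}, prize = {lives:4d}, "
          f"expected lives saved = {p * lives:.2f}")

# B beats A' on both counts: a higher chance of winning (1% > ~0.8%) and a
# bigger prize (1000 > 128). So anyone indifferent between A and A' (via the
# chain of flips) has reason to prefer B to A.
```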

I think that moves in this broad vein can be helpful for breaking down “but I’ll predictably lose”-type reactions into conditional strings of less risky gambles. But really (at least in the eyes of EUM), they’re just reminding you of the fact that you can value one outcome a lot more than you value another. That is: if, in the face of a predictable loss, it’s hard to remember that e.g. you value saving a thousand lives a thousand times more than saving one, then you can remember, via coin-flips, that you value saving two twice as much as saving one, saving four twice as much as saving two, and so on.

Of course, when coupled with an unbounded utility function (such that, e.g., for every outcome O, there’s always some outcome O’, such that you’ll take O’ if heads, and nothing otherwise, over O with certainty), this form of argument also leads straight to fanaticism (e.g., for any outcome O, no matter how good, and any probability p, no matter how small, there’s some outcome O’’, such that you’ll take O’’ with p, and nothing otherwise, over O with certainty). This is one of the reasons I’m interested in bounded utility functions. But more broadly: if you’ve got an unbounded utility function, then worries about fanaticism will pop up with every good argument for EUM, because fanaticism follows, straightforwardly, from EUM + an unbounded utility function. This is, indeed, a red flag. But we should still examine the arguments for EUM on the merits.

III. Nothing special about getting saved on heads vs. tails

Here’s a different argument for B over A – again, borrowing moves from the theorems I’ll discuss.

Suppose that Bob is dying, but you’ll save him if a fair coin lands heads. Would it be better, maybe, to save him if it lands tails, instead? No. Would it be worse? No. It doesn’t matter whether you save Bob if heads, or if tails: they’re equally probable.

Now let’s generalize a bit. Imagine that B saves 1000 lives if you draw “1” out of an urn with 100 balls, labeled 1-100. Suppose Bob’s is one of those lives. Is it any better, or any worse, to save Bob if you draw ball 1, vs. ball 2? No: they’re equally probable. Ok, so now we’ve got indifference between B and B’, where B’ saves 999 on ball 1, 1 person (Bob) on ball 2, and no one otherwise.

[Figure: skyscraper diagram comparing B and B’]

But now repeat with Sally – another ball 1 life. Is it better, or worse, to save Sally on ball 1 vs. ball 3? No. So we move saving Sally, in B’, to ball 3. Then we move saving John to ball 4, and so on, until we’ve moved ten people to each ball. Call this “ten people for each ball” lottery B’’. So: we’ve got indifference between B’’ and B. But B’’ just is saving ten people with certainty, which sounds a lot better than saving one person with certainty – i.e., A. So, B’’ is better than A. Thus, B is better than A.

[Figure: skyscraper diagram of B’’, with ten lives saved on every ball]

In skyscraper terms, what we’re doing here is slicing chunks of housing off of the top of B’s skyscraper, and moving them down to the ground. Eventually, we get a city-scape that is perfectly flat, and everywhere taller than A.
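
Here is a small Python sketch of that slicing-and-flattening move (my own illustration), representing a lottery as a mapping from balls to lives saved and relying, as the argument above does, on the thought that relocating a rescue between equally probable balls leaves the lottery exactly as good as before:

```python
# Toy sketch: start with B (1000 lives saved if ball 1 is drawn from a
# 100-ball urn, nothing otherwise) and repeatedly move one person's rescue
# from ball 1 to whichever other ball currently has the fewest. Each move
# just relocates a rescue between equally probable balls, so (per the
# argument above) it leaves the lottery no better and no worse.

NUM_BALLS = 100
lottery = {ball: 0 for ball in range(1, NUM_BALLS + 1)}
lottery[1] = 1000   # this is B

def expected_lives(lot):
    return sum(lot.values()) / NUM_BALLS   # every ball is equally probable

print(expected_lives(lottery))   # 10.0

while lottery[1] > 10:
    target = min((b for b in lottery if b != 1), key=lambda b: lottery[b])
    lottery[1] -= 1
    lottery[target] += 1

print(expected_lives(lottery))   # still 10.0 after every move
print(set(lottery.values()))     # {10}: this is B'', ten saved on every ball
# B'' saves ten people no matter which ball is drawn, which is plainly
# better than A's one person saved for certain.
```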

IV. What would everyone prefer?

Here’s a final argument for B over A. It’s less general across EUM-ish contexts, but I find it quite compelling in the context of altruistic projects in particular.

Suppose the situation is: 1000 people are drowning. A is a certainty of saving one of them, chosen at random. B is a 1% chance of saving all of them.

Thus, for each person, A gives them a .1% chance of living; whereas B gives them a 1% chance. So every single person wants you to choose B. Thus: if you’re not choosing B, what are you doing, and why are you calling it “helping people”? Are you, maybe, trying to “be someone who saved someone’s life,” at the cost of making everyone 10x less likely to live? F*** that.
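
If it is worth spelling out the per-person arithmetic, here is a tiny sketch (my own, assuming the person saved under A is chosen uniformly at random, as stipulated):

```python
# Each drowning person's chance of being saved under the two options.
NUM_PEOPLE = 1000

# A: one of the 1000 is saved for sure, chosen uniformly at random.
p_saved_under_A = 1.0 / NUM_PEOPLE       # 0.001, i.e. 0.1%

# B: with 1% probability, everyone is saved.
p_saved_under_B = 0.01                   # 1%

print(f"Chance of surviving under A: {p_saved_under_A:.1%}")   # 0.1%
print(f"Chance of surviving under B: {p_saved_under_B:.1%}")   # 1.0%
print(f"B is {p_saved_under_B / p_saved_under_A:.0f}x better for each person")
```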

Now, some philosophers think that it makes a difference whether you know who A will save, vs. if you don’t (see the literature on “Identified vs. Statistical Lives”). To me, this really looks like it shouldn’t matter (for example, it means that everyone should pay large amounts to prevent you from learning who A will save – and plausibly that you should pay this money, too, so that it remains permissible to do the thing everyone wants). But I won’t debate this here; and regardless, it’s not an objection about “predictably losing.”

V. Taking responsibility

OK, so we’ve seen three arguments for B over A, two of which drew on moves that we’ll generalize, in the next essay, into full theorems.

I want to note, though, that using “x lives saved” as the relevant type of outcome, and coin flips or urn-drawings as the “states” that give rise to these outcomes, is easy mode, for EUM. In particular: the value at stake in an outcome can be broken down fairly easily into discrete parts – namely, lives – the value of which is plausibly independent of what happens with the others (see the discussion of “separability” below), and which hence function as a natural and re-arrangeable “unit” of utility. And everyone wants to be a probabilist about coins and urns.

In many other cases, though, such luxuries aren’t available. Suppose, for example, that you have grown a beautiful garden. This afternoon, you were planning to take a walk – something you dearly enjoy. But you’ve heard at the village pub (not the most reliable source of information) that a family of deer is passing through the area and trampling on/eating people’s gardens. If they stop by while you’re out, they’ll do this to your garden, too. If you stay to guard the garden, though, you can shoo them away (let’s say, for simplicity, that the interests of the deer aren’t affected either way).

How many walks is your garden worth? What are the chances that the deer stop by? What, exactly, are the city-scapes here?

EUM won’t tell you these things. You need to decide for yourself. That is: ultimately, you have to give weights to the garden, the walk, and the worlds where the deer stop by vs. don’t. Some things matter, to you, more than others. Some states of the world are more plausible than others, even if you’re not certain either way. And sometimes, a given action (i.e. taking a walk) affects what matters to you differently, depending on the state of the world (i.e., the deer stop by, or they don’t). Somehow, you have to weigh this all up, and output a decision.

EUM just says to respond to this predicament in a way that satisfies certain constraints. And these constraints impose useful discipline, especially when coupled with certain sorts of intuition pumps. To get more quantitative about the plausibility of deer, for example, we can ask questions like: “if I no longer had any stake in the deer situation, would I rather win $1M if they stop by, or if a red ball gets pulled from an urn with p% red balls in it?” – and call the p where you’re indifferent your probability on deer (see Critch (2017), and more in part 4). To get a better grip on the value of walks vs. gardens, we can ask questions like “how many walks would you give up to prevent the certain destruction of your garden?”, or we can try to construct other intermediate trades in terms of a common “currency” (e.g., how much time or money will you pay to take a walk, vs. to restore your garden after it gets destroyed). And so on.
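
As one illustration of how the urn question could be made mechanical, here is a toy sketch in the spirit of that suggestion. The prefers_deer_bet “oracle” is hypothetical, a stand-in for your own introspected answers at each step; the code simply binary-searches for the indifference point:

```python
# Hypothetical sketch: find the red-ball fraction at which you're indifferent
# between "win $1M if the deer stop by" and "win $1M if a red ball is drawn",
# and call that fraction your probability on deer. prefers_deer_bet stands in
# for your own judgment at each step; here it is a stub with a stipulated
# underlying credence of 7%, just so the code runs end to end.

def prefers_deer_bet(red_fraction, stipulated_credence=0.07):
    return red_fraction < stipulated_credence

def elicit_probability(prefers_bet, tol=1e-4):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_bet(mid):
            lo = mid   # the urn is still the worse bet: credence is above mid
        else:
            hi = mid   # you'd take the urn: credence is below mid
    return (lo + hi) / 2

print(round(elicit_probability(prefers_deer_bet), 3))   # ~0.07
```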

Still, most of the work – including the work involved in generating these sorts of intuitions – is on you. If your answers are inconsistent, EUM tells you to revise them, but it doesn’t tell you how. And there’s no secret sauce – and in many cases, no “answer” available in the world (no “true probability,” or “true utility,” of blah; and no easy unit, like lives saved or balls-in-the-urn, that you can tally up). You just have to choose what sort of force you want to be in the world, under what circumstances, and you have to face the inevitable trade-offs that this choice implies.

Sometimes, people complain about this. They complain, for example, that there’s “no way to estimate” the probability of deer, or to quantify the ratio of the value differences between different combinations of gardens and walks. Or they complain that various of the theorems I’ll discuss aren’t “action-guiding,” because such theorems construct your utility function and probability assignment out of your choices (assuming your choices satisfy EUM-ish constraints), rather than telling you what probabilities and utilities to assign, which, like a good EUM-er, you can then go and multiply/add.

Perhaps, sometimes, it makes sense to seek such guidance. The world does in fact make it possible to be better or worse at assigning probabilities to claims like “the deer will stop by” (as the existence of superforecasters makes clear); the meta-ethical realists think that the world provides them with a “true utility function”; and anti-realists tend to look for various forms of guidance as well — for example, from their idealized selves.

But the case for acting consistently with EUM does not rest on the availability of such guidance — rather, it rests on the plausibility of certain very general, structural constraints on rational behavior (e.g., you shouldn’t prefer A to B to C to A). And if you accept this case, then talking about whether you “can” assign utilities and probabilities to X or Y misses the point. To the extent you want your choices to be consistent with EUM, you are, always, assigning probabilities and utilities (or at least, some overall  probability*utility weight) to things. It’s not optional. It’s not a thing you can pick up when easy and convenient, and put down when frustrating and amorphous. You’re doing it already, inevitably, at every moment. The question is how much responsibility you are taking for doing so.

To see this, let’s suppose that you want your choices to be consistent with EUM, and let’s look at the deer/garden situation in more detail. The situation here is:

                   Deer stop by           Deer don't stop by
Walk               Walk and no garden     Walk and garden
Stay to guard      No walk and garden     No walk and garden

The inevitability of assigning both probabilities and utilities is especially clear if you assign one of them, but refuse to assign the other — so let’s start with cases like that. Suppose, for example, that you’ve decided that saving the garden from otherwise-certain destruction would be worth giving up your next ten walks; that each of these walks adds an equal amount of value to your life; and that the value of walks does not change depending on what’s going on with the garden (and vice versa). Thus, you set a utility scale where the gap between “walk and no garden” and “no walk and garden” is 9x the size of the gap between “no walk and garden” and “walk and garden.”

[Figure: utility scale for the three outcomes, showing the 9:1 ratio between the two gaps]

Let’s use a utility function with 1 at the top and 0 at the bottom, such that u(walk and garden) = 1, u(walk and no garden) = 0, and u(no walk and garden) = .9. (We can also scale this function by any positive constant c, and add any constant d — i.e. we can perform any “positive affine transformation” — while leaving EUM’s overall verdicts the same. What matters are the ratios of the sizes of the gaps between the different outcomes, which stay constant across such transformations. In skyscraper terms, we can think of c as stretching each building by the same factor, and d as moving the whole city up and down, vertically, in the air — a move that doesn’t change the volume of housing).

Thus, if p is the probability that the deer stop by (and we assume that whether you walk, or not, doesn’t affect this probability), then the expected utility of walking is (1-p), and the expected utility of not walking is .9 (i.e., a guarantee of “no walk and garden”).

But now suppose that when you try to do this EUM calc, you start to feel like there’s “no way to estimate” the probability p that the deer will stop by. Maybe you think that humans can’t reliably make estimates like this; or that somehow, applying the idea of probability here is confused — either the deer will stop by, or they won’t.

Fine. Use whatever philosophy of probability you like. Refuse to make probability estimates if you like. But: are you going to go on the walk, or not? You still have to choose. And if you want to act consistently with EUM, then choosing amounts to a probability estimate. In particular, on EUM, you should go on this walk if and only if your probability on the deer showing up is less than 10% (the solution to 1-p = .9, and hence the point where EUM is indifferent between walking and not walking). So if you go, you’re treating the probability as less than 10%; and if you stay, you’re treating it as more — whatever the words you’re saying in your head.
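
Here is the whole calculation as a small Python sketch (my own illustration, using the utilities stipulated above), including a quick check of the earlier claim that a positive affine transformation of the utilities leaves the verdicts unchanged:

```python
# The deer/garden decision, with the utilities stipulated above.
U = {
    "walk and garden": 1.0,
    "no walk and garden": 0.9,
    "walk and no garden": 0.0,
}

def eu_walk(u, p_deer):
    # Walking: the garden is lost with probability p_deer, kept otherwise.
    return p_deer * u["walk and no garden"] + (1 - p_deer) * u["walk and garden"]

def eu_stay(u, p_deer):
    # Staying: the garden is safe either way, but the walk is forgone.
    return u["no walk and garden"]

# Indifference point: 1 - p = 0.9, so p = 0.1.
print(f"walk iff p(deer) < {1 - U['no walk and garden']:.2f}")    # 0.10

# Reading choices as implicit probability claims:
print(eu_walk(U, 0.05) > eu_stay(U, 0.05))   # True: walking treats p as < 10%
print(eu_walk(U, 0.20) > eu_stay(U, 0.20))   # False: staying treats p as > 10%

# Any positive affine transformation u -> c*u + d yields the same verdicts.
for c, d in [(100, -3), (0.5, 7)]:
    V = {k: c * v + d for k, v in U.items()}
    for p in (0.05, 0.15, 0.30):
        assert (eu_walk(V, p) > eu_stay(V, p)) == (eu_walk(U, p) > eu_stay(U, p))
```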

And now suppose we start with probabilities instead. Maybe you estimate a 1% probability that the deer show up, but you say: “I have no idea how to estimate my utility on walks vs. gardens — these things are just too different; life is not a spreadsheet.” Ok, then: but the EU of walking is .01*u(walk and no garden) + .99*u(walk and garden), and the EU of staying is u(no walk and garden). So you should go on this walk if and only if the value difference between “no walk and garden” vs. “walk and garden” is >1% of the value difference between “walk and no garden” vs. “walk and garden” (if u(walk and no garden) = 0 and u(walk and garden) = 1, the EU of walking is .99, and the interval between “walk and no garden” vs. “walk and garden” is 1; so the indifference point is u(no walk and garden) = .99). Thus, by going on the walk, or not, you are implicitly assigning a quantitative ratio to the gaps between the values of these different outcomes — even if you’re not thinking about it.
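
And here is the mirror image as a toy sketch (again my own illustration): fix the 1% probability and back out what the choice implies about the ratio of the value gaps.

```python
# Fix p(deer) = 1% and normalize u(walk and no garden) = 0 and
# u(walk and garden) = 1. Let x = u(no walk and garden) be the unknown.
P_DEER = 0.01

def eu_walk():
    return P_DEER * 0.0 + (1 - P_DEER) * 1.0    # = 0.99

def eu_stay(x):
    return x

print(f"indifference at u(no walk and garden) = {eu_walk():.2f}")   # 0.99

# Walking is the EUM choice iff x < 0.99, i.e. iff the gap between
# "no walk and garden" and "walk and garden" exceeds 1% of the gap between
# "walk and no garden" and "walk and garden".
for x in (0.95, 0.995):
    choice = "walk" if eu_walk() > eu_stay(x) else "stay"
    print(f"u(no walk and garden) = {x}: EUM says {choice}")
```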

OK, but suppose you don’t want to start with a probability assignment or utility assignment. Both are too fuzzy! Life is that much not a spreadsheet. In that case, going or not going on the walk will be compatible with multiple EUM-approved probability-utility combinations. For example, if you stay to guard the garden, an EUM-er version of you could think that the probability of deer is 1%, but that the garden is worth 150 walks; or, alternatively, that the probability is 10%, and the garden is worth 15 walks (see e.g. here for some discussion).
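
To see the non-uniqueness concretely, here is a toy check (my own, measuring utility in “walks” and keeping the text’s simplifying assumptions) that both of those probability/utility combinations do rationalize staying home, while others don’t:

```python
# Utility measured in "walks": each walk is worth 1, values are independent.
# Staying forgoes one walk but keeps the garden; walking keeps the walk but
# risks losing the garden with probability p.

def eu_stay(p_deer, garden_in_walks):
    return garden_in_walks

def eu_walk(p_deer, garden_in_walks):
    return 1 + (1 - p_deer) * garden_in_walks

for p, garden in [(0.01, 150), (0.10, 15), (0.01, 50)]:
    best = "staying" if eu_stay(p, garden) >= eu_walk(p, garden) else "walking"
    print(f"p(deer) = {p:.2f}, garden worth {garden:3d} walks -> {best}")

# The first two combinations rationalize staying; the third does not.
# In general, staying wins iff p * garden_in_walks >= 1 (one forgone walk).
```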

But each of these combinations will then have implications for your other choices. For example, if you think that the probability of deer is 1%, then (assuming that your marginal utility of money isn’t sensitive to what happens with your garden, and that you want to say normal things about the probabilities of balls getting drawn from an urn — see part 4), you should prefer to win $1M conditional on pulling ball number 1 out of a 20-ball urn, vs. conditional on the deer coming. Whereas if your probability on deer is 10%, you should prefer to win the million conditional on the deer coming instead. So if you have to make choices about how you feel about winning stuff you want conditional on balls being drawn from urns, vs. some “can’t estimate the probability of it” event happening, we can pin down the probability/utility combination required to make your behavior consistent with EUM (if there is one) even further.

Similarly, suppose your garden is worth less than ten walks to you (let’s again assume that each walk adds equal value, and that walk-value and garden-value are independent). If you actually see the deer coming for your house — such that you’re no longer in some kind of “probabilities are too hard to estimate” mode, and instead are treating “the deer will eat the garden unless I stay to guard it” as certain or “known” or whatever — but you’ll have to give up 20 walks to protect the garden (maybe the deer are going to stick around for a while, and you can’t shoo them away?), then you shouldn’t do it; you should let the deer eat the garden instead. So if that’s not what you do, then we can’t think of you as an EUM-er who stayed to guard the garden because of some probability of losing a garden worth less than ten walks.

Of course, it may be that you never, in your actual life, have to make such choices. But plausibly, there are facts about how you would make them, in different circumstances. And if those “woulds” are compatible with being an EUM-er at all (as I noted in my previous post, this is a very non-trivial challenge), they’ll imply a specific probability assignment and a specific (unique-up-to-positive-affine-transformations) utility function (or at least, a constrained set of probability/utility pairings).

(Granted, some stuff in this vicinity gets complicated. Notably, if your choices can’t be represented as doing EUM, it gets harder to say exactly what probability/utility assignment it makes sense to ascribe to you. This is related to the problem of “how do we tell what preferences a given physical system has?” that I mentioned last post — and I expect that they warrant similar responses.)

There’s a vibe in this vicinity that’s fairly core to my own relationship with EUM: namely, something about understanding your choices as always “taking a stance,” such that having values and beliefs is not some sort of optional thing you can do sometimes, when the world makes it convenient, but rather a thing that you are always doing, with every movement of your mind and body. And with this vibe in mind, I think, it’s easier to get past a conception of EUM as some sort of “tool” you can use to make decisions, when you’re lucky enough to have a probability assignment and a utility function lying around — but which loses relevance otherwise. EUM is not about “probabilities and utilities first, decisions second”; nor, even, need it be about “decisions first, probabilities and utilities second,” as the “but it’s not action-guiding!” objectors sometimes assume. Rather, it’s about a certain kind of harmony in your overall pattern of decisions — one that can be achieved by getting your probabilities and utilities together first, and then figuring out your decisions, but which can also be achieved by making sure your decision-making satisfies certain attractive conditions, and letting the probabilities and utilities flow from there. And in this latter mode, faced with a choice between e.g. X with certainty, vs. Y if heads (and nothing otherwise), one need not look for some independently specifiable unit of value to tally up and check whether Y has at least twice as much of it as X. Rather, to choose Y-if-heads, here, just is to decide that Y, to you, is at least twice as valuable as X.

I emphasize this partly because if – as I did — you turn towards the theorems I’ll discuss hoping to answer questions like “would blah resources be better devoted to existential risk reduction or anti-malarial bednets?”, it’s important to be clear about what sort of answers to expect. There is, in fact, greater clarity to be had, here. But it won’t live your life for you (and certainly, it won’t tell you to accept some particular ethic – e.g., utilitarianism). Ultimately, you need to look directly at the stakes – at the malaria, at the size and value of the future – and at the rest of the situation, however shrouded in uncertainty. Are the stakes high enough? Is success plausible enough? In some brute and basic sense, you just have to decide.

In the next post, I’ll start diving into the theorems themselves.
