Care and demandingness
People sometimes object to moral claims on the grounds that their implications would be too demanding. But analogous objections make little sense in empirical and prudential contexts. I find this contrast instructive. Some ways of understanding moral obligation suggest relevant differences from the empirical and prudential cases. But the more we see moral life as continuous with caring about stuff in general, the less it makes sense to expect “can’t-be-too-demanding” guarantees.
I. Demandingness objections
I’ll understand a “demandingness objection” as an objection of the following form:
The fact that p would imply that one should engage in intuitively “demanding” behavior is evidence against p.
I’ll construe “demanding,” here, as something like “very costly.” More precise definitions might shed more light.
I’m most familiar with demandingness objections in the context of various moral claims. For example, demandingness is often offered as an objection to utilitarianism — a view which implies, naively, that you’re obligated to devote all of your time and resources to helping others, up to the point where you are improving the lives of others less than you are reducing the quality of your own life (I think that various non-consequentialist views actually have similar implications). But it comes up in other contexts as well. For example, people sometimes argue that we should discount the moral importance of what happens to future generations of humans, because not doing so implies, naively, that we should sacrifice a lot now in order to make the future better.
Below, I discuss a few ways of making sense of these objections: I think they often draw on important heuristics, and should be taken seriously in various ways. And I think they may make sense on some conceptions of moral obligation. But I also worry that they can obscure the basic contingency of how what matters to us can be arranged in the world.
II. Applying the objection to empirical and prudential claims
Considered abstractly, demandingness objections of the form listed above seem quite strange. Consider, for example, how such objections sound in the context of empirical claims:
- “If there was a big storm heading this way, that would mean we should cancel our canoe trip, which we’ve been planning for months. This is evidence against an oncoming big storm.”
- “If this start-up idea would be profitable at that scale, that would imply that I should drop out of college to pursue it, even though my parents would hate that. This is evidence that the idea wouldn’t be profitable at that scale.”
- “If we took a wrong turn three hours ago, that would mean that we would have to drive all the way back. This is evidence that we didn’t take a wrong turn.” (This example is inspired by one from Anna Salamon/Eliezer Yudkowsky; and by recent misadventures.)
The empirical world, it is generally acknowledged, doesn’t come with any “won’t-be-demanding” guarantee. It can just be, well, any old way. (Or at least, we recognize this in a cool hour. Whether our epistemology always reflects this fact is a different question; and when it seems like it doesn’t, the heuristics involved aren’t necessarily unreasonable.)
I think we implicitly recognize something similar in the context of non-moral claims about what is valuable or worth doing, even if we hold empirical facts fixed. Consider:
- “I have some terrible disease that will cause me to lose my legs if untreated, and the only available treatment for it is very expensive. But if losing my legs were very bad, that would imply that I should pay for this treatment. This is evidence that losing my legs isn’t so bad.”
- “My true love is waiting for me on top of that mountain, which would be exhausting to climb. If true love were so great, that would imply that such a climb would be worth it. This is evidence that true love isn’t so great.”
Indeed, I think the strangeness of demandingness objections in the context of evaluative claims like “true love is extremely awesome” is a fairly direct consequence of their strangeness in the context of empirical claims. If the empirical world can be “any old way,” then it is an empirical question what sorts of mountains you might need to climb, in order to reach your true love. But the value of true love is (basically) independent of the topography of the world’s mountain ranges, and the locations of the lovers. Your true love is just as beautiful and kind, whether he/she is on top of Everest, or just down the block. So to a first approximation, you shouldn’t change your estimate of the value of being with him/her, upon learning of his/her location. God didn’t set the value of true love, then set the mountain ranges, then adjust the value of true love to make sure that it wouldn’t be worth climbing any mountains to get.
Put more generally: if the value of a thing is independent of its cost, you shouldn’t lower your estimates of the value, upon learning that you live in a world where you have to pay high costs to get it. If a restaurant is worth driving 100 miles to get to, it stays that way, whether it turns out to be 1, 10, or 50 miles away from your house. You might think harder about how good the food really is, before setting out on a longer drive. But the food doesn’t get worse depending on how far away you happen to live.
III. Making mistakes vivid
I find it helpful, in this context, to try to make vivid to myself what kind of mistake I would be making if the empirical and evaluative situation really were the intuitively “demanding” way described in the examples above, but I refused to accept this: if I refused, that is, to turn the car around, to drop out of college, to get the treatment, or to climb the mountain.
Thus, for example, if I keep driving after having made a wrong turn, because the implications of having made a wrong turn are too “demanding,” I’m not doing myself any favors. I’m not “getting away with it” — defecting on the real world, with all the costs it “demands” I pay, but successfully cocooning myself in some superior fantasy world. Rather, I’m just continuing to drive in the direction opposite to where I want to go. I’m just moving further and further away from where I want to be, because I don’t want to look at where I am.
Similarly, if I would become a billionaire if I pursued that start-up idea, and this would in fact be well worth dropping out of college and braving the disapproval of my parents for, there’s nothing clever about pretending that it wouldn’t be, or that the idea wouldn’t work: I’ll just be paying a billion dollars for a college degree and more comfortable conversations at Christmas. If, upon really understanding what it would be like to lose my legs, I’d prefer to pay for the treatment, there’s no sense in not paying: I’ll just be left legless, with more money, but less of what I want overall. If true love is worth an exhausting climb, all I’m doing, by refusing to climb, is turning away from something I care about deeply, for the sake of something I care about less.
I think that naively applying “demandingness” objections to morality can risk mistakes of this kind.
IV. Is morality different?
Demandingness objections are much more common in the context of moral claims — in particular, claims about what actions are morally obligatory, as opposed to merely “good,” “admirable,” “supererogatory,” etc — than in the context of empirical or prudential claims. Why might this be? Does the moral world come with some “won’t-be-demanding” guarantee, that the empirical and prudential worlds do not?
It’s worth noting that some very common-sensical moral claims — including non-consequentialist ones — are quite demanding. Thus, for example, we tend to think that if you’re driving to the hospital to save yourself from imminent death, but getting there in time requires running someone over on the way, morality prohibits you from running them over: you have to let yourself die instead (I discuss in more depth here). Indeed, proponents of demanding moral claims sometimes try to argue for the plausibility of their view by appealing to our willingness to accept extreme moral demands in extreme circumstances, like war or emergency (see Sterri (2020) for some discussion of understanding these types of demands within a framework of “informal insurance”; and see also Shulman (2012) for discussion of ways helping out in emergencies can be in an agent’s self-interest).
In this sense, I don’t think “can morality be demanding” is really the question: we tend to be quite open to the possibility that it can, in certain special (and hopefully, unlikely) circumstances. Rather, what prompts the strongest “demandingness objection” is the idea that moral demands can or do have a certain kind of totalizing and omnipresent quality. It’s one thing, one might think, to save a child from drowning in a pond; one thing, even, to donate to save the life of a child in the developing world; but it is quite another if this must become the only thing one is doing, all day, every day, up to whatever point stops being best for the children involved — if your “obligations” eclipse all other aspects of your life, and all your other interests and concerns are given space almost entirely for instrumental reasons. The idea that that’s obligatory seems, to many, quite counterintuitive. And certainly, it fits poorly with our actual patterns of social reproach.
Perhaps, then, morality comes with some sort of “won’t-be-demanding-in-normal-circumstances/in-a-totalizing-type-of-way” guarantee? If so, note that proponents of demanding moral theories can still argue that circumstances aren’t relevantly “normal” — for example, that global poverty and the rest of the world’s problems constitute a type of “ongoing emergency,” which familiar standards of conduct aren’t equipped to handle. Or, to put a similar point in somewhat different terms, a naive guarantee of this form seems strangely insensitive to what the “normal circumstances” of the actual world really are. Surely, one might think, the extent of morality’s demands should depend at least in part on what sorts of opportunities for impact are in play. Surely it matters, that is, whether every ~$3000 donation saves one life, or ten, or ten thousand, or ten million. But if, at ten thousand lives, one starts tolerating various types of “demandingness,” we might start to wonder about one life, too.
Still, though, I think that certain conceptions of moral obligation may well make room for certain “totalizing demandingness limitations” of this form. For example, if we think of morality as a set of norms and heuristics we use to govern our communal life together in mutually beneficial and/or interpersonally justifiable ways (or which we would agree to use, from some veil-of-ignorance-like epistemic perspective), violation of which we all agree to respond to with various degrees of social sanction (regardless of our personal concerns), it may well make sense to argue that norms that demand too much, and which will be predictably violated on too widespread a scale, either aren’t practically viable, or wouldn’t meet relevant standards of “mutually beneficial” or “interpersonally justifiable.” Put another way: if we think of moral obligations as akin to taxes, imposed by our communal life, it may well make sense to argue that the taxes can’t be that high. Otherwise, not enough people will pay them; and not enough people would elect, or agree to be governed by, a government that imposed them (though also, from a suitably veil-of-ignorance type perspective, maybe everyone would so agree: cf. Harsanyi (1955)).
Indeed, utilitarians who think very demanding behavior obligatory still tend to want to exempt violations of the relevant obligations from the sorts of psychological and social implications with respect to e.g. guilt, blame, punishment, reputational-damage, etc that standard sorts of moral-norm-violation involve (this is related, I think, to the sense in which standard notions of “obligation” don’t have an obviously comfortable home in consequentialist ethics in general — see section VI of my previous post for discussion). In this sense, it may be better to think of such claims as suggesting that demanding behavior is obligatory*, to be distinguished from the type of obligatory-without-the-asterisk it is to e.g. actually rescue a child drowning in the pond in front of you. Plausibly, it is obligation-without-the-asterisk that our moral intuitions are most attuned to, and from which most resistance to demandingness stems. And to the extent we’re adding asterisks in front of the “obligations” to which demandingness applies, this might suggest that we’ve changed the topic.
V. Care
So overall, I think some conceptions of morality may make expecting some type of “demandingness constraint” more reasonable in the context of moral claims than in the context of empirical or prudential ones. In particular, I think conceptions of morality as some sort of empirically-informed, compromise arrangement between agents with different values might support such constraints (though whether these conceptions will fit with our other intuitive moral commitments is a further question).
That said, the more you think of a given morally-flavored endeavor (for example, helping others, or making the world a better place) as rooted in and continuous with caring about the goals of that endeavor (as opposed to as something that constrains your pursuit of what you care about, and forces you to “sacrifice” something you care about more for something that you care about less), the less, I think, it makes sense to expect that endeavor to obey some sort of “can’t-be-too-demanding” constraint; and the more similar demandingness objections, with respect to that endeavor, start to sound to the empirical and prudential versions above — at least to my ear.
That is, just as the empirical contingency of the world might arrange what’s in your narrow self-interest in any old way, so too may it arrange what you would care about on reflection in any old way (I am assuming, here, that your caring about something doesn’t automatically class it under your “self-interest”). Just as, if you really understood what was at stake, you might want to climb the mountain, or to get the treatment, so too, if you really understood what was at stake, you might want to donate, or to put a lot of energy into helping with some cause — even when doing so involves significant trade-offs with other things you value. It would’ve been preferable if the world were arranged such that fewer of such trade-offs were required — just as it would be preferable if the treatment were less expensive, or the mountain less rugged. But high prices can be worth paying — and it’s an empirical question what’s on sale, and at what price.
I find the notion of something being “worth it” helpful in this context. The treatment is expensive, but worth it; the climb is exhausting, but worth it. And to the extent that something is “worth it,” I find it useful to remember and visualize the perspective from which it would be seen as such — a perspective that holds vividly and accurately in mind what is at stake on both sides of the scale; which feels, rightly, the weight of each; and which chooses with clarity. (Though I also think it’s important not to prejudge which choices are “worth it” and which are not, and then to seek imaginative confirmation of this judgment).
Indeed, the vibe of “demandingness” can seem ill-suited to such a perspective. Experiencing something as “demanding” suggests a kind of internal tension, maybe even an experience of being “coerced” or “forced.” One “has” to do something, even if one does not want to. But if something is really worth it, from the perspective of what you care about, then a sufficiently informed and understanding perspective would result, I think, in a kind of wholeheartedness about it — a kind of clarity and unity of purpose that “demandingness” does not connote.
And when one chooses not to engage in some “demanding” behavior, I think it’s at least worth exploring whether one can make that choice wholeheartedly, as well.
VI. Reasons to be wary of demandingness
In general, I think that the abstract arguments for very “demanding” forms of moral behavior are very strong (even for non-consequentialists). But I think that the fuzzier, harder-to-articulate heuristics, evidence, and forms of wisdom encoded in our hesitations about such behavior are important sources of information as well. And I worry that for a certain type of person, the abstract arguments — including arguments of the type I’ve gestured at here — will function (especially in a context of certain kinds of guilt and self-hatred) as bulldozers, or as tools for internal coercion, or as disruptions to a way of relating to these issues that was functional and good, even if not theoretically articulated.
Indeed, my sense is that especially in some activism/social impact-oriented communities, people’s psychological relationship with the omnipresence of the possibility of “doing more” is often somewhat delicate. Even for people who have made their peace with what they feel they can do, the peace can be somewhat uneasy: one senses, sometimes, an internal struggle that has been, at times, painful, resulting in a possibly-somewhat-fragile equilibrium that a person might reasonably desire to protect.
Intense and complicated psychological relationships to this kind of stuff make sense. The idea that one could be doing wrong, or failing to protect and promote what you care about most, in high-stakes ways and/or on a widespread scale, is a painful and scary and powerful one — one that can draw on deep fears about the world (including the social world) and about oneself. And the vision of the empirical world at stake is often horrifying in itself, regardless of its implications for our actions. This isn’t about true loves on mountaintops. This is about people dying, needlessly, all around the world; about horrific suffering in places we don’t see, and places we do; about risks that threaten the permanent destruction of everything good and worthy about our civilization; and much else. It’s not a thought experiment.
A full discussion of the many reasons not to grab the nearest, very demanding idea that currently seems compelling to you, and to make extreme sacrifices for its sake, is beyond the scope of this post. The easiest reasons to articulate, I think, are (a) instrumental considerations related to things like “burnout” and “being a healthy strong flourishing person is good in a whole lot of ways”, and (b) uncertainty of the many kinds that should be salient in the context of extreme/socially unusual/irreversible/very costly actions, aimed at sometimes-poorly-understood outcomes and mechanisms, and which you yourself feel a lot of internal resistance to performing. But I think there are a variety of other considerations as well — related, for example, to the complexity and multi-faceted-ness of what we care about; and to the sense in which something’s seeming “totalizing” warrants a lot of caution with respect to it.
The abstract point I want to make is that there are no guarantees — including guarantees about “non-demandingness” — about the way in which what we would care about on reflection might be at stake in our actions. But I think we should be wary about throwing around the weight of the world’s pain, and the stakes of our responses to it, too casually, and especially not as an intellectual bludgeon, or on a moralistic high horse — whether in relationship to others, or to ourselves. And ultimately, the most pressing questions aren’t abstract, or about the circumstances under which we will have “fulfilled our obligations” or “done enough.” They’re not about what the stakes could be in principle. They’re about what the stakes actually are.
Further reading
Second in a two-part series on whether morality falls out of instrumental rationality, if you do the game theory right. I discuss four objections to the morality in question: that it isn’t instrumentally rational; that it gives the wrong types of reasons for moral behavior; that it incentivizes threats and exploitation; and that it licenses arbitrarily bad behavior towards the sufficiently disempowered and unloved.
First in a two-part series about whether morality falls out of instrumental rationality, if you do the game theory right. This part lays out the basic structure of a prominent argument in favor.
What is altruism towards a paperclipper? Can you paint with all the colors of the wind at once?