The importance of how you weigh it
“Like, nobody,” she says, “is considering the décor!”
G.A. Cohen, “Rescuing Conservatism”
Much of normative ethics centers on which considerations matter, and why. But often, it makes a bigger difference how much a consideration matters — that is, its weight. I think this is an important fact. I think it sheds light, for example, on how helpful to expect training in normative ethics to be for real decision-making; and on whether the difference between consequentialism and non-consequentialism is as significant as sometimes supposed.
I. The basic set, and beyond
There are certain things in ethics that basically everyone treats as important: for example, the flourishing of conscious creatures. And these things are enough to motivate and explain a lot of moral life: for example, the importance of helping people, not harming them, and so forth. Call this set of values “the basic set.”
Some ethicists essentially stop with the basic set, or something in the vicinity. Faced with some candidate further value (say, equality of welfare levels, even amongst people who will never interact or know of each other’s existence; or the intrinsic value of things like an ancient redwood grove, apart from how it impacts any conscious creatures), they will generally be inclined either to explain this candidate value instrumentally, via reference to the basic set (e.g., equality matters, when it matters, because it affects people’s experience of their relationships to others), or to offer some sort of (sometimes hazy and ad hoc) debunking explanation (something something evolution game-theory biases something?).
But other ethicists don’t stop with the basic set. To the contrary, they are actively open to, and interested in, an ethical landscape lush with a complex variety of values and considerations, interacting in intricate and sometimes mysterious ways. They care less about an ethical theory’s simplicity, and more about its fidelity to the lived and often subtle character of human ethical life — and they’re happy to posit new, sui generis values and norms where doing so increases the fidelity in question.
I think there are correlations, here, with certain broader distinctions in temperament. Some people — call them “reductionists” — are naturally more concerned to avoid over-complicating their worldview, and so prefer, where possible, to eliminate and strip down. They like cleanliness, order, comprehension, control. They dislike epicycles, mysteries, inflations, over-fittings. They are more likely to be seen arguing that some possibly-new-and-interesting X is “just” some more familiar and accepted Y.
Others — call them “anti-reductionists” — are more concerned to avoid the sin of Procrustes, who stretched people, or cut off their legs, to force them to fit the size of a particular iron bed. They fear simplicity more than complexity; uniformity more than variety; “just” more than “not just.” They worry less that we will take too seriously a datum that seems to fit imperfectly with some favored theory; and more that we won’t take it seriously enough.
I’m sympathetic to the anti-reductionist impulse in lots of ways, and open to many values beyond the basic set. But I think it’s important to distinguish between whether to stop at the basic set in theory, and whether, in a given case, the basic set is most of what matters in practice. And I worry that too much of a certain kind of philosophy causes views on the former to distort views on the latter.
II. Academic incentives
Part of this worry stems from a sense that academic philosophy may artificially select for and/or incentivize focus on more exotic values and considerations, beyond the basic set.
For example: if you stop at the basic set, certain sorts of philosophical questions become much less interesting. Consider, for example, a global utilitarian, who thinks that the most fundamental answer to basically every normative question (what should we do? which dispositions should we have? what rules should we follow? how should we organize our institutions?) is: “whatever maximizes the welfare of conscious creatures.” If you believe this, whole swaths of philosophy become, at bottom, empirical. Lots of contemporary political philosophy, for example, is devoted to trying to analyze putatively non-instrumental normative considerations at stake in things like democracy, state authority, legitimacy, coercion, consent, punishment, and so on. To the global utilitarian, by contrast, these are all, essentially, instrumental questions, best put to economists, sociologists, political scientists, and psychologists. All the fundamentally normative work is done by “maximize the welfare of conscious creatures.” The rest requires leaving the armchair.
One can imagine someone who isn’t a global utilitarian responding in a similar, leaving-the-armchair type of way. In the context of the value of democracy, for example, philosophers often ask questions like: “Even if democracy doesn’t lead to better policies for the people governed, relative to other systems, isn’t it also expressive of our basic dignity as rational agents, and/or our fundamental equality as citizens? Doesn’t it satisfy intrinsically important rights to accountability and self-government?” And the answer here may well be yes. But we can also imagine someone who thinks: “Ok, but surely leading to better policies is, at least, a really important part of it” — and who then focuses, going forward, on questions about the policy outcomes that different democratic arrangements in fact lead to, and under what conditions, rather than on the other sorts of values philosophers are fond of delineating and debating.
Indeed, I’ve wondered whether there are selection effects at work in this respect. That is, just as we might expect philosophy of religion to be populated, disproportionately, by people who believe in God, so too we might expect normative ethics to be populated, disproportionately, by people who believe in the non-instrumental importance of a rich and complex variety of ethical considerations, accessible from the armchair. Atheists seem more likely to dip into philosophy of religion, answer “nope, no God,” then go on to other things; and similarly, those inclined to stop at the basic set seem more likely to sit down in the armchair for a bit, answer their central questions with reference to the basic set, then get up and leave — in search of further, empirical information and opportunity relevant to promoting their basic values in practice, and uninterested in debates that treat considerations they view in instrumental terms as intrinsically important. Whereas those who, when they sit down, see a rich and mysterious array of distinct, intrinsically-important considerations, amenable to much decision-relevant armchair analysis, are more likely to stay seated.
At the very least, I think, a career, or a paper, spent arguing that “this, too, is an instrumental consideration grounded ultimately in promoting the welfare of conscious creatures” is less exciting/publishable/tenure-worthy than one spent arguing “this is a new, sui generis, intrinsically important consideration that adds to and alters our basic ethical ontology.”
III. Weightings are harder
Arguments for the intrinsic importance of a given consideration are sometimes accompanied by a certain type of caveat, along the lines of “now, exactly how to weigh the consideration I’ve identified is a further question, which I won’t try to answer here. What I’m saying is that it’s a distinct consideration to keep in mind.” The philosopher’s job, on this view, is to canvass the types of considerations at stake, and to understand their structure and interaction. Actual decision-making is a further step.
Indeed, prima facie, it seems easier to make clean arguments about whether something matters at all than about how much it matters relative to other things. The former just requires isolating it from other variables. The latter requires comparing it to other things across a wide range of situations, soliciting possibly-quite-opaque intuitions, and performing consistency checks. Thus, for example, I am aware of many people who accept (a) that you shouldn’t push the fat man in front of the trolley to save five, and (b) that you should push the fat man in front of the trolley to save a billion; but I know of no one who has gone through extensive effort to identify, even very roughly, the ballpark number for which one should push the fat man (though I haven’t read the literature in depth). And I’m not sure they could get it published if they did (see Luke Muehlhauser’s post here, on the moral weight of different animals, for an example of the type of attempt at a rough quantitative weighting that I have in mind).
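To give a concrete flavor of what such an exercise might look like, here is a minimal toy sketch in code. Everything in it — the 20x “exchange rate” between killing-as-a-means and failing-to-save, the resulting threshold — is an invented illustration of the method, not a proposed set of weights:

```python
# A toy sketch of a rough quantitative weighting exercise.
# All numbers are invented placeholders, not proposed moral weights.

PUSH_COST = 20.0  # assumption: killing one person as a means is treated
                  # as ~20x as bad as failing to save one person

def should_push(lives_saved: int) -> bool:
    """Push iff the lives saved outweigh the weighted cost of
    killing the one (on these toy assumptions)."""
    return lives_saved > PUSH_COST

# Consistency checks against the intuitions reported in the text:
assert not should_push(5)            # don't push to save five
assert should_push(1_000_000_000)    # do push to save a billion

# The "ballpark number" this particular weighting implies:
threshold = next(n for n in range(1, 10**9) if should_push(n))
print(threshold)  # 21, on these made-up numbers
```

The point of such a sketch isn’t the output, which just echoes the assumptions; it’s the consistency-checking the exercise forces. Any choice of exchange rate commits you to a threshold, and any threshold intuition commits you to an exchange rate.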
Here’s G.A. Cohen (2011), describing this sort of dynamic, in the context of an argument for the distinctive value of conserving valuable things, as opposed to creating them:
“But please do not expect me to say to what extent our practice should honor the truths that I hope to expose, in comparison with other truths the honoring of which sometimes conflicts with honoring what I perceive to be certain conservative truths.
Philosophers often have something novel to say about what, as it were, ingredients should go into the, as it were, cake even when they can say nothing about the proportions in which these ingredients, or values, are to be combined, across different cases: not because that is not important, but because the problem simply does not yield to general recipe making. Philosophers sometimes end their articles by saying this sort of thing: it is a task for future work to determine the weight of the consideration that I have exposed. Yet nobody ever gets around to that further work. Many wish they could, but nobody knows how to do it.
Although philosophers cannot produce the weighing that is necessary in any practical discussion, their disposition to notice things in ordinary experience that other people miss means that they can nevertheless make a contribution to an immediately practical question. They can contribute by identifying a value that bears on choice and that is being neglected. Consider an analogy. A bunch of us are trying to decide which restaurant to choose. Suppose everybody talks a lot about how good the food is in various restaurants, how much it costs, and how long it takes to get there. Someone, hitherto silent, is uneasy. She feels that we have been leaving something out of account. Then she realizes what it is: “Like, nobody,” she says, “is considering the décor!” This person has made a significant contribution to our practical discussion. But we should not expect her now also to say exactly how important a restaurant’s décor is compared to the other things that matter when we are choosing a restaurant.”
I think Cohen’s description, here, is a reasonable gloss of a certain kind of valuable intellectual work — one that can enrich our understanding and bring out new dimensions of ethical life. But I also worry that getting too used to saying things like “nobody is considering the décor!” will prompt an over-abundance of interest in the décor, and in even more niche considerations (“does the restaurant have a pleasing name?”, “does the cuisine fit, thematically, with our recent topics of conversation?”). And that we’ll take too large a hit on things like food quality and price as a result.
IV. Ethical expertise
Here’s an example that brought this type of worry to mind for me. A few years ago (my memories of this are a bit hazy), I sat in on a seminar run by an ethicist (call him Ethicist 1) who has made not stopping at the basic set a core part of his professional identity. I imagine him now (this is an exaggerated image) almost like a sort of philosophical Oprah, saying the equivalent of “you get a car!” to every candidate intrinsic value that comes his way: “that matters! and that matters! and that matters!” As I recall, this philosopher had been thinking about practical issues related to global health (I think he may have been doing some sort of advising/consulting on the topic?), and he was describing how he had approached the questions at stake.
Around that time, I also remember talking with another ethicist (call him Ethicist 2) focused more centrally on promoting welfare, and who had been directly involved in some important decision-making regarding global health. He was describing some of the barriers he had encountered in trying to push for very basic changes in priorities — changes that would (we were assuming) result in many more lives being saved.
Hearing Ethicist 2 describe his experience, and having recently heard from Ethicist 1 as well (at least, I think this was the order), I had some vision of an academic philosopher — trained to identify new and distinct considerations, and to argue for their intrinsic importance, without taking a stand on their weight — sitting around a table with various global health decision-makers, and pulling a kind of “nobody is considering the décor!” type of move. That is, seeing an opportunity to score a certain, academically-familiar sort of point, and going for it: “Ah, but in addition to saving people’s lives, preventing disease, and so forth, we must also factor in X intrinsic value/consideration, which my background as a normative ethicist puts me in a position to highlight.” And I felt some fear, imagining this, that more people would die, or get malaria, for the sake of some consideration much less weighty, even if still real.
We might see this as a worry about misinterpreting the type of expertise normative ethics bestows. The extent to which academic ethicists should be understood as “ethical experts” in general is open for debate, but broadly speaking, I think (some) normative ethicists do tend to have valuable facility and familiarity with a wide variety of concepts, intuitions, and arguments relevant to moral decision-making. But this expertise need not translate into expertise in weighing the relevant considerations. And very often, the weighing matters at least as much as what’s being weighed.
V. Basic set hypotheses
I think my image of the global health scenario above may have been influenced by an experience I’d had even earlier, of finding myself arguing about what I ultimately concluded was the décor. I had put together a poster for a conference focused on ways of helping people effectively — a poster which was supposed to analyze a certain type of deontological consideration the existing discourse wasn’t really accounting for. As I worked on it, I had to admit to myself: “actually, I don’t think this really matters much in the scheme of things,” despite my hopes, as an academic philosopher starved for real-world relevance, that it would, and that my pointing at it could make a valuable difference. Philosophers generally want their work to be simultaneously decision-relevant, and theoretically novel/interesting. But often the connection feels forced, or false.
Indeed, in lots of areas of human practice (cooking? tennis? learning an instrument?), getting the basics right (sometimes amidst much mess and noise) is more important than learning lots of fancy tricks. We might wonder, then, how often this is true in ethics, too.
One way of formulating this might be in terms of what I’ll call “basic set hypotheses”, of the following form:
In some domain X, considerations grounded ultimately (even if via pragmatic and instrumental heuristics) in some basic and familiar set of values (e.g., promoting human welfare) are by far the most important considerations at stake.
I’m not sure how often, and in which contexts, hypotheses of this form are true. But disagreements about whether a basic set hypothesis holds seem to me more important than disagreements about whether the basic set exhausts the values relevant to the domain in question. And perhaps some interpersonal disagreements are best diagnosed as disagreements about a basic set hypothesis, rather than as disagreements about what values exist at all. For example, it seems to me that some people are just gripped very directly and vividly by the importance of e.g. preventing suffering and helping people flourish, and it may be that this, more than which values they would give any weight to in principle, explains their priorities. Yet somehow, it feels like we often end up arguing about what has value at all.
VI. Does the difference between consequentialism and non-consequentialism actually matter?
Here’s another example of the importance of how you weigh considerations, vs. whether you believe in them at all. As I discussed in my post on doing vs. allowing, we can distinguish between two ways of justifying various types of deontology-flavored norms (e.g., “don’t lie,” “don’t steal,” etc): pragmatic justifications, which appeal to the (sometimes rich and complex) costs and benefits of different practices, heuristics, decision-procedures, expectations, and so on; and intrinsic justifications, which treat the relevant norms as intrinsically important, irrespective of the costs and benefits of the practices, heuristics, etc involved.
Consequentialist-types are generally quite sympathetic to the former type of justification (at least in many cases), even if skeptical of the latter. Whereas non-consequentialists tend to see the latter as necessary to fully capture our intuitions about the norms in question. And glancing at this dialectic, it’s easy to assume that non-consequentialists will generally take the relevant norms more seriously in practice.
But this doesn’t follow. To know how seriously someone takes a given norm, we can’t just look at whether they treat the norm as justified by both pragmatic considerations (call these “A-type”) and intrinsic considerations (“B-type”), or by A-type alone; we also have to look at how strong they take the A-type and B-type considerations to be. If a consequentialist takes A-type considerations to be extremely strong, for example, and gives no weight to B-type, but a non-consequentialist gives middling weight to both, the consequentialist could very well refrain from lying, stealing, etc more reliably than the non-consequentialist (here I think of a friend of mine, who reported something like: the utilitarians she knows seem more serious about keeping promises than the non-utilitarians).
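A toy numerical illustration of this point (all weights invented for the example): suppose each agent violates a norm only when the stakes at hand exceed the total weight they place on A-type plus B-type considerations. Then:

```python
# Toy model: an agent violates a norm only when the stakes exceed
# the total weight they give to keeping it. All numbers invented.

def violates(stakes: float, w_pragmatic: float, w_intrinsic: float) -> bool:
    return stakes > w_pragmatic + w_intrinsic

# A consequentialist with very strong A-type (pragmatic) weight:
consequentialist = dict(w_pragmatic=100.0, w_intrinsic=0.0)
# A non-consequentialist with middling weight on both types:
non_consequentialist = dict(w_pragmatic=20.0, w_intrinsic=30.0)

# At stakes of 60, the non-consequentialist breaks the norm (60 > 50),
# while the consequentialist holds firm (60 < 100):
print(violates(60, **consequentialist))      # False
print(violates(60, **non_consequentialist))  # True
```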
Of course, if the non-consequentialist gives infinite weight to B-type considerations, as in old-school cases of “absolutist deontology” (e.g. never lie ever no matter what), then their seriousness about the norms in question will be hard for the consequentialist to top. But absolutist deontology is widely (though still not universally! c’mon, people!) recognized, at this point, as a horrifying non-starter: consigning a billion people to torture, to avoid lying to someone about your age, is crazily wrong (and note — though this isn’t the key point — that absolutist deontology has, prima facie, no account of how to deal with even trivial risks of violating the deontological norms in question).
Indeed, once absolutist deontology is off the table, I think that many non-consequentialists assume far too quickly that their acceptance of various intrinsic justifications for deontology-flavored norms allows them to easily avoid uncomfortable implications about the types of trade-offs (re: stealing, promise-breaking, and so forth, for the sake of the greater good) often offered as objections to consequentialism. In theory, both the consequentialist and the non-absolutist deontologist will sometimes violate the norms in question — the only question is how high the stakes have to be. But in the real world, disturbingly, stakes can quickly get very high — a fact that trolley-like thought experiments centered on “the one” vs. “the five” often neglect. Indeed, in the context of such stakes, it may be the pragmatic justifications that prove more robust.
We can make a similar point about agent-relative permissions — e.g., permissions to privilege oneself, one’s personal projects, one’s family, etc in one’s endeavors. Here, again, both pragmatic and intrinsic justifications for these privileges are available; and here again, consequentialists are often sympathetic to the former in various ways, whereas non-consequentialists tend to accept both to some extent, but to emphasize the latter. But this, again, isn’t enough to say how much weight a given justification has.
Thus, for example, people sometimes assume that positing intrinsic justifications for agent-relative permissions is enough to render declining to engage in certain types of intuitively “demanding” behavior (e.g., donating lots of money, devoting lots of energy to helping others, etc) permissible. But the existence of such a justification doesn’t settle this question: you also need to know how much weight it has, and what stakes speak in favor of the demanding behavior. (Sometimes advocates of agent-relative permissions seem ~indifferent to the stakes; but this view is objectionable for reasons similar to why absolutist deontology is objectionable.)
Partly because of considerations like these, I tend to see the distinction between consequentialism and non-consequentialism as less practically important to ethical life than many people do. Many of the things that consequentialism implies we should do, especially re: helping others in effective and norm-respecting ways, plausibly fall out of reasonable forms of non-consequentialism as well; and many of the behaviors that people most want out of non-consequentialism (e.g., respecting basic norms, giving care and attention to oneself and one’s own sphere of life) have justifications on reasonable forms of consequentialism, too (and once you add in considerations to do with moral uncertainty, and with various types of funky decision-theories/self-modifications, the distinctions blur yet further).
This isn’t to say that there aren’t, ultimately, important and practically-relevant differences. But the devil, I think, is in the details — and in particular, in what sort of weight we give to different considerations, rather than whether we recognize them at all, or describe them in intrinsic vs. instrumental/pragmatic terms. (There are also other notable correlations between how people who describe themselves as consequentialists vs. non-consequentialists tend to think and act — correlations that plausibly drive various types of tribal affiliations, aesthetic reactions, and so forth. But these aren’t, I think, actually key to the philosophical issues at stake.)
VII. Is there a field here?
If we accept Cohen’s characterization of normative ethics above, we already have a field devoted centrally to delineating and explaining various ethical considerations. But where is the field devoted to weighing them? Could there be one? What would its methods and standards be?
I think we could probably do more than we do now (cf. the unusualness, but also the possibility, of posts like Muehlhauser’s). But the overall project does seem difficult; actual practical decisions are richly specific, and different people’s values will differ in fine-grained and hard-to-hash-out ways.
In the absence of such a field, though, I think we should mind the hole it leaves unfilled, and the importance of how one fills it.