Much of my writing on ethics thus far has focused on a few broad themes:
- Especially in the face of Peter-Singer-ish thought experiments, it's easy to relate to morality as an external authority asking you to give up something you care about for the sake of something that you don't, perhaps to an extremely demanding extent. But (in line with my favored approach to meta-ethics), I find it useful to start, at least, by viewing morality (or at least, this aspect of morality) as continuous with your efforts to protect and promote what you care about -- and I think that doing so makes objections about "demandingness" less forceful (see "Wholehearted choices and 'morality as taxes'" and "Care and demandingness" for more).
- What's more, as I explore in the two-part series "Morality and constrained maximization," I think that in the absence of the sort of mind-independent standards posited by the moral realists, this sort of care is a crucial component of any morality that gives adequate weight to the disempowered and unloved. In particular (and pace some views): a morality based solely on fancy game theory won't do the job (though such a morality may be what we need in other contexts).
- However, as I explore in "In search of benevolence," I think that working out what a care-based morality of this kind looks like is quite a bit trickier than it might first appear. In particular, I think there are deep tensions between the aspiration to be other-directed (e.g., attempting to meet and respond to others on their own terms, without "imposing your will" on them), and the aspiration to be impartial (that is, not to exclude agents from your circle of concern, or weight some more highly than others, without good reason). I think this sort of tension is under-appreciated by Effective Altruists, who often assume that there is some unproblematic and universal "Good," maximization of which fully satisfies aspirations towards both other-directedness and impartiality. Or put another way, EA makes most sense against the backdrop of a kind of monotheism about the Good; but moral anti-realists, at least, should be polytheists instead.
Taking seriously the reality beyond your mind and everyday life:
- What happens in our minds is bright and vivid to us, as is what happens in our everyday lives and communities. But most of what matters in the world happens elsewhere -- in a vast and mostly undiscovered land beyond. The aspiration to live and act in the entire world, then -- to treat equally real things as equally real -- requires believing in, and acting in service of, a world you cannot see. I think that imagination helps with this, as does distinguishing between belief and "realization." (See "Believing in things you cannot see" for more.)
- What's more, I think that this sort of aspiration may go some of the way to distinguishing "altruism" from other types of personal projects, even conditional on moral anti-realism (see the final sections of "In search of benevolence" for more on this). And I think it's a worthy aspiration in its own right, even apart from its altruistic implications (see "Contact with reality" for more).
Focusing on what counts:
- I don't think of myself as a utilitarian or a consequentialist, but I find myself much more sympathetic to the vibes and concerns of the utilitarian/consequentialist strains of the contemporary philosophy community than to the more deontological/non-consequentialist strains. This is partly because I find many of the distinctions that the non-consequentialists focus on (for example, between doing and allowing harm) very implausible as candidates for intrinsic moral importance -- especially because they are not distinctions that seem to me important to the victims they are supposed to protect (see "Shouldn't it matter to the victim" for more on this).
- But even where I am sympathetic to the existence of a richer set of considerations than the consequentialists posit, I think it a substantially further (and often, more important) question how much weight those considerations should be given in practice. Indeed, to me it seems plausible that basic and fairly non-controversial considerations about the flourishing and suffering of sentient creatures are often what matter most to our actual decision-making -- and in such contexts, I don't see the distinction between consequentialism and non-consequentialism as an especially useful point of focus (see "The importance of how you weigh it" for more).
Second in a two-part series on whether morality falls out of instrumental rationality, if you do the game theory right. I discuss four objections to the morality in question: that it isn’t instrumentally rational; that it gives the wrong types of reasons for moral behavior; that it incentivizes threats and exploitation; and that it licenses arbitrarily bad behavior towards the sufficiently disempowered and unloved.
First in a two-part series about whether morality falls out of instrumental rationality, if you do the game theory right. This part lays out the basic structure of a prominent argument in favor.
What is altruism towards a paperclipper? Can you paint with all the colors of the wind at once?
Much of normative ethics centers on which considerations matter, and why. But often, it makes a bigger difference *how much* a consideration matters. I worry that neglecting this leads to the wrong sorts of arguments and points of focus.
We wouldn’t expect prudence to be “not demanding.” Why would morality be different?
Lots of ethical life, especially in a big world, rests on the ability to treat things you can’t see as real.
Shouldn’t deontological distinctions between types of harm reflect something that matters *to* the potential victims of that harm? But they don’t.
Reimagining Peter Singer’s drowning child from the perspective of care rather than guilt.