The Ones Who Walk Away From Utilitarianism: A Review of Five Objections

Hear this article read aloud here. Author’s note: this post is based on an old Discord rant.

It hasn’t escaped my attention, as a Utilitarian, that my ethical theory is… well, not unpopular exactly, because it is a common and influential theory in the field, but hatred of it is popular. That is, among those who disagree with it, it features as a recurring antagonist with what seems like unusual frequency relative to other competing theories. Whether you are reading Thomson or Nozick, or watching The Good Place, there is no shortage of popular anti-Utilitarian philosophy.

I don’t want to speculate much on this difference here; there are several possible reasons why Utilitarianism is an especially popular target for other theories, and it is likely that more than one is true. What I am primarily interested in exploring with this post is what the best and most common overall objections are, in what ways they are or are not compelling, and how compelling I think they are to me and to other Utilitarians in particular. I of course don’t pretend to be a neutral party, but I don’t mean for this to be a hit piece against my philosophical enemies either.

I should clarify a couple of things at the beginning. First of all, I am only including objections that are accurate. Some objections I hear are just misunderstandings, and I am not writing a debunking piece. An example I heard in a recent class was a student saying that Utilitarianism doesn’t care about non-human animals. Looking all the way back to Bentham (and to most of the major philosophers along the way), this is simply not true. It was a misunderstanding based on the common misstatement, in this case made both in the text and by the professor, that Utilitarianism only cares about the sum of human happiness.

Another constraint is that, while I will discuss and bring up specific thought experiments and examples to illustrate objections and motivate the principles discussed, I won’t use specific cases as objections in themselves. The reasons are a bit complicated, and I will have to oversimplify them here, but roughly it is because our values about specific cases aren’t always part of the same sort of project as theories of ethics. In many specific cases where we can’t accept what a theory tells us is right, our intuitions do not depend on a principled objection: there is no broad principle we can think of which, if it disagreed with our intuition in the case, we would take as sufficient grounds to change our minds. This leads me to reject the sort of linear regression model of moral reasoning that judges the strength of a principle by how many of our specific practical views it can be used to justify. There are two major reasons for this:

The first is that this type of model doesn’t seem to care that it will give us insincere reasons for what we believe, and if we had to choose between accepting an insincere model of our morality and accepting a sincerely convincing moral principle while rejecting those of its specific implications that no principle could talk us into, the second choice seems better. The second reason is that we look to moral theories for criticisms, not just justifications, of different views. We want to be able to show those who disagree with us why they ought to change their views, and we want to be capable of anticipating where we are going wrong ourselves. If you demand that a theory give a consistent account of your current practical views, you empower principles that can prove whatever you want over principles that are more critical of accepted norms but lead to unpleasant conclusions in toy cases.

(You may argue about which theories fall on which side of this critical/uncritical divide and the argument remains, but looking at the theoretical structure and history, it seems as though criticism is a dimension where Utilitarianism scores better than other popular theories.)

I have also left many objections off the list, as I only want to deal with the five I consider most compelling (not to me specifically, but to those who have systematic problems with Utilitarianism in general). I call myself a Utilitarian, if only because that is easier than calling myself an uncertainty theorist with a majority credence in consequentialism and a plurality credence in total hedonistic act Utilitarianism, so although I consider these objections to be among the most convincing, they don’t go far enough to fully convince me. The objections are as follows:

1. Demandingness

Utilitarianism can demand that strong burdens be placed on some in the service of helping others (I will discuss other objections related to this later, in points 4 and 5). Sometimes this is itself considered a strong objection to Utilitarianism, but it is especially compelling for some in the case where the burden falls on the Utilitarian. This is one of the objections The Good Place highlights, with its episode “Don’t Let the Good Life Pass You By”. In particular, the episode somewhat unsubtly drags the demandingness of Peter Singer’s ideas with the cameo of The Most Good You Can Do in the hands of an extremely self-sacrificing character (the show’s creator wrote the new introduction for Singer’s book The Life You Can Save, and several of the show’s actors read for its audiobook, so the overall relationship seems pretty congenial).

Not only is the demandingness of Utilitarianism a realistic interpretation of it, it presents the unique issue that, arguably, no one is a “true Utilitarian”. That is, no Utilitarian lives up to what Utilitarianism says they ought to. The “ought implies can” aphorism comes to mind here, but it doesn’t actually describe the problem. Utilitarians assume that they only “ought” to do what they “can”, but because what they “ought” to do is limited only by what they “can’t” do, realistically this still leaves a good deal that people can do but that no one will do.

This still looks pretty bad, but put in perspective, it is not a very psychologically difficult conclusion on its own. I think that most people can think of something they have done that they think they shouldn’t have: petty lying, cruel statements, and, I’m willing to bet, at least one thing that deserves and calls up outright shame at one’s poor decision. Most people already believe they haven’t fully lived up to a good standard of morality, and would agree on reflection that neither have most other people; so while “ought” may imply “can”, it certainly doesn’t imply “will”. I think most people would view it as a poor excuse to adopt a standard of morality by which they, and most people, are likely to have never acted wrongly, if all it accomplishes is shifting the goal-posts (if we wanted a standard of morality that gave good principles that most people could live up to, we might say that something like murder or rape is wrong, but nothing else). That is, the most basic logic of this objection leads to highly counterintuitive conclusions.

Another version of this objection might be that “ought” implies not only “can”, but “probably will”. That is, most of us act wrongly because we face so many moral choices that, even if we make most of them rightly, we are bound to make a few wrongly. A good standard of morality, you might say, only asks of you things that you will probably be motivated enough to do each time it asks something of you. Even this principle seems too lenient; there are plenty of scenarios where I think most would agree that the right thing is very hard. If you live in Nazi Germany, maybe it is understandable if you refuse to hide in your basement a family of Jews who come to you for help, but a good standard of morality would say that you ought to, even that you are doing something wrong by not doing so.

Much of the demandingness of Utilitarianism doesn’t come in the form of discrete difficult choices either, but of fairly easy and mundane ones that it is simply demanding to make as often as we should. To go back to the Singer example, it seems reasonable to ask us to donate enough to save one life, but we will face this choice over and over again, and this is what makes it demanding. A good standard of morality tells us to rescue a child drowning in a pond, but when you pass a pond that’s constantly full of drowning children, a principle that says you should always try to save as many of them as you can becomes demanding; no one will stay by the pond through all the waking hours they can, pulling children out, then come back after they sleep to keep doing it.

The difference in Utilitarianism is that it looks at all of the terrible, preventable things constantly happening in the world and tells us that, in terms of the moral urgency of our decisions, we are always in a Nazi Germany situation. Choosing a morality because it doesn’t say that we are in a morally dire situation doesn’t, in any way we should consider forceful, change how morally dire our situation is. What remains compelling about the objection is that if we saw an “ideal Utilitarian” who lived up to the full extent of Utilitarian morals, we would find them odd or even disturbing. If we saw someone who lived up to less demanding, commonsense moral standards, we would merely be surprised, perhaps mildly resentful of our personal inadequacy, but we wouldn’t think of them as beyond the range of a normal human life in the same way. The type of reasoning this objection appeals to is not appealing in an absolute sense, but it is appealing in degrees, in that “ought” seems to imply to most people “won’t always, but it wouldn’t seem inhuman if they mostly will” (I have not read it, but I’ve heard good things about Larissa MacFarquhar’s relevant book Strangers Drowning).

This intuition is not incredibly appealing to me; it seems perfectly possible that morality could just prove to be incredibly demanding, to an extent that it can’t motivate people to fulfill. I think many Utilitarians share this view (Robert Wiblin seems to): as it turns out, we are not especially moral animals. There is only so much we will tolerate morality asking of us; this is a failing of ours rather than of Utilitarianism, and we resent that Utilitarianism demonstrates this by being one of the few theories willing to find and exceed this limit.

2. Self-defeatism

Another objection to Utilitarianism is that it can be “self-defeating”, in the sense that it can in theory say that being a Utilitarian is worse than not being a Utilitarian by its own standards. This objection can take a couple of forms: for instance, it can appeal to specific cases where this seems to be true, or, at the extreme end, it can say that, as a whole, Utilitarianism is actually worse at fulfilling its own purposes than other theories would be.

An example of a type of case where being a Utilitarian might be worse than not being one is certain trades. Let’s say that someone agrees to donate a tremendous amount of money to a very good charity, but only on the condition that you burn $20 in secret. While an odd sort of cooked-up scenario, it is certainly not impossible to imagine a trade with this structure in theory. Another fairly outlandish example: Hitler decides to flee to an isolated island of perfect Utilitarians after committing his atrocities. Now, the Utilitarian will say that the charity trade is worth it, and that punishing Hitler would be worth the incentive against committing atrocities. The issue is that the structure of these cases makes act Utilitarians untrustworthy.

No incentive will come from hurting Hitler because, due to the isolation of the island, no one off the island will learn about it. No good will come from burning the money because the terms of your agreement require you to do so without proof. You’ll have a reason to convince everyone that you would carry out the trades, but the credibility of this claim depends on the degree to which you aren’t a perfect act Utilitarian. There is a Utilitarian reason not to hold up your end of either trade in practice, since Hitler’s suffering is intrinsically bad, and the money could instead have been given to a good cause. The problem is that the presence of act Utilitarians in a position of choice within these scenarios, if they are known to be Utilitarians, ensures that the wealthy person with the eccentric charitable trade won’t donate to the good cause, and that Hitler will commit atrocities without fear, knowing he can find safety on Utilitarian island afterwards. 1

For those who have heard of “Newcomb’s Problem”, this type of scenario might sound familiar. Indeed, just as Newcomb’s Problem has two favored answers for the rational way to make choices given the same account of what results are valuable, Utilitarianism has two major forms as well. I have been specifically saying that “act Utilitarians” can’t be trusted in these scenarios, because “rule Utilitarians” absolutely can be trusted in these same scenarios. Rule Utilitarians, like act Utilitarians, believe that what makes an outcome good is the wellbeing of those in it, and that this is what gives us a moral reason to act; but while act Utilitarians believe that you should make choices in the way that produces the best expected outcome, rule Utilitarians believe that you should act in the way that follows the rules that produce the best expected outcomes.

The existence of rule Utilitarians is a reason to believe that many Utilitarians take this objection seriously. It also suggests, however, that it is perfectly possible to be a Utilitarian of some sort and not fall prey to these situations. I don’t personally find the objection as compelling. As with other act Utilitarians and Newcomb two-boxers, it is inescapable to me that in these cooked-up scenarios, there is an outcome in which someone has the chance to take $1,001,000 rather than $1,000,000: people who have the opportunity to have their cake and eat it too. The mere fact that the people who manage to be given these choices are unlikely to make them this way doesn’t mean it would suddenly be irrational if they happened to. If they could force themselves to visibly precommit to the other choice, that would be the rational thing to do, but that isn’t the same choice.

Then the question comes up of whether, on the whole, it is better not to be a Utilitarian than to be one, according to Utilitarianism. This is a difficult question, but considering that rule Utilitarianism solves these cases handily, it seems at the very least unlikely, to the point of verging on irrelevance, that rule Utilitarianism is worse than a non-Utilitarian theory. A more difficult question comes up when you ask whether it would be better for everyone to be a rule Utilitarian or an act Utilitarian, but even comparing being an act Utilitarian with being a non-Utilitarian, it at least seems as though the burden of proof is on those who claim that being a non-Utilitarian is better for Utilitarianism. What if it turned out that being a Utilitarian was worse, though? What should we take the significance of this possibility to be?

I am somewhat sympathetic to the idea that if Utilitarianism were in fact worse than non-Utilitarianism for Utilitarian ends, there is a significant sense in which Utilitarianism is wrong, but this seems true only on a view of moral theories by which their truth can depend on facts about the world. If facts about the world like this can affect the truth of a theory, then this objection seems irrelevant unless Utilitarianism is in fact bad for itself. If this view is rejected, then I can see little reason why it poses a challenge at all. It is perfectly conceivable, on theories of morality insensitive to such facts, that if I learned that being a Utilitarian was bad for Utilitarianism, what I ought to do is try to persuade others not to be Utilitarians; and that, even if I succeeded and everyone stopped being Utilitarians, there is a deep sense in which Utilitarianism is nonetheless true, and I have served it.

3. Personal action

Another objection to Utilitarianism, invoked by just about all non-consequentialist theories, is that Utilitarianism views every choice one faces as a choice to bring about one possible world or another, rather than in terms of one’s personal position relative to the choice. Usually this objection concerns something like the lack of an action/omission distinction in Utilitarianism, or something related to the intentions behind an act. While things like intentions and acts can correspond to indirectly valuable aspects of a decision, they are never a factor in themselves in Utilitarian decisions (act Utilitarian ones at least). Utilitarians say that whichever outcome they could impartially hope would come about is the one they ought to bring about, regardless of their position relative to the choice.

This is the type of thing the different versions of the trolley problem developed by non-consequentialists like Philippa Foot and Judith Jarvis Thomson tend to highlight. It seems likely that those who have a preference will say that if an out-of-control trolley happens to barrel down a track with one person on it rather than five, this is better than if it had happened to be set on the other track and run over the five. Likewise, I think most people would believe, by a similar margin, that if a trolley happened to be stopped by one person falling onto the tracks, that would be better than if it had continued and run over five people. At most, I think some people would say there is no difference, though I don’t think many would even go this far; I can’t imagine someone saying the reverse cases are better. What changes the trolley problem from the Utilitarian view is the relation you specifically stand in to each decision.

Someone who cares about intention (in the sense of what you mean to do by your act, not just what you knowingly decide to cause through it) might consider it fine to pull the switch in the version where all you are doing is redirecting the trolley. Your specific act is derived from the intention to redirect the trolley away from the track with five people, so not killing the five is the correct interpretation of your action, while killing the one is only an eventual side effect of your well-intentioned act. Someone who cares about the act/omission distinction might say pulling the lever is impermissible, because causing a death through your action would be murder, whereas causing five deaths through your inaction would not be.

The two agree, on the other hand, when it comes to whether you can push someone onto the tracks to stop the trolley from hitting five people. In terms of intentions, in this case you intend for the person to get hit through your action. You are using them as part of your action, not just creating a situation where they will suffer as an unfortunate side effect of your decision. The act/omission distinction has the same thing to say as in the first case: by killing someone through action, you are murdering them.

Peter Singer and Katarzyna de Lazari-Radek discuss these types of scenarios in Utilitarianism: A Very Short Introduction. As they point out, the answers these types of theories of action give are, as a rule, very psychologically intuitive, but so are differences they don’t highlight, which people are less willing to put into explicit theories. An example is pulling a lever that redirects the trolley onto a loop in the track with someone large enough to stop the trolley on it. While this case is most similar to pushing someone onto the tracks in the ways usually considered relevant, since you are still using this person to stop the trolley from hitting five (there would be no reason to redirect the trolley otherwise, since in this case the track loops back to the track with the five people on it), people view it as though it were closer to the original form of the thought experiment.

Likewise, the book points out that there is fuzziness in these categories of action, especially intention. If you don’t intend to kill the person you push onto the tracks, only for them to physically stop the trolley, isn’t that pretty similar to switching the track not intending to hit the one person, but knowing that their getting hit will be a side effect? Is what matters that you expect them to die before the trolley is stopped in one case, and expect them to die afterwards in the other? Then it would seem fine to throw someone onto the tracks if you only expect them to be fatally wounded and to die later in the hospital. Maybe it is that you are still using them to stop the trolley in one case and not the other? Perhaps this is troubling, but if we are throwing away death as part of the intention, then it is hardly possible to think of it as murder; in fact, since harming them isn’t part of your plan, the same objections apply to every effect except the trolley being stopped.

Act/omission seems a bit less fuzzy, but hardly well-defined. One commonsense account of act/omission is that something is an omission if it would happen even if you weren’t there, and an act if it would not happen unless you were there and made the choice you did. This seems too willing to remove you from the whole context of your choices, which is inauthentic to the point of absurdity in many real situations. Let’s say that you are on the edge of a mountain, holding the hand of someone dangling off the edge. Each moment you hang onto their hand, you have the choice to let go. Which choice is the act, and which is the omission? The act/omission distinction also isn’t as intuitive as it at first appears: though it gives you the preferred answer in many toy cases, there’s an argument to be made that, like Utilitarianism, it gives you very counterintuitive and demanding implications when you take it completely seriously.

I will say that although I felt the need to bring this type of objection up, as it covers many of the problems people have with Utilitarianism, I don’t find it very intuitively compelling myself. This is one of those cases where the consequentialist principle seems far more compelling than its competitors, and when it gives me entirely unpalatable implications in specific cases, I would rather selectively reject those as personally unacceptable than invoke some competing principle like intentions or act/omission. I think this is a view shared by other Utilitarians, and it may just represent a psychological difference between the way Utilitarians and non-Utilitarians think about morality.

4. Pleasure/pain asymmetry

Now we get into what I consider the heavy-hitters: objections that continue to feel troubling to me, and which I think are uniquely compelling to the Utilitarian mind-set. A Utilitarian is a consequentialist concerned with well-being, which means that our home turf for caring about things is in terms of which outcomes we could will to spontaneously come about, and inhabiting what it is like to be someone in those outcomes.

This first heavy-hitter, I would argue, is not as hard to answer on its own. It is also the easier of the two for a Utilitarian to fix in theory, so, like Objection 2, it is the one where I see actual work being done among Utilitarians. This is the pleasure/pain asymmetry. In small doses, it doesn’t feel too compelling. Should morality care more about a great meal, or a papercut? Eh, who knows; it doesn’t seem that important. It gets disturbing quickly, however.

Utilitarianism, in its classic form, says that any amount of suffering can be compensated by a sufficient amount of pleasure. Imagine the worst torture you can, the most unbearable thing your mind can model. Imagine enduring it for days, weeks, months, years without a moment’s break. Now ask yourself about this principle again. Is there any amount of pleasure which, if it were brought about, would compensate for this? Make it worth causing? The standard intuition, which feels nearly inescapable unless you academically distance yourself from the question to a truly remarkable degree, is that no, preventing this pain is morally more important than bringing about any amount of happiness. I am a big fan of happiness, more than I think most people are when it comes to ethics, but this intuition is strong even in me.

I should say that this is not just a suffering/pleasure asymmetry; really the argument is a suffering/anything asymmetry. Whatever your most cherished moral principle, I could probably get you to imagine a level of suffering that would shatter any but the most coldly academic ethicist’s resolve on the principle’s overriding importance. Suffering seems clearly to be somewhat bad, but it can get intolerably, unimaginably bad when you picture its extremes. This is a big part of the motive behind “negative Utilitarians”, who value suffering as a moral priority on a different scale from pleasure, and considering the suffering/anything asymmetry, this objection is arguably one of the few points on this list that serves as an argument in favor of some sort of Utilitarianism, rather than just an argument against some kind of Utilitarianism.

There are some strange aspects of negative Utilitarianism, however, which emerge when you consider three different forms of it and see that none of them is quite what you might hope. A very basic version is just “suffering-minimizing ethics”: pleasure has no value in moral judgments, only suffering reduction. This has a good deal of theoretical elegance, but the view that pleasure has no value relative to any pain is itself really counterintuitive. A similar problem emerges with a “lexical” version, where all suffering has moral value and all pleasure has moral value, but no amount of pleasure has as much moral importance as any amount of pain. While this doesn’t lead to the implausible conclusion that a very happy person is no better off than a totally unfeeling person, it leads to nearly as strange a conclusion, in that even the most wonderful, amazing pleasure imaginable isn’t worth the smallest, most minor annoyance.

The most intuitively plausible, and I think most popular, version of negative Utilitarianism is the one that gives different states of pain moral weights relative to pleasure such that, as a function, the added value of additional pleasure approaches zero. That is, every amount of pleasure can compensate for some amount of pain (very great pleasure can be worth minor pains, for instance), but some amounts of pain can’t be compensated by any amount of pleasure, like the worst torture you can imagine. Theories of value of this sort have the elegance of providing unique answers to questions in theory, but unlike theories that involve linear relationships, they don’t provide good answers for how to draw the functions, where to put inflection points and asymptotes 2. They answer vague, strong intuitions with a demand for extremely specific answers that these intuitions aren’t prepared to provide. Even given these problems, I find negative Utilitarianism fairly compelling, and give theories in this area the next most credence among moral theories after classical hedonistic Utilitarianism.
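To make the shape of the problem concrete, here is a minimal sketch of one such weighting function. The bounded exponential curve, the ceiling A, and all of the numbers are my own illustrative assumptions, not a formula any negative Utilitarian is committed to; the point is only that any curve with this shape forces you to pick specific parameters that our intuitions don’t supply.

```python
import math

# One illustrative weighting with the shape described above: the moral
# weight of pleasure grows but is bounded, while pain counts linearly.
# The exponential form and the cap A are assumptions chosen only to
# demonstrate the asymptote.
A = 100.0  # assumed ceiling on the total moral weight of pleasure

def pleasure_weight(p: float) -> float:
    """Increasing in p, but never exceeds A."""
    return A * (1.0 - math.exp(-p / A))

def net_value(pleasure: float, pain: float) -> float:
    """Pain above A can never be outweighed by any amount of pleasure."""
    return pleasure_weight(pleasure) - pain

print(net_value(pleasure=500.0, pain=50.0))   # ~49.3: minor pain is compensated
print(net_value(pleasure=1e12, pain=150.0))   # ~-50.0: extreme pain never is
```

Notice that the entire verdict in the second case turns on where A sits, which is exactly the arbitrariness complained about above.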

There are two main ways to answer this intuition that deny the need for some sort of negative Utilitarianism; neither feels totally satisfying, but both make some sense. The basic counter-argument, again given by Singer and de Lazari-Radek, is that we are more familiar with extreme suffering than with extreme pleasure. We can imagine very extreme forms of suffering, both because most of us have felt very extreme suffering for at least a few moments, such as a brief moment of seriously burning ourselves on a hot surface, and because we can imagine tortures that we reliably know would cause suffering beyond anything we have ever experienced, like being set entirely on fire. With pleasure, on the other hand, I don’t think we have brief moments of extreme pleasure to model from in the same way as we have moments of extreme pain. The most pleasurable half-second of our lives likely comes nowhere near the intensity of the most painful. Likewise, it is hard to imagine, except in the abstract, what a greater pleasure than we have ever experienced would be like, or what would cause it 3. We have some sort of moral trauma when it comes to suffering, and can’t trust our reasoning about the extremes of a pleasure/pain symmetry principle in the abstract.

If this isn’t comfort enough, we can be more blunt. Pleasure great enough to compensate unimaginable suffering, by definition, must be that subjective experience we can imagine being worth the trade, sharing the same urgent, immovable sense of personal value. If we say that this state exists in theory, even if we can’t imagine bringing it about, that is enough to satisfy the principle of a pleasure/pain symmetry. If we say it does not exist, then we don’t need to appeal to the view that pain is more important than pleasure at some extreme, we have, within the framework of classic Utilitarianism, demonstrated that happiness that great simply doesn’t exist.

5. Aggregation

And now we come to the objection that is hardest for me to find satisfaction on, which presents, to my mind, the most unpalatable trade-offs in Utilitarianism: aggregation across individuals. Utilitarianism says that if something affects more individuals, it is more important. On its face, this is a fairly intuitive principle, and in circumstances where there is no trade-off in how strong each individual’s stake in a choice is, it gives apparently very reasonable conclusions, like that it is better for one person to die and five to survive than for five people to die and one to survive.

In cases where each individual’s stake is different, however, it results in the following principle: the interest of any individual, no matter how strong, can be denied in favor of the competing interests of other individuals, however small each of those interests is, provided there are enough of these other individuals. It sounds abstract put like this, but to make it simpler, let’s say that I want something a lot, like a whole, whole, whole lot, and a whole lot of other people barely want something else at all. If there are enough of these other people, even if they each barely care, they can be the ones Utilitarianism says to favor.

This principle feels at least somewhat unpalatable in pleasure cases. Perhaps the most notorious example of something like this is the “repugnant conclusion”, where it is better (on the total view of population ethics) for a world full of countless trillions of people whose lives are barely worth living to exist than for a world of a few billion very happy people to exist. (This is not a straightforward implication of all Utilitarianism, and I think some will accuse me of sneaking in other intuitions against the total view, but I think the majority of the “repugnance” of the repugnant conclusion comes specifically from aggregation. If you want an alternate version, you can instead imagine that all of the trillions of others already existed with lives that were no better or worse than non-existence, and ask if it is better for them all to be made just barely better off by reducing the billions of very happy people to the same barely-worth-living level.)

This principle is at its most unpleasant, however, in the cases that intersect with the pleasure/pain asymmetry. If causing (or failing to prevent) the very severe suffering of one person causes some extremely minor joy for millions, it is what Utilitarianism says you ought to do. This feels especially wrong because it resists the final argument I gave for a pleasure/pain symmetry. While it may be the case that a pleasure urgent enough in its value to matter as much to moral judgments as extreme suffering cannot be imagined, this wouldn’t save us from causing extreme suffering for the sake of pleasure, because if you invoke aggregation, the sum of lesser pleasures could nonetheless get high enough to justify causing this extreme suffering 4.
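For concreteness, here is the arithmetic of that trade under pure aggregation, a minimal sketch in which every magnitude is invented for illustration:

```python
# Toy aggregation: one person's extreme suffering against many tiny joys.
# All numbers are made-up placeholders, not measurements of anything.
severe_suffering = -1_000.0   # welfare change for the one scapegoat
minor_joy = 0.01              # welfare change for each of the many others
onlookers = 10_000_000

total_with_trade = severe_suffering + minor_joy * onlookers
print(total_with_trade)       # 99000.0: the sum comes out positive,
                              # so pure aggregation endorses the trade
```

Nothing in the sum records how the welfare is distributed across individuals; that is the whole complaint.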

Unlike other objections to Utilitarianism, this one seems resistant to placing yourself in the situation and imagining how good the trade-off is. If I have four units of happiness, it is possible to look for the entity “four-unit happiness” in the world, because there is some experience that has this quality. If two people have two units each, we are asserting that there is a sense in which four units of happiness exist, but there is nowhere in this two-person world you can imagine finding them. You need to believe that this state of the world has an attribute analogous to the one person with four-unit happiness.

One way you might try to imagine this is to inhabit the scenario as though these two people were you at different points in time. (This is my preferred thought experiment for visualizing Utilitarian judgments: what would be in my self-interest if I were to be reincarnated as every sentient being ever to exist, one after another? No relation.) It seems as though, if more moments of your life contain happiness or suffering, this is more important than fewer such moments, and can be traded against the level of happiness or suffering in each moment. While this is very intuitive, no one inhabits more than one moment at once either. I think that if Utilitarianism gets to visualize its ethics by imagining everyone as different moments of the same person, it should be because this gives the same answer as imagining every moment of a life as a different person. It is true that, although we have the intuition that longer periods of a feeling matter more than shorter ones, we, once again, can’t look anywhere in the world for this aggregate value, since each component of it exists only in its own moment. Some sort of veil of ignorance is more promising, because aggregation does correspond to expected value if you know you will be someone in the world but don’t know who; but I find veils of ignorance more suspect, because they ask you to imagine risking consequences rather than causing them.

The main defense a Utilitarian might give against this is that none of us can reliably imagine the goodness or badness of a feeling split across individuals in the usual, empathetic ways, so we can’t trust our intuitive feeling that this split makes a difference either. Just as we can’t imagine any one unified form of this aggregate value when inhabiting a scenario, we can’t imagine being more than one individual at once at all, so we can’t treat this as anything but a general failure of empathetic imagination. This defense on its own isn’t good enough; it basically just tries to force a stalemate, so we need additional reason to believe aggregate value should be treated the way Utilitarianism treats it.

Sadly, at this point empathy isn’t enough; the main argument you can make to this effect invokes a more abstract, logical property, “transitivity”. Transitivity, in essence, says that if A>B and B>C, then A>C. It is necessary to assume this when saying that a consistent ranking of multiple outcomes can be found. If you do assume this, then you can reduce aggregation down to two principles: one, if something benefits one individual more than it harms another, this trade improves the world; and two, the quality of outcomes is transitive, that is, greater-than relations carry over.

Now we can imagine two people, “Scapegoat” and “Beneficiary1”. They are put into a machine which is able to produce 1 unit of pleasure for Beneficiary1 by giving Scapegoat 0.75 units of suffering. This trade-off, on Utilitarian terms, is always worth it. Now let’s say that Scapegoat stays in the machine, someone new, Beneficiary2, comes in, and the machine makes the same trade. This new outcome, in which Beneficiary1 and Beneficiary2 are each 1 unit better off and Scapegoat is 1.5 units worse off, must be better than the original state of the world according to the transitive property. We can imagine an infinitely long line of Beneficiaries, each getting only 1 unit better off, as Scapegoat becomes worse and worse off without limit. The transitive property will say that this is always worth it, and no matter how badly off Scapegoat gets, these principles will never tell the line to stop moving through the machine. We can instead imagine another machine that is given an arbitrarily large pool of beneficiaries along with a Scapegoat, and then models a line like this to determine the ideal outcome through transitivity. It will make the trade-offs that correspond exactly to aggregation, and, as in the line case, they will often be horrifying to imagine.
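A minimal simulation makes the structure vivid; the step sizes from the paragraph above are kept, and the length of the line is an arbitrary assumption:

```python
# Simulating the machine: each pass benefits a fresh Beneficiary by 1 unit
# at a cost of 0.75 units to Scapegoat. Premise one approves each pass,
# and premise two (transitivity) chains the approvals, so nothing in the
# loop ever says "stop".
scapegoat_welfare = 0.0
aggregate_change = 0.0

for beneficiary in range(1_000_000):  # an arbitrarily long line
    scapegoat_welfare -= 0.75         # Scapegoat falls without limit
    aggregate_change += 1.0 - 0.75    # each trade raises the sum by 0.25

print(scapegoat_welfare)   # -750000.0: arbitrarily terrible for one person
print(aggregate_change)    # +250000.0: yet every step "improved" the world
```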

Still, it is hard for me to deny either of the premises that lead to this. One way to try to solve it is to deny the first premise, that a trade-off involving a lesser harm and a greater benefit is always worth it. Negative Utilitarianism of some sort would at least limit how badly off Scapegoat could get. Pleasure aggregations wouldn’t be fixed, but while both are somewhat theoretically concerning, the pain case is far more intuitively concerning anyway. Still, the idea that a greater benefit always trades off against a lesser harm is both easy to imagine and avoids the types of problems I highlighted earlier when discussing negative Utilitarianism.

The other premise, transitivity, is both harder to be emotionally invested in and harder to logically deny. I have previously mentioned the writing of Larry Temkin on transitivity, in which he denies that a theory must have it. Still, the examples he gives are of theories in which the fact that two things are being compared changes what you compare about them. An example he gives is of an application process that, when comparing applicants from two groups, always favors the one from the group that has been historically oppressed by the other (for instance, when comparing white and black applicants, giving additional points to the black applicant). Since there could be a third applicant oppressed by and oppressing neither group, this rule wouldn’t be relevant when comparing either to this third applicant, so the standards used to find better-than or worse-than relations would differ, and could produce an A>B>C>A cycle.

Any example I can think of that challenges the transitive property involves this type of comparison: a principle in which the attributes of interest to the comparison depend on what something is being compared to. Arguably, this is an un-Utilitarian value, in the sense that Utilitarianism tends to view intrinsic values as being located entirely in the attributes of the possible world itself, rather than in the world it is being compared to. For a Utilitarian, if a world is valuable by 25 units, it is in itself, and unchangeably, valuable by that many units. When compared to another world, it cannot become valuable by 20 or 30 in light of what it is being compared to. Still, if transitivity is successfully denied, there is, in my opinion, no strong principled argument for aggregation, even if there are no great alternatives either.

There are other objections that didn’t make the list which I could discuss at some length in a follow-up post, like dealing with infinities and the extremes of expected value, but I think this gives fairly good coverage of the relative strengths and weaknesses of different anti-Utilitarian principles. One thing I’ve found helpful about making explicit lists like this is that they help in better understanding specific thought experiments concerning Utilitarianism: what makes one feel strong or weak? Take the classic “Utility Monster” thought experiment (originally Robert Nozick’s, but a good account of it can be found here). Essentially, this thought experiment says that if one person gets endless pleasures from policies that bring about the misery of everyone else, then, provided this one individual gets deep enough pleasures, they could be justified in being the sole beneficiary of policy.

In the first place, I sometimes see this thought experiment called “anti-consequentialist”. Is this the case? We can look back to Objection 3 to see the problem. The world of the Utility Monster seems unpalatable even if it spontaneously came about. It isn’t bad because someone had the wrong intentions or violated an act/omission policy; if it simply occurred by accident, with no policy or intentional background, it would still be viewed as a natural disaster. An even deeper way to deny this is to ask whether we would violate a non-consequentialist principle to stop this world, for instance if we could prevent it through an act or allow it to come about through an omission. I think most who find this thought experiment compelling would feel justified in acting to prevent it, even at the cost of some principle of personal action. This indicates that, whatever intuition leads us to reject it, this thought experiment is in large part a consequentialist objection to Utilitarianism.

Likewise, someone might think that the problem is that the Utility Monster favors one individual over very, very many, but Objection 5 should make us suspicious of this. When inhabiting these sorts of trades, we tend to have clearer reasons to favor strong individual interests over highly diffuse ones. Things are made clearer when you swap the Utility Monster from an individual gaining great pleasure from the lesser suffering of others to an individual undergoing great suffering for the lesser pleasure of many others. In this case, if we think about it for a bit, I think most of us will feel that benefiting this individual greatly by reducing their suffering is worth harming very many others by denying them their lesser pleasures. Arguably, the main reason it is hard to see the thought experiment on these terms is that a Utility Monster is weird, and any policy that favors one individual over everyone else is probably bad in the real world. When inhabiting the case seriously and reflecting on its nature, the Utility Monster is arguably just an obfuscated and unusual version of the pleasure/pain asymmetry, in denial 5. Given this, our response to it, at most, should be something like negative Utilitarianism. Put this way, it is not the forceful argument it seems at first glance. Meanwhile, other thought experiments are more effective under scrutiny, for instance those that combine Objections 4 and 5 (something like Ursula K. Le Guin’s story “The Ones Who Walk Away From Omelas”).

Utilitarianism seems to get at things that really matter, that it is hard to deny matter, and that matter regardless of vaguer philosophical ideas like free will and personal identity. I first became a Utilitarian about a decade before even learning the word or taking a philosophy class; after reflecting on what matters in morality and what doesn’t, it was inconceivable to me that anyone could come to any other conclusion about morality if they had thought about their values deeply. I am not alone in this experience with Utilitarianism, as indicated by this Bertrand Russell quote:

“It appeared to me obvious that the happiness of mankind should be the aim of all action, and I discovered to my surprise that there were those who thought otherwise. Belief in happiness, I found, was called Utilitarianism, and was merely one among a number of ethical theories. I adhered to it after this discovery.” (Russell)

And this Brian Tomasik quote:

“Later, in Spring 2005, I heard the word ‘Utilitarian’ and didn’t quite know what it meant, so I looked it up. I was delighted to discover that there was a name for the philosophy of cost-benefit analysis applied to happiness and suffering that I had been following for the last few years.” (Tomasik)

What has done the most to convince me that other people have fundamental reasons for rejecting this theory is the existence of serious principled arguments like these, and I find, as I think is also a fairly common experience, and one captured to an extent in Le Guin’s story, that I am more often persuaded to move further from confident Utilitarianism than closer to any other proposed theory, especially non-consequentialist ones. If I leave you with any takeaway, it is that much of ethics seems to me to be about either embracing or fleeing this theory, and that this is a reason to take it very seriously. But it is also a reason for those of us more keen on embracing it to be cautious, not to be extremists, and to consider why so many people walk away from Utilitarianism.

Ed. Note: Some of these concerns may be lessened by acting with long-term probability in mind: perhaps you’d make a habit of following a rule, even one that is slightly worse than the locally best action, if you expect long-term benefit. So Utilitarian island might take in most people, but publicly kill widely-hated criminals to prevent them from escaping to the island. Then Hitler would have less reason to kill lots of people, making the trade-off worth it (depending on a given situation’s conditional probabilities).  ↩︎

Ed. Note: This is why I, the editor Nick, am interested in the scientific study of consciousness . Pleasure and pain are factual questions, and their moral weightings may turn out to be sensibly drawn from the structure of feeling beings. Also, mathematics can provide ways for us to go from “desired attributes of a function” (like that high enough suffering can’t be outweighed just by lots of pleasure) to “the function itself”.  ↩︎

Ed. Note: Not that some people aren’t thinking hard about this.  ↩︎

Ed. Note: And, of course, aggregation complicates any mathematical constructions that could help us weight pleasure and suffering.  ↩︎

Ed. Note: Also, such a scale of feeling only makes sense with drastically different types of feeling entities, again necessitating the study of consciousness, qualia, and feeling themselves.  ↩︎

If you enjoyed this article, help us write more by donating to our Patreon .
