Situation A: 12-year-old Emma leads a happy life and gains U > 0 utility.

Situation B: Emma is continually raped and tortured for an online audience of consumers of sadistic child pornography. She loses L ≫ 0 utility. Each audience member gains 0 < ε ≪ L utility.

By the Archimedean property, there is some number of audience members N such that N·ε > L and even N·ε − L > U, namely N = ⌈(L + U + 1)/ε⌉ (since then N·ε ≥ L + U + 1 > L + U). Thus, simply by increasing the number of audience members, B becomes preferable to A. This holds no matter how immense Emma's suffering is or how negligible each audience member's enjoyment.
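The bound can be checked numerically. A minimal sketch, with purely illustrative values for U, L, and ε (the question leaves them abstract):

```python
import math

# Illustrative values only; the question leaves U, L, and eps abstract.
U = 10.0          # Emma's utility in situation A
L = 1_000_000.0   # Emma's disutility in situation B
eps = 0.001       # each audience member's gain, with 0 < eps << L

# An audience size guaranteeing N*eps - L > U
N = math.ceil((L + U + 1) / eps)

assert N * eps > L       # aggregate audience gain outweighs Emma's loss
assert N * eps - L > U   # situation B beats situation A in aggregate
print(f"N = {N}")
```

However small ε is, a finite N always exists, which is exactly the Archimedean property the question invokes.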

This example seems to be a reductio against using aggregate utility as a basis for ethics. Has it been discussed in the literature? Which situation do you find preferable?

user76284
    See Nozick's utility monster https://en.wikipedia.org/wiki/Utility_monster – Alexander S King Jul 28 '17 at 23:19
  • 2
  • There are certainly criticisms of utilitarianism in the literature. For this reason, whoever downvoted this would do well to explain the reason for the downvote, please? – Rodrigo Jul 30 '17 at 23:48
  • 2
    Myself, I have always taken these examples as counterexamples that demonstrate why aggregate utility is not a good model for determining "ethical" behavior. (In this case, defining "ethical" to be the kind of behavior I consider to be ethical today). It simply means that if someone provides an aggregate utility model defending an action, I need to take additional steps to determine if aggregate utility is applicable in that particular case. – Cort Ammon Aug 01 '17 at 23:50
  • Maybe utility shouldn't include purely negative things? – Scott Rowe Oct 18 '22 at 15:55
  • @MarcoOcram It's meant to be repugnant. And finding a scenario "repugnant" is not a reason to close a question. Read the rules. – user76284 Oct 24 '23 at 03:30
  • On reflection I recognise that the question raises a philosophical issue. Please excuse a misjudgement. – Geoffrey Thomas Nov 26 '23 at 17:07
  • @GeoffreyThomas No problem. Can the question be reopened? – user76284 Nov 28 '23 at 04:37
  • There is an implicit assumption in your scenario, that each audience member cannot possibly achieve better utility than ε by doing literally anything else. All you've shown is that net utility in your scenario > 0, but utility is relative, and 0 is not a meaningful benchmark. So, your scenario B is preferable to, say, the situation C where all of the audience members are tortured at −L utility while Emma watches for ε utility, but is not preferable to the situation D where everyone goes out for ice cream at 2ε utility. – Him Jan 11 '24 at 20:50
  • I think that Mill, and probably many other proponents of Utilitarianism, would simply deny that "consuming sadistic child pornography" is the best way to optimize the audience members' utility. Possibly they are driven to this in the way that a heroin addict is driven to shoot up again and again, but this is not the same thing as such activity being utility-optimal. – Him Jan 11 '24 at 20:57
  • @Him The thought experiment is that an agent is confronted with these two possibilities. Changing the conditions of the thought experiment is not germane. – user76284 Jan 12 '24 at 00:43
  • @Him Changing the meaning of utility is indeed one way utilitarians have tried to wriggle out of unpalatable conclusions. But it is no longer utilitarianism, classically understood, if extraneous restrictions are placed on what does or doesn't "count" as utility. – user76284 Jan 12 '24 at 00:47
  • @user76284 "The thought experiment is that an agent is confronted with these two possibilities." Sorry, I took this as meaning that the agents in the scenario have no choice, and an agent outside of the scenario (the thought-experimenter) has the ability to bring either scenario A or scenario B into existence, yes? – Him Jan 15 '24 at 03:44
  • @Him Your last clause is correct. The exact mechanism is irrelevant but, for example, the agent may have a means by which to release Emma from her prison. – user76284 Jan 15 '24 at 21:45
  • @user76284 "release Emma from her prison" - you mean by choosing Scenario A? But this would unsummon all of the video watchers. The ones who were given no choice but viewing. Hardly seems fair to unexist them because of something that they had no choice in. – Him Jan 16 '24 at 00:16
  • @Him I don't understand your comment. – user76284 Jan 16 '24 at 00:50
  • @user76284 in Scenario B, there is an "audience of consumers". In Scenario A, they don't exist. So by choosing Scenario A, the agent chooses their nonexistence. – Him Jan 16 '24 at 01:25
  • @Him "In Scenario A, they don't exist." Says who? – user76284 Jan 16 '24 at 18:02
  • @user76284 well, they're not included in the calculation. Utility is summed over everyone, so since they're not included in "everyone", then they must not be, eh? If they are, then what is their utility in Scenario A? – Him Jan 16 '24 at 18:59

5 Answers


It's a classic and very effective critique of utilitarianism. Le Guin's "The Ones Who Walk Away from Omelas" has been mentioned. Along the same lines, I enjoyed reading Steven Lukes' "The Curious Enlightenment of Professor Caritat".

Olivier5

Utilitarianism, when one tries to do the "math" it assumes can be done, often leads to support for these sorts of very apparent evils. This is well understood in the realm of moral philosophy, and it is one of many problems with using Utilitarianism as a moral methodology. A similar example would be the healthy person who enters a hospital for minor surgery and ends up being dismantled into component organs by the utilitarian doctors running the hospital, to save a dozen dying people who desperately needed transplants. A different kind of problem occurs with summing over infinite futures, where the "benefit" to infinitely many future citizens of a communist utopia was used to justify any horror that supposedly helped achieve that utopia.

So -- problems for Utilitarianism:

  • The metric one should use for utility is often poorly chosen. Happiness/pain has very obvious shortcomings because of its focus on trivial issues. Welfare/harm is potentially a far more useful criterion, but is much harder to characterize. Even this neglects the development of character, which is a key feature of people improving themselves; utilitarian approaches instead tend to produce weak, self-indulgent characters.
  • The numerics it assumes are actually impossible to calculate, for any case, much less summing over multiple options with varying possible consequences.
  • Rating the degree of benefit or harm requires detailed magnitude information about the internal mental states of differing subjects, which is impossible to get in principle.
  • Animals have welfare too; integrating animal welfare into human welfare calculations increases the degree of impossibility of the above two points by orders of magnitude.
  • Summing over the future presumes a degree of knowledge of future outcomes that is also impossible for us in principle.
  • IGNORING future effects, and only evaluating welfare for current entities, leads to some horrible long-term decisions.
  • Causing harm to achieve benefit may be justifiable in some cases, but is clearly not in others. The ignoring of "rights" by utilitarians leads to the sorts of evils that led to this question.

There are similar lists of faults that one can make for Rights ethics, and Virtue ethics, and the various Darwinian ethics.

These criticisms presume that we humans have valid moral intuitions, and that we can use these intuitions to sort among proposed theoretical moral models for making moral decisions. What the criticisms show is that we do not yet HAVE a single fully valid moral theory. Instead, we have multiple reasonably good moral theories, each of which has some potentially gross flaws. To make moral decisions, we should therefore show that the problem a theory is being applied to does NOT fall into that theory's known areas of weakness, or that the theory's recommendation is congruent with those of the other competing "good" theories.

Dcleve

A very similar problem was proposed by Le Guin in the short story "The Ones Who Walk Away from Omelas". I found one really good paper that goes into the issue in depth, but I admit that I don't completely understand it. I think the argument is that a person's consciousness will stop them from having maximum enjoyment of an activity if they know that somebody else is being harmed. Therefore, to maximize the utility of normal, non-sadistic activities, the child must be freed.

A different solution mentioned by the IEP is to introduce a second principle, namely that utility should be equally distributed among everyone. I have never heard this proposal before, and the only source given is Sidgwick's The Methods of Ethics.

E Tam

Rather than getting emotional, I would like to give my analysis in a rational way. Assuming ceteris paribus, situation B would be preferable. But in any legal system, the law is framed so that no innocent person should be punished, even if thousands of guilty or malignant people must go unpunished as a result. Why so? Because the utility calculation given (by @user76284) holds good only in the short term and in isolation; taken in the aggregate of all such crimes, and over the long term, it produces only chaos. [You can analyse the short-term versus long-term utility gained by Germans under the Hitler regime due to his actions; there are many more such examples.] Given the choice, first priority should go to Win-Win over Loss-Win, and obviously over Loss-Loss, no matter how small the gain is. You can get the same result via Nash equilibrium.

Akhilesh
  • Thank you for your response. The legal system and "future crime" are immaterial to the thought experiment. I am curious, however. What makes you think situation B is preferable? – user76284 Jul 31 '17 at 13:10
  • No, I never thought about situation 2, because personally I don't believe in majority bias. But if we consider only utility maximization (thinking in a purely black-and-white way), then situation 2, as you describe it, would be preferable. We also need to exercise judgment, though: such decisions are not black and white in nature but a gray area, which makes situation 1 preferable, as I explained. – Akhilesh Jul 31 '17 at 16:00
  • Would you agree, then, that utility maximization is not always the ethical choice? – user76284 Jul 31 '17 at 18:45
  • @user76284 Yes, I Agree – Akhilesh Aug 01 '17 at 03:40

Aggregate utility has some valuable applications, like progressive taxation: the idea that higher incomes should be taxed at a higher rate, because they are mostly spent on luxury goods, so that lower-income people don't have to sacrifice essentials in order to pay taxes.

Here we see a small sacrifice imposed on a minority (no third mansion or fifth sports car) while the majority enjoys a substantial gain (being able to afford shelter, food, and clothes), with an intended net positive aggregate value.

Progressive taxes are applied and generally accepted across the OECD, and when contested, it's mostly with other arguments themselves based on aggregate utility, like trickle-down economics.
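The utilitarian case for progressive taxation rests on the diminishing marginal utility of income. A toy sketch, assuming logarithmic utility (a standard but by no means mandatory modelling choice, and illustrative numbers only):

```python
import math

def utility(income: float) -> float:
    """Toy utility: logarithmic, so marginal utility falls as income rises."""
    return math.log(income)

rich, poor = 1_000_000.0, 20_000.0
transfer = 50_000.0  # tax taken from the rich, redistributed to the poor

before = utility(rich) + utility(poor)
after = utility(rich - transfer) + utility(poor + transfer)

# The rich lose little utility (a fifth sports car) while the poor gain a
# lot (shelter, food), so the aggregate rises.
assert after > before
```

The same arithmetic, applied to Emma's case with the signs the question stipulates, is what produces the repugnant conclusion, so the difference cannot lie in the aggregation itself.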

So what is the difference with your - obviously unacceptable - example?

I don't think it comes from the ability to measure utility. Although the monetary values of tax payments can be compared more easily than the utility variations for Emma and the sadists (*), the actual utility gained from spending that money poses the same problem: how does my third mansion compare to your having food on the table every day?

There is of course the disgust, the moral intuition most people would have, that what happens to Emma, or to the surgery patient in Dcleve's example, is wrong. But utilitarianism aims precisely at offering a rational approach to decision making that does not rely on this feeling, so the argument is inoperative in this context.

Probably the main issue is that very few people would accept being in Emma's position in your situation B. The same goes for Dcleve's surgery thought experiment. Although I can perfectly see myself in the position of having two mansions and begrudgingly giving up on the third one, I definitely refuse to be raped all my life or dissected alive, however great the benefit to others could be.

If I can't see myself or my loved ones on the losing end of the bargain, then by either Kant's categorical imperative or social contract theory I can't expect others to take this place for my sake. Or, in reverse, approving of Emma's situation would put us in a position where we would have to agree to take her place. Hence our reluctance toward this scenario.

So although utility aggregates can be a valuable tool for arguing that a given maxim can be made universal (Kant) or that a given policy can obtain a consensus (social contract), by themselves they are not enough to account for our moral sentiments. We have to assume an underlying moral basis that helps us establish whether the sacrifice is acceptable.

(*) nice punk rock band name, though

armand