
Can a self-sacrificing altruist (SSA) want to die? Or does it immediately follow that an SSA who wants to die is not an SSA?

user128807
    There is no necessary connection between people's convictions and what they want. So nothing follows from one about the other, immediately or otherwise. – Conifold Sep 01 '21 at 05:39
    Is the substance of your question whether an action that meets the agent's own desires can still count as altruistic? Or are you specifically interested in death-drive-achievement as a class of action? (If so, you might make progress by moving the focus of the question towards actions rather than this "SSA" agent!) – Sofie Selnes Sep 01 '21 at 06:44
    I presumed the question was “are the goals of an SSA consistent with a means/goal of seeking to die”. Bet that's right. – Al Brown Sep 01 '21 at 07:29

2 Answers


In logic, all contextual conditions must be expressed explicitly in the proposition. The solution is simple; see below.

But in natural language, a lot of conditions are implicitly assumed. If I say "see you at the obelisk", it doesn't mean that you should climb to the top or go inside. Depending on the circumstances, we (you and I) assume that we're not talking about a video game, a climbing contest, etc.

Then:

LOGIC:

Strictly, an SSA follows:

A) (if) Altruist-purpose (then) Go-die-immediately

which is not the same as

B) Go-die-immediately

So, from a strict logical perspective, the answer to your question would be NO. An "(A) that is (B)" (as per your second question) is just a (B): setting aside the ill-formed phrasing ["an (A) that's a (B) is not an (A)"], (B) alone is not (A).
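To make the strict-logic point concrete, here is a minimal truth table in Python (my own sketch, not part of the original answer; P stands for Altruist-purpose and Q for Go-die-immediately). The conditional P -> Q can be true while Q is false, so holding (A) does not entail (B):

```python
from itertools import product

# P = "has an altruist purpose", Q = "goes to die immediately".
# The material conditional P -> Q is false only when P is true and Q is false,
# so it can hold while Q itself is false: (A) does not entail (B).
for P, Q in product((True, False), repeat=2):
    implies = (not P) or Q
    print(f"P={P!s:<5} Q={Q!s:<5} P->Q={implies}")
```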

LANGUAGE:

So, from a linguistic viewpoint, if John calls himself an SSA but isn't doing anything altruistic, he clearly doesn't want to die. But if he calls himself an SSA and is protesting in Afghanistan in the name of allow-bob-sponge-on-tv (an altruistic goal, albeit a superficial one), to the point of heavily risking his life every day, he might well want to die.

So, the answer is: it depends on the context, YES xor NO.

RodolfoAP

The comments and other answer seem to be overcomplicating the question. I don’t have a great answer, but at least (I think) I know the question and what an answer would look like.

I presume the question is approximately “are the goals of an SSA consistent with the means/goal of seeking to die”. That means I treat any distinction between wants and a philosophy of life as unintentional and set it aside.

Obviously it immediately boils down to whether the subject believes that dying would benefit others most or that living would. I can see scenarios where he could go either way. Some people are convinced that overpopulation is the world’s biggest problem. Even among that crowd, though, it’s hard to imagine someone thinking that the elimination of one SSA would be a net gain. He could instead reduce his footprint or whatever effect he believes is most significant.

Of course there could be individual circumstances that would apply, like his being unable to take care of himself or needing very expensive ongoing medical treatment.

Overall, it seems unlikely. Heck, even if he thought the world would most benefit from the elimination of people in an almost mathematical, headcount sort of way, couldn’t he eliminate more people by staying around and dissuading procreation, arguing for euthanasia, or going on a killing spree? I guess in the latter case the calculation of harm done to victims would come into play, whereas it would not be part of the calculation regarding his own continued existence.

We could imagine a situation where he feels that living is wonderful, giving value v, but that each added person makes it somewhat less wonderful, by an amount x per person. And furthermore

0 < x < v/N, where N is the world’s population.

In this case, eliminating a random person does more harm than good: the rest of the world gains at most x(N − 1), which is less than the value v of the life lost. The logic does not apply to himself, though, due to his philosophy of life (if I understand SSA correctly): by his own values his own v doesn’t count, so the maximum benefit he could give to others as a whole would be to eliminate himself only.
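As a toy illustration of that arithmetic (the numbers below are mine, invented for the example; they only need to satisfy 0 < x < v/N):

```python
# Hypothetical values: v is the worth of one life, N the population,
# x the crowding cost each person imposes on every other person.
v = 100.0
N = 1_000
x = 0.05  # satisfies 0 < x < v/N = 0.1

# Eliminating a random person destroys a life worth v, while the
# N - 1 others are each relieved of one crowding cost x.
net_random = x * (N - 1) - v
print(net_random)  # -50.05: more harm than good

# The SSA discounts his own v entirely, so by his values eliminating
# himself keeps only the benefit to others.
net_self = x * (N - 1)
print(net_self)    # 49.95: the one net-positive elimination
```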

This is very theoretical and simple, and in particular it ignores extreme behavior changes that would affect each person’s x. Note also that because x < v/N, each additional life is a net gain in this model, so he could instead stay in order to advocate the creation of more people.

I’m going with no. Barring unusual and extreme specifics, he could, by his own values, do more good by sticking around.

Al Brown