Cartesian’s Nazi Example

Posted in Desirism, Ethics, Morality, Philosophy | 3 minutes | 25 Comments

Months back on CommonSenseAtheism.com, commenter Cartesian offered the following argument against desirism, and I felt it was appropriate to repost it here on the promise that I find the original link. I assure you I copied it verbatim, but I agree that it’s more professional to cite primary sources wherever possible, so I will track the link down. There’s also a technical reason I want to post it here instead of at CSA: I’ve noticed that links to individual comments don’t work. That is, even when I use the direct URL for a particular comment, the post still loads with the page at the top, which forces readers to search for the quote themselves. For this reason, it’s good to cite sources numerically at CSA, e.g., “So-and-so’s fifteenth comment on post X…”

At any rate, here’s Cartesian’s Nazi example:

Suppose the Nazis had killed or brainwashed anyone who disagreed with them, and succeeded in conquering the world. They keep a handful of Jewish people around in zoos, just to torture. Suppose the most popular television show in Naziland features ordinary Nazis — selected by lottery from among the Nazi population — torturing these Jewish people just for fun. The billions of Nazis in the television audience absolutely LOVE it. It’s like American Idol to them. They look forward to it all week. It’s what they want most in life: to see those Jewish people tortured. These Jewish people are kept in a pretty sorry mental state (due to nearly constant torture, and perhaps even some drugs), so that each of their desires not to be tortured is weaker than each of the Nazis’ desires to torture them.

You and your friend Jerk live in Naziland. Jerk is a typical Nazi: he really badly wants to win the lottery so he can appear on this television show and torture some Jewish people. You, on the other hand, don’t. You’ve done some thinking lately, and you’ve concluded that torturing people just for fun is awful, and you want no part of it. (Naturally, you keep these opinions to yourself, for fear of being taken in for “re-education.”)

Clearly, in this situation, your desire is good and Jerk’s desire is bad. But, in this situation, only Jerk’s desire tends to fulfill more and stronger desires than it thwarts. (His desire, if satisfied, would fulfill the very strong desires of billions of blood-thirsty Nazis, while thwarting the weaker desires of only a few Jewish people.) Your desire, however, actually tends to thwart more and stronger desires than it fulfills. So, according to desirism, *your* desire is bad and *Jerk’s* desire is good.

But that gets things exactly backwards. So desirism is false.

I agree wholeheartedly, and submit that this is an apt explanation of why good must mean something more than “tends to fulfill the desires in question,” or “tends to fulfill more than thwart other desires,” or however else one wants to phrase it.
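
To make the bookkeeping behind the objection explicit, here is a minimal sketch (in Python) of the naive “fulfill more and stronger desires than it thwarts” tally that Cartesian’s example trades on. The head-counts and strength values are hypothetical placeholders, and the weighted-sum rule is a simplification of my own, not a formula Fyfe or Luke has put forward.

    def net_effect(fulfilled, thwarted):
        # Each entry is (number_of_people, desire_strength); the score is the
        # weighted sum of desires fulfilled minus the weighted sum thwarted.
        return sum(n * s for n, s in fulfilled) - sum(n * s for n, s in thwarted)

    # Jerk's desire to torture, if acted on (hypothetical counts and strengths):
    jerk = net_effect(
        fulfilled=[(2_000_000_000, 10)],  # billions of Nazi viewers, strong desires
        thwarted=[(10, 3)],               # a handful of captives, weakened desires
    )

    # Your desire that the torture stop, if acted on:
    you = net_effect(
        fulfilled=[(10, 3)],
        thwarted=[(2_000_000_000, 10)],
    )

    print(jerk, you)  # Jerk's tally is hugely positive; yours is hugely negative.

On this naive reading, the arithmetic does exactly what Cartesian says it does, which is the point of the example.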


25 comments

  1. Hendy

     says...

    The original occurrence is HERE. Search for “cartesian” and the quote from above is the first occurrence. I couldn’t seem to find a way to directly link to the comment or I would have…

    I think the key to desirism is its assumption of universally malleable desires. It not only looks at the current majority picture but also at possible alternative pictures. For example:

    Population
    – 1MM torturous Nazis
    – 10 Jews being tortured
    – Jerk, a torturous Nazi (say he’s one of the 1MM)
    – You, the one Nazi who does not like torturing

    While I’m fuzzy on exactly how this works, let’s take two scenarios:

    Path 1: Continue torture routine
    – 1MM strong desires fulfilled (the Nazis)
    – 10 weak desires thwarted (the Jews)
    – 1 strong desire thwarted (yours)

    Path 2: No one desires torture
    – 10 weak desires fulfilled
    – 1 strong desire fulfilled
    – No desires thwarted

    Am I seeing this completely wrong? I listened to Luke’s e-book on morality and recall his example of the rapists. If we only consider that rape is a fixed desire, then your objection absolutely holds.

    But Luke is considering that we can actually remove (i.e., “turn down”) everyone’s desire to rape, all the way down to 0. In that case, no one’s desire not to be raped is thwarted and no one’s desire to rape is thwarted.

    If that were the case above… then I think it essentially comes down to which is better: 1MM strong desires fulfilled and 11 thwarted, or 11 fulfilled and none thwarted…
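
    A quick tally of those two paths, using a naive count-times-strength score (the strengths below are made-up numbers, and the scoring rule is just a rough simplification on my part, not anything Fyfe or Luke has spelled out):

        def score(entries):
            # Each entry is (number_of_desires, strength); return the weighted sum.
            return sum(count * strength for count, strength in entries)

        # Path 1: the torture routine continues (hypothetical strengths).
        path1 = score([(1_000_000, 10)]) - score([(10, 3), (1, 10)])

        # Path 2: every desire to torture is "turned down" to 0, so nothing is thwarted.
        path2 = score([(10, 3), (1, 10)]) - score([])

        print(path1, path2)  # Path 1 dwarfs Path 2 on this naive tally.

    On that naive tally the first option comes out far ahead, which is what makes the question above worth pressing.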

    A wholly separate objection to this is that even if Luke considers desires the only objective basis for morality… does he go too far in naivety by assuming that these desires can simply be changed/adjusted/turned down to 0? In current society, we simply imprison for actions resulting from desires, not desires themselves. The punishments often do not seek to eliminate the desires, but only to demonstrate consequences that will hopefully override the desires the next time (assuming the sentence is less than life).

    But what if there are no punishments in society because the society wants to do what the 1MM want done… who’s going to enact the utilitarian principle which suggests eliminating all of those desires?

    Lastly, this quote from p. 27 of Luke’s e-book is an interesting one to ponder: “And if everybody desired to be surrounded by deafening noise, then it would be morally right to carry a blasting boombox everywhere you went. In this sense, morality is subjective.”

    Given this quote, it seems that either of the following situations is morally equivalent:

    – turn down 1MM desires to torture so that no desires are thwarted
    – turn down 11 desires not to be tortured so that no desires are thwarted

    In that case, the second is actually better because 1MM desires are fulfilled (and strong ones!) and none are thwarted vs. simply having none thwarted.

    ‘Nuff from me.

  2. Hendy

     says...

    I tried this once already and had a ton written but it hasn’t shown up in 2 weeks so we’ll start slow…

    The post you want is HERE. I count about 30 posts down. Or just search the page for “cartesian” and it’s the first hit.

  3. cl

     says...

    Hendy,

    Sorry about the delay. WordPress flagged your comments as spam.

    That said, where to begin? I’m glad you commented, because I need a sounding board just as much as the next writer. We seem to be in agreement that salient questions remain.

    I think the key to desirism is its assumption of universally malleable desires.

    Different people argue desirism differently, but in my experience, the general consensus is actually that non-malleable and malleable desires exist. If by “assumption of universally malleable desires” you mean that only malleable desires are said to exist, I’d have to disagree – but it’s really a minor detail as far as the discussion between us is concerned.

    If we only consider that rape is a fixed desire, then your objection absolutely holds.

    I think my objection to defining good as “tends to fulfill other desires” holds either way, but I’ll leave that alone for now.

    …Luke is considering that we can actually remove (i.e., “turn down”) everyone’s desire to rape, all the way down to 0. In that case, no one’s desire not to be raped is thwarted and no one’s desire to rape is thwarted.

    I understand. This is the “turn down the knobs” technique. Questions remain:

    1) What happens when the desire being evaluated isn’t such a flagrant moral offender as rape? To include an example from the real world, just this morning I saw an article describing legislation that would ban toys in value meals unless fast food restaurants make them healthy for kids. In a case like that, who’s to say which desire ought to be turned down to zero? Can such be done without the brute imposition of one value system over another? Can such be justified?

    2) Depending on which desirist you’re listening to, we’re to consider either all desires that exist, or all the affected desires in any given evaluation. In Cartesian’s example, removing the Nazis’ desire to torture doesn’t actually result in 0 desires thwarted – except in the isolated consideration of the desire to torture / not be tortured. If we remove the Nazis’ desire to torture, we actually thwart all their affected desires, too, because the fulfillment of those desires depends on the fulfillment of the desire to torture. In Cartesian’s example, the society is so morally corrupt that its entire worldview and culture is founded upon the desire to torture, so this result would seemingly follow whether we consider all desires that exist, or only the affected desires.

    In current society, we simply imprison for actions resulting from desires, not desires themselves.

    That’s absolutely correct, and it echoes another one of my objections: desires don’t affect others unless acted on. This is far more problematic for Luke than I believe he realizes. It’s actually a point of glaring inconsistency. Now, I don’t have a link immediately available for this, but I can promise you that Luke has criticized the Bible for promoting “thought crime,” as many atheists do [e.g., Jesus in the New Testament states that even thinking about adultery is like doing it]. Yet, since desires are thoughts, when the desirist promotes the condemnation of certain desires, how is desirism NOT an endorsement of thought crime?

    …what if there are no punishments in society because the society wants to do what the 1MM want done… who’s going to enact the utilitarian principle which suggests eliminating all of those desires?

    Yes, yes, and yes. I call this the “problem of brainwashing” and submit that desirism [as currently formulated] is wholly defenseless against it.

    In that case, the second is actually better because 1MM desires are fulfilled (and strong ones!) and none are thwarted vs. simply having none thwarted.

    I agree. It’s also better because it’s easier. Why do you think it is that desirists generally seem to argue for the oppressed minority in these types of examples? I honestly believe it’s in response to a moral intuition that things like rape and torture are wrong. Think of all the time, money and resources a society would need to remove 1MM desires, when they could just remove the desires of the 10 being tortured. What prevents the desirist from pursuing the pragmatic course of action here, if not their own moral compass?

    Lastly, this quote from p. 27 of Luke’s e-book is an interesting one to ponder: “And if everybody desired to be surrounded by deafening noise, then it would be morally right to carry a blasting boombox everywhere you went. In this sense, morality is subjective.”

    Yet, Luke and Fyfe have criticized me for ‘misunderstanding’ theirs as a theory of majority rule / survival of the fittest. If what Luke said above is true, it follows that if everybody wanted to murder, then it would be morally right to murder. That’s so absurd it makes me feel like I might be wasting my time, but I can’t believe how many ostensibly skeptical people seem to nod along in agreement. If you ask me, that sort of thinking is downright scary.

  4. woodchuck64

     says...

    If what Luke said above is true, it follows that if everybody wanted to murder, then it would be morally right to murder. That’s so absurd it makes me feel like I might be wasting my time, but I can’t believe how many ostensibly skeptical people seem to nod along in agreement. If you ask me, that sort of thinking is downright scary.

    If everybody wanted to murder, that would be a race of beings “homo murderous”, not “homo sapiens”. For homo murderous, would murder be moral? I think it probably would be, but not under desirism.

    Similarly, Cartesian’s Nazi example doesn’t seem to describe homo sapiens, either. It would be impossible to torture people in zoos for long because it would too strongly violate our instinctive biological empathy. Too many people would simply do what comes naturally and empathize with the sufferers, to the eventual breakdown of the whole system. The example is only compelling if we either imagine a species “homo torturus” who loves torture as much as we love empathy, or human beings that have themselves been emotionally mutilated in childhood by, say, government-mandated “education”, which would then be another form of desire thwarting that must be taken into account. For homo torturus, though, torture could well be moral, but not under desirism.

    Desirism says that promoting desires that tend to fulfill other desires and discouraging desires that tend to thwart other desires is the best way to fulfill one’s own desires. If this is not empirically, biologically supported then it fails from the start. For homo murderous and torturus, desirism as a prescription doesn’t work, as far as I can tell.

  5. Hendy

     says...

    @cl:

    Thanks for the response. I’ll have to conduct a longer one when I get home…

    For now on your last statement, it would be interesting for Luke/Alonzo to comment on whether the existence of one person who didn’t want to listen to loud music would change things…

    @woodchuck:

    I can see your point, but I also don’t think you grasp the exercise. The point isn’t to think up a necessarily plausible scenario and then go from there… ethical systems are constantly tested with implausible scenarios to see how they hold up to the absurd.

    This is similar to testing epistemologies by asking how one could deal with situations in which the world was created 5 min ago, in which we’re in a matrix, or in which every time you leave a room, it disappears. Are those situations likely? Heck no! But it’s via the absurd and worldview-stretching exercises that we test their limits of usefulness and value.

  6. woodchuck64

     says...

    Hendy:

    The point isn’t to think up a necessarily plausible scenario and then go from there… ethical systems are constantly tested with implausible scenarios to see how they hold up to the absurd.

    Implausible is fine. What I’m objecting to is going beyond the defined scope of desirism. For example, in criticizing Divine Command Theory, I can’t offer the objection that God’s commands could be cruel and dishonest because that is not allowed by the definition of “God”. Similarly, desirism does not apply to beings that are not at least normatively human.

    If a mutation occurred in the human race that effectively eliminated empathy over a few generations, desirism would become completely invalid/irrelevant as a moral theory (both descriptive and prescriptive). In a similar way, Cartesian’s Nazi example relies on people with mutilated empathy, and, if that change happened without human agency, desirism no longer has much meaning or validity for that society. But meanwhile, desirism remains just as valid for homo sapiens (so far as I can tell).

  7. Hendy

     says...

    @woodchuck: I guess I just disagree. I don’t think it’s all that farfetched. Take whatever wholehearted, killing-hungry Nazis did exist and kill off everyone else in the world. You’d have the most twisted humans alive comprising the world population.

    It’s not normative, but not impossible. People could exist, in my opinion, to make this work even though it’s very unlikely.

    I think the same scenario would be used to justify DCT as potentially better because of the reference to a non-human-based standard for action. Even if the above situation occurred… right/wrong wouldn’t stem from the humans but from without and thus the torturers could be condemned (even if no one was able to bring about justice since they were the majority).

  8. Hendy

     says...

    @cl:

    Re. your criticism of thought crime… I don’t know if you saw, but Luke put up a post today about thought crime!

  9. woodchuck64

     says...

    Hendy:

    I guess I just disagree. I don’t think it’s all that farfetched. Take whatever wholehearted, killing-hungry Nazis did exist and kill off everyone else in the world. You’d have the most twisted humans alive comprising the world population.

    I’m not certain what you’re disagreeing with. Do you agree that if a mutation occurred in the human race that effectively eliminated empathy over a few generations, desirism would become completely invalid/irrelevant as a moral theory (both descriptive and prescriptive)?

    If you agree with that, then I’m pointing out that the Nazi scenario is similar: the human race is essentially culled of all but a few human beings with limited or absent empathy and thus no longer has the biological properties as a whole that make desirism logical or compelling as a moral theory.

    If you disagree, you would be effectively arguing that to be valid as a moral theory for the human race today, desirism must also work for just about all conceivable social (and possibly non-social) organisms as well, regardless of behavior and biology. If that’s your argument, I would readily agree that desirism falls short. But I would then argue that desirism isn’t supposed to work for bees, wasps, prides, wildebeests, chimpanzees, bands of roving sociopaths, etc. but only for homo sapiens. Getting agreement that it is valid for homo sapiens would be really all I would ever hope for.

    (I think DCT is clearly better than desirism in theory. It just isn’t practical for people who aren’t Christians.)

  10. cl

     says...

    woodchuck64,

    I tend to agree with Hendy that Cartesian’s example reflects a very real potential for the species of homo sapiens. Sure, the world isn’t like Cartesian’s example right now, but it wouldn’t take much to envision a state of affairs where something like Cartesian’s example could arise in our world, today. A small group with access to nuclear weapons could effectively start things in this direction.

    OTOH, I agree with you that a state of affairs where every person wanted to murder seems implausible. Yet, so is a state of affairs where everybody wanted to blast loud music out of a boom box all day, right? The point of my example was to counter Luke’s: that everybody desires X does not make X moral. Though Luke didn’t say exactly that, without some emendations, it’s hard to imagine what else he’s saying. Besides, as Hendy implied, something like a “murderous majority” is not that far-fetched.

    I’ve heard a decent handful of people object to making these types of hypothetical evaluations to test desirism, but I agree with Hendy that we need to concoct all sorts of trials in order to test a moral theory. I believe we should evaluate both hypothetical situations and real-world scenarios, and we can come up with instances of both that directly confront the desirist hypothesis.

    I also agree with you that DCT is better in theory, yet, just not practical for everybody.

  11. Hendy

     says...

    @woodchuck:

    I see your point. Honestly, I think it’s primarily a philosophical party trick that people have simply developed in order to examine things under odd conditions to see if it holds up. If it does, it’s supposed to be a better theory.

    However, I agree with you that if it’s not really pragmatic or realistic… what does the hypothetical mind-f*ck tell us?

    I think the same for “evil demon” problems. The main purpose I have seen for the scenarios in which the world was created 5min ago, we’re brains in a vat, the room goes away when I leave, and things like this… is simply to give foundationalists or reformed-epistemology subscribers the freedom to hurl insults of self-refutation and circularity upon rationalists, empiricists, and the like. So, on one hand… they’re right: epistemology along these lines can’t exactly verify itself and can’t allow one to be certain that we’re not in an “evil demon world.”

    But the solution? Invent an epistemology in which first principles or presuppositions are allowed or justified. Really? Isn’t that simply formalizing what we already do anyway? The whole problem is designed so that there’s no way to tell the difference between a real world and a matrix. At the end of the day, then, we just live like how it seems: the real world.

    Thus, for the morality problem, is the Nazi clan as sole survivors on the planet any more/less likely than the evil demon matrix? Probably not! It’s just an exercise for tinkering to see what one gets. We want a perfect moral system that holds up under these circumstances, however, because such a one is considered more bomb-proof.

    DCT could be better, but I see it suffering from the Euthyphro dilemma as well as transference limitations (great, what god says goes… does anyone know what god’s telling us about situation x?). In theory, yes, if we had a being telling us precisely what was good because he/she/it defined the good… we’d be all set. But as far as I can see, we don’t have anywhere near this level of specification, or even certainty, among those professing belief in the same being with the same set of instructions (holy book). Therefore, in practice I don’t currently see DCT as excelling.

  12. woodchuck64

     says...

    cl:

    Sure, the world isn’t like Cartesian’s example right now, but it wouldn’t take much to envision a state of affairs where something like Cartesian’s example could arise in our world, today. A small group with access to nuclear weapons could effectively start things in this direction.

    Sure, but then I would argue that in such a world desirism needs to be thrown out because it just won’t work. To be compelling, desirism needs people who aren’t emotionally mutilated, who aren’t empathically scarred beyond recovery.

    But back in the real world, desirism remains compelling to me as the best moral system for homo sapiens — emotionally healthy and empathic for the most part. Isn’t Cartesian’s example supposed to make me feel differently somehow?

    Hendy:

    We want a perfect moral system that holds up under these circumstances, however, because such a one is considered more bomb-proof.

    Well, maybe this gets to the heart of my confusion. It seems to me a moral system defined independently of the behavior/biology of social organisms is impossible. A moral system can only reflect, or be defined in terms of, the essence of the beings it is meant for; it can’t stand apart or be measured any other way. Change the essential makeup of the beings, and the moral system must change as well; the old becomes meaningless. Does that make sense?

  13. cl

     says...

    woodchuck64,

    …back in the real world, desirism remains compelling to me as the best moral system for homo sapiens — emotionally healthy and empathic for the most part.

    Okay, you seem to have a particular disdain for hypothetical evaluations. What is it that makes desirism compelling to you? When I look at it, I see a theory that allows for a high margin of error, and I hear writers like Luke and Fyfe who avoid tough questions. Do you really believe that X is good whenever X “tends to fulfill other desires?” Don’t you see that as overly simplistic, perhaps to the point of uselessness? I do.

    Isn’t Cartesian’s example supposed to make me feel differently somehow?

    Well yeah, but it doesn’t work when you just declare it nonsense because you don’t believe humanity could ever become that depraved. You seem to suggest that the truth of the theory is irrelevant, and that we only need something that works.

    To each their own, I guess.

  14. Hendy

     says...

    @woodchuck:

    I hear you and think that’s reasonable. The question is “Is the system useful for humans as we know them?”, not “Is the system useful for some hypothetical situation comprised of beings with arms, legs, and a head who don’t function like any humans we know of?”

    I hear you. I’ll have to ponder that. Like I said, I think it’s probably got a lot to do with how academia has pursued pushing these things to the limits, not about its actual usefulness.

    @cl:

    I hear you, as well, about wanting to know where the systems work. Perhaps you could create a post with, say, three moral questions in the form of “Is X permissible?” and then evaluate the three scenarios with DCT, Desirism, and Error Theory?

    Just an idea.

    I’ve really been wanting to re-listen to Luke’s “What is Morality” talk, as I almost swear there was something circular or unclear in it. I kept trying to grasp the definitions, and I could have sworn that on the surface no desire is good/bad — they just exist — yet when digging down, what you are supposed to do is what a person with “good” desires would do, and those are the desires that thwart the least and fewest while fulfilling the most and best.

    But while you can grapple with “most/fewest,” I don’t think I ever heard a definition of “least/best.” It made me curious if there was some kind of unwritten hierarchy that went unstated…

  15. woodchuck64

     says...

    cl:

    Okay, you seem to have a particular disdain for hypothetical evaluations.

    Not at all; as you say, a nuclear holocaust could make the Cartesian Nazi scenario plausible in some form. I’m not saying it’s silly to pose hypotheticals; I’ve said that desirism, as I understand it, is defined to fail in such scenarios. So it’s not clear to me what is achieved by posing them.

    but it doesn’t work when you just declare it nonsense because you don’t believe humanity could ever become that depraved.

    I’m not dismissing the scenario as nonsense; rather, I’m saying desirism is, in all likelihood, no longer a valid approach if humanity becomes that depraved. If Nazis truly desire to torture other human beings just like them solely because they have an arbitrary label “Jew” applied (and not for some incredibly heinous crime, say, or not as subhumans or animals), desire evaluation breaks down: following desires that best fulfill desires is just as likely to result in you being tortured — a sort of self-destructive society where chaos is good and stability is bad. (Note that I’m taking from the example that Nazis believe that Jews are people and are innocent, or one could simply object that desires predicated on false information should not be maximized).

    Do you really believe that X is good whenever X “tends to fulfill other desires?” Don’t you see that as overly simplistic, perhaps to the point of uselessness? I do.

    “Tends to fulfill desires” does sound simplistic, but relative to what? I would put it instead as: X is a better action than Y if X tends to fulfill more desires than Y. That to me is compelling because it seems to work in the majority of cases, while positing only the existence of biological drives and the premise that individual desires are best met in a society where desire fulfillment is maximized and desire thwarting is minimized.

    My understanding of desirism may be oversimplified but it’s something like: “I want. We all want. If we can find a way to get what we want while minimizing hurting each other, we’re all better off. I’m better off.” To me, that seems a useful approximation to reality (in the absence of supernatural revelation).

    And I should add that I’m still in the process of fully grasping objections to desirism, so I’m looking forward to your 12 objections post.

  16. Hendy

     says...

    @woodchuck

    My understanding of desirism may be oversimplified but it’s something like: “I want. We all want. If we can find a way to get what we want while minimizing hurting each other, we’re all better off. I’m better off.”

    I don’t know if that’s quite how it works. Hurting, I think, is lumped in simply with the other desires. In Luke’s downloadable EBOOK, for example it states clearly that:

    No desire is intrinsically better or worse than any other desire, because intrinsic value does not exist (p. 24).

    Given that, I’d say it’s just a matter of how to bring about all of our desires while thwarting as few as possible; pain is just one of those desires.

    This might help transition to the Nazi example. Desirism is interesting in that it contradicts our typical response that hurting is simply “bad” no matter what. If such a scenario happens, desirism very well could say that the Nazis should continue torturing because the most desires are fulfilled. It doesn’t matter that all of us say, “But that poor victim!” because a majority of desires to protect him or alleviate his suffering simply aren’t present.

    Perhaps another way in which you don’t have to stretch things quite so far is to look at scenarios like this one but not quite so implausible/impossible.

    cont…

  17. Hendy

     says...

    …cont.

    @woodchuck:

    For example, what about this:

    – 5 hardened criminals are being sent to a different prison via a small prop plane, accompanied by a corrections officer.

    – The plane goes down on an island and the pilot is killed, leaving only the 6 alive.

    – They have a map specifically of that area, updated within days, and the island they landed on is not shown on it, indicating that it is undiscovered.

    – They see ships pass by in the coming weeks and months, but not one gets close enough, and their attempts at signal fires do not work.

    – They find a radio from the crash and are able to pick up some AM frequency on which they hear it announced that the world has given up on finding the plane. Shortly after, the radio dies and they have no more batteries.

    – The waters are infested with piranhas and the island is pure sand and nothing else. The plane is completely sunk and gone. There is literally no hope of leaving.

    Great. Scene set. Quite improbable but perhaps more probable than the entire surviving species being murderers. Here you have only a small subset of individuals, but they are completely isolated from the rest of the population and let’s suppose that the evidence they have provides certainty that they will never leave the island and will die there.

    The criminals hate the corrections officer and want to torture him, kill him, and eat him. The officer obviously does not want this. No one but them will ever know what happens, and thus the rest of the population is oblivious; its desires cannot be thwarted or fulfilled. All will die regardless, whether by starvation or at the hands of another.

    Is it permissible to kill him? It’s 5:1 in favor of torture/murder. For them not to means 1 desire fulfilled and 5 thwarted. To give in means 5 fulfilled (in both torture/murder, but also in food) and 1 thwarted.

  18. cl

     says...

    Just a quick one @ Hendy: have you seen,

    Proposed Method For Meaningful Evaluations In Desire Utilitarianism

    &

    Conducting Single-Agent Evaluations With The Hierarchy-Of-Desires Method

    …yet? Since desirism is ostensibly objective, I found it odd that no desirists appear to be attempting actual evaluations. The former represents my best shot [it’s old and needs some reworking, for sure]. The latter is an attempted evaluation given a scenario similar to the one you describe, though I began with a single agent, both to keep things simple and because I believe that something like morality applies even given a single agent.

    Anyways. For what it’s worth. Thanks for the great comments and good-faith participation, by the way. There have been a lot of trolling-type comments over at CSA lately, and I really appreciate that you keep such a level head and pleasant attitude.

  19. Hendy

     says...

    @cl:

    I’ll check those out. I’ve noticed it getting quite heated at CSA! I can get that way, too, but thanks for the compliment. I respect you as well. I’m in the midst of a potential de-conversion that has been going on for about 8 months now, and I frequent several blogs. To be honest, when I first encountered you (probably around my first email to you a couple months back?) either at Debunking or CSA, I recall distinctly thinking, “That’s the theist I should be paying attention to.”

    I find your arguments good, your dedication to logic and the facts at hand excellent, and I respect your conduct as well. You mentioned the “weird feeling in the tummy” thing about the discussion on evil… that’s about what I had in reading your comments initially way back: “Here’s a guy who’s obviously quite intelligent and believes. I hope to follow up on that” would be a ballpark response.

    I look forward to continuing to see you around.

  20. woodchuck64

     says...

    Hendy, regarding hurting being just another desire (to be free from pain), I agree. In my admittedly limited understanding of reality, it seems to be, at base, all about pleasure and pain (pleasure encompassing the joy of experiencing another’s joy, for example, pain including the pain of loneliness) and I see desirism starting from that and building on it.

    (scenario, 5 criminals, 1 corrections officer)…
    Is it permissible to kill him? It’s 5:1 in favor of torture/murder. For them not to means 1 desire fulfilled and 5 thwarted. To give in means 5 fulfilled (in both torture/murder, but also in food) and 1 thwarted.

    Permissible for human society, no, because torture/murder tends to thwart more desires than it fulfills. That is, in studies of human desires, we would conclude fairly easily that this kind of torture/murder is bad.

    If we just measured the desires on that island, we would conclude that this kind of torture/murder is good, but then we’re not supposed to be measuring arbitrary subgroups and reaching moral conclusions that way, according to desirism. Rather, we’re supposed to reach moral conclusions by measuring the general behavior and desires of the species as a whole. That’s why there’s so much reliance on use of phrases like “tends to thwart” and “tends to fulfill”.

    Making a moral judgement on such a limited scope would be like measuring the temperature on a tropical island as a means of determining the global temperature; it won’t give you the correct answer. Desirism is attempting to determine a global value, human morality, and you can’t do that by measuring just one subgroup.

    This highlights an obvious difficulty with desirism in that it may be hard to know when you’ve measured enough desire-fulfilling/thwarting to make a solid moral judgement on a particular action. But the same is true for any measurement of a variable, yet ultimately constrained, property of reality.

    Let’s suppose the island group of criminals and the corrections officer are the only beings on earth. Desirism is introduced to them through a book buried in the sand. What should they conclude? The book insists on using the phrases “tends to fulfill” and “tends to thwart” for good and bad desires respectively, so the group, remembering society and how torture and murder act on desires, again must conclude that torture/murder is wrong according to desirism.

    Finally, let’s suppose the group has lost all memory of human society. “Tends to fulfill” and “tends to thwart” now have no meaning except on this particular island. Now at last the group may conclude torture/murder is good. But here’s the thing: they’re wrong, because they’ve measured a sample size much smaller than the human race and reached the wrong conclusion. In reality, torture/murder does tend to thwart more human desires than it fulfills, and it might be more rational not to torture/murder, with the expectation that, statistically, more desires will eventually be fulfilled for the group by so refraining. (I know the scenario tries to provide certainty of no rescue, but the future is always unknown and this must be taken into account in some fashion.)

    Caveat: this is my current understanding of desirism.

  21. Hendy

     says...

    @woodchuck:

    Great point about remembering the focus on “tends.” I hadn’t thought of that. I was more focused on recreating a Nazi scenario that was not quite as far-fetched as a nuclear event :)

    Very interesting points about whether they recall human society as a whole or not and I agree — if they were to be unaware of the trends in desire fulfilling/thwarting for humans in general and made a decision based only on their small sample size… they would be making a false choice unknowingly.

    Perhaps that is the key to the Nazi example which you have brought up: considering only the Nazi sample size isn’t realistic, since far more than these 1MM used to exist, and they painted a far different picture with their desires. In fidelity to the race’s desires, they should reach the conclusion not to torture.

    In the end, I find that all these scenarios require some type of executive branch, for they all tend to assume that the majority will have a round-table discussion and self-correct, which is probably not true. You have to have someone with some power and force, who is concerned with what is right, to prohibit violators!

  22. mojo.rhythm

     says...

    With respect to the Nazi scenario, it is obvious that everyone here finds the notion of torturing an innocent group of captured Jews for fun morally repugnant. I think we can ask “is this aversion to torturing helpless beings a desire that tends to fulfill other desires?” and I think the answer is a resounding yes. We have good and strong reasons for action to promote this desire.

    The desire to not be tortured is not a malleable desire. It is hard-wired into our nervous system from evolutionary history. The desire to watch someone suffer IS a malleable desire, and a desire that, if gotten rid of collectively, would lead to states of affairs in which more desires are fulfilled than thwarted.

    If all the Nazis were watching a group of helpless Jews being dipped into a bucket of boiling oil (call this [1]), and then all of a sudden a mad neuroscientist shot an electromagnetic pulse at the Nazi crowd, altering their neuronal structure and replacing their desire to watch the Jews suffer with the desire to help them (call this [2]), then under desirism [2] would be an objectively better state of affairs, since this new desire would tend to fulfill greater and stronger desires than it thwarts.

  23. Reidish

     says...

    I think we can ask “is this aversion to torturing helpless beings a desire that tends to fulfill other desires?” and I think the answer is a resounding yes. We have good and strong reasons for action to promote this desire.

    But that doesn’t show it is a “good” desire, assuming desirism is true. Rather, desirism requires that the desire, overall, fulfill more and stronger desires than it thwarts.

    If all the Nazis were watching a group of helpless Jews being dipped into a bucket of boiling oil (call this [1]), and then all of a sudden a mad neuroscientist shot an electromagnetic pulse at the Nazi crowd, altering their neuronal structure and replacing their desire to watch the Jews suffer with the desire to help them (call this [2]), then under desirism [2] would be an objectively better state of affairs, since this new desire would tend to fulfill greater and stronger desires than it thwarts.

    But of course this is not a response to the objection. Instead, you are proposing a different example where desirism yields the same answer as our intuitions / neurological hardwiring / whichever moral theory one uses to condemn torture for fun.

  24. Hendy

     says...

    @mojo

    I hear you, but we’re also asking this from the standpoint of a larger polling base than the actual examples provide. If the world consisted only of the Nazis and some Jews… then the answer might actually not be a “resounding yes” to your question.

    I do think the malleable vs. unmalleable point is well made. It could possibly be argued that some have distorted themselves through their environment and experiences so much that the desire to torture is not malleable anymore. Perhaps that’s a stretch, but possible.

    Lastly, your example somewhat relies on a 3rd party of enforcers who do have the right desires to go through and change the malleable desires of the Nazis with technology. But the point is that no such 3rd party would exist… the scenario only contains one person with “good” desires which oppose those of the masses.

    Check my posts above in which I reference Luke’s ebook which says both

    “No desire is intrinsically better or worse than any other desire, because intrinsic value does not exist”… and

    “And if everybody desired to be surrounded by deafening noise, then it would be morally right to carry a blasting boombox everywhere you went. In this sense, morality is subjective.”

    How does your statement that “…this new desire would tend to fulfill greater and stronger desires than it thwarts” reconcile with Luke’s statement that no desires are better or worse than others? Even if you judge by strength or magnitude, you’re using those as the measuring stick of “betterness”… how does this work if there really aren’t intrinsic values in desirism?

    And lastly, the Nazi scenario creates a world like the second quote, in which everyone except 6 wants to listen to “deafening noise.” Is it any different if “deafening noise” happens to be “torture Jews”? Desirism indicates that if everyone wants it, that’s what one should do. Well… in this case, everyone but a few want it… should we not pick up our boom boxes?

  25. cl

     says...

    mojo,

    I think we can ask “is this aversion to torturing helpless beings a desire that tends to fulfill other desires?” and I think the answer is a resounding yes. We have good and strong reasons for action to promote this desire.

    Sure, we do, but we’re not twisted Nazis.

    The desire to watch someone suffer IS a malleable desire, and a desire that, if gotten rid of collectively, would lead to states of affairs in which more desires are fulfilled than thwarted.

    Actually, in the Nazi example, getting rid of the desire to watch Jews suffer would lead to a state of affairs in which more desires are thwarted than fulfilled. Remember, the Nazis’ entire infrastructure, and all the desires that come along with it, depend on torturing the Jews.

    Hendy,

    I do think the malleable vs. unmalleable point is well made.

    I didn’t. The aversion to torture is not a desire, such that it could be classified as a non-malleable desire.

    Is it any different if “deafening noise” happens to be “torture Jews”?

    Well, Luke might try to say that the former is a permissible desire, whereas the latter is an evil desire, and he wouldn’t necessarily need an appeal to intrinsic value to do so. The delineation would be based on the relational effects of one desire to all other desires that exist. However, this wouldn’t absolve Luke of the uncomfortable conclusion that if torturing tended to fulfill other desires, then torturing must be good. I can imagine him thinking something like, “yeah, but that could never be the case.” If that’s the case, then it seems like we’re right back at intuition and intrinsic value.
