Conducting Single-Agent Evaluations With The Hierarchy-Of-Desires Method
We discussed the method and some preliminary objections here. I think the best way to illustrate the method’s strengths and weaknesses is to just dive in and play around with it.
It is my opinion that any moral theory worthy of being considered “the best” should be able to guide both isolated individuals and interactive groups towards the “moral good” at any given time. So, I’ll begin by considering the effects of a particular desire on the affected desires of an isolated individual, in order to determine whether that desire tends to fulfill or thwart the others. My hypothesis is that if desirism’s definition of good is sufficient, the numbers should line up with our moral intuitions most of the time.
For today’s test runs we’re admittedly going to use something of a dummy scenario, but we’ll develop the circumstances a bit more as the story unfolds. The context is a post-nuclear-fallout metropolis with [ostensibly] only a single survivor, our agent, who is unsure whether others are alive, and whose only road map for decision-making is that “good = such as to fulfill more than thwart all other [affected] desires in question.”
I cannot stress enough that what we’re exploring here is the relationship between any particular desire and all other affected desires. That’s it. Certain factors become irrelevant, for example the truth-value of the proposition that is the base of the desire, or the fact that sometimes “good” can result from “bad”. For the purpose of these evaluations, what we’re asking ourselves is something like, “if our agent were to act on this particular desire, to what degree would it tend to fulfill or thwart the others?” Our analysis takes place in a pre-act context, i.e. desires are the objects of evaluation.
For each evaluation, our agent’s desires (A-F) will be:
A: the desire to have a healthy baby;
B: the desire to stay physically fit;
C: the desire to explore as much terrain as possible;
D: the desire to find out who started the war;
E: the desire to smoke marijuana;
F: the desire to do somersaults.
The particular desires we’ll evaluate will be to walk, to shoot heroin, and to read books. Granted, whether or not each of these acts is subject to moral evaluation is open for argument. For the sake of today’s example, we’re treating all desires as subject to moral evaluation, and for simplicity’s sake we’re going to assume that any particular desire either fulfills or thwarts each affected desire [Boolean evaluation]. In the real world, it is often the case that a particular desire neither fulfills nor thwarts an affected desire [non-Boolean evaluation], so our method will need to accommodate this fact. We’ll talk more about this later.
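To make the bookkeeping concrete, here is a minimal sketch in Python of the Boolean version of the scoring. The weights (6 for A down through 1 for F), the dictionary name AGENT_DESIRES, and the function name evaluate are illustrative choices of mine, not part of desirism proper; the sketch simply totals the weights of the affected desires that a candidate desire tends to fulfill versus thwart.

```python
# A minimal sketch of the Boolean hierarchy-of-desires evaluation.
# Weights 6..1 mirror the agent's hierarchy: A weighted most heavily, F least.
AGENT_DESIRES = {
    "A: have a healthy baby": 6,
    "B: stay physically fit": 5,
    "C: explore as much terrain as possible": 4,
    "D: find out who started the war": 3,
    "E: smoke marijuana": 2,
    "F: do somersaults": 1,
}

def evaluate(effects):
    """Return (fulfilled_total, thwarted_total) for a candidate desire.

    `effects` maps each affected desire to +1 (tends to fulfill) or
    -1 (tends to thwart), per the Boolean assumption above.
    """
    fulfilled = sum(w for d, w in AGENT_DESIRES.items() if effects.get(d, 0) > 0)
    thwarted = sum(w for d, w in AGENT_DESIRES.items() if effects.get(d, 0) < 0)
    return fulfilled, thwarted
```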
Evaluation 1: Our agent desires to walk.
IMO, the desire to walk would seem to:
- indirectly fulfill A, on the logic that walking constitutes exploration by default, and one’s chance of discovering a mate is increased if one roams in search of one (+6);
- directly fulfill B, on the logic that walking contributes to physical fitness (+5);
- directly fulfill C, on the logic that walking is a form of exploration by default (+4);
- indirectly fulfill D, on the logic that one’s chances of finding information or survivors [who might have information] are increased by walking (+3);
- indirectly fulfill E, on the logic that by exploring one’s terrain, one is more likely to realize a state of affairs where the proposition “I am smoking marijuana” can be made true (+2);
- indirectly fulfill F, on the logic that one’s chance of finding a suitable place to do somersaults is increased if one actively explores one’s terrain (+1).
If we were to add these up, we’d have a score of 21/0. IOW, the desire to walk would tend to fulfill more than thwart our agent’s balance of desires. Overall, walking appears to be a right act for our agent, and I’m willing to bet that most of us have moral intuitions that would agree. Even amongst those of us who would object to subjecting the desire to walk to moral scrutiny, I’m willing to bet our gut reaction is something like, “Well of course there’s nothing wrong with walking under these circumstances.” That’s precisely what the numbers told us in this case, 21/0 where the 0 refers to zero desires thwarted.
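Plugging evaluation 1 into the sketch above reproduces that score (the encoding of each judgment as +1 is, again, just an illustration):

```python
# Evaluation 1: walking tends to fulfill all six affected desires.
walk = {d: +1 for d in AGENT_DESIRES}
print(evaluate(walk))  # -> (21, 0), i.e. the 21/0 score above
```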
What happens when we evaluate a different desire?
Evaluation 2: Our agent desires to shoot heroin.
IMO, the desire to shoot [use] heroin would seem to:
- indirectly thwart A, on the logic that shooting heroin decreases one’s chances of discovering a mate (-6);
- directly thwart B, on the logic that shooting heroin carries high health risks (-5);
- indirectly thwart C, on the logic that shooting heroin would render one less-capable of exploring one’s terrain due to lethargy (-4);
- indirectly thwart D, on the logic that shooting heroin brings one no closer to finding out who started the war (-3);
- indirectly thwart E, on the logic that shooting heroin makes one less likely to realize a state of affairs where the proposition “I am smoking marijuana” can be made true (-2);
- indirectly fulfill F, on the logic that shooting heroin just might make one want to do somersaults (+1).
If we were to add these up, we’d have a score of 1/20. IOW, the desire to shoot heroin would tend to thwart more than fulfill our agent’s balance of desires. Overall, shooting heroin seems to be a wrong act for our agent, and I’m willing to bet that most of us have moral intuitions that would agree.
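Encoded the same way, evaluation 2 reproduces the 1/20 score:

```python
# Evaluation 2: heroin tends to thwart A-E but (per the list above) fulfill F.
heroin = {d: -1 for d in AGENT_DESIRES}
heroin["F: do somersaults"] = +1
print(evaluate(heroin))  # -> (1, 20)
```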
What happens if we change the particular desire again?
Evaluation 3: Our agent desires to read books.
IMO, the desire to read books would seem to:
- indirectly thwart A, on the logic that reading would generally not be conducive to an agent seeking a mate under these circumstances (-6);
- indirectly thwart B, on the logic that reading doesn’t increase physical fitness (-5);
- indirectly thwart C, on the assumption that reading and exploring one’s terrain are mutually exclusive desires – that is, one cannot do both at the same time (-4);
- indirectly thwart D, on the logic that print communications companies would not be operating such that discovering an answer in print would be likely (-3);
- indirectly thwart E, as reading would generally not be conducive to an agent looking to smoke marijuana under these circumstances (-2);
- directly thwart F, on the assumption that doing somersaults and reading are mutually exclusive activities (-1).
If we were to add all these up, we’d get 0/21 for the desire to read books. This time, I’d say we definitely have an instance where the numbers strongly disagree with our moral intuitions. Would anyone really say that the desire to read books is “more bad” than the desire to shoot heroin?
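And evaluation 3, encoded the same way, gives the 0/21 score that generates the problem:

```python
# Evaluation 3: reading books tends to thwart all six affected desires.
books = {d: -1 for d in AGENT_DESIRES}
print(evaluate(books))  # -> (0, 21)
```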
My immediate reaction to what I’ve done here is, “so what?” All it seems I’ve done is to propose a method that lends itself to pragmatic decision-making. This is precisely why desirism’s definition of “good” needs adjustment, IMO [at least its prescriptive definition].
That a particular desire tends to fulfill more than thwart the balance of affected desires is only an indicator of its pragmatic value. In today’s examples, I would object to the conferring of “moral good” upon a particular desire simply because it tends to fulfill more than thwart the balance of affected desires. Similarly, I object to the conferring of “moral bad” upon a particular desire simply because it tends to thwart more than fulfill the balance of affected desires.
Pine says...
CL:
You wrote: “I cannot stress enough that what we’re exploring here is the relationship between any particular desire to all other affected desires. That’s it.”
I’m not sure such a system has a practical use. Let’s examine its usefulness to four hypothetical people: the person who tries to do what’s right, the person who doesn’t care about what’s right, the person who is confused about what’s right, and finally the person who wants to be seen as having done what is right.
Most people who desire to do ‘right’ already weigh out their decisions in a way similar to the one you have proposed. Certainly few ‘do the math’ in a literal way as you have done; however, there remains a consideration of the weight of their desires (i.e., to be moral or hold to a certain moral principle or code) which ultimately affects their decision. To this person the mathematics would be impractical and perhaps impossible. For one thing, most people who seek to make rational moral choices do not see their action as isolated from the consequences. As consequences are not always solid, this leaves one some doubt as to whether they have given too little (or perhaps too much) weight to one possible outcome or another. I know, you said we were dealing with a Boolean system… but that very proposal is contradictory to the decision-making process most moral people employ. There are no blacks and whites in practice, so why consider the ‘facts’ weighing in on our decision as though there were? Furthermore, as you have already pointed out, the system as proposed leads one away from their moral intuitions, not towards them. To this end I would say that this would serve only to confuse a moral person’s thoughts. As it would not clarify, I again question its usefulness to them.
The person who doesn’t care about what’s right seeks to fulfill their current desire without regard to the consequences. To this person your scale is much too small. Surely their current desire, therefore their greatest, is to them weighted at 1000 or even perhaps at 1,000,000 and all other desires, should they even rate on the scale at all, are firmly at a 1. The math then becomes a redundant exercise as the result is always the same. I suppose they could use your system to somehow rationalize their being in the right, but then they don’t care about that sort of thing. For this person the confirmation your system supplies is like someone standing beside a ravenous lion telling them it’s quite alright they have mutilated the young zebra they now devour.
A person who is ‘confused’ as to the right course of action will undoubtedly have some presuppositions about what’s right. But perhaps they, at least, can employ the system proposed and gain clarity of thought in terms of how to proceed. Except they are confused about what’s right, which more than likely means they already have trouble ranking their desires. I envision this person constantly questioning not the cold hard mathematics of it all so much as their ranking of their own desires. At one moment one desire fulfilled could seem ‘moral’ and at the next the very opposite could appear to be true. By supplying them a way to justify either end, their already confused mind would spiral endlessly, unable to ever really have assurance of their action.
The person who wishes to be seen as having done what is right will undoubtedly try to rank their desires in the order in which ‘most people’ will also rank them. This may lead them to a more ‘objective’ result. However, as you previously pointed out, when the resulting math contradicts the intuitive moral compass, how will they then react? Could they ever justify their position based upon the mathematical formula? I find that when we ask others to consider our actions, we prefer the viewpoint from before the action. We ask them to understand what we felt in that moment, what the circumstances were, to try to put themselves in our shoes when we made that choice. Our evaluation of others is often not so gracious. We consider actions in the now, with the further-reaching consequences now more plainly in view. We do not separate the suffering of loved ones from the murder of the victim simply because the murderer could not have known or understood the full consequence of their decision to kill. Again, the argument made to justify any action after the fact would be shallow and cold. ‘But that’s not what I’ve attempted here’, you might protest… I realize this is not what you have attempted. That is precisely why it will never do as a practical system for evaluating one’s actions (desire fulfillment) before the fact.
Dominic Saltarelli says...
Lots of people think reading sucks. You may be onto something. Maybe that’s why I enjoy reading, because it’s so decadent and sinful!
cl says...
Sad, but true.
LOL! LOL!!