Proposed Method For Meaningful Evaluations In Desire Utilitarianism
We’ve been discussing the moral theory called desire utilitarianism or desirism lately, and unfortunately, I’ve noticed a tendency towards oversimplified evaluations that lack correspondence to real-world ethical scenarios.
For example, we might debate whether the desire to exterminate a minority is good or bad, according to the theory of desirism. Presuming we agree the desire to exterminate another human being thwarts their desires, proponents of “extermination is bad” might point to this fact and attempt to affix an across-the-board value of “bad” to that desire. Other people dream up all sorts of wild and fanciful “what if” scenarios that purport to disprove the theory: “if extraterrestrials with horrible taste in music threaten to exterminate us unless we worship Milli Vanilli, then worshiping Milli Vanilli is good.”
If only it were that easy.
In the real world that applied ethics deals with, neither a desire nor the fulfillment or thwarting thereof occurs in a vacuum. Whether we have a single agent or a set of 7 billion, every agent (or set of agents) has a complex hierarchy-of-desires that must be considered in order for evaluations to have meaning. Because of this, much like practice in the dojo lacks correspondence to a real-world street fight, thought experiments that evaluate the mere “presence vs. absence” of a single desire are toy examples that postpone clarity at best. By taking these superficial approaches, we fail to take advantage of desirism’s greatest strength: its amenability to objective, numerical quantification.
Since desirism is [ostensibly] an objective theory, why waste time with all this philosophical posturing and semantics? Let’s figure out a way to quantify this stuff and crunch some numbers!
Today, I’d like to propose a method of evaluation based on the hierarchy-of-desires concept I introduced in the last post. I believe there are several advantages to the hierarchy-of-desires concept, among them the abilities to proportionately quantify desire strength and generate empirical, mathematical results.
So far, my tests indicate that the logic holds whether we’re evaluating a single agent or a set of agents, and it can be expressed with a tiered pyramid.
My suggested protocol is a simple eight-step process:
- Identify the desire(s)-in-question [the desires we are to evaluate];
- Identify all other desires that might be affected by the desire(s)-in-question;
- Rank the affected desires according to strength;
- Identify the affected desires that would tend to be fulfilled if we promote the desire(s)-in-question [considering individual tokens if necessary];
- Identify the affected desires that would tend to be thwarted if we promote the desire(s)-in-question, [considering individual tokens if necessary];
- Quantify the value of all the desires the desire(s)-in-question would tend to fulfill;
- Quantify the value of all the desires the desire(s)-in-question would tend to thwart;
- Express this relationship as a ratio, e.g. 10/45.
That’s it. The first integer represents our fulfillment score, and the second represents our thwarting score. If the first integer is greater than the second, then the desire(s)-in-question tend to fulfill the affected desires, overall. If the first integer is less than the second, then the desire(s)-in-question tend to thwart the affected desires, overall. If the two numbers are equal, then the desire(s)-in-question are neutral [neither desire-fulfilling nor desire-thwarting]. The difference between the two numbers corresponds to intensity: in our example ratio of 10/45, we have strongly thwarting desire(s)-in-question. On the other hand, if our ratio is 27/28, we have negligibly thwarting desire(s)-in-question. We can also express this as a percentage relationship by dividing our fulfillment and thwarting scores by our total moral value. In this case, 10 ÷ 55 = .18181818, and 45 ÷ 55 = .81818181. The desire(s)-in-question are roughly 18% fulfilling and 82% thwarting.
Adding up the value of all tiers gives us our total moral value. In the case of our ten-tiered example, this would be 10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1, which gives us 55. Now consider our example ratio of 10/45. You might immediately notice that 10 + 45 = 55, which corresponds to our total moral value, but (19 + 36), (27 + 28), (34 + 21), (40 + 15), (45 + 10), (49 + 6), (52 + 3), (4 + 51), (9 + 46), (11 + 44), (16 + 39), and (54 + 1) also give us 55. Now, I’m no mathematician, and so far I’ve only tested the ten-tier example, but I believe I’ve tested my method enough to claim with confidence the rule that fulfillment score + thwarting score = total moral value. This should hold because every affected desire is counted exactly once, in one column or the other, so the two scores must always add up to the sum of all the tiers. If by some chance the sum of our fulfillment and thwarting scores does not equal our total moral value, we should check to see if we’ve made a miscalculation somewhere.
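To make the arithmetic concrete, here is a minimal sketch in Python of the scoring just described. It assumes a pyramid of total_tiers tiers in which a desire’s tier number is also its strength, and it assumes, per the rule above, that every affected desire lands in exactly one column; the function name evaluate_desire is just illustrative, not part of the method itself.

```python
# A minimal sketch of the scoring above, assuming a desire's tier number
# is also its strength, and that every affected desire lands in exactly
# one column (fulfilled or thwarted), as the rule above implies.

def evaluate_desire(fulfilled_tiers, thwarted_tiers, total_tiers=10):
    """Score a desire-in-question against the tiers it tends to fulfill or thwart."""
    total_moral_value = sum(range(1, total_tiers + 1))   # 55 for ten tiers
    fulfillment = sum(fulfilled_tiers)
    thwarting = sum(thwarted_tiers)

    # The consistency check described above: the two scores should always
    # sum to the total moral value; anything else signals a miscalculation.
    assert fulfillment + thwarting == total_moral_value, "miscalculation somewhere"

    return {
        "ratio": f"{fulfillment}/{thwarting}",
        "fulfilling_pct": round(100 * fulfillment / total_moral_value),
        "thwarting_pct": round(100 * thwarting / total_moral_value),
    }

# The 10/45 example: the desire-in-question tends to fulfill tiers 1 through 4
# and thwart tiers 5 through 10.
print(evaluate_desire({1, 2, 3, 4}, {5, 6, 7, 8, 9, 10}))
# {'ratio': '10/45', 'fulfilling_pct': 18, 'thwarting_pct': 82}
```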
The ability to make testable predictions is a telltale sign of a bona fide scientific hypothesis. My hypothesis is testable and predicts that if desirism’s definition of good is even fairly reliable, then our moral intuitions should agree with the numbers in the overwhelming majority of evaluations. Another way to say this is that we should not be able to easily generate results that conflict with our moral intuitions.
In the next post, I’ll walk us through a couple of examples that identify, quantify and evaluate the desires of a hypothetical agent. Then, once we’re confident, we can apply the method to groups of agents in either hypothetical or real-world scenarios. Once we’ve got a feel for how to run these evaluations, peer testing in the form of reader participation will be strongly encouraged.
The more tests we run, the more we learn. If things go smoothly and the protocol survives preliminary testing, we polish and fine-tune; if it appears the protocol cannot easily survive that testing, we rebuild or scrap it.
Jnester
says...Bravo cl.
You are right that if desirism is supposed to be objective then that’s how it should be tested. I haven’t seen anything like this yet. I look forward to seeing the results and it is good to have something like this to take the debate to the next level. You should show this to Fyfe or try to get it published somewhere.
Thomas Reid
says...Where does the ranking according to strength fit into your calculation? There didn’t seem to be a use for the pyramid in determining the fulfilled-to-thwarted ratio.
Pine
says...The first problem I see is disagreement about where a certain desire is to be placed upon your pyramid. While the system in practice is objective enough, the construction is highly subjective and perhaps vulnerable to error (or at least the claim of error). Furthermore, even were “we” to agree about your scaling of desires, how are we to ever take the leap from stating our given convictions regarding our desires to stating we have laid hold of a proper and ‘true’ understanding of the weight of those desires?
The second problem I see with your system is that it does not seem to allow for partially fulfilled/thwarted desires. It’s very black and white. It is arguably possible to partially fulfill (or thwart) a desire, yet your system seems to assign the full scope of points to one category or the other. I.e.: If my desire were to provide for my family and I accomplished this but not to the extent I desired to… would I get the full range of points for fulfillment? Or would the fulfillment score be weighted to the degree to which I fulfilled the desire?
Pine
says...Sorry for the double post, but I’ve thought of two more:
Some may argue that there is too small a gap between the top desire and the lower sub-priority desires. Were we to lengthen the gap, we would more than likely not only find the very top less filled, but would also find those few desires residing in the top (especially ‘the’ top priority) perhaps having so much weight that the thwarting of all other desires at once would not be sufficient to overcome its weight. (If the top represented 1000 or 1,000,000 and the bottom 1… then many more sub-priority desires could be absorbed by the 1 top than is currently possible with the scale topping out at 10.) Certainly some will disagree as to the scaling you have provided. That being said, as it seems your argument is against desirism, you have been very generous with your scale to the favor of those arguing the morality of this system. To show immoral behaviours justified with such a small gap certainly would disprove (or go to great lengths to discredit) this as ‘the’ definitive moral compass.
The second thought I had regards unintended consequences which may or may not happen. If my desire is to sleep with an attractive girl, I cannot factor in certain desires until the results are known. For instance: My desire to not have children, my desire to not have an STD, my desire for my marriage to remain intact, my desire for secrecy, my desire to sever contact with the individual after the act, my desire to suffer no bodily harm from anyone already endeared to the person I’m engaging in the act with… and so on. I simply will not know all the results until after the fact, and perhaps not until long after the fact. So then, at what point must the behaviour be calculated according to your method in order to supply the best results?
The problem is that if we are to use this method beforehand, it will be shown inadequate to the task. If, however, we apply the method afterwards we can certainly determine whether or not the act was moral… but then hindsight normally takes care of this (in terms of sensibility) without complex formulas. To factor in all possible outcomes ahead of time would certainly make the formula either less applicable or much more complex (to factor in percentage chances of undesired [or desired] results and scale them into the total score).
Even in the case of child sacrifice, there are an infinite number of unknowns. What the person would have accomplished (for good or evil), what influences their children would have brought to bear (if they had children)… again, I find it highly unlikely we can truly factor in all the desires which have been thwarted, as we simply do not know to what degree any action we take has affected the course of events which follow.
Godless Randall
says...There didn’t seem to be a use for the pyramid in determining the fulfilled-to-thwarted ratio.
i took it to be right here
We can also express this as a percentile relationship by dividing our fulfillment and thwarting scores by our total moral value. In this case, 10 ÷ 55 = .18181818, and 45 ÷ 55 = .81818181. The desire(s)-in-question are 18% fulfilling and 82% thwarting.
Thomas Reid
says...Hi Godless Randall. In response to my remark that there didn’t seem to be a use for the pyramid in determining the fulfilled-to-thwarted ratio, you pointed to the percentage-conversion passage.
Ah. In other words, a “10” stands for the strength of a single desire, not the number of strongly-held desires, yes? cl, is that what you meant?
But if that’s the case, some other issues emerge:
1. What if, as cl suggests, one considers tokens as the means by which desires are either thwarted or fulfilled? Further, what if individuals have a varying number of desires? Then, unless the pyramid for each individual “jumps” steps, some desires will be penalized (or rewarded) in the equation simply in virtue of one person having fewer (or more) desires than others. That doesn’t seem to work. There would have to be some kind of universal measurement of desire strength such that each individual desire could be inserted at the right level in the pyramid.
2. Does it make sense to say we could measure potential desires this way? I’m assuming that the desirist is committed to considering potential desires too. For example, the desirist probably wants to count the desires of all children potentially born to those sacrificed to Molech, before deciding on whether or not to sacrifice to Molech.
3. Looking down the road: what if the strongly-held desires are known to be bad? You potentially reward evil desires in this equation such that the result is labeled “good”.
cl
says...Pine,
Long time no see. No need to apologize; post as much as you wish. I think people’s “blog rules” can sometimes encroach on good discussion. So, no rules here. I don’t judge people as “trolls” or “blog hogs” or any of that crap.
Well, remember though: we’re first considering this as a single-person example. As such, there can be no disagreement, as each individual is the ultimate arbiter of where their own desires are to be placed.
I’m not sure I understand exactly what you mean. Do you mean something like, “even if we can agree on the ranking, with what degree of certainty can we state the predicted effect [that our desire(s)-in-question will have on the affected desires]?”
Are you asking what we are to do when the desire(s)-in-question don’t “fully” fulfill or thwart the affected desires, i.e. how we are to score things? If so, this is related to the “direct vs. indirect” aspect of desire fulfillment / thwarting. It’s definitely something we’ll need a clear accounting for, but I think we’re well on the way.
My first concern would be the ambiguous wording of your desire-as-ultimate-ends, “to provide for my family.” When possible, we should express our desire-as-ultimate-ends in terms that can be evaluated in a Boolean fashion: they either are fulfilled, or they are not. For example, you might reword your desire-as-ultimate-ends to read, “to buy a home for my family,” or something similar. Heeding this cautionary measure will tend to eliminate ambiguity (confounding) in the evaluations.
But the more important point here is that your questions suggest you may have interpreted the hierarchy-of-desires method as a system of scoring based on whether or not [or to what degree] a desire has been fulfilled [or thwarted]. I suspect this because you said, “..and I accomplished this but not to the extent I desired…” [emph. mine]
When we assign points, neither actual accomplishment nor non-accomplishment is being considered; we’re simply saying, if made true, “the desire(s)-in-question would tend to thwart this or that affected desire.” Desires are the objects of evaluation, and we evaluate them according to their relationship to other desires, i.e. their tendency to thwart or fulfill them [if acted on]. My system of scoring is based on the predicted effect of the desire(s)-in-question against the affected desires. All of this occurs in a pre-act context, in other words.
I would also basically use this same response in reply to your example of sleeping with an attractive girl. We would evaluate which of those desires you listed would tend to be fulfilled or thwarted if we realized a state of affairs where the proposition, “I am sleeping with an attractive girl,” has been made true.
Continuing along these lines, when you say that applying this method beforehand will show it inadequate to the task:
I agree, but how would it be “inadequate to the task” if we run the evaluation pre-act? Perhaps it would help if you clarified exactly what you think the task is; we might be on different pages there. The task, as far as the method is concerned, is to quantify the predicted effect of the desire(s)-in-question on the affected desires.
The more I think about it, the more it’s starting to sound like you’re asking that question I mentioned earlier, i.e., “with what degree of certainty can we state the predicted effect [that our desire(s)-in-question will have on the affected desires]?” You seem to be alluding to the fact that in the real world, the thwarting of one desire can unexpectedly entail the fulfilling of another, then suggesting that such might confound our evaluations. As in, “Oh, I cut my finger off, that’s bad, it thwarts so many desires… [then, later] because the finger I lost was my trigger finger, I was exempt from the draft and that’s good, because it fulfills so many other desires.”
Is that what you mean?
10 is completely arbitrary. In reality our pyramid could top out at any number. It tops out at 10 here just for simplicity’s sake.
Not at all. My primary goal is to trim the fat, tighten things up, and get this thing working in an empirical manner. Numbers don’t lie, nor do they accuse people of being racists in the midst of disagreement. :)
I’m not sure if you’ve read the other desirism posts. I’m largely in favor of the logic; it’s the desirist definition of good that’s obscuring progress, IMO. Don’t get me wrong, I have a few other objections and criticisms of the theory, too, but this is the main aspect I’ve been addressing.
MS
says...I echo Jnester, cl: Bravo. If this thing proves out, you should definitely present it formally somehow as he suggests.
I see Pine’s point, that there is probably going to be some squabbling over the numerical designations, but that is no different from what happens when engaging commonly utilized probability theorems, and if we were using words instead of numbers, there’d be the same argument. At least you’ve taken the forward-thinking step to propose a system; it can always be tweaked as you suggest, if need be.
It’ll be interesting to see the actual results and functioning of the system. I’ll reserve comment until then.
“My suggested protocol is a simple eight-step process:”
How about, each step in the eight-step process is simple :)
Pine
says...Maybe I’m just not seeing the big picture. Looking forward to seeing your system in practice.
Dominic Saltarelli
says...Pine’s objections are a rather obvious point to raise in regards to desirism’s own claim to objectivity.
cl
says...Hi Thomas. Sorry it took so long to get back to your last comment. Completely spaced it.
Pretty much.
The “10” in this case indicates not necessarily “a single desire” as in “any desire in particular,” but the agent’s desire-as-ultimate-ends. If the pyramid had only 6 tiers, like the one in the hypothetical single-agent evaluation in the follow-up post to this one, the agent’s desire-as-ultimate-ends would be represented with a “6”. The top desire is always the most important desire, i.e. the agent’s desire-as-ultimate-ends, their least expendable desire. The bottom, or base of the pyramid, always gets a “1” and refers to the agent’s most expendable desire.
I’m going to hold off on the question of tokens for now, but as for your worry about individuals having a varying number of desires:
This is a valid concern. My first way of dealing with this was to default to “percentage conversion” where the particular desire is evaluated against each agent’s personal pyramid, if that makes sense. Say we’re considering smoking, just you and I, because we’re going to be roommates or something. You would run the desire to smoke against your own hierarchy of desires, and convert your ratio into a percentage. I would then do the same, and we would compare. The problem is, this doesn’t actually resolve the “varying desires” problem; it only expresses it differently. We’ll see what happens when I try the two-agent example.
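For what it’s worth, here’s a rough sketch of that percentage conversion in Python: each agent scores the same desire-in-question against their own pyramid, and we compare the resulting percentages. The pyramids, the tier assignments, and the name fulfillment_pct are made up purely for illustration.

```python
# A rough sketch of "percentage conversion" across two agents whose
# pyramids have different heights (illustrative data and names only).

def fulfillment_pct(fulfilled_tiers, total_tiers):
    """Percentage of an agent's total moral value the desire tends to fulfill."""
    total_moral_value = sum(range(1, total_tiers + 1))
    return 100 * sum(fulfilled_tiers) / total_moral_value

# Agent A: six-tier pyramid; smoking tends to fulfill tiers 1 and 2.
# Agent B: ten-tier pyramid; smoking tends to fulfill tiers 1 through 5.
a = fulfillment_pct({1, 2}, total_tiers=6)
b = fulfillment_pct({1, 2, 3, 4, 5}, total_tiers=10)

print(f"Agent A: {a:.0f}% fulfilling, Agent B: {b:.0f}% fulfilling")
# Agent A: 14% fulfilling, Agent B: 27% fulfilling
```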
As for the “potential desires” thing, I’m not really sure what Alonzo would do [he’s the authority on desirism], but I haven’t been factoring them into my evaluations as of yet.
Lastly, regarding your point that strongly-held desires might be known to be bad:
I don’t see what you mean there. Could you maybe rephrase that one, or show an example?
Thomas Reid
says...cl,
When I said that you potentially reward evil desires in this equation such that the result is labeled “good”, you asked me to rephrase or show an example.
What I mean is it seems like this arrangement will fall to a version of the “Nazi example”: desirism gets the result wrong on strongly-held desires that we intuitively know are bad. That’s all. I’ll be interested to see if this method can avoid that objection. I suspect it cannot, since it’s easy to envision an example where the strength of certain desires overwhelms the calculation.
Simple example: two people, one desire each. Person A, an adult, desires to torture children for fun, and holds this at strength 10. Person B, a child, desires to play in the sandbox, and holds this at strength 1.
Evaluate the desire to torture children. In the presence of such a desire, the ratio you suggest would be 10/1. In the absence of such a desire, the ratio you suggest would be either 1/10 (if there is a sandbox around) or 0/10 (maybe there isn’t). So since the desire in question tends to fulfill other desires overall, it’s a good desire. But obviously that’s wrong, so desirism cannot pass this objection.
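To spell the arithmetic out, here’s a quick Python rendering of that counterexample. The variable names are mine, and I’m pooling the two token desires and scoring by raw strength, which is how I read your method:

```python
# Pooling the two agents' token desires and scoring by raw strength
# (illustrative names; this is my reading of cl's method, not his code).
torture_strength = 10   # Person A: desires to torture children for fun
sandbox_strength = 1    # Person B: desires to play in the sandbox

# Presence of the desire to torture: it fulfills the strength-10 desire
# and thwarts the strength-1 desire.
present = f"{torture_strength}/{sandbox_strength}"   # 10/1, scored "good"
# Absence of that desire: the sandbox desire is free to be fulfilled,
# while the torture desire goes unfulfilled.
absent = f"{sandbox_strength}/{torture_strength}"    # 1/10, scored "bad"

print("present:", present)
print("absent: ", absent)
```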
Note this assumes you perform the calculation on tokens, although the desires in question could still be types. I’ve been thinking about this for a while, and to me a calculation only makes sense in the presence of tokens.
Roger3
says...Been reading your blog awhile. Good stuff.
You’re going to run into a problem with your method here of assigning numerical values to various desires. In fact, I’m quite positive that ANY method of assigning numbers to desires is going to run into the same problem. I’m not sure that it’s surmountable.
Public Policy economists have been wrestling with this problem for decades, and the Public Choice literature is thick with it. The problem is that desires, like candidates in an election, are to varying degrees incommensurable, and elections between candidates produce inconsistent results to varying degrees depending on the type of election. The mapping of candidates to desires is one-to-one: one may have a desire that a particular candidate be elected.
Since all desirism value issues can be reduced in this way to election problems, you can bring the full weight of social choice theory to bear on them. By assigning weights to desires you’re forming preference orderings, and once those orderings are aggregated across agents, the resulting group preference is generally non-transitive.
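To see the roadblock in miniature, here’s a toy Python example with made-up rankings: three agents whose individual orderings are perfectly consistent, yet whose pairwise majority preference cycles (the Condorcet paradox, which is the engine behind Arrow’s theorem). The helper prefers and the desire labels X, Y, Z are mine:

```python
from itertools import combinations

# Three agents, each ranking three desires (strongest first); data is made up.
rankings = [
    ["X", "Y", "Z"],
    ["Y", "Z", "X"],
    ["Z", "X", "Y"],
]

def prefers(ranking, a, b):
    """True if this agent ranks desire a above desire b."""
    return ranking.index(a) < ranking.index(b)

for a, b in combinations("XYZ", 2):
    a_votes = sum(prefers(r, a, b) for r in rankings)
    winner, loser = (a, b) if a_votes > len(rankings) / 2 else (b, a)
    print(f"majority prefers {winner} over {loser}")

# Prints: X over Y, Z over X, Y over Z. Every pairwise vote is 2-1, yet the
# three results form a cycle, so no consistent group ranking exists.
```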
This isn’t a critique of Desirism per se, because Desirism doesn’t seem to have anything to say on how one should formulate preferences, but it does provide a rather ugly roadblock to anyone proposing a numerical ranking system that tries to implement Desirism.
Some light reading on the subject:
The implications of preference listing:
http://www.colorado.edu/education/DMP/voting_b.html
(the page voting_a.html is the main page for that section of the site)
Arrow’s Impossibility Theorem:
http://mindyourdecisions.com/blog/2008/02/12/game-theory-tuesdays-someone-is-going-to-be-unhappy-an-illustration-of-the-voting-paradox/
There’s a link to a .pdf in that second link that has three separate proofs of Arrow’s Impossibility Theorem. It’s a good read.
Just my $0.02.
-R
cl
says...Hey Roger3, thanks for your $0.02, although, I’d value it as at least a quarter, which – as a matter of sheer coincidence – actually serves to further illustrate the very “incommensurability of values” you allude to :)
I take it as given that different people will have varying degrees of preference for any given desire, and that such at least potentially confounds any reliable evaluation. I must say, I hadn’t really read any papers on economics or social choice theory before your comment, but it was kind of interesting to see that I stumbled upon corollary issues in conceiving of a method to rank desires. While reading the first article you linked to, I was taken aback by the similarities between the Borda Count and the hierarchy-of-desires method I’m proposing:
Borda’s suggestion seems very similar to what I proposed. Nothing in the hierarchy-of-desires method requires “registering only an agent’s top-ranked desire” or “ignoring how an agent ranks other desires.” In fact – as far as I can see – the method actually takes a step in the direction towards solving those problems by accounting for the “information dismissed” in conventional plurality elections.
Alas, enter Condorcet, Arrow, et al.!
Still, it was interesting how, after further consideration, the Borda Count appears to have been vindicated. This gives me confidence that the hierarchy-of-desires might have some validity after all. At the very least, I think we need something like it. Though I realize it may have been intended to be purely hypothetical, in a real-world context I wouldn’t be persuaded by the vague analysis Luke and Alonzo use in Episode 9.
Though I haven’t read your second link yet, even before reading your first link I had doubts about the reliability with which we might evaluate desires. Your first link has strengthened those doubts. I agree that a numerical evaluation of desires faces what might be an insurmountable challenge. Coming up with a foolproof methodology isn’t necessarily on my life’s to-do list. It’s just that, even way back in April, I was growing tired of what I perceived to be mere philosophical posturing that simply asserted the validity of desirism. It seemed to me at that time, as it still does today, that if desirism is really the objective theory its proponents claim it to be, then demonstrating its alleged objectivity via some semi-sound empirical schema would be a strong mark in its favor. I felt that Luke and Alonzo owed readers a step in that direction, and this hierarchy-of-desires concept represented my contribution at that time.
Anyways, thanks again. The article you link to has revived my interest. This is all interesting stuff to think about.