On Prescription
Posted in Morality, Philosophy on | 3 minutes | 7 Comments →
Just yesterday I told somebody I was burnt out on blogging. Then, of course, I woke up with an inspiration. Nothing full-fledged, but more like a little mini-inspiration.
Yesterday, we traded banter over what it means to say that a moral theory is prescriptive. This got me thinking about other instances where people use that word; the doctor, for example. When a doctor prescribes some treatment for a patient, it is an admonition declaring what the patient should or ought to do. Of course, that the patient has a desire to increase their own well-being is implicit in the scenario. So, according to that which the patient values [an increase of well-being], the doctor is justified in prescribing a treatment. If somebody wanted to be pesky and demand that the doctor justify the prescription, the doctor could reply with something like, “Patient X seeks alleviation from condition Y, and there is general consensus that treatment Z is the safest and most effective means of remedy.” That would constitute a reason for the doctor’s prescription, and that reason would amount to something more than intuition or bias. I think the salient point here is that something like a social contract exists: an implicit agreement between doctor and patient that “increasing well-being” is at or near the top of the value hierarchy.
Of course, you might have already noticed that I’ve described medicine, not morality. We could still ask the moral question: was it right or wrong for the doctor to prescribe treatment Z? My question to the person who would ask such a question is, “What do you mean by right and wrong?” Are those terms being used analogously to true and false? Are we asking, “Is it true that patient X should accept treatment Z? Is it true that the doctor should prescribe treatment Z?” If treatment Z is Tylenol, the moral question seems trivial. Yet, switch treatment Z to euthanasia or stem-cell therapy, and BLAMMO – we’ve got ourselves bona fide moral questions. Not to digress, but why is that?
I think the underlying concept of what constitutes a justified prescription seems extensible. I feel confident in assuming that most people who read moral philosophy take the word prescriptive to mean something like, “proffering a set of should statements.” Recently, I heard somebody mention that the Bible is prescriptive. That person was essentially noting that the Bible contains a set of should statements. For example, Jesus said you should love your neighbor as yourself. Conversely, the Israelites were told that they should not permit a “witch” to live. Traditional utilitarian theories say we should do that act which increases the most happiness or well-being. I think it’s fair to say that in order for a moral theory to qualify as prescriptive, it must proffer a set of should statements.
What do you think?
woodchuck64
says...Hey, any time I can help. :)
I think so, too. But then I’m a little confused why desirism is said not to offer such a list, at least in theory. Consider:
1. The individual is strongly motivated to modify the desires of the community for his/her personal well-being (theft aversion, traffic laws, etc.).
2. The individual also recognizes that his/her own desires cause actions which in turn modify the desires of the community for or against his/her personal well-being (i.e., break the rules, get penalized by the community).
3. From 1 and 2, the individual can choose desirism as a moral theory and tool for his/her community which modifies malleable desires, including that of the individual, in order to maximize desire fulfillment and minimize desire thwarting for everyone, including the individual.
4. Then, as part of the community, each malleable desire of the individual can be evaluated (by the individual) as to whether it tends to fulfill desires on balance or whether it tends to thwart desires on balance.
5. Each action motivated by a malleable desire that tends to fulfill desires is a “should”; each action motivated by a malleable desire that tends to thwart desires is a “should not”. Each “should” or “should not” relies ultimately on 1 and 2 for its persuasive power. All possible actions, along with the desire calculation verdict(*), form the list. Is such a list a prescriptive list? It seems to be.
Now, the main problem with 5 is that it doesn’t make a lot of sense in desirism to focus on actions when it is really the malleable desire that motivated the action. Better to change the desire, enhancing the ones that tend to fulfill desires on balance and attenuating the ones that tend to thwart desires on balance; proper actions then follow naturally. In more metaphorical terms, the heart must change, not the outward actions (which reminds me of Matthew 5:21-22).
(* I agree the desire calculation for all possible actions is difficult or practically impossible to know. But not, I think, theoretically impossible since it relies on more or less scientific analysis of human behavior and culture.)
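The evaluation described in steps 4 and 5 can be sketched in code. This is a toy model, not anything from the desirism literature: the desire names and the fulfillment/thwarting weights below are invented purely for illustration, and the “calculation” is just a signed sum.

```python
# Toy sketch of steps 4-5: evaluate each malleable desire by whether it
# tends to fulfill or thwart the community's other desires on balance.
# All desire names and weights are hypothetical, invented for illustration.

# Net effect of holding each malleable desire on other desires:
# positive = tends to fulfill, negative = tends to thwart.
effects = {
    "aversion_to_theft": {"keep_property": +2, "take_shortcuts": -1},
    "desire_to_harm":    {"be_safe": -3, "dominate_others": +1},
}

def verdict(desire):
    """Label a desire 'should' / 'should not' from its net balance."""
    balance = sum(effects[desire].values())
    if balance > 0:
        return "should"
    if balance < 0:
        return "should not"
    return "no prescription"  # exact offset: the theory stays silent

# The "list" of step 5: every desire paired with its verdict.
prescriptions = {d: verdict(d) for d in effects}
print(prescriptions)
```

The hard part, as the footnote concedes, is not this arithmetic but obtaining real weights; the sketch only makes the shape of the procedure concrete.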
cl
says...Ha! That’s right, it was you I said that to. Thanks, check’s in the mail.
I think I get the gist of what you’re saying. The problem, as I see it, is that your 3 and 5 seem reducible to a claim that Alonzo says is “almost always false” – that desire fulfillment should be maximized. More specifically,
…and,
Luke and Alonzo flatly deny the part of 3 I cited, and it seems to me that the “should” you’ve delineated in 5 is grounded on the part of 3 that Luke and Alonzo deny. Have I misread you [or them] somewhere in between?
Also,
I agree; did you ever see the calculations I tried? It really confuses me that no desirist seems interested in running objective calculations. After all, desirism is [supposed to be] an objective theory, so why not crunch some numbers?
woodchuck64
says...cl,
Yes, http://atheistethicist.blogspot.com/2010/05/value-of-desire-fulfillment.html confirms your view of Alonzo’s view, by my reading. But I understand that maximization is only rejected if it is implied that maximization has intrinsic value. It does not. Rather, it has value only as a means to fulfill the individual’s desire (i.e. 1 and 2). So to avoid confusion, I should just write 3 without the misleading phrase:
3. From 1 and 2, the individual can choose desirism as a moral theory and tool for his/her community which modifies malleable desires, including that of the individual.
I think, then, 5 follows without objection.
From my understanding, it’s coming up with the numbers that’s the hard part, since it’s so difficult to quantify desire (but hopefully not impossible; happiness economics seems to be a start at objectively measuring desire fulfillment in some sense). Once you have the numbers, the rest should be more straightforward. In reviewing https://thewarfareismental.com/2010/04/19/single-agent-evaluations, though, I think your calculations are sound, but the results aren’t helpful for the human race since the cross-section of individuals and of typical desires is too small to be statistically meaningful.
cl
says...Hey there.
If maximization has no intrinsic value, then on what grounds can we prescribe desires which tend to maximize desire fulfillment?
Regarding your rewrite of 3,
I see what you’re saying, but it strikes me as arbitrary. I don’t have a citation for this, but recently, when pressed with the “is desirism prescriptive” question, Alonzo referred to desirism as something like a schema for making prescriptions [I think that occurs somewhere in the comment thread of Short-List Theories Of Morality at CSA]. That sounds like what you’re describing here. The problem is, unless everybody agrees to the same values to begin with, how can the desirist ground any prescriptions that might flow from their theory?
What I mean is, if everybody in some society agrees to some axiom like, “that which impedes physical health should be condemned,” then yeah, their social contract grounds their prescriptions and they are justified in condemning violators. However, in the real world where all members of any given society rarely share the same axioms, then what?
I’ve asked Alonzo questions along these lines, and he replies that in the case of 200 that P and 200 that ~P [where P = some malleable desire], “desirism prescribes nothing.” If desirism “prescribes nothing” in such a case, is it not accurate to call the theory primarily [if not entirely] descriptive?
Else, what am I missing? I like that you’re familiar with Alonzo’s writings, though, it seems I’ve read most of the ones you cite. Still, keep ’em coming.
woodchuck64
says...cl,
Because doing so is likely in the individual’s best interest (i.e. 1 and 2). (I say “likely” because I think desirism only makes probabilistic arguments).
The one axiom everyone shares is the desire for desire fulfillment (if I can state that tautologically), so desirism tries to ground everything from that (again, 1 and 2).
In the event of 200 P and 200 ~P, I agree that desirism can’t prescribe in that case. But that’s clearly not the norm; for most human desires, P doesn’t seem to match up exactly with ~P. For example, acts to hurt/harm others always seem to thwart more desires than they fulfill on balance. Altruistic acts always seem to fulfill more desires than they thwart. Desirism can prescribe for the larger set of (malleable) desires that do not happen to exactly offset each other in the manner described above.
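The 200-versus-200 scenario is just an exact tie in the tally, and a few lines make the point plain. The counts and labels here are the hypothetical ones from the discussion, nothing more:

```python
# Toy tally for the split-community case: a desire P held by some
# members and opposed (~P) by others. Counts are hypothetical.
def prescribe(for_p, against_p):
    """Return a verdict from the balance of P-holders vs. ~P-holders."""
    if for_p > against_p:
        return "promote P"
    if against_p > for_p:
        return "discourage P"
    return "prescribes nothing"  # exact offset, the 200-vs-200 case

print(prescribe(200, 200))  # prints "prescribes nothing"
print(prescribe(300, 100))  # prints "promote P"
```

Any imbalance at all yields a prescription; only the knife-edge tie leaves the theory silent, which is why the tie case reads as the exception rather than the rule.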
As I’ve mentioned before, I think desirism uses some implicit assumptions about human beings and human behavior that probably should be made explicit.
cl
says...But then, that makes desirism sound exactly like the “self-interest” theory Alonzo claims it’s not.
I guess, but I just see this as, well… you said it: pure tautology. All desirism says is, “People tend to do that which they most want,” but in more confusing language. There seems to be no prescriptive power whatsoever. Alonzo specifically clarifies that we are not to maximize desire fulfillment. He said that the claim, “we should maximize desire fulfillment” is “almost always false.” [paraphrase]
Maybe not exactly, but generally, that’s the case: we have people that “smoking” and people that “~smoking.” We have people that “monogamy” and people that “~monogamy.” We have people that “stealing” and people that “~stealing.” Etc. Etc. Etc. all on down the line. I think people want to be able to look at a moral theory and walk away feeling like they’ve been told what is right, or what they ought to do. Desirism rejects the idea of “right” a priori, and everything I’m hearing from Alonzo amounts to lack of prescriptive power. It’s like, he calls desirism prescriptive, then says it prescribes nothing, then calls it a schema for prescription.
I understand, and so would you think Alonzo is saying, “We should have those desires that tend to fulfill other desires, and we should not have those desires that tend to thwart other desires.” In fact, I think he may have said something like that already. Okay, clear enough, but then, why would he say desirism has nothing to say to a moral agent at the time of decision?
I think desirism implies that desire-fulfillment has intrinsic value, even though Fyfe would probably vehemently deny that. But, how can we prescribe desire-fulfilling desires if desire-fulfillment has no intrinsic value? What good reason is there to have desires which tend to fulfill other desires? I honestly don’t see why people call it a “moral theory” at all, let alone “utilitarian.” Why is desirism referred to as “desire utilitarianism” if it does not maximize desire fulfillment à la traditional utilitarian theories [i.e. Bentham or Mill]? Isn’t that a recipe for confusion from the get-go?