This is Philip Robichaud's Typepad Profile.
Philip Robichaud
Recent Activity
I take it the logic of your argument is supposed to be something like: 1) we have the intuition in your third case (where the wizard seeds the DNA of primitive humans) that someone's implanted-by-wizards unwillingness to phi defeats the claim that she has a moral duty to phi. 2) there's no relevant difference between this case and the real-world case. 3) so, in the real world, someone's unwillingness to phi defeats the claim that she has a moral duty to phi. Why not, as many have done in response to Pereboom's 4-case manipulation argument, just run your argument backwards? In the real world, we have the intuition that unwillingness to phi *doesn't* defeat the claim that one has a moral duty to phi, and, since there's no relevant difference between the real world and the wizard world (your third case), implanted-by-wizards unwillingness to phi doesn't defeat the claim that one morally should phi.
Commented Feb 1, 2014 on Ought and Can; Can't vs. Won't at PEA Soup
Tamler, you say: “My only defense to your point is to say that we're all philosophers and we're supposed to be busybodies about our theories. Would that work?” I’m not sure it does, and this gets to my underlying point. There is a similarity between the reasons that support the permissibility of philosophers being busybodies and the reasons that support Saulace being a busybody. Let’s assume you think your busy-bodying around is permissible because you believe: (A) it is morally problematic to do what Saulace (and anyone else developing conditions-based theories) is doing. Your meddling is justified if (A) is true - and if (A) is true, it shouldn’t matter what your interlocutors think about the permissibility of what they’re doing. It’s morally problematic and that’s why they shouldn’t do it. But, Saulace probably believes: (B) it is morally problematic to have undeserved reactive attitudes towards others. Now, Saulace’s meddling is justified if (B) is true - and if (B) is true, it shouldn’t matter what his interlocutors think about the permissibility of what they’re doing. It’s morally problematic and that's why they shouldn’t do it.

Since the justifications for Saulace’s and your respective busy-bodying are identical – curtailing morally problematic actions – we can only resolve this dispute by figuring out who is right. But, if that’s where the dispute leads, then the force of your point in this post is lost. What you thought was a distinctive moral problem with Saulace and others actually boils down to a theoretical problem. You must think that Saulace’s conditions-based account is false. If his or some other conditions-based view were true, then tactful meddling (if that’s even a thing) would be permissible. And it would be permissible for the very same reason your own meddling seems to you to be – it promises to curtail moral wrongs.

Or maybe your claim is stronger - that even if we had our hands on a true theory, it would still be morally problematic to meddle. If that’s your claim, it would be distinctively moral, but I wonder how far we should go. Should philosophers who discovered the true moral theory also just sit on it?
Tamler, Nice post. I'll respond to your probably unfair dialogue with a probably unfair one of my own. (Standing in line at Preservation Hall after having attended NOWAR.)

Saulace – I just don’t see how reasons-responsiveness can address my central worries about desert.

Fishpaly – Let me try again to spell this out. Like you, I think that these questions are some of the most interesting and important questions ever asked. I do wish you agreed with me, but I’m glad we are at least unified in seeing the value of the philosophical pursuit. So the whole point of discussing mechanisms is…

Tamler (bursting in) – Hey, sorry to interrupt, but did you read my dialogue post?

Fishpaly – I did read it. I liked it. But I have to say I got the feeling that you were, much like you are right now, meddling in our theoretical affairs. You were, dare I say, being a busybod…

Tamler – But wait! I’m just suggesting that there might be something morally suspect about your theoretical pursuits. You’re insisting that people accept your judgments about moral responsibility whatever they happen to think.

Fishpaly – Well, Saulace and I and many others at the workshop are just trying to do our best to come to grips with these issues. Sometimes that effort results in the building of a conditions-based theory or two. And then, in you come…insisting that we accept your judgments about when an intellectual endeavor is morally problematic, whatever we happen to think. So, if Saulace is being a busybody, aren’t you?
Justin Coates, I like the 4-place relational account of desert, but I'm not so sure about your case. It seems plausible that intuitions about whether you deserve blame from Tamler can be affected by such things as (1) the manner of blame he delivers and (2) his experiences with respect to the type of wrong that you committed. Let's grant that the moral transgression took place within a circumscribed relationship. Still, if the manner in which Tamler blames you is relatively tepid - maybe he feels only mildly disappointed in you - then there might be nothing wrong with saying that you deserve blame from him. You did something wrong after all, and if he cares about morality even a little bit, he'll be disposed to at least feel some disappointment when he finds out moral reasons have been flouted. Also, what if, for some reason, people in Tamler's life serially break promises they make to him? And then here you are breaking a promise too. Wouldn't this also up the plausibility of the claim that you deserve blame from him? Even blame of a less tepid form? This may relate to the Cushman case. It might not be so weird to say either that Tamler should have a relatively tepid self-blaming response to his actions or that he is right to self-blame if he's had to repeatedly endure the same type of suffering that his neighbor is going through.
Commented Nov 6, 2013 on Blame and Perspective at Flickers of Freedom
Robert, I wasn't suggesting that Orestes was blameworthy for killing his mom. Rather, I was suggesting that it was wrong (bad, or whatever other negatively valenced evaluative claim you want to make here) for him not to feel a certain way about the fact that he had to perform an action that went contrary to certain strong moral considerations (which maybe were not so strong after all, given what a terrible mom she was). What seems right about the moral remainder idea is that the moral considerations that fail to win the day still have weight, and that one way of showing that you appreciate this is to feel bad to some degree, even though what you did was right. Tamler, although Clytemnestra may have been a terrible mother, the mere fact that Orestes had to kill his own flesh and blood is significant enough to establish a good deal of moral remainder. To connect this with the punishment stuff, there may be reason to think that meting out severe punishments incurs similar moral remainder precisely because of what makes the punishment severe. It may be unjust to give people their just deserts without feeling bad about it to some degree.
Commented Nov 6, 2013 on Unjust Just-Deserts? at Flickers of Freedom
What about the notion of moral remainder? Even if killing Clytemnestra was, all things considered, the right thing to do, the prima facie obligation not to kill one’s mother might still ground the call for some kind of moral resolution. This resolution can come in many forms, an obvious one being the feeling of regret or guilt that follows the performance of the action. And, indeed, the stronger the outweighed prima facie duties are, the more moral remainder there is and the more intense these feelings should be. So, from a certain perspective, it might be unjust (bad, less than ideal, etc.) should Orestes and Electra fail to have these emotions or otherwise fail to respect the significance of the fact that they just killed their damn mother. Indeed, given that the genre is tragedy, anything short of feeling suicidal levels of despair might be insufficient to resolve the moral remainder. Yes, Orestes felt extremely reluctant to go through with it. But that falls well short of the kind of soul-crushing, life-destroying regret that perhaps he should have felt after having done it.
Commented Nov 5, 2013 on Unjust Just-Deserts? at Flickers of Freedom
Dana, You said that the agent reasons-responsiveness view gets this case right because: “The agent could have either prevented that (non-reasons responsive) mechanism from operating, or instead put another into action.” Can’t the proponent of a mechanism-based view of reasons-responsiveness respond in a similar fashion? Let’s grant that the mechanism on which the self-deceiver formed her belief wasn’t reasons-responsive. Still, the self-deceiver’s responsibility could be established by tracing to a prior mechanism that was moderately reasons-responsive. For example, imagine the woman knows that she will see her son in a better light if she looks through a particular photo album that depicts him as a loving father. Just as she starts to suspect that he may be abusive, she chooses to look through the album, and, boom, she maintains her belief that he’s a great dad. We can imagine that the mechanism on which she chose to look at the album was moderately reasons-responsive – there are worlds where she would not have made that choice. Importantly, the effect of her choice is the enlistment of a non-reasons-responsive (because biased) evidence-acquisition system that leads her to form the false belief. Thus, her responsibility (for subsequent ignorant action on that false belief) might turn on the existence of a trace to a prior reasons-responsive mechanism that could have prevented the downstream non-reasons-responsive mechanism from operating. Of course, this brings with it a commitment to tracing, but that is something the proponent of mechanism reasons-responsiveness might be willing to take on.
I'm late to the great discussion, but I wanted to touch on a small issue that hasn't been raised yet. If blameworthiness and praiseworthiness are degreed, and if they are determined in part by another degreed notion such as quality of regard or quality of will, might someone be both blameworthy and praiseworthy to some degree for the same action? Imagine that someone is mostly selfish, but, on her best days, she just barely cares about the suffering of others. Every week she is asked to donate some percentage of her paycheck to Oxfam, and every week she turns it down. This week, however, she has a small change of heart and donates what amounts to 5 bucks. Her underlying thought process was something like: "a 5 spot might help a few people, and, besides, I'll still have enough money for that new iPad Mini". For the case to work, you have to imagine that she genuinely cared for the people she would help, but, as is quite typical these days, she just didn't care that much. Is this action praiseworthy to the rather small degree that she acted from moral concern *and* blameworthy to a perhaps larger degree because she, like on all the previous occasions, acted rather selfishly in giving such a small amount? If you're inclined to say she's only blameworthy in this case due to the pittance that she donated, then is there some donation amount that would trigger in you the sense that she is praiseworthy to some degree while remaining blameworthy to some degree? I realize that this question isn't relevant to the difficulty issue, but perhaps it points to another important feature of certain degreed accounts of blameworthiness and praiseworthiness.
Suberogatory actions are super helpful in this context, and I'm inclined to think that Driver's lawn mower's insensitivity and crumminess are good grounds for judging her blameworthy. But I have two worries about this move. (1) Might the judgment that the lawn mower is blameworthy rest on her blameworthiness for *having the trait* of insensitivity or crumminess rather than her blameworthiness for *acting* insensitively? If it does, then there is space to deny that the lawn mower is blameworthy for the act of mowing the lawn (since, after all, it is a permissible act), while affirming that she is blameworthy for having the bad trait of being kind of a jerk. This way of thinking about the case might allow us to preserve the connection between wrongness and blameworthiness. (2) Once we allow that agents can be blameworthy for acts that are suboptimal along the axiological dimension, is there a principled way of distinguishing between those suboptimal actions for which agents can be blameworthy and those for which they cannot? My sense is that we want to be able to draw such a distinction, but I wonder how that would go. My case of Pete above was supposed to raise some trouble for any attempt to draw such a distinction, given that even almost-optimal actions are bad, and hence potentially blameworthy, to some degree.
Here's a candidate explanation for the intuition that, by keeping my pencils, I do something blameworthy despite the fact that I act permissibly in doing so: (1) I failed to respond to significant moral reasons that count in favor of a certain action phi (e.g., lending you a pencil). (2) These moral reasons are not weighty enough to establish the obligation to phi. My action is blameworthy because of (1), but my action was permissible because of (2). A potential problem with this explanation: (1) and (2) would also render the most generous person in the world blameworthy for his or her acts of charity. Consider Pete: Pete is so committed to famine relief that he donates 98% of his net income to Oxfam. However, by donating (only!) 98%, he fails to respond to moral reasons that count in favor of giving 99% - people who would be helped by the larger donation will endure avoidable suffering. Let's stipulate that (2) is true and that it is permissible for Pete to give 98% - the fact that more people would be helped by the larger donation is not weighty enough to establish an obligation to donate 99%. For ease of application, imagine that the act in question is Pete's single act of giving away 98% of his income. By giving away 98% rather than 99%, Pete fails to respond to moral reasons that count in favor of the latter, which, according to (1), would render his action blameworthy. But we've stipulated that it is permissible for Pete to give the smaller amount. So, he's blameworthy for giving away 98%, despite the fact that this action is morally permissible (indeed, it is supererogatory in the extreme). Thus, the stated explanation of why I'm blameworthy for keeping my pencil commits us to the implausible position that Pete, the most generous person in the world, is blameworthy for his charitable action. This might put pressure on us to affirm Dana's thesis after all. Since it is not the case that Pete ought to have given away 99%, he's not blameworthy for giving away 98%. Of course, there might be a better explanation of the intuition in the pencil case that doesn't commit us to the problem presented by the case of Pete, but I wonder what that would be.
Al and Clayton, There's something slippery here: "..my ignorance of A's moral significance might be due largely to my insensitivity and that my A-ing might manifest my complete indifference to the suffering of others. Wouldn't we want to say that I'm directly morally responsible for my A-ing because it manifests my indifference to the suffering of others?" I agree that ignorance might be due to insensitivity (which I'm reading as a kind of inability to discern relevant facts), and PK is certainly insensitive to the moral facts. But it doesn't follow from the fact that an agent is insensitive to certain facts that she manifests "complete indifference" to those facts. Indeed, PK is decidedly not indifferent, and certainly not completely indifferent, to morality or to the project of discerning moral facts. She's dedicated her life to that project and has carried it out in an intellectually honest fashion. Would we still want to say that she is directly morally responsible for her wrong actions even if she was only insensitive to the moral facts but not at all indifferent to them? I'm inclined not to say that, especially when the relevant moral facts are rather hard to get right. But maybe I'm just a softie.
In Immunization, the problem is that we think that the mom’s non-culpable ignorance about the harmfulness of the vaccine is the reason that she is not morally responsible for giving her child harmful vaccine. But Al’s suggestion is that she doesn’t freely give the child harmful vaccine, so I haven’t come up with a T-shirt-worthy case. But is there a description of action in the Poor Kantian case that will do the trick? The utilitarian will say that her act of keeping her money was wrong, so let’s try that act description first. The following seems true: Poor Kantian freely kept her money. But, intuitively, because she was non-culpably ignorant of the wrongness of keeping her money, she is not morally responsible (i.e., blameworthy) for keeping her money. So we may have the right kind of case!

Al might respond by saying that there is another description of the action according to which Poor Kantian does not act freely. What Al did in the immunization case was just add the fact about which the agent was non-culpably ignorant to the action description. Let’s do that here: Poor Kantian freely wrongly kept her money. Al might say, “Why think that this is true given that the following are false?”: Poor Kantian knowingly wrongly kept her money. Poor Kantian intentionally wrongly kept her money. We seem to be in the same predicament as before. There is pressure to deny that Poor Kantian freely wrongly kept her money. In order to assess the case in the right way, we must include the moral valence of the action in the act description.

I would like to raise two issues with this imagined response to Poor Kantian. (1) When we make attributions of moral responsibility, do we utilize act descriptions that include the moral valence of the action? I think we don’t. More plausibly, we ask: given that an action under some description is impermissible/obligatory or whatever, is the agent morally responsible for that action? For example, given that it is wrong for Poor Kantian to keep her money, is she morally responsible for keeping her money? If this is right, then we can stick to the first analysis above, according to which my Poor Kantian case may be T-shirt-worthy after all. (2) If we must include the moral valence of the action in the relevant act descriptions, then we have the interesting result that we only ever act freely when we are non-ignorant about the morality of our actions. Here’s a generalization of my imagined response to Poor Kantian: whenever an agent is ignorant, culpably or not, about the wrongness of her action, she does not freely wrongly act. She may, like Poor Kantian, freely A, but the fact that she doesn’t knowingly or intentionally wrongly A supports the claim that she does not freely *wrongly A*. But if such morally valenced act descriptions are the relevant descriptions for moral responsibility attributions, then moral ignorance will turn out to be incompatible with free action and, thus, with morally responsible action. This is a rather striking implication, given the pervasiveness of moral ignorance. Sorry for the long post.
Eddy, My Poor Kantian case is set up so that the agent does something wrong. The very thing of which she is blamelessly ignorant is the fact that her action is wrong according to the true moral theory. In Immunization, the agent did something that was, if not wrong, at least bad, and I guess I’ve just always assumed that agents can be morally responsible, indeed blameworthy, for bad actions that aren’t wrong. But maybe that's a misguided assumption. Michael, I agree that it’s not obvious that the agents in my cases (and Chandra’s) aren’t praiseworthy for their ignorant actions. But I don’t know what to make of that. What *is* obvious (to me anyway) is that the mom in Immunization and Poor Kantian are praiseworthy for the epistemic diligence that preceded their actions. If we bracket that out, however, I’m not sure why one would think that either agent is worthy of praise for their subsequent ignorant action. You might think, “Well, she did what she believed to be right!”. But freely doing what one believes to be morally obligatory (or morally permissible) does not suffice to warrant praise. Right? So, why think that the ignorant actions themselves, separate from the laudatory investigative acts, are praiseworthy? Next, you say I assume that being morally responsible (MR) for an action entails being either blameworthy or praiseworthy for it. You’re absolutely right. But I have a hard time thinking about the epistemic conditions of MR in any other way. I’ll grant that MR in the attributability sense might not have this entailment. But I take it that that is not the conception of MR that we are talking about in this thread. Indeed, I’m not inclined to think that there are epistemic conditions on the attributability sense of MR, or, if there are, they are much easier to satisfy.
Here are two different ignorant agents, both of whom fail to meet an epistemic condition even though they meet the control conditions. The first case involves circumstantial ignorance and the second, moral ignorance. Immunization: A parent wants more than anything to do right by her child. In a discussion with friends about vaccines, a believed-to-be reliable source tells her that immunizations are actually harmful. Her initial skepticism of this person's claim is quickly overwhelmed after some Google searches wherein she finds that seemingly authoritative people are coming down on both sides of the debate. After weeks of intense research she realizes she's totally at sea. She finally defers to the beliefs held by her current pediatrician, who happens to believe the immunizations are safe. The parent decides to immunize. Alas, this physician turns out to be wrong, and subsequent research definitively reveals such immunizations to be harmful. This parent freely exposes her child to harmful vaccine, but, because her ignorance about its harmfulness is blameless, she is not morally responsible for doing so. Poor Kantian: Some version of utilitarianism is true, let's stipulate. Let's also stipulate that moral truths are stubbornly but not completely opaque to investigation and reflection. Poor Kantian is a philosophy grad student working in ethics. She's also idealistic and really wants to get things right. But, after earnestly considering all the arguments and after talking to all the famous Kantians and utilitarians, she just doesn't believe that utility has intrinsic value. She also fails to believe an implication of the true utilitarian theory, which, we can stipulate, requires her to give most of her money to Oxfam. Poor Kantian keeps most of her money. Poor Kantian freely keeps her money, but, because her ignorance about the moral obligation to give most of it away is blameless, she is not morally responsible for doing so.
What about the epistemic condition of moral responsibility? It's something I think about quite a bit, so I'm almost certainly biased. But I really feel like the discussions between Rosen/Levy/Zimmerman and Sher/Fitzpatrick are just getting going. Related to this, I think the tracing literature is off to a great start, and it certainly merits further attention.
Commented Feb 16, 2012 on Crowd-sourcing my SPA Remarks at Flickers of Freedom
When it comes down to choosing whether to turn on EA, I have a hard time being moved by concerns about moral worth. If I am wrong in thinking that it is permissible to buy a BMW rather than donating that money to famine relief, then I think I want to be corrected. Sure, my right action in this case might not be morally worthy, but I didn't wrongly fail to save many lives. Switch perspectives and think about this from the standpoint of all those whom you act wrongly against. How hollow would it sound if you said to them, "Well, I didn't turn on EA because it is important to me that my actions have moral worth, and if I had done the right thing in this case, which I didn't, it would have been for the right reasons." You are basically telling them that it's more important to you that your actions have moral worth in a counterfactual world than it is that you avoid acting wrongly in the actual world. Eric, have you thought about a version of EA that functions 'within deliberation' as opposed to after a decision has been made? Pereboom's latest incarnations of the manipulation argument involve just this sort of thing. Of course, in his example, the manipulators aren't correcting the deliberation but are ensuring that it's incorrect.
Commented Feb 2, 2012 on Ethical AutoCorrect at PEA Soup
Sorry...the third sentence in my comment should read: "In your deliberation, your self-interested reasons are *outweighing* the moral ones, and you are leaning toward avoiding the shame and anguish of apologizing."
Commented Feb 2, 2012 on Ethical AutoCorrect at PEA Soup
What if the mechanism by which ethical autocorrect (EA) worked was to make sure that your deliberations lead you to the right action? Let's say you are trying to decide whether you should apologize for a lie you told to a friend. In your deliberation, your self-interested reasons are being outweighed by the moral ones, and you are leaning toward avoiding the shame and anguish of apologizing. Then EA tweaks a few neurons and, automagically, your deliberations lead you in the right direction. Because of EA, you now see that the moral reasons in favor of apologizing outweigh the self-interested reasons. You do the right thing, for the right reasons, and you come to it by deliberating about where the strength of the relevant reasons lies. You just needed a little help in your deliberation. Interestingly, I don't see why EA's help in this case is substantially different from a standard case of advice. If you were to ask a friend about what you should do about the lie, she might emphasize the importance of acting morally rather than self-interestedly. In both cases, you just needed a little nudge in the right direction.
Commented Feb 2, 2012 on Ethical AutoCorrect at PEA Soup
Cardinal and ordinal proportionality in the news: http://www.npr.org/2012/01/31/146081922/gop-seeks-big-changes-in-federal-prison-sentences Although one of the main complaints seems to be that cardinal proportionality is being violated, the rhetoric slips into a call for ordinal proportionality. These legislators should read FOF!
Tamler, Interesting topic! I have a question about how the presence of a victim diminishes or eliminates the demand for ordinal proportionality. If the presence of a victim makes range-only, cardinal proportionality sufficient, does this entail that it would be fair for a *single* victim to mete out different punishments to two people who are equally blameworthy for the same crime? We readily accept the difference in Lindsey and Lauren's punishment because the victim and punisher are different people. But would we also think that these different punishments are fair if they involved the same victim and punisher? I hate to hijack your post and introduce another new variant of your original case, but here goes:

Imagine that Lindsey, Lauren, and Sam are all siblings. Lindsey and Lauren are both trying to be cool and both end up egging Sam. When the time comes, Sam gives both of his sisters a contemptuous look, and they both feel really terrible and embarrassed about what they did. The next day, Lindsey leaves for camp, with the pangs of guilt still weighing on her. Sam then decides, after reflecting on the previous night's events and feeling newly angry about it, to tell only on Lauren, who is then grounded. We can imagine that Sam doesn't tell on Lindsey because she's away, and nothing will come of it.

On this variant of your cases, both punishments are cardinally proportional, but Sam plays a role in punishing one sister more severely than the other. This seems really unfair to me, but I'm not sure what relevant difference there is between it and your original two cases. As far as I can tell, your view would entail that Sam's differential treatment is not unfair, since cardinal proportionality is satisfied in both punishments. But I have a strong sense that something is unfair here, and that Lauren would have some grounds on which to complain. Am I alone?

One way of responding to this case is to claim that Sam is acting unfairly because he simply has no reason for punishing Lauren more severely than Lindsey. But this is already to give ground to ordinal proportionality proponents who maintain that fairness requires reacting to such reasons. A hardcore cardinal proportionalist will say that the only relevant reasons involve the proportionality of the punishment to the agent's blameworthiness. If these are the only considerations that matter for fairness, then Sam's actions seem on the up and up. Maybe I'm alone in thinking there is something unfair about the variant I proposed? If I'm not, is there some way to make sense of the unfairness that doesn't give up ground to the ordinal proportionalist?
I agree with you, Chandra, about de-emphasizing the role of effort, and my hunch is that the idea of spontaneous emotions might work better. I was sort of accepting the effort interpretation of the results in order to quibble with David's claim that the fact that Raised-racist Tom (Tom-D) has to make more of an effort than Not-raised-racist Tom (Tom-C) means that Tom-D's helping behavior is less concordant with his real self. My point was that in order to make sense of Tom-D actually making an effort, we seem committed to the thought that, at some deep level, he values helping the guy. Btw, is there any reason not to just explicitly state in the vignettes that both Tom-C and Tom-D are making efforts to reform their racist tendencies but that it is harder for Tom-D, given his upbringing? You would lose the ignorance vs. non-ignorance comparison between the two, since they would, presumably, both know that helping is the right thing to do. But you would have a more direct test of the Effort Effect. I'm not a deep-selfer on these issues, btw. Like everyone here, I'm just trying to make sense of these super interesting results.
David, These results are really interesting. I'm currently finishing a dissertation on the epistemic condition of moral responsibility, so I'm thinking about these issues all the time. I appreciate your hard and subtle work on this. I'm a bit puzzled, however, about your response to Chandra, and, specifically, your claim that the results tell against deep self views.

In your response to Chandra, you suggest that the fact that Racist Tom has to put more effort into helping the man might reveal that helping behavior is less concordant with Racist Tom's deep self. Indeed, the thought that Racist Tom's deep self is "bad" is reinforced by the language of the vignette, since it says that he is proud of his racism. But I find this interpretation of what's going on with Racist Tom hard to square with the possibility of him making a lot of effort to help. In fact, it's hard to square Racist Tom's proud racism with his helping at all. One way to square it is to think Racist Tom was just acting on a whim. He's racist and proud of it, he wants to spit on the guy, but, on a whim, helps him. This can't be right because it would not explain the praiseworthiness results you find. If he was acting on a whim, why would he be more praiseworthy? The better way to square it is your way: by thinking that Racist Tom had to put a lot of effort into doing the right thing. But this is only intelligible (to me at least) if at some (deep?) level he really valued helping the guy. Getting up early to run takes a lot of effort, and I can only make sense of my doing it if, at some level, I value running. But this means the helping behavior was concordant with part of Racist Tom's deep self after all. So rather than being at odds with DS views, the praiseworthiness data might be in line with them. Responders might be more willing to praise Tom because they think his non-racist deep self wins out over his perhaps less-deep racist tendencies. In Frankfurtese: Racist Tom strongly desires to spit on the man, but he has a higher-order volition to have his less strong desire to help the man become his will. He's like the unwilling addict who quits.

The worry, of course, is that the vignette leaves little room for this second interpretation of Racist Tom's helping behavior. He is supposed to be proud of being racist. I think my problem can be dilemma-ized: if responders think Racist Tom's deep self is racist, the effort explanation for his helping behavior is unavailable. There is nothing there to fight his racist urges. But this means Racist Tom must have acted on a whim (or something like that), which makes the data on praiseworthiness hard to interpret. On the other hand, if responders think Racist Tom’s deep self is at least partially non-racist, the effort explanation *is* available. Those non-racist aspects of his deep self are fighting his racist urges. But then the praiseworthiness data do not tell against DS views. Rather, they are in line with them. Am I missing something?
Philip Robichaud is now following The Typepad Team
Aug 31, 2011