This is Jonathan Phillips's Typepad Profile.
Jonathan Phillips
Recent Activity
Dylan - okay cool, I see the basic idea now. This definitely seems possible to do in a study, and I agree that as long as the knowledge of the alternative possibility is of the conditional form (if I had not been manipulated by B, then I could have not done p), it's not problematic to include it. It seems like the minimal pair we'd want is to include that conditional statement in both conditions, but to have the agent know the conditional to be true in only one of them. That should help us avoid any changes in participants' perceptions of the metaphysics of the scenario. Does that seem right to you? I was also wondering what you thought of Eddy's idea and my response to it. Between his idea and yours, it seems like we have a few further things we could try testing out. What do you think? If you guys are game, we could all put our heads together on this.
Eddy, interesting thought! I had exactly the opposite intuition -- that these cases were so constraining that it didn't matter whether or not the agent had knowledge, because they couldn't have done otherwise regardless. I'm intrigued by your idea though. I definitely agree with your thought that the cases I used aren't cases of bypassing -- the agents' decisions are always made through their normal decision-making processes. Do you have a set of cases we could use that tend to involve bypassing (or, better yet, encourage some participants to perceive the agent as bypassed while leaving others perceiving the agent as not bypassed)? It seems like if we could find a set of cases like this and introduced knowledge of manipulation, then we could use the difference in perceptions of bypassing as a pretty clean test of your hypothesis. I'm thinking you and Dylan might have had some cases that did this -- is that right? What do you think about this approach?
Dylan, yeah, I totally agree that the right way to try to figure this out is to think carefully about why we would have thought knowledge would matter in the first place, and then begin testing whether knowledge is affecting the things we think it should be. I was thinking along the lines of it mattering for the extent to which we represent alternative courses of action as being open to the manipulated agent. (This was why I included the question about 'had to' in the second study. Of course, knowledge turned out not to influence answers on this question either, and so the remaining question is *why* that would be the case.) Anyway, I liked your idea for a follow-up study using deliberation to make the alternatives clearer. My one worry about that approach is that we would want to make sure it's the agent's knowledge (not the deliberation itself) that is driving any results. To do this, I think we'd have to include the agent deliberating in all three cases, and then see whether knowledge plus deliberation is enough to change the effect of manipulation. Does that seem right to you?
Florian, nice point - you're obviously totally right that the effects here are only surprising if we took the reduction in moral responsibility between the manipulation and no-manipulation cases to be due to an undermining of some variable relevant to the agent's free will. Your alternative suggestion for what may be explaining the reduction in moral responsibility is a good one. I was actually really worried about that alternative explanation as well, and so I ran a series of studies to test it. You can see the results of those here (Studies 4a and 4b): http://people.fas.harvard.edu/~phillips01/papers/Phillips_Manipulating_Morality.pdf#page=21 In brief, what we end up finding is that the difference in participants' judgments doesn't seem to arise because they are attributing more blame to the manipulator. Rather, the reduction seems to arise because they are more inclined to think the agent was controlled by the manipulator (which suggests that the reduction in moral responsibility is due to undermining something relevant to free will). The studies in the paper above should make this much clearer. I probably should have said this at the very beginning, but the reason I used these particular cases was that they had previously been shown to be pretty clear cases of reduced moral responsibility from free-will-undermining manipulation. The thought was that, given such cases, giving the agent knowledge should really reduce the effect of manipulation on moral responsibility, but that's exactly what I didn't find. Anyway, I'd love to hear your thoughts about this response. I am definitely a little stumped, and the pattern here is definitely making me question whether there might be some other potential explanation of the original effect of manipulation (even if it's not a redistribution of blame).
Cool idea, Ambivalent! If I'm understanding your suggestion correctly, your basic thought is that knowledge really might affect the moral responsibility of manipulated agents, but it would only do so when the agent has some way of escaping the situation that the manipulator has put them in. I think you could really be onto something here, and it'd be cool to figure out a way to test it. Can you think of a way of changing any of the scenarios we've used so far to implement this basic idea? I'd be willing to run a study on this to see if it's right -- I think it could be really helpful in understanding the psychological processes behind manipulation/moral responsibility. Here's a link to the scenarios again: https://www.dropbox.com/s/2ddij6269ziwq22/Scenarios.pdf?dl=0 Also, I really haven't thought much about how these studies might tie in with the nudge literature, but I agree it's worth thinking about more. Does anyone know if there is already work on whether people see nudges as reducing people's moral responsibility? It seems like the sort of thing that has probably been done, no?
Angra, this is a really nice set of hypotheses, and I think you're right that, taken together, they would explain why knowledge is not relevant here. I actually had a similar question about whether the basic effect of manipulation could be explained by differences in the perceived situational constraint faced by the agent. To test this possibility, we asked participants two questions: (1) whether they agreed that the 'manipulator' (e.g., the government) *made* the agent do the immoral action, and (2) whether they agreed that the *situation caused* the agent to do the immoral action. What we ended up finding was kind of surprising. Participants agreed more that the manipulator made the agent do the immoral action when the manipulator acted with the intention of getting the agent to do the action. However, there was no corresponding difference in judgments about situational constraint. That is, they didn't agree more that the situation caused the agent to do the immoral action when the manipulator acted intentionally. The interaction was actually pretty large too (if you want to see the details of that study or the graph of the results, you can find it here: http://people.fas.harvard.edu/~phillips01/papers/Phillips_Manipulating_Morality.pdf#page=31 ). So basically, I agree that we shouldn't expect knowledge to play much of a role if the original effect were just due to straightforward situational constraint. However, I was also thinking that since the original effect doesn't seem to be about this, but instead about whether the manipulator has the intention of getting the agent to do the immoral action, the agent's knowledge might matter for her moral responsibility. I guess I was wrong though! Anyway, I'd love to hear what you think of this response, any further thoughts you have on the study I just mentioned, or even other ways of trying to make sense of all of this!
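In case it helps make that design concrete, here is a rough sketch in Python of the kind of interaction test involved. To be clear, this isn't our actual analysis code: the cell means, sample size, and condition labels below are all made up for illustration.

```python
# Sketch of a 2 (manipulator intent) x 2 (question type) design.
# All data below are simulated; nothing here is real study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 50  # hypothetical participants per cell

def cell(mean):
    # Simulate 7-point agreement ratings, clipped to the scale.
    return np.clip(rng.normal(mean, 1.2, n).round(), 1, 7)

df = pd.DataFrame({
    "rating": np.concatenate([
        cell(5.5),  # intentional manipulator,  "made her do it"
        cell(3.5),  # unintentional,            "made her do it"
        cell(4.0),  # intentional manipulator,  "situation caused it"
        cell(4.0),  # unintentional,            "situation caused it"
    ]),
    "intent": (["intentional"] * n + ["unintentional"] * n) * 2,
    "question": ["made"] * (2 * n) + ["situation"] * (2 * n),
})

# The interaction term carries the claim: intent should matter for the
# "made" question but not for the "situation" question.
model = smf.ols("rating ~ intent * question", data=df).fit()
print(anova_lm(model, typ=2))
```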
Nathan, thanks for pointing this stuff out! I think you're totally right that these things might be confusing about that particular scenario. One way we tried to address this sort of worry was by using 5 different scenarios which differed in tons of ways from each other (e.g., one was about a mother-in-law who gets her daughter-in-law to break into a pharmacy, one was about a fisherman who steals another person's boat to escape an oncoming flood, and so on). The idea behind doing this is that even if each individual scenario has a few problems, it won't be the case that all of the scenarios have the same problems, so if we find an effect across all of the scenarios, we can be pretty sure it was not due to any one particular small flaw. The pattern we found was pretty robust across these different scenarios: for each of the five scenarios we used, we found no effect of knowledge, but an effect of whether or not the agent was manipulated. In case it's helpful, I've put up all 15 of the vignettes we used (5 scenarios, each with 3 conditions) here: https://www.dropbox.com/s/2ddij6269ziwq22/Scenarios.pdf?dl=0 These comments are really helpful though, and I'll definitely keep them in mind when we use this same scenario in future studies!
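To make the logic of that multi-scenario design a bit more concrete, here's a rough Python sketch of the analysis pattern. Again, this is just an illustration: the scenario labels beyond the two mentioned above, the cell means, and the sample sizes are all invented.

```python
# Sketch of the 5-scenario x 3-condition pattern with simulated data:
# manipulation lowers responsibility ratings, knowledge adds nothing.
# Scenario names after the first two are placeholders.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
scenarios = ["pharmacy", "boat", "scenario_3", "scenario_4", "scenario_5"]
means = {"no_manipulation": 5.8,           # assumed cell means
         "manipulation": 4.5,
         "manipulation_plus_knowledge": 4.5}
n = 40  # hypothetical participants per cell

rows = []
for scen in scenarios:
    for cond, mu in means.items():
        for r in np.clip(rng.normal(mu, 1.3, n).round(), 1, 7):
            rows.append({"scenario": scen, "condition": cond, "rating": r})
df = pd.DataFrame(rows)

# Within each scenario: manipulation vs. no-manipulation should differ,
# while manipulation vs. manipulation-plus-knowledge should not.
for scen, g in df.groupby("scenario"):
    manip = g.loc[g.condition == "manipulation", "rating"]
    base = g.loc[g.condition == "no_manipulation", "rating"]
    know = g.loc[g.condition == "manipulation_plus_knowledge", "rating"]
    p_manip = stats.ttest_ind(manip, base).pvalue
    p_know = stats.ttest_ind(manip, know).pvalue
    print(f"{scen}: manipulation p = {p_manip:.4f}, knowledge p = {p_know:.4f}")
```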
This past semester I was working with Fiery Cushman and an RA (Becca Ramos) on some studies on manipulation and moral responsibility, and we ended up with some findings that I'm completely puzzled by, so I thought I'd see if you all have any ideas about them. (In case you...
Posted Jan 11, 2016 at Experimental Philosophy
At this point, it is pretty clear that people’s moral judgments affect a surprisingly large number of their judgments that do not seem to be straightforwardly moral (e.g., belief, causation, doing vs. allowing, freedom, happiness, innateness, intentional action, knowledge, love, and so on). The sheer number of different judgments...
Posted Sep 3, 2015 at Experimental Philosophy
Josh Knobe recently prompted a discussion on Flickers about a forthcoming paper that offers a psychological explanation of why manipulation intuitively reduces the extent to which agents are blamed for doing immoral acts (full disclosure: I'm an author on that paper). Given that discussion, I thought it might also be...
Posted Sep 3, 2014 at Experimental Philosophy
Sinduja, thank you for your comments. You touched on a lot of different things, but I want to address two of the main ones. First, you followed up John Wehrle's comment on the issue of the relationship between rights and freedom. My position on this is something like the following: we understand ourselves as free to do things we don't have the explicit right to do, and thus it seems that, if we really want to discuss freedom and folk intuitions, it might not be helpful to limit the discussion of freedom to just those things which we have the right to do. The second point to which I want to respond is your idea that there is some sort of cultural relativity to this idea of rights and morality. I fully agree with you, and I address this in the conclusion of my paper as well. I think one of the problems with folk intuitions about freedom is that they are going to be relative to what is moral for a given individual or culture. In this sense, the folk understanding of freedom may not be very functional if we imagine a situation in which a group of diverse individuals try to come to an agreement about how much a certain law will diminish citizens' freedom. However, there is also some strength in this understanding of freedom-as-relative, in that it allows for a flexibility which many other, more rigid theories of freedom have lacked (and suffered for). I think what is most important here is not to determine whether folk intuitions of freedom do or do not create a tenable/coherent theory of freedom, but instead to understand how the folk actually understand freedom. This understanding may give more weight to one or another philosophical theory of freedom, and at the very least will allow us to recognize when and where we are diverging from folk intuitions in further philosophical discussions about freedom.
Jussi, the concern that people are responding to these surveys in a way that doesn't truly reflect their typical use of 'freedom' is a good one to bring up. While the surveys were clearly anonymous, there is certainly still the chance that people are responding in the way they think they should, rather than according to how they actually judge the reduction of freedom. Part of this problem seems inherent to the experimental study of intuitions involving morality. Interestingly though, participants in this survey had no problem saying that the law did in fact diminish Tanya's freedom. Even in the case in which Tanya wanted to hurt the minority but was stopped from doing so, her freedom was judged to be reduced just over 3.5, about halfway on a 7-point scale (1 was labeled "not at all" diminished, and 7 was labeled "completely" diminished). It was simply the case that when Tanya was stopped from helping the minority, her freedom was judged to be much more reduced. Thank you for bringing up the point; it is a good one to discuss, especially in experiments involving difficult moral situations. However, there is a similar, more general worry that people's judgments were influenced by a value judgment of some part of the situation, rather than being simply a judgment of freedom. One particular worry was that participants may have thought a law which restricted Tanya from helping a minority was simply a bad law, and therefore diminished freedom more. I conducted another study (included in the paper and mentioned in response to Dr. Weinberg's comments) in which a law was enacted simply because of an irrational fear on the part of a dictator's wife. Participants were asked to judge whether or not this law was a good law, and then were given one of two situations: one in which a woman named Katya was restricted from a morally good action, and one in which she was restricted from a morally bad action. While participants overwhelmingly judged the law to be a bad law, the difference in terms of the loss of freedom in the two cases was still statistically significant. Given that this type of bias could not explain the survey results, I suggest that perhaps it really is something about the morality of the restricted action which is creating the difference in judgments of freedom.
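For anyone who wants to see the shape of that comparison, here is a minimal Python sketch with simulated ratings. The means, spread, and sample size are assumptions for illustration only, not the actual survey data.

```python
# Sketch of the two-condition freedom-rating comparison; the means,
# spread, and sample size are invented, not the actual survey data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 60  # assumed participants per condition

# 1 = "not at all" diminished ... 7 = "completely" diminished
stopped_from_bad_action = np.clip(rng.normal(3.5, 1.5, n).round(), 1, 7)
stopped_from_good_action = np.clip(rng.normal(5.5, 1.5, n).round(), 1, 7)

t, p = stats.ttest_ind(stopped_from_good_action, stopped_from_bad_action)
print(f"stopped from good action: mean = {stopped_from_good_action.mean():.2f}")
print(f"stopped from bad action:  mean = {stopped_from_bad_action.mean():.2f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```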
Dr. Weinberg, thank you for your comments. I think your proposal about there being other laws already restricting certain immoral actions is really interesting and definitely could be influencing people's judgments. Fortunately, this proposal can be easily tested. If we imagine a case in which an action has no law restricting it but is still considered morally wrong, do judgments of freedom differ from those in the first survey when that action is restricted? I conducted another study (included in the paper) in which a law was enacted simply because of an irrational fear on the part of a dictator's wife. The law subsequently restricted an action which a woman named Katya was going to perform (in one variant it was a morally wrong action, and in the other it wasn't). In this case, there was no law previously restricting either action; in fact, both actions had been going on for a long time, and while the dictator knew about them, he didn't care enough to stop them. Participants still judged Katya's freedom to be much less diminished when she was stopped from doing the immoral action. Given that the proposal in this case could not explain the results, I propose another explanation, which you mentioned in your second comment (in response to Brandon's): specifically, that participants in the surveys did not consider Tanya or Katya to be free in the first place to perform morally wrong actions. As for continuing to use the term "concept", I think that is an accurate critique. Do you know of a better term to describe the cognitive processes which determine how we use a particular idea? We often talk about intuitions, but isn't there some understanding of the idea of 'freedom' which makes it applicable in certain situations and not in others? What should we call this latent understanding? Thank you again for your comments and suggestions.