This is Josh May's Typepad Profile.
Josh May
Birmingham, AL
I am currently Assistant Professor of Philosophy at the University of Alabama at Birmingham. I teach a range of philosophy courses but primarily ethics. Most of my research is in ethics and epistemology, focusing on moral thought, reasoning, and motivation.
Interests: philosophy, music, technology
Recent Activity
This summer I read Shaun Nichols's excellent new book, Bound: Essays on Free Will & Responsibility (2015, OUP). It includes systematic discussion of the relevant experimental philosophy, as well as other empirical research, and how this relates to the problem of free will and related topics (including responsibility and punishment). I... Continue reading
Posted Aug 13, 2015 at Experimental Philosophy
As many readers have probably heard, Amazon's Mechanical Turk will soon (July 22, 2015) be charging substantially more for its services. Before, Mturk took 10% of what researchers ("requesters") paid workers. Now it will be effectively 40% (in the rare case that you need fewer than 10 distinct participants, then... Continue reading
Posted Jun 24, 2015 at Experimental Philosophy
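The fee change described in the post above is easy to see in concrete terms. A minimal sketch, assuming the figures as stated there (10% commission before, an effective 40% after); the function name and the $100 example are illustrative, not from the post:

```python
# Illustrative sketch of the MTurk fee change discussed above:
# commission rising from 10% to an effective 40% of what
# requesters pay workers. Figures and names are illustrative.

def total_cost(worker_pay: float, fee_rate: float) -> float:
    """Total requester cost: worker payment plus MTurk's commission."""
    return worker_pay * (1 + fee_rate)

# Paying workers $100 in total:
print(round(total_cost(100, 0.10), 2))  # 110.0 under the old 10% commission
print(round(total_cost(100, 0.40), 2))  # 140.0 under the new effective 40%
```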
Over at The Dance of Reason (Sac State's Philosophy blog), Dan Weijers reports an interesting experiment involving variations on the usual trolley cases. In short, he finds that: (a) in his sample many more than usual are willing to say pushing the large man in the standard Footbridge scenario is... Continue reading
Posted Feb 19, 2015 at Experimental Philosophy
One of the greatest issues of Ethics has recently been published: Vol. 124, No. 4 (a Symposium on Experiment and Intuition in Ethics). Contents include: Introduction Henry S. Richardson Principles and Intuitions in Ethics: Historical and Contemporary Perspectives David O. Brink Beyond Point-and-Shoot Morality: Why Cognitive (Neuro)Science Matters for Ethics... Continue reading
Posted Jul 27, 2014 at Experimental Philosophy
There is a well-known "mystery" surrounding free will (to use van Inwagen's term): acting freely seems to require both the truth and falsity of determinism. Now that's a serious problem! Here's a way to illustrate it. Suppose determinism is true and you're deciding whether to commit some crime, say embezzlement.... Continue reading
Posted Feb 24, 2014 at Experimental Philosophy
It's great to see this is out! I sure could have used this in Winter of 2009. Hopefully I'll be able to use it in the near future!
I'm jumping in here kind of late, but very neat follow-up, Angel. Jonathan makes a lot of good points following up on the pragmatic idea. I think he's right that there doesn't need to be much flesh on that alternative explanation in order for it to pose a problem. Then again, I don't think the worry devastates your study or anything; it's just pointing to room for further work to make things clearer. I'll add a minor point that's potentially more novel: I think the lack of agreement with you on the import of your results is not due to antecedent bias against pragmatic encroachment or to poorly designed studies. This just seems to me to be a really difficult issue to adjudicate experimentally once we shift to evidence-seeking. The bank cases likely do have their problems, but at least the proponents of contextualism or IRI seemed to make clear predictions about them. These evidence-seeking vignettes, while quite ingenious and avoiding some problems with the bank cases, are dealing with tricky territory, especially since we get further and further away from semantic claims when we leave aside contextualism and just try to adjudicate the debate between intellectualists and anti-intellectualists. Jessica Brown's point in her recent paper---that contextualism is more susceptible to x-phi results than IRI---cuts both ways: intellectualism is less susceptible to them as well. Another minor point: I think your complaint against the studies involving the bank cases (employing the rule of accommodation, etc.) is quite powerful. This is especially so given Wesley and Josh Knobe's new data, which might be seen as indicating a powerful effect of agreement with the protagonist. However, I was wondering: couldn't we avoid the problem by simply not making Hannah assert anything? We could just ask subjects whether she knows (and stipulate that she believes).
This of course wouldn't test DeRose-style contextualism (since he thinks they have to be evaluating truth-claims), but it could potentially make predictions about IRI and other anti-intellectualist views. Maybe this could help tip the scales one way or the other in these evidence-seeking studies. In other words, perhaps we shouldn't give up on the bank cases paradigm, so to speak, if it can be improved. What do you think?
Angel, Good point about making people think the protagonist is especially good at proof-reading. I like the probe you posed. That might get around that problem. About the means, did you report the medians? If not, I'd be interested to see what they were. In particular, if they're too close together (e.g. 3 and 4), we might not then expect clearly different predictions by the competing hypotheses.
Hi all, This is a very interesting paper and discussion! I think we all agree that no explanation of the data is *obviously* the correct one and requires *no* further testing. With that said, we can still speculate and guide further research, as Angel suggests. (1) The Mere "Should" Explanation And here's my speculation. It looks like we have the following from Angel's and Wesley's data. Angel summarizes it nicely in one of his comments above (though not for the purposes of a summary!): "I [Angel] asked another group of high and low stakes subjects this normative (non-knowledge) question: "how many times do you think Peter should proofread his paper before turning it in?" I got answers that were very close to the answers people gave to the knowledge question (just like you [Wesley] did for belief)." So what we have is evidence that people aren't, to put it fairly neutrally, treating questions about these vignettes any differently whether they ask about: (a) belief, (b) knowledge, or (c) normative non-knowledge. Again, things are unclear, but why isn't my/Wesley's "(merely) normative explanation" a nice one here? It holds that subjects are just reading the questions all as something like the questions Angel asked in his normative-non-knowledge probe, viz.: "How many times do you think Peter should proofread his paper before turning it in?" Now, Angel's main objection to this seems to be an appeal to his CRT checks. But, as I mentioned in the previous thread and Wesley mentions here, this explanation needn't hold that they are making a system-2 error or what have you. If it's a perfectly felicitous, pragmatic, or whatever reading, even for people who have high IQs, then the explanation needn't conflict with the CRT results. This admittedly isn't a fleshed-out explanation. To be fully sustainable, it would need to say more about what the alleged pragmatic phenomenon is, without simply saying "or whatever" like I did.
But I'm just suggesting this is a reasonable hypothesis in need of some further testing. (2) A Test for Mere "Should" Speaking of which, I thought my suggestion in the previous comment thread was a good test that could tease out these issues. It certainly isn't sure-fire, but it might help. In fact, I now have an idea for a modification given Wesley's data. Pose to subjects the following after they've read either a Low or High Stakes Typo vignette: "Peter [John] proofread his paper 2 [5] times, and he now believes there are no typos. Does he know there are no typos?" This would be a 2x2 (Stakes vs. Proof-readings) design. The numbers of times correspond to what, in the previous studies, participants seem to say Peter should do given the stakes. But I think this really draws out that the question is about knowledge, and it stipulates right next to it that Peter believes. So if subjects tend to say Peter and John both know (between subjects, of course, with some sort of Likert scale), then this might be evidence for the Mere "Should" Explanation. If not, maybe not. But the resulting data could open up new issues.
Angel, Adam Feltz confirmed that their original, non-reverse-scored mean for their High Stakes case was 3.74. Also, I didn't mention previously that it's a bit difficult to navigate this issue because their scale is the reverse of what we used! They have 7 as strongly DISAGREE while we had it as strongly AGREE. So, F&Z have 3.74 in High Stakes where Hannah denies herself knowledge and 3.68 in Low Stakes where she attributes it to herself, which are both around slightly *agree*. So it seems their subjects were on average slightly agreeing with both the self-ascription and denial. Furthermore, their mean for High Stakes then is not quite like ours since it was on our agree side of her attributing knowledge (4.6). So, to be clear, our subjects tended to agree with High Stakes Hannah's ascription of knowledge to herself while F&Z's subjects tended to agree with High Stakes Hannah's denial of knowledge to herself. So my attempted rebuttal to your objection was a failure! It would be really interesting to test this hypothesis out in a bit more detail. For example, a 2 x 2 study varying Stakes and Attribution-Valence. This could be really important for the epistemology debates but also for x-phi generally. Of course, I would think that the epistemic views on offer would at least initially predict the relevant responses regardless of the attribution's valence. That is, if stakes-sensitivity really is this obvious, widespread, common phenomenon that philosophers thought was clearly exemplified in these cases, then the predictions should have at least *initially* been independent of whether it was attributed or denied. Maybe not for DeRose since he fairly explicitly thinks this matters. But not for Stanley and some others. It's not that they can't slightly modify the relevant argument for their views now in light of this. I just think it's still important and interesting that we didn't get the drastic change in judgments that arguably many expected.
On that note, that's something that I find somewhat lacking in your method of testing for stakes-sensitivity (via evidence-seeking cases). We can't really see *directly* whether stakes changes people's ordinary *judgments* about knowledge. At least, it's not so clear. Seeing it affect their practice in some way or other is one thing; an explicit change in judgment about a case based on varying simply the stakes is another. Anyway, just some thoughts.
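The reverse-scoring at issue in the exchange above is just a linear flip: on a 1-7 Likert scale, a response x maps to 8 - x, and the same transformation applies to a mean. A minimal sketch using the F&Z means discussed above (the function name is illustrative):

```python
# Hedged sketch: converting between original and reverse-scored means
# on a 1-7 Likert scale. Reverse-scoring maps x to (7 + 1) - x, and
# since the map is linear it applies to means as well as raw responses.

def reverse_score(value: float, scale_max: int = 7) -> float:
    """Reverse-score a value on a 1..scale_max Likert scale."""
    return (scale_max + 1) - value

# F&Z's reverse-scored High Stakes mean of 4.26 corresponds to an
# original (non-reverse-scored) mean of 3.74, and vice versa:
print(round(reverse_score(4.26), 2))  # 3.74
print(round(reverse_score(3.74), 2))  # 4.26
```

The flip also explains why the same number can sit on the "agree" side of one team's scale and the "disagree" side of the other's when the anchors are reversed.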
Hi Angel, On (1): Good point about Wesley's new data. However, it's very new and many of the details aren't out there; he just has the short write-up. So I'm not sure how much weight to put on it just yet. You're also right that F&Z aren't as clear about that reverse scoring as one would hope. I think my comment wasn't very clear either! I just noticed a couple of confusing typos. Sorry about that. But I think it came across well enough. As you point out, it just depends on whether I'm reading F&Z correctly. Maybe I'll email one of them directly and ask.
Very interesting paper! It's great to see the use of new methods for approaching the issue of folk sensitivity to stakes. Here are some thoughts, for what they’re worth. (1) Previous Studies Your main complaint about our previous failures to find significant folk sensitivity to stakes is that asking subjects about a sincere assertion of knowledge (or denial) creates a bias for subjects to just agree with the protagonist, via e.g. the rule of accommodation (p. 9). But Feltz & Zarpentine used Stanley's exact cases, which include a sincere *denial* in High Stakes; yet they found that subjects tended to slightly agree on average (see p. 41, n. 6). Their mean for High Stakes with the denial of knowledge was *reverse-scored* (see n. 4), yielding a mean of 4.26. So presumably it would have been around 3.74 originally, which is on the *disagreement* side of the midpoint. Yet the 4.26 reverse-score is extremely close to what we got for the same sort of case with the ascription rather than denial: 4.6 (see Table 1, p. 270). The sample sizes might not be large enough to really tell a great deal here. But this is some evidence that people tend to *disagree* with the protagonist when she denies herself knowledge in just the same way as they *agree* when she attributes it. This is at least some positive evidence that the rule of accommodation is not creating a bias here. (Wesley's new study, posted recently on this blog, might be relevant here as well.) Or am I missing something? You also register, in the first paragraph of sect. 4 (p. 10), another worry about the previous studies. You worry about whether between-subjects studies can get each group of participants thinking the protagonist has the *same epistemic position*. You say this is especially difficult since subjects are usually only given one case. But we did a within-subjects design and found no difference. There was an order effect, but the juxtaposition in general didn’t change things.
So I’m not sure about your raising this as an issue at all. (2) Performance Error Explanation Like Jennifer, I worry about your basic idea, i.e. about whether using "evidence-seeking" experiments does clearly support IRI over other views, especially intellectualist ones. I'm not sure that enough is done to rule out some sort of performance error or pragmatic explanations of all the results. For example, it could plausibly be, I think, that the subjects tend to glide over the knowledge part of the scenario and focus more on the practical situation instead. Although only one protagonist has high practical interests, they both face salient practical problems. Yet the only measure taken to make sure people weren't just reading it this way seems to be your comparison in Study 3 of the group’s results with the subset who did well on the CRT (n. 37, p. 22). But I’m not sure this rules out all contrary explanations. Your characterization of the potential objection in that footnote seems a bit stronger than it need be. You suggest the rival explanation has to say subjects gave an "improper reading" of the knowledge-prompt and ignored half of it. As you say, that's pretty implausible. But couldn't the objector say more plausibly that subjects were simply picking up on the more salient features of the scenario? Just to throw it out there, the context to me seems like one in which it would be rather natural to just read "How many times should Peter proofread before he knows there are no typos?" as "How many times should Peter proofread for typos?" My off-the-cuff suggestion is that there might be a plausible way to construe a kind of performance "error" model that even the high CRT-scorers might be susceptible to. I put "error" in quotes because this model holds that it's a natural pragmatic phenomenon (or something like that), which might not be much of a reprehensible error. After all, some pragmatic phenomena needn't really be construed so strongly.
Perhaps a way to test this would be to separate the issues out a bit. You could ask subjects (in the first experiment e.g.), “Peter [John] proofread his paper 2 [5] times. Does he now know that there are no typos?” The numbers of times correspond to what the subjects previously said he should do. So if that then matches up with a question that is explicitly just about knowledge, that might be more solid evidence that they are connecting the two. Much of this involves empirical claims. But they’re worries nonetheless, though perhaps not devastating or anything. In any event, you’ve gathered some very interesting data. Thanks for posting the paper!
This is a very interesting proposal, Jonathan. Here are two initial and related thoughts, for what they’re worth: 1. Pluralism It’s a bit unclear, but you seem to be denouncing pollism to a rather large extent. After all, you do say, regarding pollist arguments for contextualism, that “one need not -- *should not* -- be a pollist about these sorts of questions” (emphasis added). You also say that “for *many* epistemological debates, pollism is a mistaken way to think about a possible x-phi contribution” (emphasis added). Then again, you say that the experimentalist approach allows us to “also” argue in the experimentalist way, suggesting that both methods are useful here. Either way, I’d emphasize the importance of a thoroughly pluralistic view according to which each method has pros and cons in different contexts, even in the context of arguing about contextualism. It seems the value of each method just depends on what the goals of the researchers are. (Your focus here is on epistemology, but it sounds like you think pollism is an inappropriate way to explore many philosophical issues.) You do seem to hold a more pluralistic view, though. After all, it seems your denouncing pollism is primarily to ward off Keith's move of saying that he doesn't make predictions about specific cases. And that seems like a good point to press him on. So this is perhaps merely a call for clarification on what the scope of your critique here is. 2. Effectism While I'm a radical pluralist with respect to methodologies (well, with respect to lots of things really!), I don't think pure "experimentalism" is without its problems. Again, you might really hold a rather pluralistic view, but let me press some worries about going *solely* with an experimentalist approach (perhaps what we might label as “effectism”). That is, I worry about attempting to adjudicate epistemological (or other philosophical) debates by merely looking for effects, and not specific kinds. 
Simply looking for any kind of effect does seem to be the primary tack for experimental psychologists, but this often seems appropriate only because of the questions they’re interested in answering. (And, for some of their questions, I'm not so sure it *should* be their sole methodology.) For example, psychologists say that the cleanliness of a room affects moral judgments. Leaving it there is typically fine for the interests of psychologists. They’re usually just interested in effects, whatever they might be. But philosophers typically aren’t just interested in effects. Lots of things affect our “intuitions” in minor ways. Thought experiments, whether in epistemology or other areas, usually focus precisely on pairs of cases that are meant to entirely *flip* judgments. Consider the trolley cases, tactical/terrorist bombers, Gettier cases, Dretske’s zebra cases, barn cases, etc. The idea isn’t merely that in a non-zebra case we’re confident it’s knowledge while in a zebra case we’re not quite as confident. On the contrary, we’re supposed to be fully attributing knowledge and then denying it. (I know, as I did at the Buffalo X-Phi Workshop/Conference, I’m raising this worry like a broken record!) So it seems experimentalism won't always (and perhaps rarely) get the results philosophers are interested in. I mean, I'm sure many disparate things do affect the *mean* responses concerning knowledge attributions. This could likely include really arbitrary factors, such as mood, SES, glucose levels, the cleanliness of the room, etc. But if we don't have any evidence that this actually tends to change people's *judgments* about what counts as knowledge (as I'd contend effectism often leaves us), then the results are likely not of significant interest to philosophical inquiries. Again, you might not be pushing effectism (sole experimentalism).
If so, I’m hoping only to raise the issue here about the limits of pure experimentalism since we’re on the topic of the limits of pure pollism. And I’m really interested in what you think about this. I imagine Keith and others would also press this kind of worry about bringing in the non-pollist methodologies to address contextualism and other key epistemological theories.
Toggle Commented Jun 22, 2010 on On Pollism at Experimental Philosophy
Ah, it looks like my comment about Mele was in the queue while his was as well! In any event, thanks to everyone so far for the discussion and to Josh K. for posting this. We've been receiving some great feedback!
Philoponus, Good point about the courage issue with Experiment 2. Al Mele (whom we stupidly forgot to acknowledge in the paper!) has actually suggested this to us too. The main worry we have with this explanation is that it can’t uniformly explain the drop in agreement: in two of the four cases Carl doesn’t actually jump (and so doesn’t exhibit courage). But it certainly could be playing some role as well. If we’re able to make more changes to the paper, perhaps we can mention this. On the skeptical view: In his paper Mele actually did ask his subjects what they think weakness of will is. And, of course, the results were all over the map. However, as we say in our paper, we’re not sure why we should put any weight whatsoever on such results in our theorizing. After all, even if ordinary folks possess a shared concept, we wouldn’t expect them to be able to articulate the principles that govern its application. (Compare a linguist who says there are no shared principles of grammar because ordinary people can’t articulate the same principles, etc.)
Richard C., Chandra, and Adam L., Thanks for the comments on Experiment 3 and the sort of Deep Self issue there. This is a really interesting alternative to consider. By way of reply: First, "overcome by a feeling of compassion" is Josh K.'s phrasing. We just said he "gives in." ("He thinks it would be better to stay home and read as planned, but he gives in and goes with them.") So I don't think we led subjects to believe much about the agent's deep self, etc. one way or the other. But, as Chandra and Adam point out, the subjects may have assumed one over the other. (Chandra, I just saw you have a forthcoming paper developing a Deep Self account of the Knobe effect, etc. I haven't read it, but it looks very interesting and relevant here!) Given the results from Pizarro et al. that Adam mentions (which are very interesting), we might have some reason to think people generally assume the best, so to speak, of people's deep selves, all else being equal. Then again, I'm not sure that all else is equal. I'd have to look at the Pizarro et al. paper, but one thing that might differentiate our case (at least in the third experiment) is that the bad agent is part of a Neo-Nazi group. I wouldn't be surprised if knowing this about the agent makes subjects less likely to give his deep self the benefit of the doubt. They might even assume he's normatively incompetent ("insane" in Wolf's technical terminology)!
This looks like a really great chapter, James! It should be very useful, as you cover a great deal of terrain. If you're looking to cut some material, I could see limiting coverage of the general methodological issues toward the end. Some of your discussion is really great, in my opinion, though. I especially like your response to Sosa's challenge (pp. 15-6). I found one typo, for what it's worth. On p. 16, just before the long Devitt quote, you have "are" twice---once in the short quote, once right before it.
I'm certainly glad to see an agency blog starting up to fill the void. Thanks for getting this going!
I was able to download the papers from home without using institutional access. A very fine limited time offer indeed!
Thanks for posting this, Jonathan! Sorry I'm sort of late to the discussion. While I share Sam's worry about the speed at which Mturk subjects work, it might not be such a bad thing. After all, most of the time we want snap, intuitive judgments based on simple cases. And these Mturk subjects have the monetary incentive to both move quickly (so they can do more HITs) as well as to comprehend the material (else they don't get paid). Given Chandra's positive experience comparing data from more usual subjects, this sounds like a good resource for more than just piloting. A major issue then is whether Institutional Review Boards would be at all wary of the use of human subjects on Mturk. My IRB at least is quite picky!
Great book, Tamler! I was compelled to write a short review of it for Metapsychology. It just got posted here if anyone wants to check it out: http://metapsychology.mentalhelp.net/poc/view_doc.php?type=book&id=5317&cn=135
I was compelled to write a short review of it for Metapsychology. It just got posted here: http://metapsychology.mentalhelp.net/poc/view_doc.php?type=book&id=5317&cn=135
I just read the book during my flights from the East Coast to the West. I just couldn't put it down! As Pinker says in his blurb, it's thought-provoking as well as entertaining. Anyway, I really enjoyed the book, Tamler! I think many other x-phiers would too.
Interesting paper, guys! I don't have a great deal to say, but here are a couple of minor things: (1) It's a bit confusing to have one of your independent variables called "Happy" when the dependent variable is judgments of happiness. In general, it's kind of weird to be inquiring into our concept of happiness but then seemingly using it as an independent variable. Why not have the one independent variable labeled something like "Feelings" (varied as either "Feels Good" or "Feels Bad"). After all, the vignettes seem to be operationalizing it by employing terms like "feels." (2) On p. 3 of the Supplement, you seem to be missing an F value for the main effect of Mental State. And is the p value there supposed to be .03 instead of .93? The "9" on the keyboard is next to the "0" and .93 sure ain't significant! :) By the way, I've never heard of this Amazon Mechanical Turk before. I'll look into it, but it sure would be great if you guys posted something on the blog here about how to use it to solicit subjects, especially if it's quick, easy, and not very expensive.
Josh, Yes, as usual, you draw out nice testable predictions! However, it might need to be weakened a bit, I think. After all, subjects were asked about a modal claim---about whether they agree that one of the people *must* be wrong. To disagree with that statement is just to be committed to the claim that it's *possible* that they're wrong. And I think people might hold to that while still reasonably holding that it's (probably) wrong to randomly stab Pentars. (That is, they think it might be okay, but if pressed they will err on the side of caution and say it's wrong.) So, on the hypothesis I was proposing, I think the better question to ask subjects would be something like: Is it possible that it's morally okay for Pentars to stab each other to test the sharpness of their knives? And this, then, may track their judgments about disagreements between judges.