This is Mark Phelan's Typepad Profile.
Mark Phelan
Recent Activity
I’ll respond to the questions about the analyses since, in connection with Experiment Month, I ran them. Ray, thanks for the question. The reason I opted for running T-tests rather than an ANOVA had to do with the fact that the numbers of Millais paintings vs. Kinkade paintings were so different. Remember, the study looked at sixty paintings, 12 of which were late Millais paintings and 48 of which were Kinkade paintings. The relevant analyses—the means of which are captured in the second bar graph—compare via T-tests 1) mean liking ratings for the 6 single-exposed Millais paintings to mean liking ratings for the 6 multiply-exposed Millais paintings, and 2) mean liking ratings for the 24 single-exposed Kinkade paintings to mean liking ratings for the 24 multiply-exposed Kinkade paintings. In other words, the dependent variable for each comparison is an average of the liking ratings participants gave for all the paintings in the relevant condition. But since the number of paintings in each of the Kinkade conditions was much higher than the number of paintings in each of the Millais conditions, there’s a sense in which the dependent variable for the Millais paintings is not the same dependent variable as for the Kinkade paintings. True, both are average liking ratings; however, they are averages over very different numbers of paintings. Doing T-tests on each painter instead of an overall ANOVA standardized the number of paintings from which the dependent variable is constructed—for one T-test, we are comparing the average liking score for 6 singly exposed Millais paintings to the average liking score for 6 multiply exposed Millais paintings; for the other T-test, we are comparing the average liking score for 24 singly exposed Kinkade paintings to the average liking score for 24 multiply exposed Kinkade paintings. (Perhaps Margaret can address the reason why the numbers were so disparate for the good and bad artist. 
I believe it had to do with the desire to avoid canonical good landscapes and the difficulty of finding a large number of those by the same artist.) I’m willing to consider, though, that perhaps the fact that the average is based on such dissimilar numbers of paintings is irrelevant (after all, it is the same dependent variable in the sense that it is an average liking rating). In that case, the ANOVA is the appropriate analysis. Thus I have now gone back and run a two-factor (Artist, Exposure) repeated-measures ANOVA. Here, we find a main effect for Artist (Millais preferred to Kinkade, p=.011, Greenhouse-Geisser test), and a main effect for Exposure, in the opposite direction of Cutting (single-exposed preferred to multiply-exposed, p=.022, Greenhouse-Geisser); however, the interaction is not significant (p=.095, Greenhouse-Geisser). So, these results tell against Cutting's hypothesis—here mere exposure resulted in overall less preference. But these results don't, in and of themselves, reveal a difference for good and bad art. However, the overall means and also the previously conducted T-test results do suggest the difference for good and bad art. And, anyway, the real key points in favor of the hypothesis are the between-participant results for unexposed controls versus the exposed test group. I believe Margaret et al. have the standard errors for the means, and can report those.
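For readers who want to see the shape of the per-artist analyses described above, here is a minimal sketch in Python using SciPy. All of the numbers are simulated placeholders, not the study's data; the participant count, condition means, and variances are invented purely for illustration.

```python
# Sketch of the per-artist paired T-tests described above.
# All values are simulated placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # hypothetical number of participants

# Each participant's mean liking rating per condition (simulated).
millais_single = rng.normal(5.2, 1.0, n)  # averaged over 6 paintings
millais_multi = rng.normal(4.8, 1.0, n)   # averaged over 6 paintings
kinkade_single = rng.normal(4.5, 1.0, n)  # averaged over 24 paintings
kinkade_multi = rng.normal(4.1, 1.0, n)   # averaged over 24 paintings

# One paired T-test per artist keeps the number of paintings behind
# each dependent variable constant within a single comparison.
t_millais, p_millais = stats.ttest_rel(millais_single, millais_multi)
t_kinkade, p_kinkade = stats.ttest_rel(kinkade_single, kinkade_multi)
print(f"Millais: t={t_millais:.2f}, p={p_millais:.3f}")
print(f"Kinkade: t={t_kinkade:.2f}, p={p_kinkade:.3f}")
```

A repeated-measures ANOVA over the same four condition means (e.g., via statsmodels' AnovaRM, with the data reshaped to long format) would instead test the Artist and Exposure main effects and their interaction in one model, which is the alternative analysis discussed above.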
Sorry for my delay in responding, Justin…it’s been a busy week. First, thanks for your candor about the valence hypothesis. One thing I like about working in experimental philosophy is the intellectual honesty of many of the researchers, yourself included. As for the point about how function could deal with the asymmetry in responses you found between seeing red and feeling pain, I was thinking that the particular result might be explained in the same way I was suggesting the differences between smelling x and feeling anger were explained—that is, the way Jimmy is specified includes a description of machinery relevant to seeing red but none for feeling pain. (I know he has “touch sensors”, but I’m suggesting touching and feeling pain are different capacities with distinctive functions and machinery.) There’s a more general point here about pain, though, that you might be making. It is after all notoriously difficult to give a teleofunctional account of pain. You might think then that the function hypothesis is vacuous when it comes to pain…it can’t say people will think S is in pain if S is designed to perform the behaviors pain experience makes possible, because there is no common understanding of what those behaviors are. That strikes me as a reasonable point—and also as a reason to avoid using pain as our paradigm of a phenomenal state! Certainly, I agree with you that multiple factors are involved in phenomenal state attribution. (Who could think otherwise?) And, “specifying a general function for all of the cases that isn’t specific to certain odors,” also seems like a good way to proceed in investigating these issues. Though it would certainly be difficult to come up with such a function for all the relevant states. (Maybe we can brainstorm about this more off the blog?) John: I am very sensitive to your concerns about the difference between what is said and what is meant. 
(See my paper, “Thinking Things and Feeling Things,” with Adam Arico and Shaun Nichols.) I think that the best way to study these issues is by investigating how people paraphrase different verbal ascriptions, and I’d like to devise some experiments to look into this for robots.
Thanks for the careful comments, Justin. They are very helpful to us in reformulating what—I hasten to point out—is a very preliminary discussion of our results. I’ll probably be picking up most of the discussion from now on, since Wes is on the verge of nuptial bliss. I think it’s best to leave the exegetical discussions aside here (though your points are well taken). Nonetheless, I have a few quick responses to other parts of your comments: The first concerns the issue of our testing states that belong in the third category, that is, states that have a valence as well as a “prominent perceptual component”. First of all, I’m not sure what a prominent perceptual component is such that smelling a banana has one and seeing red or feeling pain does not. It might be helpful just to explicate this. But, perhaps more importantly, I’m not sure why specifying the function of the robot in our first study would, “downplay the valence reading for odors that are relevant to that function” (emphasis mine). Why wouldn’t thinking of the function of making smoothies be just as likely to remind our participants of their own past smoothie-making experiences, and thereby to remind them of the valence of banana smells, etc.? Could we operationalize and devise a test for downplaying (or up-playing) valence? My second comment bears on complexity. I like the distinction between general complexity (g complexity) and function-specific complexity (fs complexity). I think it’s important to note, though, that g complexity and fs complexity interact in dynamic ways. Participants in our studies are asked about one of two robots that differ in general complexity: a robot designed at a state university out of nondescript components, and a robot designed in the Ivies with top-of-the-line components. As you rightly note, in addition to the different levels of g complexity, a distinctive complexity may come along with function, which is a variable we also manipulate. 
When the robot is designed to make smoothies, dispose of biological waste, or be a friend to the elderly, participants likely suppose a competent designer who saw to it that the robot had all the components relevant to its specific job. But if the designer could also avail himself of top-of-the-line components, as the Ivy-bot’s designer could, then the fs complexity would be even greater. So, the State U bots will differ in fs complexity relative to their function—the robot designed to be a friend to the elderly will be more fs complex for the relevant functions than the robot designed to lift things. But there will be a greater disparity for the Ivy-league bots. The Ivy-bot designed to be a friend to the elderly will be even more fs complex for the relevant functions than his Ivy-bot counterpart, because the components which enable him to perform the relevant functions are just generally very complex. Thus, if greater complexity is driving our effect, we would expect a greater difference between state attributions for the robots in the Ivy conditions than in the State conditions, which we don’t find. In fact, the differences between mean responses for these categories are exactly the same. Of course, that’s all a bit speculative. Our study wasn’t designed to find such differences, and it’s just a post-hoc consideration given the findings. A more concrete point is this: Obviously, specifying that something has a function leads us to attribute more complexity relative to that function, since we suppose the thing has components relevant to performing that function. But we certainly wouldn’t want to conclude that a simple robot couldn’t occupy a particular mental state because the state is valenced when the robot in fact just lacks components necessary to occupy the state. 
I don’t think my remote control gets depressed when I turn on the Real Housewives of L.A., but I’m fairly certain that’s not importantly due to the valence of depression; rather, it’s due to the structure of my remote. The worry we’re raising about your studies is akin to this: Perhaps your studies aren’t getting at differences in how people think about mental states. Rather, they may be due to differences in how people think about the robots in your studies. I think the differences you found between seeing red and smelling banana on the one hand, versus feeling anger on the other, are very telling in this respect. Each of your robots is specified to have, “a scent detector, video camera for eyes, wheels for moving about, and two grasping arms with touch sensors that he can move objects with.” This includes machinery relevant to seeing red and smelling bananas. But there’s no mention here of any machinery relevant to feeling anger—whatever that machinery may be. Thus, it’s perfectly reasonable to suppose that the relatively low rating for anger is just due to a failure to conceive of the robot as having the relevant machinery. In this light, our study corrects for this problem by leading participants to think of a designer who gave a robot machinery relevant to its specific task, whatever that machinery is. And, indeed, once that machinery is onboard, even a simple robot is thought capable of occupying valenced states. Thus, our studies present a problem for interpreting the evidence supposed to support the valence hypothesis. Anyway, these are dicey issues, and I think we’re going to have to exercise a lot of care in operationalizing terms in this neighborhood and thinking about what studies show and how to disambiguate various hypotheses. I’m excited to see the studies you’ve put up, and whatever results they deliver! And I look forward to talking to you and others about this here and elsewhere in the future.
Looks like a great line-up! Wish I were around this year. (Let me head off Jonathan Weinberg in asking if podcasts of the talks could be arranged.)
Interesting post, as usual. I’m not sure if people other than Josh are supposed to respond, but I will anyway. (I do take myself to be primarily a modest X-Phier.) Here are a few questions that occurred to me while reading through this: 1. What’s the relevance of the initial identification of experimental philosophy with experimental psychology? It seems that this is supposed to support the position that x-phi isn’t philosophy, but how? After all, as you point out, “the disciplinary identity of psychology is a lot clearer than that of philosophy.” So why should we assume at the outset that the purported fact that (some) x-phi is experimental psychology should lend any support to the idea that it isn’t philosophy? Are you assuming a tidy view of the academy without disciplinary overlap? Aren’t some of the questions addressed by art historians the same as those addressed by philosophers of art? Aren’t some addressed by biologists the same as those addressed by philosophers of biology? Don’t they use precisely the same methods? You start off with this idea that x-phi is experimental psychology, but as far as I can tell that is totally irrelevant. All the argumentative weight of your post rests on the well-trodden ground of defining philosophy and attempting to show that x-phi doesn’t fit the definition. So let’s turn to those projects. 2. It seems that according to your post, philosophy is the study of traditional first-order philosophical questions and certain second-order methodological/epistemological questions, all of which are non-psychological. Assume I’m willing to concede that the questions that you list cannot be answered by studying how the mind works. So what? As you admit, that’s only a partial list. Lots of other first-order philosophical questions seem as though they could be addressed by studying how the mind works, questions such as: Are the objects of perception external to the mind? Are concepts all copies of percepts or are some innate? 
Is all human action self-interested? Could those questions be addressed by studying the mind independently of more traditional philosophical methods? I doubt it. But it’s a caricature of x-phi to suppose it attempts that. Are these also questions addressed by experimental psychology? Perhaps, but as we’ve already seen it’s a red herring to give that argumentative weight. What I sense is supposed to be really pulling the weight here is a subtle imperialism about philosophical questions. The questions you enumerate are the more important, traditional questions—“the questions that draw most people into the discipline”. But I was drawn in (at least in part) by the pidgin questions I enumerate as well. And the philosophy courses I took that addressed these questions—before I’d ever heard of experimental philosophy—took research into the mind seriously. Lots of (misguided?) people wrote doctoral dissertations on these questions before experimental philosophy came along. 3. I think your characterization of experimental philosophy is also too narrow, though this mistaken conception has been aided and abetted by some characterizations of experimental philosophy due to experimental philosophers themselves. It is no longer correct to say (if ever it was) that all experimental philosophical projects involve the scientific study of pre-theoretic, intuitive assessments of particular cases (the “thought experiments” or “intuition pumps” of traditional philosophy). Just as philosophers have traditionally appealed to a disparate assortment of evidence in favor of the positions they have supported, so experimentalists have been systematically assessing not only intuitive judgments, but other sources of evidence as well. 
For example, a team led by Eric Schwitzgebel has attempted to gain insight into the ancient and oft asserted claim that philosophical reflection on ethical issues can improve one’s moral behavior by examining how professional ethicists actually behave in a variety of circumstances. In other work, Joshua Knobe and Jesse Prinz have examined how frequently people use different kinds of mental state sentences on the internet, and argued from their findings that people are ordinarily willing to ascribe beliefs and desires, but not experiences or emotions, to disembodied or distributed entities like corporations. (Adam Arico, Shaun Nichols, and I have done follow up work that uses a similar, non-intuition-soliciting approach, but argues contra Knobe and Prinz.) In my own experimental work, I have argued against the claim that it is distinctively difficult to express what a metaphor actually means by systematically comparing how people paraphrase metaphorical and literal utterances. This is a straightforwardly empirical claim and one that I argued in that paper bears much of the brunt in arguments for non-cognitivist theories of figurative language. (Incidentally, in making these arguments I don’t merely, “cobble together a few incautious quotations about "what we would say" from a more innocent time.” I think that’s an unfair characterization of the field.) I don’t deny that these other projects attempt a systematic study of the mind—or at least of human behavior. But intuition isn’t the target, so the “psychology of intuition” is an ill-fit. Nor are all these projects concerned with the psychology of philosophers, or the psychology of philosophy in any obvious sense, so the “psychology of philosophy” won’t do either. And, more substantively, your narrow conception also obviously raises trouble for your attempt to shade x-phi off from traditional philosophy. 
(You also seem to assume the oft-repeated mischaracterization that x-phi adopts a first-past-the-post approach to philosophical questions, as though experimental philosophy papers said nothing more than: “Survey says...incompatibilism!” Experimental work has always been supported—at least in the best examples—by careful philosophical argument. As was heatedly discussed here a while back, it’s ad hominem to suggest otherwise without offering a careful analysis of experimental philosophical papers.) 4. So, as for your dilemma, I reject Modesty. (I suppose I’m an immodest philosopher after all!) And I also deny both of the reasons you suggest prohibit me from rejecting it. You claim that “most traditional philosophy is concerned with other topics than how the mind actually works”. And I say, perhaps, but so what, some traditional philosophy is concerned with how the mind works (nor do I think it’s clear that how the mind works is irrelevant to most traditional questions, which I see as the key issue). This reason for rejecting Modesty is akin to claiming that epistemology isn’t philosophy because most traditional philosophy is concerned with questions other than epistemological ones; or metaphysics isn’t philosophy for the same reason. Any sub-genre of philosophy will entertain a minority of the traditional questions. So what? And I deny, too, that “X-Phi hasn’t made a significant contribution to first-order or second-order questions”. Though I think there’s no point in going over the details here. (And even if it hadn’t, why isn’t the relevant question whether it could? How long does a new discipline get before the bills come due in your estimation?) I recently wrote a general introduction piece about x-phi and in thinking through that I came to believe, too, that the name isn’t entirely satisfactory. But obviously not for your reasons. And I certainly don’t think psychology of philosophy or psychology of intuition are improvements.
Commented Jun 29, 2011 on A Modest Proposal at Experimental Philosophy
Just a reminder: This lab will take place tomorrow at 4.
I’m interested in Dave’s general hypothesis: no x-phi study offers primary insight on first-order philosophical questions. And I think that he proposes the right method to investigate the claim: we want to both examine specific cases and engage in general reasoning. But I’m at a bit of a loss because I’m not sure what counts as direct evidence for a substantive first-order philosophical claim p. I tend to understand direct evidence in terms of non-philosophical claims, like direct evidence that there’s a desk here—I see and touch the desk. But we’re not likely to have any evidence of that sort for a philosophical claim. So I just wonder if Dave might enumerate a few examples of direct evidence for a substantive philosophical claim to help me out.
It's good that: "Subjects were firmly instructed to opt out of a given question if they had prior familiarity with experimental research that might bias their answer." But I still worry that the results might be partly due to implicit learning. There's a lot of research into how mere exposure to a stimulus can influence future behavior and responses, even in the absence of conscious familiarity with the stimulus. This worry was brought to mind particularly forcefully with your results concerning the Knobe effect. Over 40% of your respondents avowed no familiarity with the Knobe effect...but has anyone in the industrialized world really not had at least mere exposure to the Knobe effect results?! (A bit of hyperbole there, of course...but you get the point.) And all the findings you looked at are by major figures in the movement and were discussed in fairly well-publicized papers. I think what you would really want to do here is ask philosophers about purportedly surprising x-phi results that have not yet been publicized at all. Maybe you could team up with some of the usual authors on this blog to look at whatever work they have in progress?
The CUNY sessions on this Friday (the 25th) will take place in room 9205. (Earlier this had been listed as TBD on the website, but it is now D.)
Also, the abstracts for the experimental philosophy conference organized by MERG (announced on the 1st of January) are due in five days, on Thursday the tenth of February. These can be on any topic of experimental philosophy and can be no more than 1000 words in length.
As I mentioned in the post, the deadline for abstracts is February 10th. I probably wasn't sufficiently clear about topic. Experimental work on any topic of philosophy is invited.
Experimental Philosophy Lab Meeting in New York (October 15th) AND bonus experimental philosophy talk tomorrow (October 6th) at CUNY

Oct. 15th M.E.R.G. Lab Meeting (3:30-5:30 pm)
NYU Philosophy, Room 202 (2nd Floor Seminar Room)

Geoffrey Goodwin (University of Pennsylvania, Psychology)
“Taking pleasure in doing harm: The influence of hedonic states on attributions of evil”
How do we know whether a person is evil? Two studies investigated the hypothesis that the hedonic experience associated with harmful acts shapes whether, and to what extent, people view another person as evil. Actors who committed serious harms were seen as evil when they either anticipated experiencing or actually experienced pleasure. Further, even when people did not cause harm themselves, they were viewed as evil if they took pleasure in another’s demise. Judgments of evil, but not judgments of morality more generally, were related to support for the death penalty.

Jonathan Livengood (University of Pittsburgh)
“Experiments on Causal Over-Determination”
Causal over-determination cases are among the most important counter-examples to counterfactual theories of causation. The force of the counter-examples rests on the intuition that all of a collection of factors count as causes of some outcome, even though the outcome does not counterfactually depend on any of the factors. Most philosophers claim to share such intuitions. However, no one knows how widespread or strong these intuitions are. I present some recent studies with an eye towards two questions: (1) do people share the dominant philosophical intuitions about cases of causal over-determination, and (2) are ordinary causal attributions in cases of over-determination influenced by the moral weightiness of the outcome?

AND: Bonus Experimental Philosophy Talk, Tomorrow, October 6th, 2010:
Joshua Knobe (Yale University, Program in Cognitive Science)
“Intuitions about Consciousness: Experimental Studies”
Wednesday, Oct. 6 at 4:15 P.M. 
CUNY Graduate Center, Rooms 9204/9205 365 5th Ave. (on the corner of 5th Ave. and 35th St.)
Metaphorhacker: Thanks for the interesting post. I tend to agree with your conclusion "that there’s nothing special about metaphors when it comes to meaning, understanding and associated activities like paraphrasing" (as I discuss in my dissertation). Do you think there's something special about metaphor in other respects? Also, I'm familiar with the work of Gibbs and Glucksberg on comprehension and processing time. Do they have work on the inadequacy of generated paraphrases too? I think both are separate contributions to the conclusion that there's nothing special about metaphorical meanings. Alève: I don't think problems with paraphrase necessarily reflect a problem with comprehension. We could grasp the thought conveyed by any sentence or utterance and just be unable to find a different sentence or utterance that also captured that thought. If we thought in natural language and there were no synonymous terms this would be the case. But it could also obtain in other situations.
Commented Aug 8, 2010 on What Metaphors Mean at Experimental Philosophy
Thanks for the comments! Brandon, you're quite right about the nature of the inadequacy being asserted. As I write: "The inadequacy assumption that has been central to debates concerning metaphorical meaning is the claim that, although a metaphor and its purported paraphrase may have somewhat overlapping content, the paraphrasing utterance generally expresses content that leaves out some important idea present in, or adds in an important idea absent from, the content of the target metaphor." My experiments may fail to address this thesis, but that at least is what I was trying to get at--and I tried to get at it by asking, not only how similar the statement and its paraphrase are, but also whether one leaves out an idea that the other includes. I like how you put the inadequacy view as asserting "that paraphrases of metaphor are always inadequate in a way more detrimental to meaning than paraphrases of literal statements are." (Of course, that can't quite be the view of some proponents, since some who embrace metaphorical meanings also endorse the inadequacy...but never mind that.) What's essential to this view is that the literal and the figurative have paraphrases that are inadequate in distinctive ways. I try to argue against this sort of view in general in the paper by pointing out that the two kinds of utterances (or the two groups included in my study, at least) are inadequate to exactly the same degree. Thus, it's more parsimonious to suppose that the inadequacy of literal statement paraphrases and the inadequacy of metaphor paraphrases are explained in the same way than to suppose that these admit of different explanations. Of course, this is not a decisive case. I'm happy to shift the burden of proof (and I think the most important part of the paper for doing that is the section raising philosophical objections to previous assessments). And I'm interested in discussing better ways of getting at the purported inadequacy. 
(There's a lot more experimental work to be done here!) Maybe a good method would be to examine how many ideas people come up with as being left out of (or added into) the paraphrase of one kind of statement or the other in a set period of time, or to look at how long it takes participants to generate ideas that are left out between the statement and its paraphrase. Finally, I see your point about the difference between leaving an idea out versus adding one in--the paraphrase is naturally conceived as the statement that either leaves out or adds in because of the direction of fit. But I'm not sure "leaves an idea out that the other includes" (my wording) suffers from the same problem, and, anyway, participants here don't know which is the paraphrase, so shouldn't presume a direction of fit. Eddy: I don't consider that view specifically. But I do consider the view that it's hard to paraphrase the literal statement because it's so simple--thus there's no other way to put the idea it expresses; but the metaphor paraphrase is inadequate because it is open-ended. In arguing against this I make the general point (mentioned above) that the mean adequacy ratings are so similar it's not plausible to suppose unique explanations. Alève: None yet. Though I'm interested in exploring the idea that we reason about the mental states of others using certain conceptual metaphors. Do you have specific proposals?
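As an illustration of the comparison at issue (a sketch only, not the actual study code or data), one could compare mean adequacy ratings for the two paraphrase types with an independent-samples test; the variable names, sample sizes, and simulated ratings here are all hypothetical.

```python
# Simulated adequacy ratings for paraphrases of metaphorical vs.
# literal statements; values are placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
metaphor_adequacy = rng.normal(4.6, 1.2, 40)  # 1-7 scale, simulated
literal_adequacy = rng.normal(4.6, 1.2, 40)

# If the two mean adequacy ratings are statistically indistinguishable,
# positing distinct explanations for the two kinds of inadequacy is
# less parsimonious than positing one common explanation.
t, p = stats.ttest_ind(metaphor_adequacy, literal_adequacy)
print(f"t={t:.2f}, p={p:.3f}")
```

Note that a null result alone doesn't establish equivalence; a formal equivalence test (e.g., two one-sided tests) would be the more careful way to argue that the two inadequacies are the same in degree.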
Commented Aug 6, 2010 on What Metaphors Mean at Experimental Philosophy
If you're around NYC today (Friday, July 2nd), please join us for an Experimental Philosophy Lab Meeting at NYU. Presenters will include Stephen Morris (College of Staten Island) and Brian Robinson and Mark Alfano (CUNY Graduate Center). The lab meeting will run from 3:00-5:00 pm in room 202 of the NYU philosophy department. Informal discussion will follow. You can learn more about the meeting here:
Join MERG on Friday, July 2nd, for an Experimental Philosophy Lab Meeting at NYU. Presenters will include Stephen Morris (College of Staten Island) and Brian Robinson and Mark Alfano (CUNY Graduate Center). The lab meeting will run from 3:00-5:00 pm in room 202 of the NYU philosophy department. Informal discussion will follow. You can learn more about the meeting here: MERG also has a facebook page: Best, Mark Phelan
Hello, Join MERG tomorrow, Friday, June 18th, for an Experimental Philosophy Lab Meeting at NYU. Presenters will include Keith DeRose (Yale University) and Jennifer Nado (Rutgers University). The lab meeting will run from 3:00-5:00 pm in room 202 of the NYU philosophy department. You can learn more about the meeting here: The paper Keith will be presenting is now available online at Certain Doubts: MERG also has a facebook page:
Wesley just pointed out to me that the paper Keith DeRose will be presenting at this meeting is now posted at Certain Doubts: People attending our meeting may want to take a look--and so may other readers of this blog. The paper critiques some of the recent work on contextualism and invariantism about knowledge.
Hi Anon, This is primarily for people who haven't done experimental philosophy before. The supervisory board is meant to help us ensure that this is a philosophically relevant and respectable project. We have a group of psychologists and experienced experimental philosophers, led by Professors Farah and Haidt, who will help ensure the quality of accepted experiments.
Commented Jun 4, 2010 on Experiment Month at Experimental Philosophy
Join MERG on Friday, May 21st, for an Experimental Philosophy Lab Meeting, at NYU. Presenters will include Eric Mandelbaum (Oxford University), Geoff Holtzman (City University of New York), and David Brax (Lund University). The lab meeting will run from 1:00-3:00 pm, in room 202 of the NYU philosophy department. You can learn more about the meeting here: MERG also has a facebook page:
Please join us today for a MERG Experimental Philosophy Lab Meeting. Presenters will include James Andow (University of Nottingham), Michael Brownstein (New Jersey Institute of Technology), Jill Cumby and Craig Roxborough (York University), and Richard Kamber (College of New Jersey). The lab meeting will take place from 1:00-4:00 pm, in room C205 at the CUNY Graduate Center (not 1-5 as previously advertised). You can learn more about the meeting here: MERG also has a facebook page: Also, please come out to the MERG Metaethics and Experimental Philosophy Workshop, on May 1st: Hope to see you there.
On Friday, April 30th, a special MERG Experimental Philosophy Lab Meeting will take place at CUNY. Presenters will include James Andow (University of Nottingham), Michael Brownstein (New Jersey Institute of Technology), Jill Cumby and Craig Roxborough (York University), and Richard Kamber (College of New Jersey). Please join us for this lab meeting from 1:00-5:00 pm, in room C205 at the CUNY Graduate Center. You can learn more about the meeting here: MERG also has a facebook page: This lab meeting is connected with the May 1st MERG Metaethics Workshop:
Mark Phelan is now following tnadelhoffer
Apr 22, 2010
Everyone is welcome. Please stop by if you're free!
On Monday, March 12th, we will resume lab meetings at NYU. This meeting will feature presentations by Stephen Stich (Rutgers University), Wesley Buckwalter (City University of New York, Graduate Center), and Simon Cullen (Princeton University). Please join us for this lab meeting from 6:30-8:30 (NOTE THE SPECIAL TIME) in the second floor seminar room, 202, in the NYU Philosophy Department (5 Washington Place). As usual, feel free to come even if you can only join us for part of the meeting. You will soon be able to see details about our meetings (including directions) here: MERG also has a facebook page: