This is Jonathan Weinberg's Typepad Profile.
Jonathan Weinberg
Recent Activity
A little while back, we had a post on David Chalmers' critique of Herman Cappelen's recent (and infamous!) book, Philosophy Without Intuitions. Herman now has a response up, and is looking for some feedback on it. So, please give it to him!
Posted Nov 26, 2013 at Experimental Philosophy
"My own preferred approach is both to engage with already existing literature in the empirical sciences, and in case the particular philosophical question one is interested in has never been approached as such in these disciplines, to team up with the relevant researchers to conduct novel experiments." I'm failing to see how this is different from how we experimental philosophers construe what we are doing, especially given Schwitzgebel's formulation. Is it just that the teaming-up is made into a constitutive element, so that if (e.g.) you ended up running one of the studies on your own, or only with other members of departments of philosophy, it wouldn't count as empirically-informed philosophy? But then I fail to see why such explicit interdisciplinarity at the level of the departmental affiliations of the researchers should make such a difference.
"Personally, I much prefer the notion of 'empirically informed philosophy', i.e. the idea that philosophers can and should engage with material from the empirical and social sciences to discuss philosophical questions, possibly also to collaborate with researchers in these other areas." I think that this is an idea that every self-identifying x-phile would agree with heartily, and it seems to me to capture a key part of the animating spirit of x-phi. We would just add: sometimes, there is not existing work in the sciences that addresses the empirical questions that one thinks are of some philosophical moment, and when that happens, it is legitimate for philosophers to engage in the scientific projects themselves. (Though maybe this is already included in your "collaboration" clause; certainly lots & lots of x-phi is done in such an interdisciplinarily collaborative fashion.) Fwiw, I don't find the debates about the term "intuition" particularly helpful, and Williamson's attempts to do away with the term have turned out to raise more confusions than may have accompanied the term itself! I take it that when people talk about "philosophical intuitions", etc., that they are ostending a significant, though hardly exhaustive, piece of philosophical practice, and that there's really not much question as to just what is being picked out by that ostension. Williamson knew it was things like the Gettier case that needed to be defended, after all, and not, say, Goedel's second incompleteness theorem.
@Eric - Totally with you on the interest of the community-level question here. I believe that we do mention that possibility in the introduction to the expertise paper, but it's not something that has been discussed at any length in the literature. I do think that such an approach is going to be the most likely candidate, but it's clearly one whose success (or not) will not be evaluable from the armchair. @Max - I think that your initial observation is astute. The debate has indeed taken the form that you suggest it has, and I think that that's because, in a very important way, the debate so far hasn't _really_ been about the question, "Is philosophers' use of intuitions sound or not?" Rather, it has mostly been about the slightly different question, "Given the x-phi results, can philosophers still presume their use of intuitions to be sound _without substantially appealing to any further scientific work_?" So the arguments of the defenders will, of necessity, be cast at a rather general and abstract level, and I and my co-authors have been responding to those arguments in the form they have taken. And it is because the debate is cast at that level that the defenders really need an appeal to super-duper-expertise: one generally can only establish that a given population is substantially free from these sorts of biases by using scientific methods. This all makes for a very different debate than those over the more specific theses like the ones you envision -- specific theses which, once formulated clearly, will generally not be of a sort that can be well-addressed from the armchair. Scientific defenses of the armchair will have a much wider space of moves available than are available to armchair defenses of the armchair. (For what it's worth, I would much rather we were all having the sorts of discussions you are urging us to have. But we have to pry folks out of their armchairs, first, if only so that they can take a closer look at where they've been sitting.)
Now, I'm not sure what you mean by "epistemically fragile", but what we have in mind is, in part, pretty close to the move that you dismissed: "Should we fear that those who do share the anti-reliabilist Truetemp intuition are likely to do so because they happened to first encounter the thought experiment shortly after having considered a clear case of knowledge?" Yep, we definitely should fear things along those general lines; just go back and read that first Ariely link, about doctors and default biases, if you think that this sort of thing sounds preposterous, because it's far more plausible than our armchair psychology will lead us to think. Keep in mind that it will be enough if the _strength_ of that intuition, as ultimately deployed by philosophers, is nontrivially a function of the conditions of early exposure to the case. E.g., maybe those who become externalists experience the intuition as a less forceful seeming (or, a less confidence-worthy judgment, in Williamsonian terms) than those who become internalists do. But you are right that it is not this very particular bias all by itself that should be the total source of concern, but also the likelihood that still other contextual factors might influence our intuitions/judgments inappropriately; and furthermore, this is exacerbated by our current lack of knowledge about where & how we are susceptible to such biases. Where there's smoke, there's likely to be fire; and where there's fire, there's likely to be even more fire, especially if it's a surprise to everyone that there's fire there in the first place. I would also caution strongly against this sort of reasoning: "But, doesn’t the fact that philosophers generally agree on the Truetemp case show that philosophers here really do have some expertise and are not susceptible to whatever bias has afflicted the study’s subjects?" First, I don't think we should take ourselves to have much confidence _at all_ as to what "philosophers generally agree on".
I have a post I want to write about all the factors that determine which cases, and which intuitions/judgments about them, get published and which don't. So consensus in the published literature does not come close to entailing consensus in the profession on the whole. (And I have definitely heard anecdotal reports of trained philosophers not sharing the Truetemp intuition.) Second, as I noted, it's enough for my purposes if the strength of intuition gets modified in these ways, so even if all philosophers do have the intuition to a greater or lesser extent, it doesn't follow that context effects haven't been problematic. And third, any consensus in the profession itself might not be epistemically virtuous, since maybe only those with the "right" intuitions -- perhaps by luck of the draw, as to what context they were considering them in initially -- go on with further philosophy coursework. So, no, I would not draw the moral you want to draw from the putative consensus on Truetemp in the available literature.
Ah, the new comment-moderation scheme, though a good one, does lead to an increased chance of "comment-crossing". My response above to Daniel also should suffice as a response to Jonathan. Chris, that's an interesting hypothesis (and it's all hypothesis-wrangling at this point, about the politicians). One thing to look at would be whether it is, as Yglesias seems to suspect, the electorally somewhat vulnerable whom one sees doing this sort of maneuvering more than the electorally safe. Also, it's important to remember that an electorally safe district _for a party_ need not be electorally safe _for a candidate_ -- such districts may perhaps breed just as much gamesmanship from people afraid of primary challenges.
Hi Daniel, I agree with everything you say, but as you note, it's just not likely that what you're talking about there is what we x-phi types have in mind. You're right that absolute freedom from any & all epistemic risk is an unattainable standard, and trying to impose a demand for it would, I think, be a form of pouring old skeptical wine into new terminological wineskins. But that's precisely why I try to speak in terms like "real epistemic danger". Some sorts of epistemic risks we appropriately do not concern ourselves with, in our methods -- in particular, extremely small ones that would be too costly or difficult to address, given the size of the threat; or very diffuse and abstract ones that we just can't really do anything about anyway. (The possibility of some rare quantum event making our measuring apparatus give a funny reading exactly in every case that we use it would exemplify both of those, I would think.) And there may be other good reasons to disregard some bit of epistemic risk. But, very importantly, there are lots of cases where it is _not_ appropriate so to disregard. When there is a specific threat (not just an abstract chance of error), and it has a non-negligible probability of occurring, then it is something that one has good prima facie reason to take seriously. And when one could likely find some not-unbearably-expensive way of minimizing or mitigating that threat, then one has a prima facie reason both to find it & deploy it. Note that this is what goes on across other fields of inquiry all the time. So, whenever you see me (or another xphile) use a phrase like "real epistemic danger", or "a decent immunity to non-truth-tracking biases", and so on, you should assume that what I have in mind is this latter sort of thing.
I take it that those who run the expertise defense have in mind an argument along these lines: the x-phi results that may indicate a substantial likelihood of specific foibles in ordinary folks' philosophical cognition nonetheless do not make non-negligible the likelihood that trained philosophers' cognition suffers from the same specific foibles, because the trained philosophers are experts and the ordinary folks aren't. And what I'm saying in this post is that it's really a rather extraordinary form of expertise that is needed in those premises -- not to rule out any and all epistemic risks, which I agree would be an unreasonable demand, but even for the more modest and appropriate task of ruling out the specific, real epistemic dangers that it looks like we philosophers might actually face.
I basically agree with Mark's take on things, except I don't think that this needs to be taken as semantic vs. pragmatic, so much as multiply-semantic, i.e., there are a range of different meanings for "believe" in ordinary parlance, and different ones will get cued up in different contexts. And really, this needn't even be different _meanings_ per se; it could just be different stored exemplars, or different "concepts on the fly", that get cued up, even if all have the same ultimate semantic value, i.e., beliefs. Also, fwiw, I find "believes on some level" to be pretty idiomatic, and a touch of googlistics backs this up somewhat: 31,300 hits for "believes on some level", 329,000 for "believe on some level". About 3.4 million hits for "on some level, I believe", but scanning the top 20, a lot of them are spurious, like "...on some level. I believe...". I do find belief-on-some-level more intelligible than in-between-belief; the former is rendered sensible by taking the mind to be substantially disunified, and one system can represent p as being the case even when other systems don't, and so one can believe p at the level of the one system, while not believing it at the other. We can cash out different levels in terms of different representational systems, but I don't know how to cash out in-between belief. Also, I don't see why it should be a problem for epistemologists if, in fact, some weaker sort of belief than determinate belief is sufficient for knowledge. The epistemological community on the whole is generally committed to _some_ sort of belief requirement on knowledge, but why take them to have a similarly strong commitment to any particular form of belief? Which brings me to one disagreement with what Mark said, here: "maybe your conclusion is just that epistemologists shouldn’t take it for granted that it’s intuitively obvious that belief is required for knowledge.
But I wouldn’t have thought anyone needed an argument for that" Actually, I think that many philosophers do indeed take it to be intuitively obvious that belief is required for knowledge. So clearly those philosophers, at least, deserve an argument!
Looks great! Any chance of the papers, or at least abstracts, being publicly available?
Hey, congrats on the _Cognition_ paper! That's great news, for you and for x-phi. Having given the second paper a quick read, I guess I'm not sure I totally understand the experimental design. It looks like, in both studies, you present the subjects with further evidence about the cases, in the form of information about the proportion of expert judgments on the question. Overwhelmingly, your subjects seem to make rational use of the extra information you provide them, adjusting their beliefs and confidence in the direction indicated by the extra information. But instability is, as you note, about the influence of irrational factors. Wouldn't it be weird on anyone's account of intuitions if your subjects had behaved substantially differently than they did?
Though I would question the use of the singular there; x-phi is no one thing whose validity can be evaluated as _a_ research program.
Sounds about right to me.
Hi Chandra! Hey, I totally agree with everything you just said. (My EEL group was really excited by your paper with Konrath, btw!) Actually, Josh Alexander, Ron Mallon and I made some similar points in our recent "Accentuate the Negative" paper, but it does seem to me now that we may have made a mistake in emphasizing a need for different kinds of _experiments_ when as you are noting here to some extent different kinds of _analyses_ are also called for. I do think that there is a growing trend towards using more sophisticated statistical tools in x-phi (growing, basically, as more x-philes get up to speed on such tools). This is all of a piece with the second part of what I said to Drew: one nice thing about x-phi is that, inasmuch as it succeeds in borrowing good epistemic norms from the sciences where appropriate, it will continue to look very actively for new ways to improve its attempts at sucker-proofing. I'll ask you this, though: do you think that "M&T methods" are at least of some use in helping answer what one might call "extensional" questions? E.g., who makes what attributions, and under what circumstances? The causal/algorithmic/intensional questions do require something more, but do you think that these simpler, more psychologically shallow, questions do as well?
Joachim: I think that there was an unintended ambiguity in the first sentence of the main post, that you interpreted differently than what I had in mind. I am making a claim about the value of x-phi for philosophy more generally -- not a claim about the value of philosophy for inquiry, culture, etc. more generally. Your concerns will not apply to that version of what I am saying, I hope! Drew: Two main responses. First & foremost, I don't really mean to be appealing here to just the generic foibles of the human mind, but rather the specific set of "tricks" that have been documented & addressed in the history of science and its methods. (And not just science, I am happy to add -- the history of _formal_ methods tells a similar story, albeit for a different set of "tricks".) For any of these particular sources of error, like, e.g., experimenter bias, we can point to the particular things that scientists do to try to compensate for them, e.g., double-blind methods. Second, I do believe there is an important difference at a more general level between the practices of the sciences and current analytic philosophical practice, in that the former seem to me to have a much more active set of norms than the latter for rooting out & compensating for errors. This asymmetry is at least largely because the sciences have all sorts of tools at their ready disposal for checking for such sources of error, that philosophers do not. For example, Wesley's recent paper discusses gender differences in some intuitions. Traditional philosophical methods are by and large just not up to the task of determining whether there are any such differences or where they lie, but it's not an especially tricky question for the methods of psychology and the like. If we broaden our toolbox (as x-phi advocates that we do), then there's no reason for this asymmetry to persist over time.
I don't know why you think I'd want to say anything as extreme as what's in your first paragraph. Note that a scientist advocating using scientific methods would not at all need to say that ordinary observation & inference are anything so bad as _worthless_. One can have an X that is sufficiently better than Y to justify using X instead of Y, even though Y is not anything like worthless; and, similarly, Y can be good for many things without being up to some particular task one might have thought it could do. So I think you've got a straw man there -- or at least a red herring. I don't think I mean anything at all unusual by "scientific methods" here. It's things like: operationalization of variables; methodical recorded observations as shielded as possible from experimenter interference; use of controls where appropriate and possible; taking various sorts of active steps to avoid sample biases; using statistics for any inferences about correlations, and so on & so forth. "Logic and critical thinking" I take to be an appropriate part of pretty much _any_ inquiry. I agree with you that ordinary human cognition has a great many virtues. But I don't see that we're anywhere near making the error that you speak of in your last paragraph. Indeed, you'll recall that in my paper on hopefulness I cite our ordinary, everyday perceptual practices as a paradigm example of a hopeful epistemic practice! There's exactly nothing that is even a teensy bit "reason-skeptical" in any of this, any more than it is "perception-skeptical" to point out that we can't see through walls, or make out colors very well at distance in the dark.