This is Mohan Matthen's Typepad Profile.
Mohan Matthen
Philosophy professor
Interests: Philosophy of perception, philosophy of biology
Recent Activity
On David Wallace's reading, the theorem is somewhat similar to what has occasionally been called Li's "theorem": the average return on a portfolio of investments will increase in proportion to the variance (i.e., degree of diversity) of individual returns. The reason is that the higher-return investments grow faster and hence occupy, over time, a larger proportion of the whole. (Li's theorem has been used as a simplified proxy for Fisher's Fundamental Theorem of Natural Selection.) Like the Hong-Page theorem, Li's theorem depends simply on the fact that in a diverse group, the average performance is better than that of the collective consisting of the worst member(s) of the group. But as David points out, the corollary of this observation is that the collective consisting of the best members of a diverse group will, though it is not diverse, do better than the average.
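The growth dynamic can be made concrete with a toy sketch (my own illustrative numbers, not anyone's actual model): a few "investments" with fixed per-period returns grow multiplicatively, and the portfolio-weighted mean return drifts upward as the higher-return holdings come to dominate the weights.

```python
# Toy sketch, purely illustrative: five holdings with assumed fixed
# per-period returns. Each grows multiplicatively; the portfolio's
# value-weighted mean return rises over time because the high-return
# holdings occupy an ever larger share of the total.

returns = [0.01, 0.03, 0.05, 0.07, 0.09]   # assumed individual returns
values = [1.0] * len(returns)              # equal initial stakes

def weighted_mean_return(values, returns):
    """Value-weighted average return of the current portfolio."""
    total = sum(values)
    return sum(v * r for v, r in zip(values, returns)) / total

history = []
for _ in range(100):
    history.append(weighted_mean_return(values, returns))
    values = [v * (1 + r) for v, r in zip(values, returns)]

# history starts at the simple mean (0.05) and climbs toward the best
# individual return (0.09) as that holding's weight grows.
```

The step-by-step rise in the weighted mean is (roughly) proportional to the variance of returns across holdings, which is the portfolio analogue of Fisher's theorem mentioned above.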
I realized upon further reflection that it is assumed that every guess results in an improvement on the previous guess it takes as a starting point. So the right guess is recognizable by the fact that it is an equilibrium: any guesser that starts from the right solution returns it unchanged. Obviously, these are very strong assumptions.
I regret the hour or so I spent trying to understand the Hong-Page "theorem." As far as I can tell, the "agents" they define don't have "skill-sets". Each agent is just a set of guesses. (Each agent is a function from problems to solutions.) Some guessers are better than others: their iterated guesses result in improvements. As I understand it, the theorem states that some large number of randomly picked guessers will do better than a smaller number of good guessers. (I think this is true in part because the large set of randomly picked guessers will include the small set of good guessers.) As Professor Thompson says, this has very little application to the real world because the numbers involved are much larger than the number of distinct guessers. She writes: "you must be willing and able to make large numbers of clones of each of your job applicants, and you must be interested in picking from this army of clones a staff of tens of thousands, or the theorem has nothing to say about your hiring process." (I think you can simulate cloning by allowing each guesser to try multiple times to find a solution.) The theorem has, as she rightly complains, no mathematical interest and no practical value. You could draw from it the injunction: "Brainstorm a lot." But this wouldn't really help in the end. Brainstorming "diversely" (i.e., randomly) will get you to the right answer given enough time, but you wouldn't necessarily recognize the answer when you found it. I don't feel very confident of this analysis. I am just one guesser among many. But if the original post attracted enough answers, a correct answer would be included.
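The setup I'm describing can be sketched as a toy simulation (my own reconstruction under strong simplifying assumptions, not Hong and Page's actual model): a "problem" is a random value landscape on a ring, and an "agent" is nothing but a heuristic, here a tuple of step sizes it tries in order, keeping any improvement. Teams work in relay, each agent taking over where the last one stalled. All the names and parameters below are hypothetical.

```python
import random
from itertools import permutations

# Toy reconstruction, not the authors' model: the problem is a random
# landscape on a ring of 100 points; an agent is a tuple of step sizes.

random.seed(1)
N = 100
landscape = [random.random() for _ in range(N)]

def climb(agent, start):
    """Hill-climb from `start` using the agent's step sizes until stuck."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in agent:
            nxt = (pos + step) % N
            if landscape[nxt] > landscape[pos]:
                pos, improved = nxt, True
    return pos

def solo_score(agent):
    """Average value an agent reaches alone, over all starting points."""
    return sum(landscape[climb(agent, s)] for s in range(N)) / N

def team_score(team):
    """Agents work in relay: each takes over where the last one stalled."""
    total = 0.0
    for s in range(N):
        pos = s
        while True:
            best = max((climb(a, pos) for a in team),
                       key=lambda p: landscape[p])
            if landscape[best] <= landscape[pos]:
                break
            pos = best
        total += landscape[pos]
    return total / N

agents = list(permutations(range(1, 9), 3))        # all ordered triples
ranked = sorted(agents, key=solo_score, reverse=True)
best_team = ranked[:8]                             # individually best agents
random_team = random.sample(agents, 8)             # a "diverse" random team
```

Comparing `team_score(best_team)` with `team_score(random_team)` shows the claimed effect on many runs: the randomly picked team often matches or beats the team of individually best agents, because the best agents tend to share the same blind spots. I make no claim that this holds for every landscape or seed.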
P.S. I too wrote about the paper on Idealism and Greek Philosophy. It was one of my earliest publications, and Myles was tremendously encouraging.
Myles was a friend. He was quite a remarkable individual. He had the ability that you allude to, Eric, to take from you what you were capable of giving. That sounds as if he was selfish. But actually, it was the opposite. He tried to find out what you were good at and gave you a chance to tell him about it. I learned a lot by talking about sense-data with him, to give you an example. That's one reason I am surprised you think that he exemplified "a wider limitation of that generation of analytic, 'ancient' philosophers to have overly firm, and in part mere fashionable, views about what mattered." Actually, I think he took ancient philosophy into channels that it hadn't occupied, at least not in Anglo-American philosophy departments.
A very charitable take! I see what you are getting at (but would Hoffman?).
I don't think it's Kant, because Kant doesn't believe in lies with payoff. It's basically sceptical about anything beyond the immediate evidence. Even if you treat a dark shadow on your screen as a steering wheel (in Grand Theft Auto), it's still not a steering wheel . . . just a visual construct. So also the real steering wheel when you are actually driving. That's the "argument," I think.
Chris Stephens has a great piece on better-safe-than-sorry scenarios. (“When is it Selectively Advantageous to Have True Beliefs? Sandwiching the Better-Safe-than-Sorry Argument,” Philosophical Studies 105/2 (2001): 161–189.) The bottom line, as I remember it, is that examples like Peacocke's (cited by Ned Block in comment 3) work fine if the appearance of closeness is a special-purpose perceptual appearance restricted to possible predation events. (Something like startle.) But if it's a general-purpose perception of distance, then it wouldn't work so well—it wouldn't be great, for instance, if everything (including food) looked closer than it really is. The advantage of exaggerating predator proximity would be counter-balanced by the disadvantage of over-estimating the ease of getting provisioned.
I agree with Jonathan Cohen. Hoffman's work, which goes back to the early eighties, is about the tricks that the visual system uses to reconstruct the real world. For example, he memorably and very convincingly showed how concave discontinuities in a two-dimensional projection reliably indicate a three-dimensional occlusion, i.e., one 3d object standing in front of, and partially hiding, another. But instead of putting the point in the way I just have, he suggests that the visual system literally creates the three-dimensional objects, and that evolution "hides the truth from our eyes." (Crazy! What truth does it hide? That there are only two-dimensional objects?) Hoffman is a clever psychonomist (if that's a word) but a lousy philosopher. He has an ontology of the proximal: the eye "creates" anything that goes beyond the two-dimensional pattern on the retina.
I am not sure if this addresses your concern, and maybe I have failed to grasp the nub of the issue, but it seems to me that there are (at least) two sorts of cases of replication failure. In the first kind of case, the facts are inherently variable and any positive result is just a fluke. In the less pernicious, or more redeemable, kind of case, replication failure is the result of bad methodology and could be avoided given better experimental design. As I understand van Bavel et al., they argue that studies of a certain type are more likely to fall into the first kind of case. In other words, they are saying that given optimal methodology, many cases of replication failure must be traced to variability of "time, culture, or location" and cannot be remedied. So, they are not saying that you can never place any faith in generalizations about the human mind. Rather, they are saying that only some (not all) facts about the human mind are highly variable. Does such unreliability in one area infect every other area? Not if we have good criteria of demarcation. I was suggesting (comments 1 and 4 above) that perceptual psychology was not fatally infected because these phenomena are more uniform across history and across culture.
Hurwich and Jameson used exactly two "observers," one labelled H and the other J! (And, of course, their four-part paper was considered revolutionary, and though their theory has been challenged, their results haven't been, at least not significantly.)
I'm not an optimist (or a pessimist either). I am just pointing out that the replication crisis, as exposed by such authors as Aarts et al., isn't as wide as sometimes advertised. (And of course there have been widely publicized instances of fraud in some of the problematic areas, which has increased public perception of unreliability.) There is no justification for saying that in cognitive psychology in general, "our gains are puny; our science ill-founded." I don't particularly want to go into substantive arguments about methodology in psych of perception, and I certainly agree that there are pressures to publish (just as there are in medicine and in physics, and even in moral philosophy, for that matter). But I do want to point out that smaller sample sizes are tolerated in some fields than in others . . . perhaps because the phenomena themselves are assumed to be more uniform across subjects. So for example with metacontrast or colour opponency studies, the sample is sometimes just the authors themselves plus a few grad students. Look at the classic studies by Hurwich and Jameson. These studies have never been challenged, and are considered well-established.
This is a very broad attack, seemingly motivated by broad scepticism regarding ANY attempt to understand the mind empirically. It's worth noting the area of psychology in which replication has proved to be a problem: studies of unconscious influences on personal attitudes such as belief, evaluation, motivation, and social behaviour. Thus, the Open Science Collaboration (Aarts et al.) write that their sample of 100 problematic psychology articles was "coded as representing cognitive (n = 43 studies) or social-personality (n = 57 studies)." Is there a corresponding problem in, e.g., perceptual psychology? For example, in experiments that use statistical methods (excluding brain-probing methods such as single-neuron or MRI studies) to study visual attention, perceptual illusions, cross-modal effects, etc.? I think not. Psychology isn't in crisis; social psychology isn't the whole of the discipline.
I'd like to second Chris. Whenever I have written to Ed and/or a subject editor, they have responded thoughtfully and made conscientious efforts to look into my points. I think revisions have ensued, at least on one occasion. There's no reason simply to live with what you're given.
It seems to me that if you pay for a service, you are injured if the service is not performed with due care. So, you must be saying either that Stanford did act with reasonable diligence and due care (and that the injury was just bad luck), or that the law, as written, does not prevent them from injuring people in this way. I wonder which.
I can't see how these specific results threaten our self-understanding any more than the ancient realization that the mind is realized in the brain. John O'Keefe, Lynn Nadel, and the Mosers showed how the hippocampus records memories spatially. So it's true that, as Alex Rosenberg says, they demonstrated that some of our thoughts are encoded in a way that is essentially different from the "language of thought." But showing that (some) beliefs are non-linguistically encoded is not the same as saying that we don't have beliefs. I have a belief about how my living room is arranged. This belief is embodied in an image, not in multiple sentences of the form "There is a couch between two armchairs." Even so, I still have a belief about how my living room is arranged. I could express this belief by drawing a labelled picture; I can't fully express it in sentences. (Liz Camp has some great work on this.) Alex makes a further claim: "Experimenters decode firing patterns. Rats don’t." This might be true, or it might not. But it has nothing to do specifically with the spatial coding of the hippocampal formation. Imagine that somebody looked into Broca's area and found firing patterns there that correspond somehow to sentential structure. It would be equally true that "Experimenters decode firing patterns. Rats (and humans) don’t." What I am saying, in short, is that there is no new threat posed by these discoveries. Or at least, I am not getting it from Alex.
Self-nominations are welcome for APA leadership positions. Why not nominate yourself for a member-at-large seat on the Board of Officers? Since the nominating committee puts forward three candidates for each vacancy, it's likely that your name would appear on the slate. For what it is worth, I don't think your chances of being on the slate would be damaged by self-nomination. Election would, of course, depend on name recognition. But maybe Brian could help with that.
It's truly strange that she hasn't been promoted to Professor. She got her PhD in 1988, so there was certainly time and occasion. The only explanation I can think of is that she didn't find time to apply. In Ontario universities, it takes time and effort to put together a promotion case, and a lot of this falls on the applicant. And there is no financial reward, so some are not incentivized to do it.
There's a certain asymmetry in this particular "debate." The transgender side feels (with some degree of justification, it might be said) personally threatened by the discourse that emanates from the gender-critical side, reasonably civil though the latter may be. But the trans side has been notably uncivil and even abusive in its response. Lack of civility is an issue, here, but not the only issue.
My very first job was a one-year gig at Claremont Graduate School, replacing Chuck Young, who went on what must have been his first leave. My colleagues then were Jack Vickers and Al Louch. What a great first year of my professional life! I heard over the years that things were changing for the worse, but look where it has all ended up. I am very sorry to hear about this, and about Professor Yamada and Chuck being unceremoniously fired without advance warning. When this post appeared, they did not know; three weeks later, they were dumped. What a shameless way to treat people. I am so very sorry. Mohan
The piece is very shallow. The only idea it contains is that Jordan Peterson is a deferential and restrained critic, unlike many Americans. Well, he's sly and subversive, not strident and in your face. I guess that makes him different from Ann Coulter and Steve Bannon. But does it make him typically Canadian? I don't really find this worth having an opinion about.
I don't think that the dispute between Chirimuuta and colour-realists is relevant to the issue about eliminativism.
Sorry for the late response; I hadn't been monitoring this thread over the weekend. I think Dennett would agree that you have visual experience as of a coloured after-image, but he doesn't want to reify the thing you see. You agree that after-images aren't mental particulars, and that's halfway there, but you reify the visual field, and he wouldn't (I think). Still, you get the idea.
Thanks. We seem to be on the same page, more or less. But do you really want to say that afterimages are things that you see, or that these things—the afterimages themselves—are in fact coloured? Anyway, that's what Dennett rejects, and I'm with him on that. I'd be surprised if Galen Strawson wanted to say anything else.
You can acknowledge what is clearly a fact, but deny that anybody knows what it is, precisely—deny that it is clear in this sense. That's not silly; it can actually be profound. Dennett is somewhere in this territory; not in denial territory. (See comments 9 and 10.) It's probably a bit silly to go overboard and deny the fact, but hardly Greatly Silly.