This is Mohan Matthen's Typepad Profile.
Mohan Matthen
Toronto
Philosophy professor
Interests: Philosophy of perception, philosophy of biology
Recent Activity
This is very unfortunate. Sympathies to your correspondent. I find the university's action illogical, don't you? All of our duties are being conducted remotely. Many students have left campus and have gone home. What does it matter to them, or to the university, where exactly a remote lecturer physically is? Is the J1 automatically revoked if the holder is out of the US and unable to return? If so, I suppose the university can't legally pay the holder--it's not their choice--but if the visa hasn't actually been revoked, why should the university assume that it is?
I hope that as philosophers we all realize that data like this MAY be useful for scientific purposes, but is completely lacking in value for purposes of personal prediction. The authors say that the odds ratio for A-type is 1.2. This means (if I understand correctly) that if the chance of being infected across the population is 30%, then the chance for an A-type subject is 36%. That's already hard to translate into predictions--there is no certifiable way to translate probabilities of this kind into action. On top of that, there is a measure of uncertainty associated with this number. The authors say you can be 95% sure that in the 30% infection scenario, A-types have a chance of infection between 30 and 42%. Plus, of course, these chances vary hugely with behaviour, where you live, and your antecedent health condition. In short, you're safe to forget about this. Unless you are a medical researcher, you know nothing more of any relevance to you than you did before you heard of it.
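For what it's worth, the arithmetic behind that translation can be made explicit. A strict odds-ratio calculation on a 30% baseline gives roughly 34%, while treating 1.2 as a risk ratio gives the 36% figure; the two nearly coincide when baseline rates are modest. A minimal sketch (the function name is my own):

```python
# Convert a baseline probability plus an odds ratio into the implied
# probability for the exposed group (here, blood-type A).
def apply_odds_ratio(p_baseline: float, odds_ratio: float) -> float:
    odds = p_baseline / (1 - p_baseline)   # probability -> odds
    new_odds = odds * odds_ratio           # apply the odds ratio
    return new_odds / (1 + new_odds)       # odds -> probability

p_a = apply_odds_ratio(0.30, 1.2)   # ~0.34, vs 0.30 * 1.2 = 0.36 on a risk-ratio reading
```

Either way, the gap between the baseline and the A-type figure is small next to the uncertainty band the authors themselves report.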
This blog's readers are largely highly-educated, sober-thinking people who have the public good at heart. And the same can be said of you, Brian. But by and large, most of us lack expertise in public health and epidemiology. So, allowing that I am unaccustomed to the ways of the internet, I am still seriously sceptical that this kind of speculation is useful. Where do the above options come from? If from you, Brian, then why should they be taken seriously as exhausting the possibilities? If from experts, then you should cite your sources. That said, many of us know quite a lot about university finances. So comments like Steven Hales' (and yours, Brian, about paying salaries) are informative. BL COMMENT: Happy to stand corrected, which is why I opened comments. I am reading various things by public health experts, but as noted, I've not seen anyone address this question.
No "mistakes" in particular, but a curiously unsatisfying string of well-hedged non sequiturs. To wit: A complaint about sample size (though it acknowledges that a sample of 1000 is quite normal in studies like this). Doubts that racial disparity in first-class degrees demonstrates bias, given a lack of disparity in second-class degrees. (Of course, this doesn't show anything of the sort--indeed, it suggests that there is a downgrading of would-be non-white firsts to second-class degrees.) A suggestion that the equal (or better) representation of Asian students is evidence that non-Asian non-whites are treated equally. (Those Asians are non-white, after all.) And general incredulity that there could be any hints of racism in what the authors proudly proclaim is the least racist society in the world--albeit one where racist advertisements were quite effective during the Brexit campaign. The authors end with the suggestion that "it’s socio-economic status, not race, that accounts for the attainment gap," but they don't attempt to explain why, in this non-racist utopia, "a majority of black students come from state schools," where students tend to do worse at university.
I have been using Andy Clark's Mindware in my second year "Minds and Machines" course. It is a wonderfully comprehensive and insightful introduction to the question of how artificial intelligence, simulation, and robotics are changing the traditional computational conception of mind. But it is a difficult read. I would love to know of any non-technical treatments of these topics. Any suggestions?
On David Wallace's reading, the theorem is somewhat similar to what has occasionally been called Li's "theorem": The average return on a portfolio of investments will increase in proportion to the variance (i.e., degree of diversity) of individual returns. The reason is that the higher return investments grow faster and hence occupy, over time, a larger proportion of the whole. (Li's theorem has been used as a simplified proxy for Fisher's Fundamental Theorem of Natural Selection.) Like the Hong-Page theorem, Li's theorem depends simply on the fact that in a diverse group, the average performance is better than that of the collective consisting of the worst member(s) of the group. But as David points out, the corollary of this observation is that the collective consisting of the best members of a diverse group will, though it is not diverse, do better than the average.
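The dynamic is easy to see in a toy simulation (my own sketch, with made-up numbers, not anything from Wallace or Li): hold per-period returns fixed, let holdings compound, and the portfolio's weighted-average return drifts upward as the best-performing asset comes to dominate.

```python
# Equal initial holdings in three assets with fixed gross returns per period.
rates = [1.02, 1.05, 1.10]
holdings = [1.0, 1.0, 1.0]

def portfolio_rate(holdings, rates):
    """Holdings-weighted average gross return of the portfolio."""
    return sum(h * r for h, r in zip(holdings, rates)) / sum(holdings)

rate_start = portfolio_rate(holdings, rates)   # simple average of the rates
for _ in range(50):                            # compound for 50 periods
    holdings = [h * r for h, r in zip(holdings, rates)]
rate_end = portfolio_rate(holdings, rates)     # climbs toward the best rate, 1.10
```

The speed at which the average climbs depends on how spread out the individual rates are, which is the variance connection the "theorem" trades on.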
I realized upon further reflection that the model assumes that every guess results in an improvement on the previous guess it takes as a starting point. So the right guess is recognizable by the fact that it is an equilibrium: any guess that starts from the right solution returns it unchanged. Obviously, these are very strong assumptions.
I regret the hour or so I spent trying to understand the Hong-Page "theorem." As far as I can tell, the "agents" they define don't have "skill-sets". Each agent is just a set of guesses. (Each agent is a function from problems to solutions.) Some guessers are better than others: their iterated guesses result in improvements. As I understand it, the theorem states that some large number of randomly picked guessers will do better than a smaller number of good guessers. (I think this is true in part because the large set of randomly picked guessers will include the small set of good guessers.) As Professor Thompson says, this has very little application to the real world because the numbers involved are much larger than the number of distinct guessers. She writes: "you must be willing and able to make large numbers of clones of each of your job applicants, and you must be interested in picking from this army of clones a staff of tens of thousands, or the theorem has nothing to say about your hiring process." (I think you can simulate cloning by allowing each guesser to try multiple times to find a solution.) The theorem has, as she rightly complains, no mathematical interest and no practical value. You could draw from it the injunction: "Brainstorm a lot." But this wouldn't really help in the end. Brainstorming "diversely" (i.e., randomly) will get you to the right answer given enough time, but you wouldn't necessarily recognize the answer when you found it. I don't feel very confident of this analysis. I am just one guesser among many. But if the original post attracts enough answers, a correct answer will be among them.
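To make the "brainstorm a lot" point concrete, here is a toy hill-climbing sketch of my own devising (not the actual Hong-Page model; every name and number is invented for illustration): each guesser proposes random perturbations and keeps only improvements, and a large pool of broad, unaimed guessers can end up beating a small pool of narrow, well-aimed ones simply through coverage.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def run_pool(n_guessers, tries_each, spread, start=10.0, target=0.0):
    """Each guesser hill-climbs from `start` toward `target`, keeping a
    random perturbation only if it is an improvement. Returns the best
    (smallest) final distance from the target across the pool."""
    best = abs(start - target)
    for _ in range(n_guessers):
        pos = start
        for _ in range(tries_each):
            candidate = pos + random.uniform(-spread, spread)
            if abs(candidate - target) < abs(pos - target):
                pos = candidate  # keep only improving guesses
        best = min(best, abs(pos - target))
    return best

experts = run_pool(n_guessers=3, tries_each=20, spread=0.5)   # few, narrow steps
crowd = run_pool(n_guessers=100, tries_each=20, spread=5.0)   # many, broad steps
# With these numbers the crowd's best guess lands closer to the target --
# but only because there are so many more attempts in total, which is
# Professor Thompson's point about the army of clones.
```

Note that the crowd's advantage here is sheer volume, and nothing in the simulation tells any guesser that the target has been reached, which is the recognition problem above.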
P.S. I too wrote about the paper on Idealism and Greek Philosophy. It was one of my earliest publications, and Myles was tremendously encouraging.
Myles was a friend. He was quite a remarkable individual. He had the ability, which you allude to, Eric, to take from you what you were capable of giving. That sounds as if he was selfish. But actually, it was the opposite. He tried to find out what you were good at and gave you a chance to tell him about it. I learned a lot by talking about sense-data with him, to give you an example. That's one reason I am surprised you think that he exemplified "a wider limitation of that generation of analytic, 'ancient' philosophers to have overly firm, and in part mere fashionable, views about what mattered." Actually, I think he took ancient philosophy into channels that it hadn't occupied, at least not in Anglo-American philosophy departments.
A very charitable take! I see what you are getting at (but would Hoffman?).
I don't think it's Kant, because Kant doesn't believe in lies with payoff. It's basically sceptical about anything beyond the immediate evidence. Even if you treat a dark shadow on your screen as a steering wheel (in Grand Theft Auto), it's still not a steering wheel . . . just a visual construct. So also the real steering wheel when you are actually driving. That's the "argument," I think.
Chris Stephens has a great piece on better-safe-than-sorry scenarios. (“When is it Selectively Advantageous to Have True Beliefs? Sandwiching the Better-Safe-than-Sorry Argument,” Philosophical Studies 105/2, (2001): 161-189.) The bottom line, as I remember it, is that examples like Peacocke's (cited by Ned Block in comment 3) work fine if the appearance of closeness is a special-purpose perceptual appearance restricted to possible predation events. (Something like startle.) But if it's a general-purpose perception of distance, then it wouldn't work so well—it wouldn't be great, for instance, if everything (including food) looked closer than it really is. The advantage of exaggerating predator proximity would be counter-balanced by the disadvantage of over-estimating the ease of getting provisioned.
I agree with Jonathan Cohen. Hoffman's work, which goes back to the early eighties, is about the tricks that the visual system uses to reconstruct the real world. For example, he memorably and very convincingly showed how concave discontinuities in a two-dimensional projection reliably indicate a three-dimensional occlusion, i.e., one 3d object standing in front of, and partially hiding, another. But instead of putting the point in the way I just have, he suggests that the visual system literally creates the three-dimensional objects, and that evolution "hides the truth from our eyes." (Crazy! What truth does it hide? That there are only two-dimensional objects?) Hoffman is a clever psychonomist (if that's a word) but a lousy philosopher. He has an ontology of the proximal: the eye "creates" anything that goes beyond the two-dimensional pattern on the retina.
I am not sure if this addresses your concern, and maybe I have failed to grasp the nub of the issue, but it seems to me that there are (at least) two sorts of cases of replication failure. In the first, irremediable kind of case, the facts are inherently variable and any positive result is just a fluke. In the less pernicious, or more redeemable, kind of case, replication failure is the result of bad methodology and could be avoided given better experimental design. As I understand van Bavel et al., they argue that studies of a certain type are more likely to fall into the first kind of case. In other words, they are saying that given optimal methodology, many cases of replication failure must be traced to variability of "time, culture, or location" and cannot be remedied. So, they are not saying that you can never place any faith in generalizations about the human mind. Rather, they are saying that only some (not all) facts about the human mind are highly variable. Does such unreliability in one area infect every other area? Not if we have good criteria of demarcation. I was suggesting (comments 1 and 4 above) that perceptual psychology was not fatally infected because these phenomena are more uniform across history and across culture.
Hurwich and Jameson used exactly two "observers," one labelled H and the other J! (And, of course, their four-part paper was considered revolutionary; though their theory has since been challenged, their results haven't been.)
I'm not an optimist (or a pessimist either). I am just pointing out that the replication crisis, as exposed by such authors as Aarts et al, isn't as wide as sometimes advertised. (And of course there have been widely publicized instances of fraud in some of the problematic areas, which has increased public perception of unreliability.) There is no justification for saying that in cognitive psychology in general, "our gains are puny; our science ill-founded." I don't particularly want to go into substantive arguments about methodology in psych of perception, and I certainly agree that there are pressures to publish (just as there are in medicine and in physics, and even in moral philosophy, for that matter). But I do want to point out that lower sample sizes are tolerated in some fields than in others . . . perhaps because the phenomena themselves are assumed to be more uniform across subjects. So for example with metacontrast or colour opponency studies, the sample is sometimes just the authors themselves plus a few grad students. Look at the classic studies by Hurwich and Jameson. Their results have never been seriously challenged, and are considered well-established.
This is a very broad attack, seemingly motivated by broad scepticism regarding ANY attempt to understand the mind empirically. It's worth noting the area of psychology in which replication has proved to be a problem: studies of unconscious influences on personal attitudes such as belief, evaluation, motivation, and social behaviour. Thus, the Open Science Collaboration (Aarts et al) write that their sample of 100 problematic psychology articles were "coded as representing cognitive (n = 43 studies) or social-personality (n = 57 studies)." Is there a corresponding problem in e.g. perceptual psychology? For example, in the experiments that use statistical methods (and excluding brain-probing methods such as single neuron or MRI studies) on visual attention, perceptual illusions, cross-modal effects etc? I think not. Psychology isn't in crisis; social psychology isn't the whole of the discipline.
I'd like to second Chris. Whenever I have written to Ed and/or a subject editor, they have responded thoughtfully and made conscientious efforts to look into my points. I think revisions have ensued, at least on one occasion. There's no reason simply to live with what you're given.
It seems to me that if you pay for a service, you are injured if the service is not performed with due care. So, you must be saying either that Stanford did act with reasonable diligence and due care (and that the injury was just bad luck), or that the law, as written, does not prevent them from injuring people in this way. I wonder which.
I can't see how these specific results threaten our self-understanding any more than the age-old recognition that the mind is realized in the brain. John O'Keefe, Lynn Nadel, and the Mosers showed how the hippocampus records memories spatially. So it's true that, as Alex Rosenberg says, they demonstrated that some of our thoughts are encoded in a way that is essentially different from the "language of thought." But showing that (some) beliefs are non-linguistically encoded is not the same as saying that we don't have beliefs. I have a belief about how my living room is arranged. This belief is embodied in an image, not in multiple sentences of the form "There is a couch between two armchairs." Even so, I still have a belief about how my living room is arranged. I could express this belief by drawing a labelled picture; I can't fully express it in sentences. (Liz Camp has some great work on this.) Alex makes a further claim: "Experimenters decode firing patterns. Rats don’t." This might be true, or it might not. But it has nothing to do specifically with the spatial coding of the hippocampal formation. Imagine that somebody looked into Broca's area and found firing patterns there that correspond somehow to sentential structure. It would be equally true that "Experimenters decode firing patterns. Rats (and humans) don’t." What I am saying in short is that there is no new threat posed by these discoveries. Or at least, I am not getting it from Alex.
Self-nominations are welcome for APA leadership positions. Why not nominate yourself for a member-at-large seat on the Board of Officers? Since the nominating committee puts forward three candidates for each vacancy, it's likely that your name would appear on the slate. For what it is worth, I don't think your chances of being on the slate would be damaged by self-nomination. Election would, of course, depend on name recognition. But maybe Brian could help with that.
It's truly strange that she hasn't been promoted to Professor. She got her PhD in 1988, so there was certainly time and occasion. The only explanation I can think of is that she didn't find time to apply. In Ontario universities, it takes time and effort to put together a promotion case, and a lot of this falls on the applicant. And there is no financial reward, so some are not incentivized to do it.
There's a certain asymmetry in this particular "debate." The transgender side feels (with some degree of justification, it might be said) personally threatened by the discourse that emanates from the gender-critical side, reasonably civil though the latter may be. But the trans side has been notably uncivil and even abusive in its response. Lack of civility is an issue, here, but not the only issue.
My very first job was a one-year gig at Claremont Graduate School, replacing Chuck Young, who went on what must have been his first leave. My colleagues then were Jack Vickers and Al Louch. What a great first year of my professional life! I heard over the years that things were changing for the worse, but look where it has all ended up. I am very sorry to hear about this, and about Professor Yamada and Chuck being unceremoniously fired without advance warning. When this post appeared, they did not know; three weeks later, they were dumped. What a shameless way to treat people. I am so very sorry. Mohan