This is Mohan Matthen's Typepad Profile.
Mohan Matthen
Philosophy professor
Interests: Philosophy of perception, philosophy of biology
Recent Activity
Not if there is always some light that reaches the eye from every direction in space. That's the thing about Vantablack: it's so non-reflective, it is said to look like a hole. (Put that way, I guess that deep, narrow holes would be Vantablack, except that they are smaller than the minimum visibilia.)
Responding to Curtis Franks: The so-called impossible colours are good candidates for novel shades. In the early 80s, Piantanida and Crane famously produced an experience of a reddish green by stabilizing a red-green boundary on the retina (using an eye-tracker)—reddish-green is supposed to be an impossible shade. It sounds as if the Hoffman shade is something of the same kind, where the brain is induced to produce a yellow that is darker than black (which should be impossible). However, YInMn blue is a different kind of case from either of these because it is a pigment . . . a substance with a stable colour. (Same for Vantablack.) No fooling with the brain needed. Responding to Stephen Rive: any colour can be produced by mixing lights of different wavelengths, or by mixing different pigments. And this can be done in different ways . . . different mixtures look colour-equivalent. Because of this, we moderns have seen just about every possible shade, because our colour monitors can mix up the RGB appropriately. (This was not always true: a couple of thousand years ago, I figure, nobody had seen a truly saturated red.) What is impossible is getting certain colours to exceed certain levels of brightness or darkness relative to other colours seen simultaneously. It sounds as if YInMn blue is perceptibly novel because it is "unnaturally" brilliant. I wish I could see it! (Photographic reproductions wouldn't do it justice. You can't take a photograph of Vantablack either, because the dyes on the photographic surface don't absorb enough light.)
If fatalities are under-reported, as is likely, the fatality rate could be quite a bit higher than 1% in NYC. But the fatality rate seems to vary quite a bit by country . . . whether that's variance in reporting or variance in response is hard to say. It's certainly not a reliable predictor in itself.
I thought you might say that, Brian. I do think there's a greater level of social trust and cohesion in Canada (judging from American news sources), but that's not my point. I imagine that the Universities of Arizona and Wisconsin (along with virtually every other public university) are going to have to make a lot of people suffer, and I don't think it becomes a relatively well paid tenured professor to complain about a mild furlough that eases the burden put on colleagues and students a little bit. (Arizona doesn't sound very mild, it's true.) I am not offering an argument here, merely a sentiment. And that is all it is worth, I admit.
Walking around my neighbourhood, I encounter lots of small business people who are either suffering very gravely or, in some cases, out of business altogether. My dry-cleaner has lost her business; our local grocer is barely hanging on and won't be for much longer; my hair-stylist (who happens to live in the same building) has no income at all. Closer to home, my daughter, who is a therapist, has a large number of clients who cannot afford to pay her . . . and so her income is affected. The epidemic has also brought huge personal risks to many low paid workers, including front-line staff at hospitals and care facilities, and supermarkets, gas-stations, etc. There's a huge amount of loss all over. Just think of all the graduate students who won't get jobs next year, and all of the untenured faculty who will face very difficult situations. I hope employees who are being furloughed one day a month don't feel that they are being hard done by. I honestly can't understand why tenured faculty should not be willing to share in such a mild constriction.
This is very unfortunate. Sympathies to your correspondent. I find the university's action illogical, don't you? All of our duties are being conducted remotely. Many students have left campus and have gone home. What does it matter to them, or to the university, where exactly a remote lecturer physically is? Is the J1 automatically revoked if the holder is out of the US and unable to return? If so, I suppose the university can't legally pay the holder--it's not their choice--but if the visa hasn't actually been revoked, why should the university assume that it is?
I hope that as philosophers we all realize that data like this MAY be useful for scientific purposes, but is completely lacking in value for purposes of personal prediction. The authors say that the odds ratio for A-type is 1.2. This means (if I understand correctly) that if the chance of being infected across the population is 30%, then the chance for an A-type subject is 36%. That's already hard to translate into predictions--there is no certifiable way to translate probabilities of this kind into action. On top of that, there is a measure of uncertainty associated with this number. The authors say you can be 95% sure that in the 30% infection scenario, A-types have a chance of infection between 30% and 42%. Plus, of course, these chances vary hugely with behaviour, where you live, and your antecedent health condition. In short, you're safe to forget about this. Unless you are a medical researcher, you know nothing more of any relevance than you did before you heard of it.
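For concreteness, here is a quick sketch (in Python, using the hypothetical 30% baseline from the comment above) of what an odds ratio of 1.2 comes to in absolute terms. Reading the odds ratio loosely as a risk ratio gives the 36% figure; the strict odds-ratio conversion gives a slightly lower number, about 34%, so the two readings nearly agree here.

```python
# Sketch: what a reported odds ratio of 1.2 means for absolute risk.
# The 30% baseline is the hypothetical figure from the comment, not the paper's.

def prob_from_odds_ratio(baseline_p, odds_ratio):
    """Strict conversion: scale the baseline odds, then convert back to a probability."""
    odds = baseline_p / (1 - baseline_p)
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

baseline = 0.30    # hypothetical population infection rate
or_blood_a = 1.2   # odds ratio reported for A-type subjects

# Loose reading (treating the odds ratio as a risk ratio):
loose = baseline * or_blood_a                         # 0.36
# Strict odds-ratio conversion:
strict = prob_from_odds_ratio(baseline, or_blood_a)   # about 0.34

print(f"loose: {loose:.2f}, strict: {strict:.2f}")
```

Either way, the point in the comment stands: a shift of a few percentage points, with a wide confidence interval, licenses no change in individual behaviour.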
This blog's readers are largely highly-educated, sober-thinking people who have the public good at heart. And the same can be said of you, Brian. But by and large, most of us lack expertise in public health and epidemiology. So, allowing that I am unaccustomed to the ways of the internet, I am still seriously sceptical that this kind of speculation is useful. Where do the above options come from? If from you, Brian, then why should they be taken seriously as exhausting the possibilities? If from experts, then you should cite your sources. That said, many of us know quite a lot about university finances. So comments like Steven Hales' (and yours, Brian, about paying salaries) are informative. BL COMMENT: Happy to stand corrected, which is why I opened comments. I am reading various things by public health experts, but as noted, I've not seen anyone address this question.
No "mistakes" in particular, but a curiously unsatisfying string of well-hedged non sequiturs. To wit: A complaint about sample size (though it acknowledges that a sample of 1000 is quite normal in studies like this). Doubts that racial disparity in first-class degrees demonstrates bias, given a lack of disparity in second class degrees. (Of course, this doesn't show anything of the sort--indeed, it suggests that there is a downgrading of would-be non-white firsts to second class degrees.) A suggestion that the equal (or better) representation of Asian students is evidence that non-Asian non-whites are treated equally. (Those Asians are non-white, after all.) And general incredulity that there could be any hints of racism in what the authors proudly proclaim is the least racist society in the world--albeit one where racist advertisements were quite effective during the Brexit campaign. The authors end with the suggestion that "it’s socio-economic status, not race, that accounts for the attainment gap," but they don't attempt to explain why, in this non-racist utopia, "a majority of black students come from state schools," where students tend to do worse at university.
I have been using Andy Clark's Mindware in my second year "Minds and Machines" course. It is a wonderfully comprehensive and insightful introduction to the question of how artificial intelligence, simulation, and robotics are changing the traditional computational conception of mind. But it is a difficult read. I would love to know of any non-technical treatments of these topics. Any suggestions?
On David Wallace's reading, the theorem is somewhat similar to what has occasionally been called Li's "theorem": The average return on a portfolio of investments will increase in proportion to the variance (i.e., degree of diversity) of individual returns. The reason is that the higher return investments grow faster and hence occupy, over time, a larger proportion of the whole. (Li's theorem has been used as a simplified proxy for Fisher's Fundamental Theorem of Natural Selection.) Like the Hong-Page theorem, Li's theorem depends simply on the fact that in a diverse group, the average performance is better than that of the collective consisting of the worst member(s) of the group. But as David points out, the corollary of this observation is that the collective consisting of the best members of a diverse group will, though it is not diverse, do better than the average.
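Under the simplest version of this dynamic — fixed per-period returns, with weights reweighted by relative growth each period — the one-period gain in the portfolio's mean return is exactly Var(r) / (1 + mean(r)). A minimal sketch (the returns are invented illustrative numbers; this is my gloss on the dynamic, not Wallace's or Li's own formulation):

```python
# Sketch of the "Li's theorem" dynamic: assets compounding at fixed rates,
# with portfolio weights shifting toward the faster growers each period.
# The gain in the weighted mean return per period is Var(r) / (1 + mean(r)).

def step(weights, returns):
    """One compounding period: reweight each asset by its relative growth."""
    grown = [w * (1 + r) for w, r in zip(weights, returns)]
    total = sum(grown)
    return [g / total for g in grown]

def mean_return(weights, returns):
    return sum(w * r for w, r in zip(weights, returns))

returns = [0.01, 0.05, 0.10]   # fixed per-period returns (made-up numbers)
weights = [1/3, 1/3, 1/3]      # equal initial weights

m0 = mean_return(weights, returns)
var0 = sum(w * (r - m0) ** 2 for w, r in zip(weights, returns))
weights = step(weights, returns)
m1 = mean_return(weights, returns)

# The one-period improvement equals the variance term exactly:
assert abs((m1 - m0) - var0 / (1 + m0)) < 1e-9
```

With zero variance (all assets identical), the mean return never improves — which is the "diversity" content of the observation.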
I realized upon further reflection that it is assumed that every guess results in an improvement on a previous guess it takes as a starting point. So the right guess is recognizable by the fact that it will be an equilibrium: all guesses that start with the right solution ping back to the input. Obviously, these are very strong assumptions.
I regret the hour or so I spent trying to understand the Hong-Page "theorem." As far as I can tell, the "agents" they define don't have "skill-sets". Each agent is just a set of guesses. (Each agent is a function from problems to solutions.) Some guessers are better than others: their iterated guesses result in improvements. As I understand it, the theorem states that some large number of randomly picked guessers will do better than a smaller number of good guessers. (I think this is true in part because the large set of randomly picked guessers will include the small set of good guessers.) As Professor Thompson says, this has very little application to the real world because the numbers involved are much larger than the number of distinct guessers. She writes: "you must be willing and able to make large numbers of clones of each of your job applicants, and you must be interested in picking from this army of clones a staff of tens of thousands, or the theorem has nothing to say about your hiring process." (I think you can simulate cloning by allowing each guesser to try multiple times to find a solution.) The theorem has, as she rightly complains, no mathematical interest and no practical value. You could draw from it the injunction: "Brainstorm a lot." But this wouldn't really help in the end. Brainstorming "diversely" (i.e., randomly) will get you to the right answer given enough time, but you wouldn't necessarily recognize the answer when you found it. I don't feel very confident of this analysis. I am just one guesser among many. But if the original post attracted enough answers, a correct answer will be included.
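To make the "agents are just guessers" reading concrete, here is a toy relay of hill-climbing heuristics in the Hong-Page style. Everything here (the ring size, the jump sets, the seed) is my own illustrative choice, not the paper's setup, and it only demonstrates the mechanics: the relay never loses ground, and it halts at a point no member can improve on. It proves nothing about diversity trumping ability.

```python
# Toy Hong-Page-style search: agents are heuristics (sets of jump sizes)
# hill-climbing on a ring of random values, relaying from each other's results.
import random

random.seed(1)
N = 100
values = [random.random() for _ in range(N)]  # the "problem": find a high value

def climb(agent, start):
    """Apply the agent's jump sizes greedily until none improves the value."""
    pos = start
    improved = True
    while improved:
        improved = False
        for jump in agent:
            cand = (pos + jump) % N
            if values[cand] > values[pos]:
                pos, improved = cand, True
    return pos

def relay(agents, start):
    """Agents take turns from the best point found so far, until all are stuck."""
    pos = start
    improved = True
    while improved:
        improved = False
        for agent in agents:
            new = climb(agent, pos)
            if values[new] > values[pos]:
                pos, improved = new, True
    return pos

# Ten "guessers", each a random set of three jump sizes.
agents = [random.sample(range(1, 13), 3) for _ in range(10)]
final = relay(agents, start=0)

assert values[final] >= values[0]                     # the relay never loses ground
assert all(climb(a, final) == final for a in agents)  # no member can improve the result
```

The relay stops at a point that is a local optimum for every member — which is exactly the "equilibrium" property noted in the earlier comment, and also why the result depends so heavily on how many distinct guessers there are.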
P.S. I too wrote about the paper on Idealism and Greek Philosophy. It was one of my earliest publications, and Myles was tremendously encouraging.
Myles was a friend. He was quite a remarkable individual. He had the ability, which you allude to, Eric, to take from you what you were capable of giving. That sounds as if he was selfish. But actually, it was the opposite. He tried to find out what you were good at and gave you a chance to tell him about it. I learned a lot by talking about sense-data with him, to give you an example. That's one reason I am surprised you think that he exemplified "a wider limitation of that generation of analytic, 'ancient' philosophers to have overly firm, and in part mere fashionable, views about what mattered." Actually, I think he took ancient philosophy into channels that it hadn't occupied, at least not in Anglo-American philosophy departments.
A very charitable take! I see what you are getting at (but would Hoffman?).
I don't think it's Kant, because Kant doesn't believe in lies with payoff. The view is basically sceptical about anything beyond the immediate evidence. Even if you treat a dark shadow on your screen as a steering wheel (in Grand Theft Auto), it's still not a steering wheel . . . just a visual construct. So also the real steering wheel when you are actually driving. That's the "argument," I think.
Chris Stephens has a great piece on better-safe-than-sorry scenarios. (“When is it Selectively Advantageous to Have True Beliefs? Sandwiching the Better-Safe-than-Sorry Argument,” Philosophical Studies 105/2 (2001): 161-189.) The bottom line, as I remember it, is that examples like Peacocke's (cited by Ned Block in comment 3) work fine if the appearance of closeness is a special-purpose perceptual appearance restricted to possible predation events. (Something like startle.) But if it's a general-purpose perception of distance, then it wouldn't work so well—it wouldn't be great, for instance, if everything (including food) looked closer than it really is. The advantage of exaggerating predator proximity would be counter-balanced by the disadvantage of over-estimating the ease of getting provisioned.
I agree with Jonathan Cohen. Hoffman's work, which goes back to the early eighties, is about the tricks that the visual system uses to reconstruct the real world. For example, he memorably and very convincingly showed how concave discontinuities in a two-dimensional projection reliably indicate a three-dimensional occlusion, i.e., one 3d object standing in front of, and partially hiding, another. But instead of putting the point in the way I just have, he suggests that the visual system literally creates the three-dimensional objects, and that evolution "hides the truth from our eyes." (Crazy! What truth does it hide? That there are only two-dimensional objects?) Hoffman is a clever psychonomist (if that's a word) but a lousy philosopher. He has an ontology of the proximal: the eye "creates" anything that goes beyond the two-dimensional pattern on the retina.
I am not sure if this addresses your concern, and maybe I have failed to grasp the nub of the issue, but it seems to me that there are (at least) two sorts of cases of replication failure. In one kind of case, the facts are inherently variable and any positive result is just a fluke. In the less pernicious, or more redeemable, kind of case, replication failure is the result of bad methodology and could be avoided given better experimental design. As I understand van Bavel et al, they are saying that studies of a certain type are more likely to fall into the first kind of case. In other words, they are saying that given optimal methodology, many cases of replication failure must be traced to variability of "time, culture, or location" and cannot be remedied. So, they are not saying that you can never place any faith in generalizations about the human mind. Rather, they are saying that only some (not all) facts about the human mind are highly variable. Does such unreliability in one area infect every other area? Not if we have good criteria of demarcation. I was suggesting (comments 1 and 4 above) that perceptual psychology was not fatally infected because these phenomena are more uniform across history and across culture.
Hurvich and Jameson used exactly two "observers," one labelled H and the other J! (And, of course, their four-part paper was considered revolutionary, and though their theory has been challenged, their results haven't been significantly challenged.)
I'm not an optimist (or a pessimist either). I am just pointing out that the replication crisis, as exposed by such authors as Aarts et al, isn't as wide as sometimes advertised. (And of course there have been widely publicized instances of fraud in some of the problematic areas, which has increased public perception of unreliability.) There is no justification for saying that in cognitive psychology in general, "our gains are puny; our science ill-founded." I don't particularly want to go into substantive arguments about methodology in psych of perception, and I certainly agree that there are pressures to publish (just as there are in medicine and in physics, and even in moral philosophy, for that matter). But I do want to point out that smaller sample sizes are tolerated in some fields than in others . . . perhaps because the phenomena themselves are assumed to be more uniform across subjects. So for example with metacontrast or colour opponency studies, the sample is sometimes just the authors themselves plus a few grad students. Look at the classic studies by Hurvich and Jameson. These studies have never been challenged, and are considered well-established.
This is a very broad attack, seemingly motivated by broad scepticism regarding ANY attempt to understand the mind empirically. It's worth noting the area of psychology in which replication has proved to be a problem: studies of unconscious influences on personal attitudes such as belief, evaluation, motivation, and social behaviour. Thus, the Open Science Collaboration (Aarts et al) write that their sample of 100 problematic psychology articles were "coded as representing cognitive (n = 43 studies) or social-personality (n = 57 studies)." Is there a corresponding problem in e.g. perceptual psychology? For example, in the experiments that use statistical methods (and excluding brain-probing methods such as single neuron or MRI studies) on visual attention, perceptual illusions, cross-modal effects etc? I think not. Psychology isn't in crisis; social psychology isn't the whole of the discipline.
I'd like to second Chris. Whenever I have written to Ed and/or a subject editor, they have responded thoughtfully and made conscientious efforts to look into my points. I think revisions have ensued, at least on one occasion. There's no reason simply to live with what you're given.
It seems to me that if you pay for a service, you are injured if the service is not performed with due care. So, you must be saying either that Stanford did act with reasonable diligence and due care (and that the injury was just bad luck), or that the law, as written, does not prevent them from injuring people in this way. I wonder which.