This is Mohan Matthen's Typepad Profile.
Mohan Matthen
Toronto
Philosophy professor
Interests: Philosophy of perception, philosophy of biology
Recent Activity
John, I have no reason to contest what you say. And in fact the technology got a start by asking locked-in patients to imagine playing tennis if the answer to a question was 'yes' but imagine doing math (if I remember correctly) if the answer was 'no.' (So, pre-motor cortex activation means 'yes' and pre-frontal means 'no.') This task mixes imagined and affirmed content: the patient can't imagine doing math without actually doing it, but imagining playing tennis is clearly not actually playing it. Still, I'd be surprised if the technology has gotten so fine-grained that it can figure out the meaning (within certain parameters) of your thought but not whether you were affirming it.
We can find out a lot about people and their thoughts by observing the outside signs. But this technology (supposedly) goes directly to the brain. You can dissimulate by "acting"; perhaps this is possible for the brain as well. With regard to your question, I read attentively to find out whether the claim is individual-dependent. I think it isn't, or isn't wholly. Finally, it is scary even if scanners weigh tons: an interrogator can pop you into one and ask you questions.
Ned, I would certainly stop short of saying that you can read mental content off an MRI scan. But what the New Yorker article reports is that there is a space of meanings such that (many?) thoughts map onto points in the space, and that you can read the meaning-coordinates of thoughts off an MRI scan. I don't know whether the claim is correct, or correctly reported, but it would contradict philosophical orthodoxy if true. And it's quite scary. For example, if I (occurrently) have the thought, "The Government is committing war-crimes," the scan will reveal at least that I am NOT thinking "The Government is fighting a just war." And maybe it would reveal something more positively descriptive of my thought, like "A powerful organization is being unjustly violent." (I am trying to stay close to the parameters of meaning-space here.) Am I wrong?
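The geometry of this claim can be made concrete with a toy model. A minimal sketch, assuming (purely for illustration: the coordinates, axes, and decoded point are all invented, not taken from any real decoding study) that thoughts occupy points in a low-dimensional meaning space and that a scan recovers a noisy point, which can then be compared with candidate thoughts by distance:

```python
import math

# Invented 3-d "meaning space" coordinates for three candidate thoughts.
# Nothing here comes from actual decoding research; it is a toy.
candidates = {
    "The Government is committing war-crimes": (0.9, -0.8, 0.7),
    "The Government is fighting a just war": (0.9, 0.8, 0.6),
    "A powerful organization is being unjustly violent": (0.8, -0.7, 0.55),
}

def distance(a, b):
    """Euclidean distance between two points in meaning space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Suppose the scan decodes this noisy point from brain activity:
decoded = (0.85, -0.75, 0.6)

# Rank the candidate thoughts by closeness to the decoded point.
ranked = sorted(candidates, key=lambda s: distance(candidates[s], decoded))
for thought in ranked:
    print(round(distance(candidates[thought], decoded), 3), thought)
```

On these made-up numbers the generic paraphrase comes out nearest and the "just war" thought farthest, which is the shape of the claim above: the scan could rule one thought out and gesture at another without delivering the sentence verbatim.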
abx has it right, I think. Where there is Murdoch, there is a shift to the right. And that includes the UK. (A lot more people read newspapers in the UK, by the way, judging just from how it looks on public transport.) I don't think you have to puzzle over why Australia has state-sponsored medicine and other social welfare institutions: like the UK, it had them before the Murdochs became as dominant as they now are. But, as in the UK, Australian politics have shifted to the right and have become a lot more confrontational and less institution-compliant than they were. Am I wrong? Anyway, my thought is that a shift to the right in the US is a shift to the extreme, nearly-fascist right.
Franz, I don't agree with one of your crucial assumptions. You seem to suggest that the Government of Canada determines SSHRC policy. As far as I know, this is inaccurate both de jure and de facto. De jure it is false because SSHRC is meant to be at arm's length from the GOC. Its policy is meant to be determined by SSHRC Council acting independently, just as the Bank of Canada is supposed to act independently. Of course, this arrangement can be abused in many ways. Nevertheless, the GOC is not "within its rights" to exert direct influence. Which brings me to my second point. De facto, I don't believe that the Government of Canada does try to influence SSHRC policy. I could be wrong, of course, but I believe that policies such as the one we have been discussing are in fact put into effect by Council, which consists mostly of academics. So, if you don't like SSHRC's diversity direction, fine, and I agree with you. But don't blame it on the Government of Canada.
Like Andrew and the OP, I think it would be unwise for an applicant just to say that diversity is irrelevant to their research. There might be an academic on the adjudication committee who would take offence; indeed, let's say there's bound to be at least one. (And even if the diversity thing is not supposed to be a part of the merit review, who needs a snarling opponent adjudicating your work?) That said, I am sticking to my position that you shouldn't be terrified to criticize SSHRC for this requirement. Maybe I'm being over-sanguine, but things don't seem to me to have gotten quite so bad that every chance remark puts you at risk of being "cancelled by a social media mob."
SSHRC is at arm's length from the Government of Canada, so one would assume that a requirement like this was internally generated and approved by Council. So, what one can reasonably surmise about the diversity "module" is that it was likely developed by a committee, presumably an academic committee, that reports to Council. One can easily imagine how this came to pass. (I think it is possible to find out by asking, if somebody wants to do the work.) I'm a little puzzled by the claim that academics in Canada are "terrified" to criticize this. Are we supposed to be scared of SSHRC or of our own colleagues? I don't think we should be worried about SSHRC. It is a highly decentralized organization and its adjudication committees have autonomy of action. The academic members of these committees are not going to hold it against you that in some public forum you criticized SSHRC. (Why would they care?) And the secretariat is not supposed to voice any non-procedural opinion about adjudication. But I guess I see why people are worried about blow-back from their academic peers. Academic peers are a judgemental lot these days.
Not if there is always some light that reaches the eye from every direction in space. That's the thing about Vantablack: it's so non-reflective, it is said to look like a hole. (Put that way, I guess that deep, narrow holes would be Vantablack, except that they are smaller than the minimum visibile.)
Responding to Curtis Franks: The so-called impossible colours are good candidates for novel shades. In the early 80s, Piantanida and Crane famously produced an experience of a reddish green by stabilizing a red-green boundary on the retina (using an eye-tracker)—reddish-green is supposed to be an impossible shade. It sounds as if the Hoffman shade is something of the same kind, where the brain is induced to produce a yellow that is darker than black (which should be impossible). However, YInMn blue is a different kind of case from either of these because it is a pigment . . . a substance with a stable colour. (Same for Vantablack.) No fooling with the brain needed. Responding to Stephen Rive: any colour can be produced by mixing lights of different wavelengths, or by mixing different pigments. And this can be done in different ways . . . different mixtures look colour-equivalent. Because of this, we moderns have seen just about every possible shade, because our colour monitors can mix the RGB primaries appropriately. (This was not always true: a couple of thousand years ago, I figure, nobody had seen a truly saturated red.) What is impossible is getting certain colours to exceed certain levels of brightness or darkness relative to other colours seen simultaneously. It sounds as if YInMn blue is perceptibly novel because it is "unnaturally" brilliant. I wish I could see it! (Photographic reproductions wouldn't do it justice. You can't take a photograph of Vantablack either, because the dyes on the photographic surface don't absorb enough light.)
If fatalities are under-reported, as is likely, the fatality rate could be quite a bit higher than 1% in NYC. But the fatality rate seems to vary quite a bit by country . . . whether that's variance in reporting or variance in response is hard to say. It's certainly not a reliable predictor in itself.
I thought you might say that, Brian. I do think there's a greater level of social trust and cohesion in Canada (judging from American news sources), but that's not my point. I imagine that the Universities of Arizona and Wisconsin (along with virtually every other public university) are going to have to make a lot of people suffer, and I don't think it becomes a relatively well paid tenured professor to complain about a mild furlough that eases the burden put on colleagues and students a little bit. (Arizona doesn't sound very mild, it's true.) I am not offering an argument here, merely a sentiment. And that is all it is worth, I admit.
Walking around my neighbourhood, I encounter lots of small business people who are either suffering very gravely or, in some cases, out of business altogether. My dry-cleaner has lost her business; our local grocer is barely hanging on and won't be for much longer; my hair-stylist (who happens to live in the same building) has no income at all. Closer to home, my daughter, who is a therapist, has a large number of clients who cannot afford to pay her . . . and so her income is affected. The epidemic has also brought huge personal risks to many low paid workers, including front-line staff at hospitals and care facilities, and supermarkets, gas-stations, etc. There's a huge amount of loss all over. Just think of all the graduate students who won't get jobs next year, and all of the untenured faculty who will face very difficult situations. I hope employees who are being furloughed one day a month don't feel that they are being hard done by. I honestly can't understand why tenured faculty should not be willing to share in such a mild constriction.
This is very unfortunate. Sympathies to your correspondent. I find the university's action illogical, don't you? All of our duties are being conducted remotely. Many students have left campus and have gone home. What does it matter to them, or to the university, where exactly a remote lecturer physically is? Is the J1 automatically revoked if the holder is out of the US and unable to return? If so, I suppose the university can't legally pay the holder--it's not their choice--but if the visa hasn't actually been revoked, why should the university assume that it is?
I hope that as philosophers we all realize that data like this MAY be useful for scientific purposes, but is completely lacking in value for purposes of personal prediction. The authors say that the odds ratio for A-type is 1.2. This means (if I understand correctly) that if the chance of being infected across the population is 30% (odds of 3:7), then the chance for an A-type subject is about 34% (those odds multiplied by 1.2). That's already hard to translate into predictions--there is no certifiable way to translate probabilities of this kind into action. On top of that, there is a measure of uncertainty associated with this number. The authors say you can be 95% sure that in the 30% infection scenario, A-types have a chance of infection between 30 and 42%. Plus, of course, these chances vary hugely with behaviour, where you live, and your antecedent health condition. In short, you're safe to forget about this. Unless you are a medical researcher, you know nothing more that is of any relevance to you than before you heard of it.
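The odds-ratio arithmetic is easy to get wrong, because an odds ratio multiplies odds, not probabilities. A minimal sketch of the conversion (the 30% baseline and the 1.2 odds ratio are the figures quoted above; the rest is just arithmetic):

```python
def risk_from_odds_ratio(baseline_risk, odds_ratio):
    """Probability for the exposed group, given a baseline probability
    and an odds ratio (which scales the odds, not the probability)."""
    odds = baseline_risk / (1 - baseline_risk)  # 30% -> odds of 3:7
    new_odds = odds * odds_ratio
    return new_odds / (1 + new_odds)

# the figures from the comment: 30% baseline infection rate, odds ratio 1.2
p = risk_from_odds_ratio(0.30, 1.2)
print(round(p, 3))  # -> 0.34: a modest absolute bump over the 30% baseline
```

Note that reading the odds ratio as if it were a risk ratio (1.2 × 30% = 36%) slightly overstates an already small effect, which only reinforces the point that the number carries no personal predictive value.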
This blog's readers are largely highly-educated, sober-thinking people who have the public good at heart. And the same can be said of you, Brian. But by and large, most of us lack expertise in public health and epidemiology. So, allowing that I am unaccustomed to the ways of the internet, I am still seriously sceptical that this kind of speculation is useful. Where do the above options come from? If from you, Brian, then why should they be taken seriously as exhausting the possibilities? If from experts, then you should cite your sources. That said, many of us know quite a lot about university finances. So comments like Steven Hales' (and yours, Brian, about paying salaries) are informative. BL COMMENT: Happy to stand corrected, which is why I opened comments. I am reading various things by public health experts, but as noted, I've not seen anyone address this question.
No "mistakes" in particular, but a curiously unsatisfying string of well-hedged non sequiturs. To wit: A complaint about sample size (though it acknowledges that a sample of 1000 is quite normal in studies like this). Doubts that racial disparity in first-class degrees demonstrates bias, given a lack of disparity in second class degrees. (Of course, this doesn't show anything of the sort--indeed, it suggests that there is a downgrading of would-be non-white firsts to second class degrees.) A suggestion that the equal (or better) representation of Asian students is evidence that non-Asian non-whites are treated equally. (Those Asians are non-white, after all.) And general incredulity that there could be any hints of racism in what the authors proudly proclaim is the least racist society in the world--albeit one where racist advertisements were quite effective during the Brexit campaign. The authors end with the suggestion that "it’s socio-economic status, not race, that accounts for the attainment gap," but they don't attempt to explain why, in this non-racist utopia, "a majority of black students come from state schools," where students tend to do worse at university.
I have been using Andy Clark's Mindware in my second year "Minds and Machines" course. It is a wonderfully comprehensive and insightful introduction to the question of how artificial intelligence, simulation, and robotics are changing the traditional computational conception of mind. But it is a difficult read. I would love to know of any non-technical treatments of these topics. Any suggestions?
On David Wallace's reading, the theorem is somewhat similar to what has occasionally been called Li's "theorem": The average return on a portfolio of investments will increase in proportion to the variance (i.e., degree of diversity) of individual returns. The reason is that the higher return investments grow faster and hence occupy, over time, a larger proportion of the whole. (Li's theorem has been used as a simplified proxy for Fisher's Fundamental Theorem of Natural Selection.) Like the Hong-Page theorem, Li's theorem depends simply on the fact that in a diverse group, the average performance is better than that of the collective consisting of the worst member(s) of the group. But as David points out, the corollary of this observation is that the collective consisting of the best members of a diverse group will, though it is not diverse, do better than the average.
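The mechanism is easy to simulate. A minimal sketch (the growth factors and time horizon are invented for illustration): each period the assets compound at their own rates and the shares are renormalized, so the higher-return assets occupy an ever larger proportion of the portfolio and the weighted-average return climbs toward the best asset's.

```python
returns = [1.01, 1.05, 1.10]     # per-period growth factors (invented)
weights = [1 / 3, 1 / 3, 1 / 3]  # equal initial allocation

def mean_return(w, r):
    """Portfolio-weighted average growth factor."""
    return sum(wi * ri for wi, ri in zip(w, r))

history = []
for _ in range(50):
    history.append(mean_return(weights, returns))
    # each asset compounds at its own rate, then renormalize the shares
    weights = [wi * ri for wi, ri in zip(weights, returns)]
    total = sum(weights)
    weights = [wi / total for wi in weights]

# the average return rises from ~1.0533 toward the best asset's 1.10
print(round(history[0], 4), round(history[-1], 4))
```

This also illustrates David's corollary: the undiversified portfolio holding only the best asset (growth factor 1.10) beats the diverse average at every step.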
I realized upon further reflection that it is assumed that every guess results in an improvement on a previous guess it takes as a starting point. So the right guess is recognizable by the fact that it will be an equilibrium: all guesses that start with the right solution ping back to the input. Obviously, these are very strong assumptions.
I regret the hour or so I spent trying to understand the Hong-Page "theorem." As far as I can tell, the "agents" they define don't have "skill-sets". Each agent is just a set of guesses. (Each agent is a function from problems to solutions.) Some guessers are better than others: their iterated guesses result in improvements. As I understand it, the theorem states that some large number of randomly picked guessers will do better than a smaller number of good guessers. (I think this is true in part because the large set of randomly picked guessers will include the small set of good guessers.) As Professor Thompson says, this has very little application to the real world because the numbers involved are much larger than the number of distinct guessers. She writes: "you must be willing and able to make large numbers of clones of each of your job applicants, and you must be interested in picking from this army of clones a staff of tens of thousands, or the theorem has nothing to say about your hiring process." (I think you can simulate cloning by allowing each guesser to try multiple times to find a solution.) The theorem has, as she rightly complains, no mathematical interest and no practical value. You could draw from it the injunction: "Brainstorm a lot." But this wouldn't really help in the end. Brainstorming "diversely" (i.e., randomly) will get you to the right answer given enough time, but you wouldn't necessarily recognize the answer when you found it. I don't feel very confident of this analysis. I am just one guesser among many. But if the original post attracted enough answers, a correct answer will be included.
P.S. I too wrote about the paper on Idealism and Greek Philosophy. It was one of my earliest publications, and Myles was tremendously encouraging.
Myles was a friend. He was quite a remarkable individual. He had the ability that you allude to, Eric, to take from you what you were capable of giving. That sounds as if he was selfish. But actually, it was the opposite. He tried to find out what you were good at and gave you a chance to tell him about it. I learned a lot by talking about sense-data with him, to give you an example. That's one reason I am surprised you think that he exemplified "a wider limitation of that generation of analytic, 'ancient' philosophers to have overly firm, and in part mere fashionable, views about what mattered." Actually, I think he took ancient philosophy into channels that it hadn't occupied, at least not in Anglo-American philosophy departments.
A very charitable take! I see what you are getting at (but would Hoffman?).
I don't think it's Kant, because Kant doesn't believe in lies with payoff. It's basically sceptical about anything beyond the immediate evidence. Even if you treat a dark shadow on your screen as a steering wheel (in Grand Theft Auto), it's still not a steering wheel . . . just a visual construct. So also the real steering wheel when you are actually driving. That's the "argument," I think.
Chris Stephens has a great piece on better-safe-than-sorry scenarios. (“When is it Selectively Advantageous to Have True Beliefs? Sandwiching the Better-Safe-than-Sorry Argument,” Philosophical Studies 105/2 (2001): 161-189.) The bottom line, as I remember it, is that examples like Peacocke's (cited by Ned Block in comment 3) work fine if the appearance of closeness is a special-purpose perceptual appearance restricted to possible predation events. (Something like startle.) But if it's a general-purpose perception of distance, then it wouldn't work so well—it wouldn't be great, for instance, if everything (including food) looked closer than it really is. The advantage of exaggerating predator proximity would be counter-balanced by the disadvantage of over-estimating the ease of getting provisioned.
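The trade-off can be put as a toy expected-payoff calculation. A minimal sketch, with all the probabilities and costs invented for illustration: a closeness-exaggerating bias is cheap if it fires only in possible predation events, but a general-purpose version also distorts food distances, and with plausible costs that wipes out the gain.

```python
# All probabilities and payoffs below are invented for illustration.
P_PREDATOR = 0.05   # chance an approaching object is a predator
COST_EATEN = -100   # failing to flee a real predator
COST_FLEE = -1      # fleeing when there was no predator
COST_FOOD = -10     # misjudging food distance and going hungry

def expected_payoff(biased, general_purpose):
    """Expected payoff of a 'looks closer than it is' perceptual bias."""
    # with the bias, the animal always flees in time (at a small cost);
    # without it, it sometimes fails to flee a real predator
    predator_term = P_PREDATOR * (COST_FLEE if biased else COST_EATEN)
    # only a general-purpose distance bias also distorts food perception
    food_term = (1 - P_PREDATOR) * COST_FOOD if biased and general_purpose else 0
    return predator_term + food_term

print(round(expected_payoff(True, False), 2))   # special-purpose bias: -0.05
print(round(expected_payoff(False, False), 2))  # no bias: -5.0
print(round(expected_payoff(True, True), 2))    # general-purpose bias: -9.55
```

On these invented numbers the special-purpose bias does best and the general-purpose bias does worst, even worse than unbiased perception, which matches the comment's bottom line: exaggerated predator proximity pays only when quarantined to the predation context.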