Gunnar Björnsson
Recent Activity
Many thanks for posting this, Thomas!
The full ad, with link to the online application system, can be found here: http://bit.ly/1C46Nd7
Deadline for applications is February 12.
Postdoctoral Fellow(s) in Practical Philosophy
The Department of Philosophy, Linguistics and Theory of Science, FLoV, was created on 1 January 2009 and consists of the subjects linguistics, practical and theoretical philosophy, logic and the philosophy of science. The University of Gothenburg received a 10-year grant from the Swedish Research Co...
Hi Josh,
As Marcus suspected, my worry was meant to be neutral with respect to the possibility of zombies. What I don't see, and the reason I don't have a clear intuition that Zom lacks moral understanding, is how the purely qualitative aspect can be said to add understanding, given that it doesn't guide normal acquisition of the belief that pain is bad.
(I also very much doubt that the badness of pain lies in any purely qualitative aspect of it. If the aspect is purely qualitative, it should be possible to be in a state that has that aspect while being completely disconnected from the ordinary sources of pain and from any direct propensities to avoid or dislike the state. But then I don't see how that state would be bad, at least not in the way pain is bad.)
Here Be Zombies, and Their Normative Ignorance
Regarding the connection between consciousness and FW/MR, a good many of y’all seemed pretty happy ditching phenomenal consciousness in favour of an explicitly functionalized notion. Maybe that’s because you haven’t thought about Zom. Zom is a pretty normal dude. He wears hats backwards. He trie...
Hi Josh,
Late to the party, and I see that Neil has just raised my main worry here: I'm not so sure that the sense that Zom is excused survives a full functional duplication of an ordinary person. In particular, it seems that Zom's moral beliefs would be causally influenced by just the sort of things that influence the moral beliefs of an ordinary person. If we form beliefs about the badness of pain in response to being in pain or seeing people in pain, for example, one would think that Zom would form judgments about the badness of pain in response to being in the functional counterparts of such states and to seeing people in pain.
Because of this, I'm not sure exactly how best to understand what we are supposed to imagine. Perhaps you intend Zom to be functionally unlike ordinary people in some restricted ways, or perhaps the idea is that understanding has a purely qualitative aspect to it that eludes Zom. Or perhaps these are worries you share, and among your grounds for thinking the argument flawed.
Here Be Zombies, and Their Normative Ignorance
I'll second that: a very rewarding conference!
What a wonderful Pacific APA
I just wanted to comment on the last Pacific APA and to thank Kevin Timpe for doing an incredible job with it. I'm still high on agency, free will, and moral responsibility talks. I have always loved the Pacific meetings, but Kevin, no doubt with the help of many others, did an extraordinary j...
I might perhaps add that in one of the studies in "Motivational Internalism and Folk Intuitions", subjects were as willing to attribute moral belief in an explicit "inverted commas" scenario as in a regular amoralist scenario. (The inverted commas scenario was like the regular amoralist scenario with the exception that the agent's putative moral judgments were explicitly concerned with whether others would say that the action in question was morally wrong.) This might indicate that attributions of moral belief to amoralists are not attributions of the sort of moral belief that metaethicists have been interested in: everyone agrees that inverted commas beliefs are possible, whatever the correct metaethical theory.
Recent Work on Motivational Internalism
Imagine a person who is not at all motivated to help others. I don't just mean a person who doesn't care about others as much as she should; I mean a person who is literally not motivated at all, not even to the tiniest degree. Now comes the question: Could such a person genuinely believe tha...
Brad, in our studies (in "Motivational Internalism and Folk Intuitions") we tried scenarios relevant to versions of conditional internalism, thus going beyond the simplest forms of internalism. Still, for anyone interested in whether particular versions of internalism resonate particularly well with non-philosophers, there is plenty left to do. Of course, the subtler the differences between the views, the more difficult it will be to work out the relevant implications for cases and to get the details of such cases across to subjects. We already found it difficult to avoid what we took to be the more obvious confounds when investigating some more basic differences.
Recent Work on Motivational Internalism
Oops. I was a bit too quick in writing down that suggestion. The sentence
"The aggressor is of course responsible for the aggression, but as the aggression has begun, it is now in the hands of the agent."
should have said
"The aggressor is of course responsible for the aggression, but if the agent resorts to violent defence, the aggressor cannot then avoid being harmed by it."
A worry regarding the necessity of defensive force
I've got, well, a worry regarding the "necessity" requirement on the legitimacy of self- or other-defensive force. I don't really work on this stuff, so it's entirely possible that there's an easy, pat answer to this worry. Anyway, I'd be interested to hear what you all think. It's traditionally...
Hi Andrew,
A quick suggestion:
One difference between the two types of cases that I think might be driving the different intuitions is this: In the Type II case, the agent's action of going to the place of aggression leaves it in the hands of the aggressor whether he will be harmed or not: even if the agent goes there, the aggressor has a perfectly legitimate way of avoiding the harm, namely by not attacking the agent, and the aggressor would be responsible for not choosing this way. In Type I cases, by contrast, the aggressor would not be able to avoid the unnecessary harm inflicted on him by the agent. The aggressor is of course responsible for the aggression, but as the aggression has begun, it is now in the hands of the agent.
One way to bring out the role of responsibility is to think of a Type II* case, which is like the Type II case, except that the aggressor would not be responsible for his aggression should the agent show up. Perhaps he has, through no fault of his own, been given a drug. It is the drug that causes the aggressive disposition, and it does so in a way that removes the aggressor's responsibility for his reaction. If the agent knows this, the agent doesn't seem to me to have the right to go to that place, knowing that he will have to harm the aggressor to avoid being harmed himself.
A worry regarding the necessity of defensive force
Hey Wesley,
Sorry for the delay, and thanks for your thoughtful questions. Let's see if I can assuage some of your worries.
You asked: "You observed in several studies that participants answered belief questions at rates that would be expected simply by chance. Could you say a little bit more about how that particular finding supports internalism in your view over externalism?"
First, I should say that we do not take the results to show that people in general have one unique, uniform conception of moral beliefs (or, more precisely, of beliefs about moral wrongness, which is our specific focus) that is internalist in nature. At least some of us authors think that people might have different conceptions of various closely related kinds of states, and that different conceptions might be more at the forefront for some people than for others.
With that said, there are three steps to the argument. First, we find it plausible that people understand the vignettes in the intended way, i.e. as involving the characteristic cognitive processes of moral judgment but no hint of moral motivation. (More about that assumption in connection with another question of yours.)
Second, in the scenarios where motivation was completely absent (as opposed to temporarily suppressed or disengaged), between 54 and 64% of subjects withheld attributions of wrongness belief. Given the first assumption, it is unclear why anyone would withhold belief attribution unless they took motivation to be a necessary requisite of moral belief. If people merely took the absence of motivation to provide prima facie evidence of absent moral judgment (as externalists claim), the explicit mention of the cognitive processes associated with moral belief should have counteracted this. Based on this, then, we seem to have pretty good reason to think that a majority operates with an easily triggered internalist conception of moral belief.
Third, we also think that we have some evidence that many attributions of wrongness belief in cases of completely absent motivation are attributions of something other than the states that internalists have been theorising about. Faced with the explicit "inverted commas" scenario, 45% of subjects were willing to attribute wrongness belief, even though only 20% would say that the agent "herself thinks" that what she did was wrong. (It is striking that the percentage of attributions of moral belief in the inverted commas scenario wasn't significantly different from what we saw in the standard scenarios where motivation was completely absent.) This suggests that a large group of subjects are attributing moral belief in a wider sense than that which has concerned at least internalists. And the latter sort of attributions do not obviously speak against internalism (as no internalist has denied that we can make *that* sort of judgment without being motivated).
***
You asked: "Could you explain the purpose of the inner struggle study? You say it was conducted to rule out a worry that subjects of previous studies weren't paying attention when taking those studies. But how could a completely different study accomplish this, or rule out other explanations of chance rates?"
Now, we actually had several reasons not to think that subjects answered the belief questions randomly. First, in our pilots, we had asked subjects to provide free-form motivations of their answers, and the answers all seemed to make sense in light of the scenario given either internalist or externalist conceptions of moral belief. Second, we asked subjects how confident they were about their answers to the attribution questions ("very low" to "very high" on a 7-point Likert scale), and got overall very high scores (I don't have the complete data at hand, but a quick look at the studies in SurveyMonkey suggests that across a number of studies the mean confidence was around 5.4), in particular among those who gave negative answers to attribution questions. Third, the fact that subjects attributed different states ("understands", "believes", "herself thinks") suggests that the locutions matter, and that there is something about attributions of moral belief in particular that makes for the distribution of answers rather than general confusion. Still, we wanted to make sure that the vignette did not confuse subjects and decided to further test this by providing a scenario with most elements of the other scenarios (thought processes, action), but with motivation present in such a way that most philosophers would be willing to attribute moral belief. Thus the inner struggle study. The scenario is of course different in various ways, but similar to the scenarios without any hint of even latent motivation in the respects that we worried might confuse subjects.
***
You asked: "What do you make of the listless case, one of the strongest results of the paper, in which 70% of participants attribute moral belief without any motivation? For the second strongest result, the no reason case, you get 64% denying belief. Stacking that up against the listless case, and the other results at chance, it seems like your evidence is pretty split over which view has intuitive support, is that correct?"
A quick look now suggests that we could have been much clearer on this point. (Perhaps we can add some clarification here when sending the final version.) Most internalists these days reject strong forms of internalism demanding that one is motivated to act on one's moral beliefs whatever one's psychological state. So most who think that an individual's moral beliefs have a necessary connection to her motivation accept some form of conditional internalism: moral judgments motivate under conditions of practical rationality or suitable psychological normality (some go for an even weaker, communal, form of internalism, where it is enough that enough people in their linguistic community are motivated in the right way, but we were concerned with individual forms in our studies). On one sort of view, for example, moral beliefs are states that dispose one to be motivated (in the colloquial sense), or that have as their function to produce motivation, but the expression of these dispositions can be blocked in various ways, when the normal routes by which these states perform their function are not properly operating. A general motivational disorder like listlessness would be a prime example of what can block their expression; its presence doesn't show that the state itself is absent. If subjects understand moral beliefs along these lines, we should expect more attributions of belief in cases of listlessness than when there is nothing that explains why the moral beliefs don't motivate. And this is what we see. By contrast, subjects should be maximally reluctant to attribute belief when motivation is clearly missing and there is no plausible explanation of how a disposition is blocked. This is what the No Reason case was supposed to represent, and indeed here subjects were most reluctant to attribute wrongness beliefs.
***
You asked: "Almost all of the studies included some variant of a very long set up that described protagonists as highly abnormal, unfeeling psychopaths who 'classifies actions using expressions like "morally right" and "morally wrong"' but that this doesn't 'in any way influence her choices'. Derek points out above that maybe the specific psychopath details are playing a role in people's judgments. I was wondering, were you worried that saying the protagonists make moral classifications and then asking about their moral beliefs or their moral understanding, while a staple of the philosophical debate, might genuinely confuse people?"
We were a little worried before we ran the studies, which is why we asked for free-form motivations of attribution answers in our pilots, and looked at the other aspects mentioned above in my answer to your question about the inner struggle case. It is of course always possible that people were confused by details in the study, and the descriptions of psychopathic traits are no exception. For the moment, though, it is not clear to me in what way or why one should expect confusion here. One such way might have been that people would be eager to blame the psychopath, and that this would affect attributions. But we did test the hypothesis that blame was doing the work, and found no evidence of that. Of course, we also ran the No Reason case, which doesn't contain the details, and that made for significantly different answers. Still, I am a little unsure why we should attribute this to confusion. I'm more inclined to think that some subjects might have taken the egoism and lack of empathy to signify features blocking existing dispositions to be moved by moral beliefs, as the hypothesis that attributions are affected by the perception of such features gets some support from the distribution of answers across the Listlessness, Psychopath and No Reason scenarios. But this is admittedly speculative.
More work on motivational internalism – and some questions
In a recent post here on the Experimental Philosophy blog, Wesley Buckwalter presented some recent studies of his and John Turri’s on motivational internalism. Motivational internalism postulates a necessary connection between moral judgments and motivation (often conditional on rationality or l...
Thanks Wesley, those are good questions – much appreciated! I've been travelling and have a tight schedule until tomorrow night, but hope to be back with answers then. Cheers,
Gunnar
More work on motivational internalism – and some questions
Hey Wesley,
I agree with your point that people are both willing and unwilling to attribute belief (or belief-related states, depending on how "belief" is understood) absent motivation. Indeed, this is closely related to what we found in our studies, where people were more willing to attribute "understanding" than to attribute "belief", and it was one of the hunches that led us to ask for attributions of related states. I also agree that *willingness* to attribute belief in the absence of motivation is evidence that people have *externalist* intuitions. My question concerned whether *unwillingness* to attribute belief would be evidence that people have *internalist* intuitions. The worry here was that people might be unwilling to attribute belief not because they accept an internalist, conceptual requirement that moral belief motivates, but because they take the absence of motivation to provide strong prima facie evidence for the absence of sincerely held moral belief, i.e. because of something that externalists are happy to acknowledge. Now, I think that there *is* evidence that people have internalist intuitions about the sort of state(s) tracked by questions about whether someone believes that such-and-such: I think that the studies we present in our paper provide such evidence, as we try to make sure that non-motivational aspects of moral judgment are represented in the scenario. My concern was with whether the results in your study provided additional evidence to that effect, thus corroborating our findings.
With respect to my concerns about what your probes for thin belief reveal, I think that you are right that we have reached the end of the road here. I have looked at the probes used in your various studies, and my impression is that what is picked out varies and doesn't track some unified kind of state of holding true: sometimes it tracks relations to earlier holdings-true, sometimes holdings-true of a content different from what is picked out by the thick belief probe, sometimes one sort of restricted behavioral competence associated with paradigmatic cases of believing, and sometimes another. I also think that much of this variation is displayed by uses of "on some level" in relation to locutions other than "thinks that". You have a different impression of how your thin belief probes will be interpreted. I think that's fair enough. Even if you guys don't think that my worries are any cause for concern or need to be followed up, I do find it helpful to know the source of our differences.
And as I said before, I am curious to hear more about your worries about biasing subjects in relation to the complex vignettes used in our studies. Bias is of course always a worry (and a problem that we were anxious to avoid), but I got the impression that you might have had something more definite in mind. If so, I’m all ears, as that would be something I'd want to follow up.
More work on motivational internalism – and some questions
Hi John,
Yeah, this has been more difficult than I expected and I suspect that we have long since exhausted the patience of any would-be readers. Thanks, though, for giving it another try. Also, apologies for getting your last answer wrong; I thought you were saying that you would get the upshot you wanted even if "on some level X thinks that" picks out the sort of things I suspect it picks out. Back, then, to that issue.
Symptomatically, my response too begins with a clarification of an apparent misunderstanding. My worries do not concern the locution "thinks that"; as I've said, they concern the job of "on some level". The locution can operate on a variety of expressions: we might ask whether someone we disagree with is nevertheless right on some level, and ask whether, on some level, someone enjoyed an ordeal. We might also say that someone was relieved on some level, disappointed on another, or appreciates his parents' sacrifices on some level, but is angry with them for not caring on another. If I'm getting you right, you think that when this locution operates on the "thinks that" locution, it has a definite effect: it leaves the content indicated by the that-clause intact (compared to the corresponding expression without "on some level") and leaves the attitude towards that content a mere holding true, stripping away further commitments. To me, it seems that the locution can do and does other things too, when attached to "thinks that" and in other contexts: it can change the content ("I guess that's right on some level" ≈ "I guess that's right if understood in a certain way [a way that's different from what first comes to mind, or different from what's most centrally relevant in the context]") or indicate that the attitude in question is in some sense partial rather than all-told ("he enjoyed it on some level" ≈ "there was an element of enjoyment [not implying that it was enjoyable all told]"). These were also the sorts of interpretations that came to mind when I read the cases and questions you work with in this paper and the paper with Rose on belief and knowledge.
Now, you have two replies. The first is that your linguistic intuitions tell you that my readings are very remote possibilities. I think that your linguistic intuitions carry some weight, and this gives me some reason to bracket my own intuitions a little. Of course, it is easy to have one's interpretation coloured by one's theory, but I have theories too (though perhaps not ones with very clear implications in this case) and have the epistemological disadvantage of not being a native speaker of English. Still, I think that the general pattern of use of the locution suggests that the possibilities are not so remote.
The second reply is that there is a pretty robust experimental track-record "very well explained by people interpreting the relevant words in the ordinary way". Of course, I'm inclined to agree with this, though we have different views about what the *ordinary way* of understanding these words is. We probably agree sufficiently about the ordinary interpretation of "thinks that", but we disagree about "on some level". Now, as you also say, I'm providing different interpretations for different cases, and you think that my "alternative interpretation strategy" is unsustainable, presumably because it is less general than yours and ad hoc.
Here I should first say that I don't have a *strategy*, strictly speaking, because I don't have a goal: I'm not trying to save some alternative theory, and your hypothesis is compatible with my other commitments. In earlier writing I have myself proposed that the conceptual question itself cuts little ice, exactly because there might be different conceptual commitments, and that more fruitful questions concern the nature of actual moral thinking. Rather, what has been driving my questions is that I have an intuitive sense of how I understand the thin belief probes in your studies, and these understandings seem to diverge from the understanding that you operate with. Moreover, looking at the wider use of "on some level" locutions, it seems to me that analogues of my way of understanding the thin belief probes are well represented. So, apart from your intuition that what strikes me as the most natural interpretations are generally farfetched (which, I admit, carries some weight), I don't yet see why subjects wouldn't read thin belief probes my way, or why they would read them in the Dretske sense of merely holding true.
Of course, the fact that a variety of uses of "on some level" figure in ordinary language doesn't show that the Dretske sense of thin belief isn't the one operative in subjects when they respond to your probes. Perhaps there is a highly plausible general account of how "on some level" works that predicts the intended reading when applied to the cases you are working with. I've tried to see what such an account might be but haven't found one yet.
More work on motivational internalism – and some questions
Hey Wesley, thanks for contributing! I also think that our studies are complementary – which is why I’m fretting over just how much I can take away from your study and how much it might leave open.
I think that responses to the Elitist Politician case effectively deal with the sort of worry that you mention in the paper, i.e. the worry that on some level everyone thinks that we ought to help the poor. But it is less clear to me that the responses rule out that subjects understand thin belief attributions in the other cases in line with the interpretations I had found most salient:
(a) (in the politician cases) the agent’s retrospectively accessible prior thick beliefs
(b) (in the liar case) the belief that the agent has a mere prima facie obligation not to lie to the employer (as opposed to an all-things-considered obligation, as the agent thinks that the employer has mistreated him).
Judging from the story of the Elitist Politician, EP (a) has no prior thick belief to access, and (b) accepts no general principles of helping that fail to apply on the occasion. Consequently, subjects would have no reason to ascribe thin belief to the EP on either of these interpretations. So it is unclear how these proposed readings would be ruled out by responses here.
By the way, you mention worries about bias in connection to our complex vignettes. If you have specific worries, I’d be very interested in hearing what you have in mind.
(edited for clarity)
More work on motivational internalism – and some questions
Hi John,
I don't deny that "on some level" is a perfectly ordinary phrase, or claim that it causes difficult interpretive problems. I think that when we use it, it will typically be *clear enough* in context what we have in mind. What I wonder is why I should think that, in the contexts where you use it, it carries the precise content needed for your thin belief probes to probe for thin belief in the specific sense that I took you to be after: a mere holding true. The reason I wondered was that, intuitively, in those contexts, the locution seemed to me to carry contents other than that. Since I found your claim really interesting – it would complement our findings in a really nice way and perhaps explain the difference in attributions of belief and understanding that we had come across – I hoped to hear more about why your proposed interpretation would be the one that most subjects went for.
But your latest answer suggests that you don’t care whether this is how subjects understand the locution. Then I have misunderstood the exact nature of your claim. Perhaps your claim is merely that when people are asked whether, at least on some level, an agent thinks that he ought to do something, the state of mind they will consider tends to be one that they take to be compatible with the absence of motivation: they are externalists about that state, whatever it is. Then that’s fine, though it seems to me that the implications of your results will be less clear in the absence of a clear account of what that state is.
Regarding the worry about missing motivation that I mentioned in passing in my previous reply to you, I tried to spell it out a little more in my comment to Derek (second paragraph). But I should add here that I think that your Very Jaded Politician case goes quite some way to address this worry.
More work on motivational internalism – and some questions
Hi Derek, thanks for chiming in. We all seem to share the sense that everyday talk of belief is rich in some way, involves some sort of endorsement or commitment. Talk about someone's belief or beliefs using the noun "belief" seems especially prone to trigger the sorts of reactions you mention, and to be typically restricted to matters political, moral, or religious. At the same time, attributions of belief using "believes that" seem less confined. At least it is commonly used in cases where we recognize uncertainty or controversy, as we might naturally say that someone believes that the Euro will survive the crisis, that the GOP will eventually accept gay marriage, or that Xabi Alonso has already decided to leave Real Madrid. My sense is that all these cases involve a personal commitment or endorsement, going beyond what is straightforwardly given by the evidence. Becoming clearer about what goes on here would be very helpful.
Also, thanks for addressing some of my worries. You are right that the control questions go some way to address worries about motivation, and I didn't spell out the sort of worry I had in mind more specifically. The problem I am sometimes worrying about here is that the everyday notion of "motivation" might not capture what at least some internalists have had in mind. Internalists have typically had in mind something other than actually being or feeling moved to do something: the relevant state is one that will move one under normal circumstances (when one is thinking clearly, not afflicted by general listlessness, etc). But in everyday talk, saying that someone is motivated to do something typically has more implications, and I might even say, colloquially, that I have no motivation to do what I am currently doing. Of course, we did go to some lengths to make salient the complete absence of motivation even in weaker philosophical senses of motivation, and much of the disagreement seen in other studies remained, suggesting that simpler formulations might suffice for certain purposes at least.
It is interesting to hear that attributions of belief and questions about the possibility of belief yielded very similar results. Worries about this – worries that we would miss out on the relevant modal element of internalism – were what led us to a design where we could hope to capture the modal element without actually using modal locutions. But there are some drawbacks with that design too, naturally: to make plausible that there is a non-defeasible requirement of motivation, we needed to make sure not only that motivation is absent but also that people get that the non-motivational features commonly associated with moral belief are in place. One might worry that this makes the vignette too complicated for people, or introduces other problems. We tried to control for some such problems, but there are no doubt more.
More work on motivational internalism – and some questions
Right; I get the proposal that the debate is explained by ambiguity, and various internalists (including high-profile non-cognitivists) have similarly thought that externalism is true about various (secondary or derived, they would say) kinds of moral judgment or belief. And I sympathise with the idea that enduring purportedly conceptual disputes between highly intelligent parties might be best explained by ambiguity, though the fact that intelligent parties take it to be a dispute might also suggest that it is indeed a dispute. (My former student Ragnar Francén Olinder http://goo.gl/vTwz7i has done highly interesting work on the ambiguity line, much of my work on issues of disagreement has been geared towards showing how it might make sense even if parties operate with different concepts, and my own preferred way http://goo.gl/N9ohVb of accounting for the connection between moral judgment and motivation acknowledges that some judgments unaccompanied by motivation might sensibly be understood as moral wrongness-judgments.) Indeed, one of my questions in this post was concerned with similarities in our results: we saw a difference between attributions of moral understanding and moral belief that seemed to match the difference you saw between answers to probes for thin and thick belief.
I also get that the "on some level" locution was intended to allow for belief that is not conscious or occurrent. My second worry concerned whether this was all it would do when people are asked whether agents who lack motivation to do something think that they ought to do it. I suggested that it might lead them to think about what the agent had historically believed, or about what the agent believes about his prima facie duties. You say that this is purely speculative and inconsistent with other studies of thin/thick belief. I agree that this is speculative: it is not based on specific studies, but primarily on my own interpretation of the questions. Still, I wonder what reason we have to think that subjects understand the questions in the way you intended. That's what I have been asking for.
Now I understand you as answering that this interpretation, unlike the interpretations I have proposed as possibilities, coheres with or is consistent with prior studies. But I wonder about that. In fact, it seems to me that my proposals are as consistent with earlier results as the "holding true" proposal.
Here's why I think this. Suppose that I'm asked whether, *at least on some level*, Agent believes or thinks that P. The extra locution clearly allows for positive answers in more cases than the plain question of whether Agent believes or thinks that P. But what sort of alternative interpretations are likely to come to mind? Talk about "levels" doesn't have any obvious content here, so we can expect context to do quite some work in pointing us to what might not adequately be described, without qualification, as believing or thinking that P, but is suitably closely related. To see whether it points us to thin belief in the Dretske sense, we need to look at cases.
Start with an example from one of the prior studies that you take to support your interpretation of ascriptions of thin belief. Here, the context is one where Agent holds, on her parents' authority, that the earth is at the center of the universe but has been a good student and writes on the physics exam that the earth revolves around the sun. A large majority of subjects were willing to say that *on some level*, Agent thinks that the earth revolves around the sun, and this might seem like a plausible thing to say in light of the fact that Agent knows that this is what physics says. But I don't see that this is a clear case where Agent thinks that the earth revolves around the sun *in the Dretske sense of merely holding true*. Perhaps she thinks this "on some level" in the sense that she suspects that it might be true, or in the sense that she accepts that it is supported by scientific evidence, or in the sense that she has received information to this effect and is capable of acting on it in the present (exam) context, or in the inverted commas sense that she thinks that this is what science says. Just as in the case of moral belief, I don't see why we should assume that subjects who answer the thin belief probe in the positive attribute thin belief in the Dretske sense (assuming that I have understood what this sense is).
Likewise for the Dog case from "Belief through thick and thin", where a dog can respond to basic arithmetic questions by barking the right number of times: I don't see why we should think that subjects who attribute thin belief that 2 + 2 = 4 are attributing thin belief in the Dretske sense in particular, rather than, say, a disposition to reliably act, under constrained circumstances, as if believing that 2 + 2 = 4. (Perhaps one would want to say that having a reliable disposition to act, under certain very constrained circumstances, as if believing that P *is* a kind of holding true that P. But then externalism about thin moral belief would fall far short of what externalists want and what internalists are eager to deny, and it is unclear why empirical investigations would be needed: everyone agrees that people might be disposed to behave *in some ways* as if having moral beliefs without having the corresponding motivation.)
Notice that I’m not denying that there might be cases where the Dretske interpretation is exactly right, nor am I completely ruling out that it is the right interpretation of the moral belief cases. But I don’t yet see any reason to think that it is: the interpretation you propose still seems to me as speculative as the ones I have proposed, and not better supported by earlier studies.
Though my worries about the interpretation of responses to the "thin belief" probe remain, I think that our remarks have now begun to connect. But I feel that we are still talking past each other in relation to the first worry, i.e. my worry that your probes fail to measure *internalist* intuitions. Perhaps it might be helpful for me to distinguish between two things to test for:
INTERNALIST INTUITIONS: Intuitions expressive of an understanding of moral belief on which it conceptually or metaphysically requires the presence of motivation.
EXTERNALIST INTUITIONS: Intuitions expressive of an understanding of moral belief on which it is conceptually and metaphysically compatible with the complete absence of motivation.
If people seem to attribute moral belief in a case where they clearly do not attribute motivation, this is prima facie evidence that they have externalist intuitions. Now, it might be that the belief in question isn't the sort of belief that metaethicists have been concerned with – perhaps it is merely a form of inverted commas belief (a worry intensified by one of our studies) – and perhaps subjects do not really think that all motivation is absent in the sense of "motivation" that internalists have had in mind (we go to some lengths in our study to rule out this worry).
These two worries, I think, need to be taken seriously, but the worry that I have focused on doesn't concern tests for externalist intuitions, but rather tests for *internalist* intuitions. The point I have been trying to make is that the mere fact that people withhold attributions of moral belief in a case where motivation is missing is not in itself evidence that people have internalist intuitions. Everyone, externalist and internalist, accepts that absence of motivation can be strong evidence that moral belief is absent. On an internalist view, the absence of motivation provides conclusive evidence of absent moral belief, but on the externalist view, it can be very strong evidence that the holding-true or judgment-making part of moral belief is absent (as Svavarsdottir and other externalists have insisted in explaining away seemingly internalist intuitions). One way of trying to avoid this problem – the way we try in our studies – is to make it as explicit as possible in the vignette that there is some holding true or judgment making going on of the sort characteristic of moral belief and judgment, without prejudging whether it constitutes a moral belief. If this is indeed made clear and people still do not attribute moral belief in the absence of motivation, it would appear that people do operate with an independent, indefeasible requirement that moral belief be accompanied by motivation.
My worry, then, has been that your studies fail to avoid this problem. Of course, whether it is a problem for you depends on whether you take your results to be independent evidence of internalism about thick belief. Your last reply makes me think that maybe you are satisfied to show that some clear cases of externalist intuitions are tied to thin belief.
More work on motivational internalism – and some questions
Thanks John, that helps me see where I am not making myself understood.
So, to clarify: My first worry doesn't concern the difference you get between thin/thick probes. Clearly those probes probe for different things – thus far I am on board. Instead, the worry concerns whether the probes are probes for *internalist* intuitions (i.e. intuitions explained by the existence of a *necessary* link between moral belief and motivation, as opposed to a reliable but non-necessary tendency for moral beliefs to come with motivation). Much of the metaethical debate about internalism has been concerned with understanding which of these two sorts of connections obtain between moral judgment and motivation, so from the point of view of that debate, this is a crucial distinction.
Regarding the second worry, it again doesn't concern the difference you see between thick and thin probes. Instead, it concerns whether your probes are tracking attributions of thin belief in the Dretske sense, or perhaps tracking something else. If I understand the way you draw the distinction between thin and thick belief, it concerns two different kinds of attitudes one can have to a content. Thin belief is a mere holding true, whereas thick belief involves more, in particular dispositions to act on the belief. Generally, my worry is that the use of the weakening qualifier "on some level" opens up ways of thinking that do not involve straightforwardly holding true the content in question.
In the cases you use in these experiments, it seemed to me that it might be tracking a historical, perhaps nostalgic, way for the agent to think about his previous (thick) commitments. Since you bring up the case of the thin and thick liar, I should say that my worry there is more that the content believed changes between probes for thin and thick belief: that in the thick case, Michael is understood as not thinking that he ought to tell the truth *all things considered* (because he has been treated badly); in the thin case, he is understood as thinking that, as an employee, he has a *prima facie* obligation to tell his employer the truth about how much overtime he works (but that, because of the lack of respect on the part of the employer, this is not an obligation all things considered).
For all these cases, though, the worry is that thin/thick probes fail to track attributions of thin/thick beliefs with the same contents: the thin probe might not track a holding true, or might track a holding true of something other than what the thick probe is tracking.
(By the way, it's getting late in this time zone, so apologies in advance if I am slow in moderating comments for the next few hours.)
More work on motivational internalism – and some questions
Hi John,
Apologies for being unclear. I'll see if I can do better this time.
I mentioned two worries.
The first concerns what sort of connection between moral belief and motivation you test for. Motivational internalism (in its simplest forms) postulates that, by conceptual necessity, whenever there is a moral belief of the relevant kind, there is corresponding motivation. But whereas externalists deny this, they agree that we can *generally expect* people to be motivated to do what they think that they ought to do (though they insist, of course, that there are or can be exceptions to this general tendency). If we want to check for intuitions resulting from an internalist concept of moral belief, we thus need to be sure that we are not just testing for intuitions resulting from this general expectation. My worry is that your vignettes plus questions fail to distinguish between these two kinds of intuitions.
You present a case with an agent who lacks motivation to φ and ask subjects whether that agent believes that he ought to φ (or whether, on some level, he thinks that he ought to). If I understand you correctly, you want to take reluctance to answer these questions in the positive to indicate internalist commitments on the part of subjects, and your suggestion is that subjects are (more) internalist about thick belief than about thin belief. But how should a subject answer that question given that she has the general expectation that moral judgments come with motivation? Well, given this expectation, she should take absence of motivation to provide (non-conclusive) evidence of absent belief. Whether people have internalist commitments or not, this expectation seems to be enough to generate reluctance to attribute belief in the cases you present. If so, such reluctance doesn't tell us that the subject has internalist commitments. That's the first worry.
The second worry (which might be less serious) concerned what the prompt for thin belief was actually tracking. The way you describe thin belief, with reference to Dretske, suggests that it involves holding true the relevant proposition. My worry is that talk about what someone thinks "at least on some level" need not be tracking what the person holds true, and so need not be tracking attributions of thin belief. In the cases you presented, the agents had at some point been motivated to do the right thing and presumably at that time thought that the action in question was the right thing to do. When I am thinking about whether, "at least on some level," such an agent thinks that he ought to do this, what springs to mind (insofar as my introspection is reliable here) is the agent's historical (thick) commitments and his memories of these, rather than what the agent presently holds true in the thin sense. (A related worry about belief reports comes from our study, where people were quite willing to attribute moral belief in explicit inverted commas cases. But for me at least, the existence of a pre-history of moral belief introduces a specific worry about attributions of thin belief, a worry that doesn't figure in the same way with respect to your knowledge-belief study.)
Please let me know if this makes my worries any clearer, or let me know where I lose you.
More work on motivational internalism – and some questions
More work on motivational internalism – and some questions
In a recent post here on the Experimental Philosophy blog, Wesley Buckwalter presented some recent studies of his and John Turri’s on motivational internalism. Motivational internalism postulates a necessary connection between moral judgments and motivation (often conditional on rationality or lack of psychological defects). In arguing for and against internalism,...
Posted Jan 27, 2014 at Experimental Philosophy
I have found your work on the relation between attributions of knowledge and belief really interesting. This looks interesting too, but I do have a few worries; let me mention two.
First, and perhaps most importantly, I don’t yet quite see how this is testing for internalist intuitions. Most people in the internalist debate – internalists and externalists alike – think that people in general have strong default expectations that belief that one (morally, rationally) ought to do something will come with some motivation. Suppose that this is right. Then if A is told that B lacks all motivation to do a certain thing, we should expect A not to attribute to B the belief that B ought to do it unless A has strong positive evidence that B has the belief in question. This, I think, is common ground between internalists and externalists. But now I am a little bit unsure why anything more than this is needed to explain subjects’ reluctance to attribute ‘thick’ moral belief to agents without motivation.
It seems to me that for subjects' reluctance to attribute moral belief to reveal internalist inclinations, the cases in question would have to be ones where motivation is clearly absent but all other evidence would suggest that the agent in question has the belief. Such cases should thus strongly indicate that the agent has the relevant purely cognitive (non-motivational) dispositions associated with moral belief (in particular the disposition to make an explicit judgment naturally expressed as "I ought to do such-and-such"). But I don't see anything in your vignettes that does this. Am I missing something? (My colleagues John Eriksson, Caj Strandberg, Ragnar Francén Olinder, Fredrik Björklund and I try one way of avoiding this problem in a paper forthcoming in Philosophical Psychology, "Motivational Internalism and Folk Intuitions," https://www.academia.edu/5823340/Motivational_Internalism_and_Folk_Intuitions.)
If subjects withhold attributions of thick belief because they operate with a default expectation that moral belief comes with motivation, then this withholding provides no evidence that they are intuitive INTERNALISTS about THICK moral belief. A second worry concerns attributions of thin belief. Here, I am not so sure that subjects' willingness to say that the agents in question "think at least on some level that they ought to do this-or-that" provides evidence that subjects are intuitive EXTERNALISTS about THIN moral belief. Here I worry that this sort of question isn't tracking attributions of thin belief in the sense talked about by Dretske, i.e. as a mere holding true. For example, subjects might be attributing "thoughts that this-or-that on at least some level" because agents in these scenarios used to have the beliefs in question and so might still be able to take their old point of view and think their old thoughts (on a nostalgic level, as it were). But agents can do this without thereby actually and presently holding their old thoughts to be true. So the question is why we should think that the question here actually tracks thin belief (in the Dretske sense).
In the Thick of Moral Motivation
Suppose you had a genuine moral belief about something, and all that abiding by it required was pushing a little blue button on your desk. It would literally cost you nothing to do it, other than lazily lifting up your arm and tapping it with your index finger. Most people would probably press t...
This is very interesting. I wonder if you could say a little more about the relation between attributions of responsibility, attributions of desert, and expressions of reactive attitudes. One might think, for example, that exactly because it is difficult for PD patients to avoid certain behaviors, they are not *fully* responsible for what they are doing, and so not appropriate targets for the full range of reactive attitudes. One might also think that even though these individuals are in some sense appropriate targets of reactive attitudes because of what they do, that doesn't mean that it is appropriate to express indignation towards them: even if someone is the appropriate target of an attitude, that doesn't mean that the circumstances are appropriate for expressing that attitude.
Responsibility and Blame in the Clinic
To begin, some background: I work part of the week in a Therapeutic Community (which is a distinctive kind of group treatment programme) for patients with personality disorder (PD). PD is diagnosed by extreme and overwhelming emotions, maladaptive and somewhat irrational beliefs, and “problem” b...
Hi Jamie,
Enoch offers the following sort of reply to the corresponding challenge to non-naturalism:
(1) There is some respectable naturalistic explanation of why we tend to make the sort of normative judgments that we do.
(2) Together with some plausible assumptions about what the normative truths are, this explanation gives us an explanation of why we tend to make accurate judgments.
I tend to think that this is a good reply to the (or at least a) cosmological explanatory question, and it seems that quasi-realists could make equally good use of it. But I take it that you disagree. If so, what is missing?
(I also think that this sort of reply leaves semantic and epistemological worries untouched. But you set the epistemological worry to the side, and quasi-realism might perhaps be able to avoid the former by not requiring any (substantial, external) account of reference.)
Featured Philosopher: Jamie Dreier
I am pleased to introduce this month's featured philosopher: me. Please join me in welcoming me. [Added Monday morning 18 November by Shoemaker: Because of some random spamming difficulties, all comments will now be moderated. Please be patient, as comments must now be read and approved prior t...
Thanks Tamler, interesting stuff. A quick clarificatory question: how are we to evaluate the virtuousness of retributive emotions? (Depending on the answer, the method in question would seem to point in quite different directions.)
Moore's Way of Justifying Retributivism
The challenge for retributivists is to explain why offenders deserve to suffer when the punishment has no benefit to overall well-being. Rationalist justifications for retributive punishment haven't met with much success. What Michael Moore says about utilitarian justifications—“bad reasons fo...