Graduate Student, Stanford University
Interests: philosophy
Recent Activity
Specifying sufficient conditions for an action's being intentional, on the basis of intuitions drawn from various cases, seems to me an unreliable way to get at what's going on here, because our intuitions about whether an action is intentional are notorious for getting pushed around by factors (seemingly) extraneous to human agency (the "Knobe effect"). I, for one, don't share Chandra's intuition that the Willing Addict acts more intentionally than the Unwilling Addict. But if you want to ask whether the Ella character exemplifies morally responsible agency, in the sense with which Davidson and Frankfurt and others were concerned, then the relevant matter to settle is whether the "effect" of the curse leads Ella's behavior to be the output of a process that bypasses, or proceeds through, Ella's "control system" (whatever you take that to be: desires and beliefs, intentions, deliberation, decisions, etc.; simply grasping the meaning of a command and then behaving in accordance with it is not sufficient to employ such a control system, on any plausible account -- consider Davidson's worries about "deviant causal chains"). If responding to the commands always involves the employment of a control system, then it seems that all the curse has done is make Ella deferential. And deferential people, no less than rebellious ones, are morally responsible for what they do. And if it always "bypasses" the control system, then it is hard to understand how Ella could display any measure of agency in behaving as she does in response to commands. Note that both the bypassing scenario and the control-system-employing scenario are compatible with Ella resisting her obedience to the commands (after all, sometimes we resist what we ourselves have made up our minds to do!).
Commented Feb 16, 2011 on Enchanting Causes at Flickers of Freedom
I don't share Tamler's intuition, and I find it difficult to understand. But, even if we do have differing intuitions about the deservedness of the fate John endures in these two cases, shouldn't this lead us to see if some other (set of) moral factor(s) is present, influencing our reactions, instead of supposing that we have a datum for thinking that the justness of desert is a function, in part, of the partiality of its execution? For instance, perhaps we think that the kid's Dad has some sort of natural right to avenge his child's death. And so, even though John doesn't deserve death (in either case) we're somewhat inclined to have a favorable attitude towards the actions taken by the father. Thus, we conclude that it is more OK for John to die at the hands of the father than at the hands of Utah. This seems, anyway, to be a much more sensible interpretation of the differing intuitions people apparently have about these cases.
One question ought to be: In what way is the state of affairs you describe the "opposite" of free will? I guess I'm not sure what you think the opposite of being designed and/or controlled by an alien is (other than _not_ being designed and/or controlled by an alien). Surely it need not be true that, for us to exercise free will, we somehow design ourselves! So, the next question is: If there is pressure created by your case, why is this sort of design -- agent induced or agent-controlled design -- more anti-free than regular old cultural, parental, and biological (CPB) design? A natural answer might be that, in the super alien case, _all_ of your actions are preset, in some sense, whereas in garden-variety CPB design, it doesn't seem to be true that all of our actions are predetermined by our design. But it seems to me that this is just as much an epistemological fact as it is a metaphysical one. That is, what might be intuitively unnerving about the alien case is that we know, with near certainty, that all of our actions are predetermined. But when we are just CPB designed, we have no way of knowing if/how 100% of our actions are predetermined. And so we are led to a third set of questions: What if the alien was only right 70% of the time? And what if social scientists and biologists became really good at predicting the actions of agents on the basis of their CPB design -- say, they were 70% accurate? Would it then be true that we were equally free (or unfree) in both the alien-design and CPB design cases? If so, then it is not the metaphysical fact of design that is lessening our freedom. Or do you think it must be the case that the designer and the predictor are the same entity in order for freedom to be lessened? In that case, CPB design, even if scientists could use it to predict 100% of our actions, would not be sufficient for lessening our freedom. 
In either case, it is not the metaphysical fact of design, by itself, that is sufficient for counteracting free will.
Mark, I will certainly check out your previous posts (and Eddy’s paper) very soon. Since I have not done so (sadly) as of yet, I will respond only to those points that seem explicitly laid out in this post and series of comments, setting aside, for the moment, discussion of the strategy you forward for how agents might resolve their torn decisions. First, it was probably a mistake on my part to use the term “irresolvable.” What I meant was not that the agent is without any autonomous means of making a decision, but rather that the content of the agent’s long-held commitments relevant to this decision (see my note below about content-relevance) could not spell out what the agent should do (and so could not result in strong autonomy). You suggest that, if the agent has a coin-flip resolution strategy in store, or if Z comes to adopt one on the spot, the decision will be sufficiently determined by an endorsed component of his self. And so you suggest, further, that such action on the part of Z entails strong autonomy. My qualm is this: The coin-flip strategy does not seem to take into account the import this decision has for Z’s life (and that Z understands this decision to have for his life). This is not a matter of ordering chicken or fish. This is a matter of having to favor one set of deeply-held commitments at the expense of forsaking another cluster of commitments, with the result that neither set of commitments will guide Z in the same manner afterwards (supposing that Z takes this decision seriously). That is, if Z decides to vote for the Republican, he will no longer be someone whose self-governing policy of environmentalism is adequate to determine his decision-making in all pertinent instances. And so I don’t think that a coin-flip strategy, either previously endorsed or adopted on the spot, addresses this forward-looking component of Z’s (unavailable) strong autonomy. It seems rather to miss the point, as Z would see it.
That is, it doesn’t address (or resolve) the issue concerning the content of Z’s commitments to Republican loyalism and environmentalism, and the changed influence these commitments will have on Z’s future agential efforts. And so the way it “resolves” the conflict between them is not adequate to identify Z fully with the result and its implications for his future as an agent. It simply allows Z to make a decision and move on, leaving the clash between the contents of these two features of his self unresolved. I take this to imply that Z’s decision qua the result of a coin-flip, perhaps endorsed insofar as he makes it so that he can leave behind the voting booth and its bothersome difficulties, is still not strongly autonomous with regard to its relation to his longstanding commitments and their fate as components of his self.

Josh, I’m not quite sure what to do with the possibility of Z becoming resigned to his difficult decision, and with using this to claim that Z can make this decision in a manner that is strongly autonomous. It still seems that the difficulty lies in the fact that Z’s “relevant” commitments (to Republicanism and environmentalism) cannot, by themselves, determine his decision. And so, supposing that Z becomes resigned or defeated by the irksome character of his world, what can we still say about how Z decides one way or the other? If he feels defeated, and so “just decides,” it seems as though we run into the problem I posed to the so-called “libertarian” view -- how is it that Z decides in a strongly autonomous fashion, if it is just as well to Z that he decide for the Republican or the Democrat, given that the world is unfair or impossibly difficult? Just who or what is doing the deciding here?

You ask fair and stimulating questions. With regards to 1], my underdeveloped argument is this: It is important to Z that he make a decision to which he is committed and with which he can live.
In other words, even though he is completely torn between his two options, he still wants to pick between them in such a way as to own his decision. This, by my lights, motivates imputing to Z at least the possibility of strong autonomy. The short answer to 2] is yes. But, given how I’ve constructed Z’s case, I don’t think his decision, even if he is able to incorporate it as the result of autonomous agency, should be labeled as randomly made (that is, given the import Z attributes to this decision). Unless you think that Z’s voting decision will result from previous randomly made decisions?

Paul, What you describe would be a theory of strongly autonomous agency that could avoid the trouble I think follows from Z’s case. But it seems troubling to admit that since (1) Z’s decision is very important to him, and (2) there are two “maximally” consistent options available to him, and so (3) it is equally likely that, if Z just decides or does a coin flip, he will choose either candidate, then (4) Z can be strongly autonomous in making a decision about which he cares very much but whose outcome comes not as the result of employing his content-relevant* commitments but rather some other, decision-producing strategy (coin-flipping). But maybe this is a bullet that’s OK to bite.

*Content-relevant, meaning that Z’s choice between the Republican and the Democrat relates directly to his commitments to the Republican party and environmentalism, and only indirectly to decision-producing strategies such as coin-flipping.

George, Thanks for the concise clarification of my (controversial) premise. Appreciated!

John, Thanks for the reference to your new book. It further motivates me to find it and read it (but, of course, no further incentive was needed!). The puzzle you describe is just that -- very puzzling. And it is very related, in my mind, to the problem I attempt to draw from Z’s case. I look forward to reading your discussion of it.
Eddy, Thanks for pointing me in the direction of your paper -- I'll have to take a look.

Mark, First, I think it is important to distinguish "weak" and "strong" forms of autonomy. I take it as uncontroversial that Z is weakly autonomous -- that is, he meets the standard requirements for being morally responsible for his decision, however it occurs. But strong autonomy, which traditionally requires that agents own their decision by having it be determined by the endorsed components of their agential self, seems necessarily absent from Z's case. The reason why hinges on my belief that there are more than the two options you present in a dichotomous fashion above. For the sake of argument, let's say that Z does not have a pre-established self-governing policy about what to do in the voting-booth scenario. But the two prospects with which he is presented are not wholly unprecedented to Z, either; it's the fact that they result in an irresolvable conflict that is novel to Z. And so the "pressure" that is placed on autonomy is this: This decision is very important to Z. He wants to take it seriously, and he wants to make it in a way to which he can be committed and with which he can identify himself (i.e., strong autonomy). However, this seems impossible, given that current theories of strong autonomy require something unavailable to Z -- namely, that the relevant features of his endorsed self be sufficient to determine his decision. This is why I suggested above that Z, if he is to avoid a coin-flip or some other trivial strategy (like blindfolding), must seemingly create a new commitment or policy on the spot, and he must do so in a way that is different from those found in his agential past but somehow in line with that history as well. But I am not quite sure what this would look like, or if it is possible. Kant's notion of reflective judgment (3rd Critique) might help us, though.
Again, if Z "resolutely" resorts to a coin-flip, he will be autonomous insofar as he will be responsible for this act and its accompanying policy. But I don't think this sense of "autonomous" entails _strong_ autonomy, and it is the strong forms of autonomy that I wish (but am confounded in trying) to impute to Z.

George, I'm not quite sure I understand the analogy you use to explain the "illogical argument or belief" you attribute to philosophers who would claim that Z is not fully or strongly autonomous. Perhaps you could elaborate, for my sake?
Thanks, all, for your incisive responses. Hopefully my brief responses below (a day of customer service work beckons!) are adequate to keep the ball rolling.

Zac, That’s an interesting question. I think you might be right that in cases where we are “without enough information or enough time to get it and find we have to choose anyway,” no one, compatibilist or not, would want to admit that we find there an example of full autonomy. But I’m not so sure that this describes Z’s case. To be sure, Z is, as you point out, morally responsible for his eventual decision, autonomous or not. However, a unique feature of cases such as Z’s is, I think, that they hold long-term significance for the shape and content of agents’ identity; i.e., significantly different characterizations and expectations follow from voting for the Republican vs. the Democrat, given Z’s history. Should Z realize this, it seems as though he would want to make this decision in a manner such that he could really commit to it or own it, which itself would seem to entail some measure of strong autonomy. Whether or not compatibilists are motivated to flesh out this point is, I suppose, a different issue. So perhaps I shouldn’t invoke the name without their permission :)

George, It may well be true, as you point out, that Z eventually decides on the basis of his “strongest, or most compelling, motive.” It may also be true that Z blindfolds himself, makes a mark on the ballot for one of the candidates, and settles the issue that way. My concern (which, to my shame, I did not state clearly enough in the original post) is that Z’s decision cannot follow from, or be sufficiently determined by, the endorsed components of his self. That is, theories of autonomous agency tend to qualify some desires, policies, and deliberations as ones the agent endorses (think of Frankfurt’s internality/externality distinction).
Given that, for Z, the pertinent endorsed features of his self conflict entirely as he attempts to make the voting decision, even if he finds some other motive for making his decision (“Uggghh -- let’s just get this over with” or “The word ‘Republican’ looks nicer on the ballot”), he still won’t have made a decision that many philosophers would feel comfortable terming autonomous. I guess I don’t see why it is necessarily true that, eventually, either Z’s Republican loyalism or environmentalism wins out. They are equally weighted, and so Z must find some other (non-endorsed or not-yet-endorsed) motive or volition in order to make his decision, not knowing how to order these two long-held commitments.

Mark, My apologies for not having read your previous posts; I’ve been following the Garden only since January of last year. Based upon what I can glean from your response, then, my thoughts are these: It seems as though Z’s decision is torn precisely because he does know how much each of his relevant commitments means to him. And what he comes to know about them is that they are equally significant; they carry equal weight in his practical deliberations and thus, when pitted against one another (as they are in this case), they are impotent to guide Z to a decision. And so I’m not sure how any further self-knowledge would help, nor am I entirely sure why we still must insist that Z, once he knew more about himself, would be able to overcome his indecision. Now, you may be right that, once Z “just makes” a decision, he will reflect upon it and incorporate its implications for his selfhood. But my concern is whether and how Z can make this decision autonomously. And it seems as though one of the main threats to Z’s (strong) autonomy with regards to this voting decision is his inability to side himself with one option or the other.
So, my question, re-worded, is this: Given that Z cannot identify himself with or fully own his decision, as identification and ownership have traditionally been conceived, how can he still make this decision fully and strongly autonomously?

Neil, Perhaps I assumed an inference I should have made more explicit. For compatibilists (as I’ve characterized them here), one is a fully free agent if and only if one is fully autonomous, i.e., one’s decisions are appropriately controlled by volitional structures, critically-reflective mechanisms, etc. It doesn’t seem as though libertarians would subscribe to this biconditional statement; and this is why, to them, Z could be a fully free agent (but for reasons that are mysterious to me), whereas for many compatibilists, as I perceive them, Z is not a fully free agent, since he cannot be fully autonomous in how he makes his decision. Additionally, Z could be “weakly” autonomous insofar as his decision is made “in light of,” but is not “determined by,” the relevant features of his self (i.e., his commitment to the Republican party and environmentalism). And so a libertarian might be entitled to say something like this: Z’s decision is freely made because it is undetermined, and autonomously made because it is made “in light of” prior deliberations and commitments. But this is not the sort of autonomy that I’m after. With regards to your question about how this relates to the question of compatibilism, I didn’t mean to suppose that Z’s case is relevant to the compatibilist/incompatibilist debate in its traditional form. What I was attempting to point out was that, for many compatibilists, free agency consists in the determination of actions by the endorsed components of oneself, and that, for libertarians, this doesn’t hold true (at least by itself).
Z’s case is puzzling because his decision doesn’t seem strictly determined by the endorsed components of his self (and so doesn’t satisfy standard compatibilist criteria), and yet libertarians don’t seem to have an adequate answer to the question of how Z might still retain some measure of strong autonomy in his decision.
Neil, Thanks for the clarification regarding Kane's theory. Hopefully the larger point remains intact. I don't think that a compatibilist necessarily has to think that torn decisions (of Z's sort) are made in the midst of weak autonomy. My suggestion is simply that, given the structure of current prominent compatibilist theories of autonomous agency, this seems to be the result that follows. Personally, I think that a perfectly good compatibilist account of strongly autonomous torn decision-making is possible (and even beneficial).

A note about your argument concerning deep commitments. I'm not sure this view avoids the difficulties raised by Z's case. It is precisely because Z is deeply committed to both Republican loyalism and environmentalism that he cannot make his decision, no? The issue for what I perhaps have ill-termed the compatibilist view is _not_ whether Z's self is involved (for surely it is, unless he ceases to care about the decision); the problem is that the endorsed components of Z's self, as they stand, cannot guide Z to a decision, even though they are involved in Z's appraisal of his options. This is what I take to necessitate the conclusion that compatibilists (again, as I categorize them) must admit that Z is not fully autonomous. And so it seems that a compatibilist account of torn decision-making, in this case, must somehow make sense of how Z creates new agential commitments, markedly different from (though perhaps related to) the ones he currently holds. For without such commitment-creation, Z (to compatibilists) either remains frozen in indecision or else makes an arbitrary and/or random (and so non-autonomous) decision.