This is Oisin Deery's Typepad Profile.
Oisin Deery
Recent Activity
Hot on the heels of Nadelhoffer and colleagues' recent Free Will Inventory (FWI), I'd like to announce publication of the Free-Will Intuitions Scale (FWIS), which my colleagues Taylor Davis and Jasmine Carey and I have developed over the past few years. "The Free-Will Intuitions Scale and the Question of Natural... Continue reading
Posted Mar 14, 2014 at Experimental Philosophy
Hi Joe,
Thanks for this post. I'd love to see the paper too, so I'd be grateful if you could send it along. For what it's worth, I think one of the most interesting tasks facing compatibilists today is to explain *why* a good deal of ordinary thinking is implicitly incompatibilist, once compatibilists fess up to the fact that it *is* incompatibilist.
Here's how I think about this stuff. There's a descriptive question about what people actually tend to believe, or what the implicit commitments of people's beliefs actually are. Answer: people's thinking is partly incompatibilist, at least implicitly. Then there are two further questions: a normative question about how best to theorize about free will and human agency, and a substantive question about what our abilities and capacities actually happen to be. And there isn't any reason why the answer to the normative question can't be compatibilist, notwithstanding a partly incompatibilist answer to the descriptive question. (Therein lies the revisionism, or at least one form of revisionism.) In fact, I would think the answer to the normative question should plausibly be guided more by how we answer the substantive question than by how we answer the descriptive question (although it can also be addressed prior to answering the substantive question). After all, surely there can't be any advance guarantee that all our beliefs about free acts will match what's actually going on in acts that we call free.
But of course, the "normative" (read: revisionist) compatibilist here inherits the task of explaining *why* people have incompatibilist thoughts. I'm not convinced that shouldering this task is *required* for the normative theorizing, but it strikes me as an interesting project in terms of understanding people's psychology.
Although they set it up slightly differently, there are at least three people in the current free-will debate who frame things in roughly the way I've outlined: Manuel (obviously), but also Shaun Nichols and Mark Balaguer. This is interesting because even though these three people accept that ordinary thinking (and agentive experience) is partly incompatibilist, they each represent new forms of the three age-old positions about free will: respectively compatibilism, skepticism, and libertarianism. The differences between them lie mainly in how they think the normative theorizing should go, which may in part depend on how we answer the substantive question.
Oh, and briefly, I agree that prospection and imagination in deliberation are exactly where the action occurs when it comes to why people *believe* in free will (particularly libertarian free will — for those who believe in it).
Hi Peter,
Great post! I also enjoyed chatting with you about all this stuff last week in Tallahassee.
Just a thought about something you say under (3). As you know, I like your idea that in order to conclude that A causes B, reparameterizations of B might need to be taken into account. However, you say: "On this view, standard interventionist and Newtonian models of causation are a special case where B places no conditions on input from A." If by standard Newtonian models you mean something like the physical connection theories of Dowe and others (i.e., theories that emphasize transfer of energy, etc.), then maybe you're right. But I'm not sure what you say is inconsistent with interventionism. All that interventionism requires, roughly speaking, is that there's some value A can take in a model such that for at least some state of the model there's an intervention on A that would result in a change in the value of B. And that might partly depend on a reparameterization of B. But I think you're right that, less roughly speaking, in terms of the details of a specific interventionist view like Woodward's, this isn't made explicit.
In philosophical terms, here's one way of stating the sort of requirement that I take it you want for your "criterial causation." When we ask whether A is an actual cause of B, Woodward says we must first "screen off" the causal influence of other variables — like C — that feed into B by holding them fixed at their actual values. Then, if an intervention on the actual value of A results in a difference in the actual value of B, we conclude that A causes B. Christopher Hitchcock has a nice way of putting this. He says that in normal counterfactual (difference-making) accounts of causation, we ask whether the following counterfactual is true: "If A had not occurred, then B would not have occurred." Here, the antecedent of the counterfactual only makes a stipulation about a single event (A). But in interventionist difference-making accounts, we ask: "If A had not occurred, but C stayed the same as it is, then B would not have occurred." Here, the antecedent of the counterfactual makes a stipulation about *more* than one event (A and C). I take it you're saying we need to include a third conjunct in the antecedent, namely one that says something about how B is parameterized and whether it is parameterized in such a way as to be receptive to patterns in A. I think that's a legitimate amendment that might need to be made to Woodward-style interventionist views.
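To make the interventionist test concrete, here's a toy sketch in Python. The equation I give for B is just a placeholder of my own, not anything from Woodward or Hitchcock; the point is only the structure of the test: intervene on A while holding C fixed at its actual value, and see whether B changes.

    # Toy structural equation for B (a placeholder assumption, not from anyone's paper).
    def B(A, C):
        return A and C

    A_actual, C_actual = True, True
    B_actual = B(A_actual, C_actual)        # B occurs in the actual situation

    # Interventionist counterfactual: A is set to "did not occur" while C is
    # held fixed at its actual value; B is then recomputed.
    B_intervened = B(False, C_actual)

    print(B_actual, B_intervened)           # True, then False: intervening on A changes B

The third conjunct you want would amount to also checking how B is parameterized before running this test, which the sketch obviously doesn't capture.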
Hi Carolina,
Yes, presumably, the Big Bang is (in some sense) a cause of everything that we do, even if it's difficult to know how to model it, and thus even if the Big Bang isn't an identifiable direct cause (in Woodward's sense) of what we do now. I guess what I'd like to push, though, is a sort of contextualism according to which a large-scale causal model of the entire universe, with the Big Bang as its single exogenous variable, isn't an appropriate model to adopt when trying to explain Danny's decision. In the case of Manny and Diana, by contrast, it does seem appropriate to include Diana as an exogenous variable in the model. Maybe that suggests a soft-line reply to the manipulation argument (not sure).
Anyway, thanks for all your input on this! I'm sure there will be plenty to say shortly over at the new blog, so I won't drag things out here. Thanks too to Manuel, Joe, Kip, and Derk for comments! They've given Eddy and me plenty to think about in taking these ideas further (which I hope we do).
Hi Derk,
Yes, Hall does say that. Christopher Hitchcock also argues that there are three conceptions of causation: scientific (in which causal modeling is interested), folk-psychological (on which someone like Tania Lombrozo does great work), and metaphysical (with which, as philosophers, we’re all familiar). If I recall correctly, Hitchcock argues that the metaphysical conception is an unworkable mixture of the other two.
For what it’s worth, the interventionist framework is deeply *informed* by where our folk-psychological notion comes from: the emergence in infancy of causal thinking at around the same time we learn that we can influence the world around us in stable ways by acting on it (that is, by “intervening”). What the framework does is to take this notion and systematize it for scientific purposes. It starts with the folk-psychological notion, but then tries to refine it. So in a way, the interventionist framework *doesn’t want* causation to be too different from what we ordinarily think it is. And note: it is happy to grant that the Big Bang is a cause of Danny’s decision, as long as we can model it that way, and it serves some purpose to do so. Talking about causes as things we can’t model or intervene on, and that serve no purpose in terms of manipulation and control, ends up being *very far* from the folk-psychological root of our notion of cause.
Joe,
Thanks for the encouragement! I must have a look at your 1997 paper.
Hi Carolina,
The Sharks case is interesting. Here’s what I’m inclined to say. Analogously with the case of the two fielders I described in my reply to Manuel, I think it might be okay to have the agent’s deciding not to jump in and save the child be an actual cause of the child’s drowning, in an appropriate causal model of the situation, since it doesn’t seem like too remote a possibility to consider that the sharks wouldn’t have prevented him from saving the child. For what it’s worth, I think ordinary intuitions go this way too. People tend to say things like, “But *maybe* the sharks wouldn’t have eaten him!” However, I’d need to think more about this. For one thing, I’m not sure what to say about moral, as opposed to causal, responsibility here.
To follow up on your reply to Eddy: The way I read Woodward, the idea is that remote deterministic causes are not normally taken as useful variables within a model of a situation of choice (after all, then the only appropriate model of any choice would be one that included variables representing all the prior causes of the choice, going back to the beginning of the universe). By contrast, there isn’t any way to avoid having Diana’s manipulation be a variable in the model of Manny’s choice; her manipulation is a relevant variable in any appropriate model of the situation. So the claim isn’t that the state of the world at the Big Bang isn’t a cause of everything we do, but rather that we don’t model choices (or much else, for that matter) as including a variable that represents that cause. We *can* model anything that way, but it isn’t normally very useful.
Commented Jul 30, 2013 on Causal Modeling and Free Will at Flickers of Freedom
Hi Manuel,
I'm going to try to address your questions, but in reverse.
First, Question 2: This is a great question about Woodward's framework! Here's how I read him. He is primarily concerned with causal inference, not just causal talk, although he takes both ordinary causal talk and scientific causal claims as a starting point in his theory. So, while he starts from what our causal claims actually happen to be, he shifts focus to their purposes or goals, and this leads him to normative recommendations. As I take it, his normative project is driven by (a) the fact that some accounts do better in meeting the purposes of causal and explanatory claims, and (b) the requirement that it be possible to establish such claims — this is where manipulability and control are important.
Second, Question 1: You may be right. Neither Woodward nor Hitchcock (who develops a similar framework) says a lot about how all this is meant to work. Hitchcock just says, for instance, that "The equations should not contain any variables whose values correspond to possibilities we consider too remote." But Woodward does give a nice example to illustrate a possibility that is too remote. (I don't have Woodward's book to hand, so I'm working from memory.)
Consider a case with the following variables: a ball is thrown or not (BT=1 or 0), Fielder #1 catches it or not depending on whether it was thrown (F1=1 or 0), the ball reaches Fielder #2 or not depending on whether it was thrown and whether Fielder #1 caught it (BRF2=1 or 0), Fielder #2 catches it or not depending on whether it reaches him (F2=1 or 0), and a window smashes or not depending on whether the ball reaches Fielder #2 and he catches it (WS=1 or 0). Let's say Fielder #1 catches the ball (F1=1). Now we want to test whether F1=1 is an actual cause of the window's not smashing (WS=0). (I don't like not making the contrast explicit, but leave that aside.) To test this, we intervene on F1 and change its value to 0, so that Fielder #1 doesn't catch the ball. Then — with the appropriate equations — we find that the ball reaches Fielder #2. Now we have to decide whether it's a serious possibility that Fielder #2 does not catch the ball. We do this by intervening on F2 and holding its value fixed at 0 (since F2 is another direct cause of WS that isn't on the directed path from BRF2 to WS). To hold fixed the value of F2 at 0 is simply to assume that Fielder #2 doesn't catch the ball — which could, after all, happen in any number of ways. If he doesn't catch the ball, then the ball smashes the window. So Fielder #1's catching the ball (F1=1) is an actual cause of the window's not smashing (WS=0).
Now consider a second case in which we replace Fielder #2 with a high wall that deflects the ball or doesn't, depending on whether the ball was thrown (WD=1 or 0). This case can't be modeled in the same way as the first, because it requires holding fixed the value of WD at 0 in order to test whether F1=1 is an actual cause of WS=0. And to do that is to assume that the wall disappears, or that the ball passes through the wall, or that some other remote possibility occurs. Hitchcock's criterion that "The equations should not contain any variables whose values correspond to possibilities we consider too remote" is violated. The case has to be modeled in a different way, since the wall's deflecting is like a fielder who always catches. Thus, F1=1 is not an actual cause of WS=0.
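To make the intervention test vivid, here's a toy version of the first case in Python. The structural equations are my own reconstruction from memory, not Woodward's exact ones, but they show the shape of the test: set F1 to 0 by intervention, hold the off-path variable F2 fixed at its actual value, and see whether WS changes.

    # Toy structural equations for the first fielders case (my reconstruction, not Woodward's).
    def solve(BT, interventions=None):
        iv = interventions or {}
        F1 = iv.get("F1", BT)                   # Fielder #1 catches iff the ball is thrown
        BRF2 = iv.get("BRF2", BT * (1 - F1))    # ball reaches Fielder #2 iff thrown and not caught by #1
        F2 = iv.get("F2", BRF2)                 # Fielder #2 catches iff the ball reaches him
        WS = iv.get("WS", BRF2 * (1 - F2))      # window smashes iff ball reaches #2 and he misses
        return {"BT": BT, "F1": F1, "BRF2": BRF2, "F2": F2, "WS": WS}

    actual = solve(BT=1)                        # actual world: F1=1, WS=0

    # Intervene on F1 (set it to 0) while holding F2 fixed at its actual value (0).
    test = solve(BT=1, interventions={"F1": 0, "F2": actual["F2"]})

    print(actual["WS"], test["WS"])             # 0, then 1: WS changes, so F1=1 is an actual cause of WS=0

In the wall case, holding WD fixed at 0 would be the analogous step, and that's exactly the step that corresponds to a possibility we consider too remote.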
I'm going to let Eddy chime in about the relevance of all this to the Diana and Frankfurt cases.
Commented Jul 30, 2013 on Causal Modeling and Free Will at Flickers of Freedom
Hi Carolina,
Thanks for taking the time to reply, and for asking such helpful questions!
First, I think I agree with your assessment of how we treat the Frankfurt case, at least up to the point where you say that the opponent of Frankfurt-style compatibilism can “agree that the causal structure is that way, despite the neuroscientist, and even that this is because in determining what the causal structure is the neuroscientist is irrelevant, but they can argue, for example, that, if the presence of neuroscientist rules out alternative possibilities, this is enough to preclude the agent’s responsibility.” One thing Eddy and I can say here is that this is, at least in part, just what we are trying to do: illuminate the thought that Franny’s decision is the difference-maker in the situation. We think the causal modeling gives us a nice way of doing that. In this way, we *agree* with Frankfurt that focusing on alternatives distracts us from the causal efficacy of agents. However, I think we can say more than this. When we intervene on FD in order to test whether it’s a difference-maker in the case, we ignore the variables PS and BL, and thus FD is allowed to range freely over two values. That is, FD is now treated as an exogenous variable in the model, rather than as an endogenous variable that takes its value (according to the structural equations) from the values of other variables in the model. These two values represent Franny’s deciding either (a) to return or (b) to keep the money. So causal modeling allows us to say that *Franny’s doing what she does rather than doing something else* is the difference-maker in what happens in a Frankfurt case. Effectively, then, FD=1 or 2, and so there is a sense in which Franny *is* able to do otherwise in the case (despite the existence of Black). On this way of looking at things, Eddy and I count as opponents of Frankfurt-style compatibilism (while still obviously remaining compatibilists).
Regarding your question about why we focus on the decision and the bodily action (FD and KM), and the causal relation between them, instead of the causal relation between the agent’s prior reasoning and her decision, I think we do so for the sake of simplicity. But I also think you are right that Woodward would claim his theory gets the correct result in the other cases too, for the reason you mention. However, recall that it is a restriction on what counts as a variable that it must represent particular events in such a way that they can be set to different values by interventions, and it is also a restriction on what counts as an appropriate model that the equations must not represent counterfactual dependence relations between events that are not distinct. This is (I think) to agree with you that the causal relation between the agent’s prior reasoning and her decision is trickier to model, at least when it comes to deciding on the variables that will represent the prior reasoning. (But that is definitely something we’d like to do! This is merely our first stab at Frankfurt cases.)
In reply to your query about why we said this is “like a late preemptive cause,” I think you are right. The Frankfurt case we have modeled is much more like an early preemption case such as “Two Assassins” than a late preemption case. Our mistake. Thanks for pointing this out.
If it’s okay, I’m going to let Eddy follow up on your comment about the manipulation case, since I know that this is something he's been thinking about, and I don’t want to preempt anything he’ll say!
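Here's a toy rendering of the point about treating FD as exogenous. The coding of FD's values and the equation for KM are just my illustrative assumptions, not the model from our paper; the sketch only shows that once the intervention cuts FD loose from PS and BL, letting FD range over its two values makes a difference to KM.

    # Illustrative only: FD is Franny's decision (1 = return the money, 2 = keep it; my assumed coding),
    # and KM is the bodily action that follows it. PS and BL would normally fix FD's value via a
    # structural equation, but an intervention on FD bypasses that equation entirely.
    def KM(FD):
        return 1 if FD == 2 else 0   # 1 = keeps the money, 0 = returns it

    # Intervention: FD is treated as exogenous and allowed to take either value.
    for fd in (1, 2):
        print(fd, KM(fd))            # the value of KM differs, so FD is a difference-maker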
Commented Jul 29, 2013 on Causal Modeling and Free Will at Flickers of Freedom
Oisin Deery is now following The Typepad Team
Apr 18, 2013