This is Brian Parks's Typepad Profile.
Brian Parks
Interests: physics, psychology, philosophy, biology, neuroscience, evolution, Theravada Buddhism.
Recent Activity
Mark, “If you think that your (A) and my (D) are tautologous, then who cares? All that would mean is that (A) is a restatement of (D) and that (D) is a restatement of (A). If this is the case then as long as (A) is true it means that (D) is true. So, I could say here that your point about tautologies is moot. (Though I disagree that (D) is a restatement of (A), it seems neither-here-nor-there with respect to the question of whether (D) is true.)” To avoid belaboring the point, I’ll just say this. If you can establish that “S is a member of the class P”, and that “There is a way W that all members of the class P ought to be treated”, then I will gladly grant you that “S ought to be treated in way W.” As for your original conjecture, (D) “If S is a P then S ought to be treated as a P” I think you should change it to something like, (D4) “The way that S ought to be treated is entirely a function of the kind of nature that S has.” That, in my view, is a more effective way of expressing what I think you mean to say. You say: “Regarding prepunishment, the account is not committed to saying that it would be possible for anything short of God to obtain direct knowledge of persons.” But still, you are committed to saying that God is justified in prepunishment. More importantly, you are committed to the claim that God can create punishment desert in an individual from thin air without any prior involvement from the individual. All he has to do is create the individual with an evil nature. As a moral anti-realist, I am quite confident that the property that we call ‘desert’ does not exist as a real, objective property of anything in the universe. It is just a way of putting a word on certain retributive feelings that our organism has evolved to experience, feelings that the universe does not share. 
However, even with my moral realist hat on, I have to admit that nothing could be more counter-intuitive to me than the claim that a person can ultimately deserve pain and punishment simply because God made the person to have an evil nature. If that is what your theory implies, then in my view it is already dead-on-arrival. You say: “Moreover, this account is agnostic regarding whether a person's nature is malleable -- it could easily turn out to be the case that the answer is no.” Right, and that’s still a problem because it could just as easily turn out that the answer is yes. If that is the case, then absurdity immediately follows: God—or a person with access to sufficient technology—can generate desert in an individual purely through manipulation. If, for example, I manage to get inside Jesus’ brain, tweak things around, and give him Hitler’s nature, then right then and there He will deserve whatever Hitler deserves. On a side note, is it not a bit peculiar that the possibility that a person’s nature can change creates problems for a theory of desert and moral responsibility? One would think that if evil people could theoretically change their natures, that this would strengthen the argument for treating them in a certain way, i.e., for punishing them for what they do. On your view, the existence of such an ability causes the entire theory to collapse into absurdity.
Mark, You say: “OK, try this: Put Stephen Hawking's body and mind into the state that Michael Jordan's were just before he made some spectacular leap to slam-dunk a ball. Do you suppose that Hawking would make a spectacular leap and slam-dunk the ball, too? Do you suppose that he would if the universe were deterministic? My impression is that Hawking's body would follow more-or-less the trajectory of a rag doll in such circumstances. While all the rules of the universe are the same, the body is just not capable of the action you've set it up to take.” Well, unlike Michael Jordan, Hawking has motor neuron disease. We’re obviously going to fix that. He also doesn’t have as many muscle fibers per unit volume of muscle. We’re going to fix that too. He also isn’t 6’6”. We’re going to fix that too. And so on and so on and so on. Once we’ve managed to put Stephen Hawking’s brain and body in the exact physical state that Michael Jordan’s was in prior to a dunk, and we put Stephen Hawking on the court with the ball and the same open lane, thinking the same thought, with the tongue hanging out, and so on… is he going to dunk? In a deterministic universe, you bet. For him not to lift off on a dunk would violate determinism. You say: “So why should we suppose that just anyone's self would be up to the action of deciding to push the ponzi-scheme button? I don't think it's at all obvious that everyone has what it takes to do that.” What else other than the state of a person’s brain would determine whether the person is up to the task of pressing the ponzi-scheme button? If you have two individuals with brains in the exact same state—with the same thoughts, feelings, and so on—both contemplating the same crime, how could one of them be ‘up to the crime’, and the other not? You say: “If the self is identified by its relationship to those experiences, then you have destroyed the self by destroying that relationship.” OK, fair enough. I’ll make a deal with you. 
I’ll leave any memories of your past experiences intact that you think need to be left intact in order to preserve your current self provided that those memories aren’t the kinds of emotionally charged memories that might have present-day impacts on your behavior. How does that sound? ;-) You say: “They don't destroy *me*; they destroy *my self* -- according to (S2) as I understood it. *I* am a (growing) physical object; *my self* is a (growing) cluster of experiences. A Star-Trek type transporter leaves my self intact, but destroys me. The manipulation you propose leaves me intact, but destroys my self.” OK, that’s fine. I’m not going to punish your ‘self’, I’m going to punish you ;-)
Sofia, You say: “I haven't read Perry on personal identity, but I've read Parfit. And his point (if I remember it right, was a while ago I read this) isn't that if I manipulate agent A:s brain so that every personality trait and value radically changes, it is thereby NECESSARILY the case that I've replaced A with a new person. His point is rather that there's no way to give a definite answer to the question of whether I've done so or not. It depends on how you choose to define personal identity, and there's no True with a capital T answer to what that definition is.” As a conceptual relativist, I would generally agree with that. I doubt that when a change occurs in an entity, that reality has an objective answer to the question “Is the entity the same entity that it was before the change?” At the same time, the idea of moral responsibility requires there to be an objective answer to that question. It requires there to be a self that continues to be the same self in the objective metaphysical sense despite the constant changes that occur in consciousness—the changing images, sounds, thoughts, feelings, emotions, sensitivities, moods and so on—and the constant physiological changes that occur in the body. If the self does not continue to be the same self in the objective metaphysical sense despite those changes, then moral responsibility loses its practical import. It ceases to be objectively true that the self that exists right here and right now is the same self that we seek to punish, the one that engaged in actions at a prior place and time. You say: “You want to present a hard bullet to bite for the compatibilists, by giving a scenario where the choice stands between saying that a manipulator could take complete control of another person's moral destiny, or saying that one can be morally responsible for things that happened through no fault of one's own. 
But if the choice is really to stand between these two options only, it must be TRUE that there CANNOT be the replacement of one agent with another, or of two agents reducing to one. As soon as these are admitted as possibilities, the bullet loses its hardness.” If moral responsibility is compatible with determinism, then the only thing that stands in the way of my taking control of your moral destiny is an unrelated possible quirk in the arena of metaphysics—specifically, that if I modify your properties in drastic ways, that I thereby destroy you and give rise to a numerically distinct being. How convenient for a compatibilist that such a quirk would apply ;-) I can hear it, “Ha Ha Ha! You can’t give me evil tendencies in your manipulation scenario because it’s impossible for me to have evil tendencies! If I were to develop evil tendencies, voila, I would die on the spot and the whole mess would belong to a self that emerges in my place! I’m evil-proof!” I think it’s extremely disingenuous for compatibilists to cling to that kind of possibility in addressing manipulation scenarios—first, because it’s just one of many possibilities, not an established metaphysical truth, and second, because it generates an enormous non-sequitur. Why would the question of whether moral responsibility is possible in our universe hinge on whether or not you can survive a manipulation scenario that gives you a different personality? What does one thing have to do with the other? You would think that if moral responsibility were possible in a deterministic universe, that it would be possible regardless of whether or not a person can survive a drastic change to her personality traits. And so a compatibilist who wishes to defend the possibility of moral responsibility in a deterministic universe should have no problem defending that possibility on the assumption that such survival is in fact possible. 
You say: “Then I have the same intuition as Paul Torek, that if there's a human being who's controlled just like a puppet, there are no longer two agents, but just one. Now that wasn't supposed to be the case, because the very choices were supposed to be left untouched. But if there's no power of agent causation or the like involved, I don't understand what that even means. Does the manipulator do all of the valuing and feeling by remote control, but then just lets go of the remote control one second before a choice appears, or what? How does that make the "puppet" anything more than a puppet?” The manipulator only controls the thoughts and feelings. The puppet controls the choice. Is that not meaningful control in your view? Is there nothing else to a choice other than the thoughts and feelings that go into it? If your view is that the choice itself is trivial, and that the real importance lies in the thoughts and feelings that go into it, then how can a person be responsible for her choice if she is not responsible for her thoughts and feelings? Beware of Strawson’s regress ;-) You say: “Obviously I'm the only agent here, and it seems to me that the same thing would be true in a scenario where someone is being remote-controlled, even if the manipulator lets go of the buttons for a few seconds time to time.” But the manipulator lets go of the buttons just in time for the agent to make the choice. Is the agent not responsible for the outcome of her choice in that situation, a choice that is entirely under her control at the time that she makes it?
Mike, First, I’m going to address the freedom-randomness-probability point, then the ‘Copenhagen interpretation’ point, then the scenario itself. You say: “You've misunderstood or forgotten a couple of my points, namely (a) the distinction between freedom and randomness and how that distinction relates to probability.” I already addressed that point. Two posts ago, you suggested that an agent—i.e., a hand—can intervene on matter—i.e., a spinning coin—and force a specific outcome—i.e., heads or tails. You claimed that this intervention, in as much as it constitutes a free act, is not probabilistic, though it forces outcomes in a system that would otherwise behave probabilistically. My response was to put the focus on the agent—i.e., the hand, the entity that supposedly intervenes non-probabilistically. If such an intervention is possible, then either the intervening agent—i.e., the hand—is not an instance of matter, or it is an instance of matter with a peculiar exception: it does not manifest the types of well-defined probabilistic behaviors that all other examined instances of matter in this universe have shown themselves to manifest. Needless to say, both options are highly problematic. Ask yourself: What is your brain made of? Molecules, atoms, electrons? When these constituents exist in the system of your brain, do they cease to manifest the types of well-defined probabilistic behaviors that they manifest when they exist in other systems? Do the probabilistic predictions of quantum mechanics not apply to atoms in neurotransmitters in your brain? Does Schrodinger’s equation not apply to the wave functions of electrons in those atoms? How about atoms in the neurotransmitters in the brain of a cat? A nematode worm? Do they manifest the same mysterious exceptions? 
Seriously, if you can demonstrate exceptions to the probabilistic predictions of quantum mechanics, you need to stop wasting your time with me and go get your Nobel Prize ;-) It would be one thing if you had evidence for such exceptions. You don’t. They are posited completely ad-hoc, solely in order to render reality compatible with your ‘intuitions’ about ‘free will.’ Forgive me if I don’t consider those ‘intuitions’ to be particularly reliable. You say: “and possibly (b) how I am using the Copenhagen Interpretation …” Again, let’s get clear on the details of the physics. The standard model of quantum mechanics assigns probabilities to the values of a particle’s physical variables. Those probabilities are given by the squared magnitude of the particle’s wave function. You determine a particle’s wave function by solving its time-independent Schrodinger equation. Wave functions evolve deterministically in time. If you have a particle’s wave function at time 0, you can determine what its wave function will be at a later time by solving its time-dependent Schrodinger equation. The probabilistic predictions that emerge from this model have been tested to extreme degrees of accuracy. Indeed, nothing in the universe to date has been more accurately tested. Still, intuitive questions remain. What is the wave function ontologically? How can a particle not have a definite location in space, but only a probability to be in a certain region of space? How can a particle not have a definite value of spin, but only a probability to manifest a certain value of spin? These are valid questions. They continue to lead physicists to search for deeper determinisms. http://arxiv.org/PS_cache/quant-ph/pdf/0212/0212095v1.pdf The Copenhagen interpretation treats the questions as unimportant and does not attempt to answer them. The model works impeccably, that is all that matters. You say: “The Copenhagen Interpretation, however, does not assign metaphysical probabilities to particles. 
It doesn't deny that such probabilities exist, either. It doesn't get into that issue at all.” Then why do you bring it up? There is no question that many physicists have chosen to abstain from metaphysical questions. Is that fact supposed to lend credence to your metaphysical proposal? It is an empirical fact that particles in this universe display probabilistic behaviors. The specific probabilities can be independently predicted to impeccable levels of precision using the formalism of quantum mechanics. It makes sense to suggest that “the observed probabilities are not real or metaphysical—there is a deeper determinism involved.” It does not make sense to suggest, as you suggested in your previous post, that “the observed probabilities are not real or metaphysical—and … there is no determinism underneath them either!” To conclude the point, I’ll restate a question from my last post that you still haven’t answered. If there are no real, metaphysical probabilities associated with the physical values and future outcomes of particles, then why do they consistently manifest such values and outcomes in accordance with well-defined probability functions, i.e., wave functions? Why does the Schrodinger equation work so well to predict what those probability functions are going to be at any given time? Determinists like 't Hooft and Einstein have an answer—there is a deeper determinism underneath what we observe that we have yet to discover. As an indeterminist, what is your answer? You say: “Even if you can assign metaphysical probabilities to a collection of 1000 electrons, based on past experiments, that is a tiny sample.” The probabilistic predictions of quantum mechanics have been tested ad nauseam. No further experiments are needed to confirm their accuracy. The only issue is whether the success of the quantum mechanical model is an artifact of a deeper determinism. 
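The two claims about the formalism made above (outcome probabilities are the squared magnitudes of the wave function's amplitudes, and the wave function itself evolves deterministically in time) can be illustrated with a minimal numerical sketch. The two-level state, the toy Hamiltonian, and every number here are hypothetical, chosen only to make the structure visible:

```python
import numpy as np

# A spin-1/2 state: a normalized two-component vector of amplitudes.
psi0 = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

# Born rule: outcome probabilities are squared magnitudes of amplitudes.
p_up, p_down = np.abs(psi0) ** 2          # 0.8 and 0.2

# Deterministic evolution: psi(t) = U(t) psi(0), with U unitary.
# Toy Hamiltonian (Pauli-x), hbar = 1; U = exp(-i H t) built by
# eigendecomposition of H.
H = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.3
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi_t = U @ psi0                          # same psi0 always gives same psi_t
probs_t = np.abs(psi_t) ** 2              # outcome probabilities at time t
assert np.isclose(probs_t.sum(), 1.0)     # unitarity: total probability stays 1
```

Run twice from the same psi0, the evolution returns the same psi_t every time; any indeterminism enters only at measurement, where the amplitudes fix the statistics.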
Needless to say, if it turns out that there is a deeper determinism involved, one more compatible with the determinism of GTR, that will not help your position. It will only make things worse for you. You say: “On the issue of manipulation: I thought your initial claim was that there was no relevant difference between the "natural" Madoff case and the manipulated Madoff case -- that if MR exists, he should be equally guilty in each case. "... same test ... same grade."” That is my claim. You say: “You seem to have shifted your ground in your response to the example of the alcoholic.” How so? Please elaborate. And you failed to answer my question about the alcoholic. If I put the alcoholic in a situation where he himself freely makes a choice, does the fact that I put him in that situation mean that he is not morally responsible for making the choice? If not, then your ‘extenuating circumstances’ excuse is a non-starter. You say: “Once again, free acts are neither random nor determined.” The idea of a free act that is neither random nor determined is nonsense, an empty play on language. If free acts are neither random nor determined, then they do not exist. One of the purposes of the Madoff scenario is to illustrate that indeterministic free choices—whether or not we choose to call them ‘random’—do not form any kind of meaningful basis for moral responsibility. In the end, they make for a great big “who cares?” Suppose you try your hand twice at the Madoff scenario. The results: you make the good choice on the first try, but the bad choice on the second. Then Paul tries his hand—he wants to see if he can do better than you. The results: He makes the bad choice on both tries. Ouch! Then I try the scenario. The results: I make the good choice on both tries! Nice! There you go, I win. You break even. Paul loses. On your view, God should compensate me with pleasure, you with nothing, and Paul with pain. 
Do you want to try again, do you want to see if you can beat me next time? Seriously, if we play again, and we get different results, why should any of us care? How is this silly little game we’re playing any different than playing craps? So I chose the good outcome on both tries. Why should God reward me for that? How is the result more significant than if I had just rolled 7's twice in a craps game? I understand that, on your view, the process was not the same as rolling dice because “I chose the good outcome.” But the event “I chose the good outcome” was an event that had no rhyme or reason to it. It just happened. Given that it could just as easily have not happened—as was the case with you and Paul—why should I not thank my lucky stars that it did happen? Why should I not feel extremely lucky to have God rewarding me with pleasure right now and not punishing me with pain?
Mark, You say: “As you have stated (D3), it is not a tautology. Perhaps if you move the "if" to the beginning, it would be more obvious: (D3) If S is a member of the class P, S ought to be treated in the way that all members of the class P ought to be treated.” No Mark, it is a tautology. Consider: (A) S is a member of the class P (B) There is a way W that all members of the class P ought to be treated (C) S ought to be treated in way W. When forced into precision, D2 and D3 say: if (A) + (B), then (C). But that claim is true simply in virtue of the stated meanings of (A), (B) and (C)! Look at the meanings! If S is a member of the class P, and if all members of the class P—i.e., S, T, U, and so on—ought to be treated in way W, then S ought to be treated in way W! But we already said that S ought to be treated in the way W when we said that all members of the class P—i.e., S, T, U, and so on—ought to be treated in way W! You say: “However, if by labeling it as a tautology you simply mean that it seems obviously true, then I agree with you! Glad you're on board :)” Not only is it obviously true, it’s synthetically empty. It adds nothing that isn’t already given in the meaning of its phrases. 100% of your lifting will be done establishing (A) and (B). Once you’ve established (A) and (B), there is no lifting required to establish (C). You say: “Regarding treating mundane objects as "morally responsible", keep in mind the strict sense in which (FW) and (FW*) are defined. The function of moral responsibility is providing a bridge for getting to what S deserves when our beliefs about S's nature are obtained indirectly, and (FW) and (FW*) are those bridges.” What?! So, if I can somehow get direct knowledge of your nature before you engage in any behavior, I can start punishing and rewarding you then? 
Here is an interesting scenario for you to address: Suppose a bad neuroscientist injects you with a chemical designed to irreversibly modify your brain chemistry and give you the nature of a horribly violent criminal. During the injection, a good neuroscientist enters the lab, sees what the bad neuroscientist is trying to do, and kills him in your defense. The good neuroscientist removes the injection and then conducts futuristic imaging of your brain to determine if you have in fact developed a violent criminal nature. “Oh no”, he says, “It's happened! You’ve become a violent criminal! The condition is irreversible!” The good neuroscientist knows that you have the nature of a horribly violent criminal. He gained this knowledge directly, through futuristic imaging of your brain. Does he need to dabble with the ‘bridge’ of moral responsibility in order to start punishing you? Does he need to wait for you to actually do something wrong? On your view, no. He can get right to it. Make you suffer. You deserve to suffer—after all, you have the nature of a violent criminal. That’s absurd. You say: “For example, we can directly obtain warranted beliefs about how sharp a knife is by lightly touching its blade. (FW) and (FW*) do not seem the least bit relevant in this case. Hence, I seriously doubt whether (MR) or (MR*) are relevant to the knife. But even if they are, so what? In other words, I am not sure why it should be considered problematic for the view if moral responsibility happened to be broader than we had thought it was.” If a view implies that exacto-knives can be morally responsible, I consider that to be a problem. At the same time, I don’t think a compatibilist can really give a principled explanation for why, in a deterministic universe, lower animals, insects, plants, and exacto-knives are not morally responsible for the behaviors they manifest, and yet human beings are. 
You say: “I am interested in putting forward an account like this, but I don't see it as strictly necessary in order to talk about the broader concept of how desert functions with respect to objects that we cannot directly obtain information about (agents are surely of this type).” Well, I do have to say, your approach is highly creative and novel. Supposedly, that's a good thing for philosophers seeking publication. Still, if I'm being honest, I think it fails miserably ;-) You have to get rid of those reductios--the prepunishment reductio and the exacto-knife reductio.
Bob, You say: “A mind manipulator who merely added absurd options could not expect an agent to go against her current established values. It would have to alter the character - and thus make her a different agent?” So let’s say I slip you a pill that creates a deep serotonin deficiency in your brain. You begin to manifest morbid anxiety, irritability and depression. I’m not just adding absurd options—I’m altering your character in fundamental ways. Do you die in this process and give rise to a new being? When the pill wears off and your neurotransmitter levels return to normal, does the new being die and give rise to you again? That seems to be the suggestion. I find it absurd. You are a set of biological contingencies unfolding in nature. You have no “character” essence. You think you have a “character” essence because you’ve always observed yourself to manifest certain kinds of thoughts, feelings, and behaviors. That doesn’t mean pharmacology couldn’t cause those thoughts, feelings, and behaviors to change. Believe me, it could.
Mark, You say: “(Is "the Madoff trigger" just a name for your device, or have you got Madoff and Manson confused at some point?)” “Pull the Madoff trigger” means push whatever button on the computer Madoff pushed to set the Ponzi scheme in motion. You say: "Given these assumptions, you can't manipulate just anyone; you have to find somebody with an existing rule that can be exploited -- that is, someone with a pre-existing moral flaw." No, I need to be in a universe that has such a rule. The good news is that if determinism is true, then I already live in such a universe. A brain and body in the state that Madoff's brain and body were in, with the exact same internal and external inputs, will make the same choice that he made. This has already been verified once ;-) You say: “That opens the door for (at least some) moral responsibility to make its way thru the manipulation.” How so? Do you consider it a ‘pre-existing moral flaw’ that you are subject to the laws of physics, that if I change your neurotransmitter levels or I rewire your neural circuitry, that I can get you to choose things that are different from what you would choose in your current state? You say: “The description of S2 was that "[t]he self is a substantial center of both experience and activity." On that view, a manipulation that replaced the physical entity's experiences with a different entity's experiences (as you implied back on July 2nd, when you wrote "I need to tweak more of your memories, make them more congruent with Madoff’s") would indeed be destroying that physical entity's "self" -- by destroying the experiences.” I’m not giving you Madoff’s experiences in the literal sense, I’m making the content of your experiences match the content of his. They are going to be your experiences. You are going to be the one that feels them. Not him. Surely, you will agree that modern pharmacology can alter the content of your experiences in drastic ways. 
It can make you euphoric, grumpy, numb, anxious, paranoid, insane, all sorts of things. Do such alterations destroy you? If not, then why would engineering Madoffian tendencies in you destroy you?
Mike and the mods, Mispost above. The <<>> signs were supposed to be: You say: “In a Newtonian universe, talk of probabilities is talk of epistemic probabilities. Chance is unreal under those assumptions (cf. Hume). We say that the fair coin has a 50% chance of coming up heads on the next toss because we don't have all the relevant information, but Laplace's demon knows whether that fair coin will come up heads or tails with 100% certainty." In a Newtonian universe, there is an underlying determinism that explains the ratios seen experimentally. If our universe is indeterministic, and if there are no metaphysical probabilities associated with any specific outcomes, then what explains those ratios? To be honest, your view doesn’t just require that we reject determinism or a specific indeterministic interpretation of QM, it requires that we reject the intelligibility of physics altogether. If you claim that the future is indeterminate, and that there is no metaphysical probability of any specific result emerging, then how can there even be physics? What can physics possibly say about anything? I think it’s clear at this point, given the experimental results of QM, that there at least need to be real, metaphysical probabilities associated with outcomes in the world. Either that, or something more than that, i.e., a deeper determinism.
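The Newtonian picture invoked above (epistemic probability riding on top of an underlying determinism) can be sketched in a few lines. The coin dynamics below are a made-up toy, not real physics; the point is only that a fixed function of the initial conditions yields 50/50 statistics to anyone ignorant of those conditions:

```python
import random

def coin(initial_angle, spin_rate):
    # A toy "Newtonian" coin: the outcome is a fixed function of the
    # initial conditions, with no chance anywhere in the dynamics.
    return "heads" if int(initial_angle + 100 * spin_rate) % 2 == 0 else "tails"

# Laplace's demon, knowing the initial conditions, predicts with certainty:
assert coin(1.0, 2.0) == coin(1.0, 2.0)  # same inputs, same outcome, every time

# An observer ignorant of the initial conditions sees roughly 50/50 over
# many throws. That probability is epistemic: it lives in the observer's
# ignorance, not in the coin.
rng = random.Random(0)
flips = [coin(rng.uniform(0, 360), rng.uniform(0, 50)) for _ in range(100_000)]
heads_ratio = flips.count("heads") / len(flips)  # close to 0.5
```

Here the underlying determinism is what explains the stable ratio; the question in the comment above is what plays that explanatory role once both the determinism and the metaphysical probabilities are denied.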
Mike, Let’s get more precise with our QM discussion. Suppose I take an ensemble of 1000 electrons and I put each of them in some state S1. I then measure the z-projection of each of their spins. I get roughly 80% spin-up and 20% spin-down. To verify the pattern, I repeat the experiment an infinite number of times. Lo and behold, I get a cumulative result infinitely close to 80% spin-up and 20% spin-down. Those are the kinds of results that QM experimentation consistently produces. We can explain the results in one of two ways. We can claim that the process is indeterministic, and that an electron in S1 has a real, metaphysical probability of manifesting spin-up and spin-down on a given measurement—.8 and .2 respectively—or we can claim that the process is deterministic, and that there is presently undiscovered information that explains the result of each individual measurement. What you seem to have suggested, i.e., that the process is indeterministic, and that an electron in S1 has no metaphysical probability of manifesting any specific spin outcome, makes no sense. Think for a moment. If the process is indeterministic, and if there is no real, metaphysical probability for an electron in S1 to manifest either spin-up or spin-down, then why do we get a cumulative result infinitely close to 80% spin-up and 20% spin-down? It gets worse for your proposal. Suppose that I conduct the same experiment ad infinitum on electrons in state S2. I get a result infinitely close to 30% spin-up and 70% spin-down. If the process is indeterministic, and if there is no real, metaphysical probability for an electron in S2 to manifest either spin-up or spin-down, then why do I get a result approaching 30/70 and not 80/20? If electrons in S1 and S2 do not have any probability in themselves of turning out either spin-up or spin-down, then why do they manifest spin-up and spin-down in significantly different ratios? 
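The ensemble statistics described above are easy to mimic with a simulation in which each electron does carry a real per-measurement probability of each outcome. The 0.8 and 0.3 figures are taken from the hypothetical states S1 and S2 in the discussion:

```python
import random

def measure_spin_z(p_up, rng):
    # One indeterministic measurement: spin-up with real (metaphysical)
    # probability p_up, spin-down otherwise.
    return "up" if rng.random() < p_up else "down"

rng = random.Random(42)
ratios = {}
for p_up in (0.8, 0.3):  # hypothetical states S1 and S2
    results = [measure_spin_z(p_up, rng) for _ in range(100_000)]
    ratios[p_up] = results.count("up") / len(results)

# By the law of large numbers, the observed ratios converge on the
# per-measurement probabilities. With no such probabilities attached to
# the electrons, there would be nothing for the ratios to converge on.
```

Rerunning with more trials only tightens the agreement; that convergence is exactly what the "no metaphysical probability, no determinism" view leaves unexplained.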
You say: “Epistemic probability is the probability of something happening *for all we know*. Metaphysical probability is the probability of something happening based on how it really is. If we have an opaque jar filled with red and black balls, but we don't know how many of each, then *for all we know*, the chance that we will draw a red one is .5. If we know how many of each are in the jar, then we can calculate the real (metaphysical) probability, which may or may not be .5.” It’s interesting that you claim that the epistemic probability is .5. Why not .75? The jar could be 1/2 red and 1/2 black, but “for all you know” it could also be 3/4 red and 1/4 black. If we have no information about a binary system, is the ‘epistemic’ probability of each outcome therefore .5? That seems to be your suggestion. If the suggestion is true, and if there are no real, metaphysical probabilities associated with indeterminate quantum outcomes, then we should expect the epistemic probability of binary outcomes in QM such as the measurement of the z-projection of a spin-1/2 fermion to be .5 in every single case! After all, we are making guesses about an indeterminate binary system that has no metaphysical probability of manifesting any specific outcome! <<>> In a Newtonian universe, there is an underlying determinism that explains the ratios seen experimentally. If our universe is indeterministic, and if there are no metaphysical probabilities associated with any specific outcomes, then what explains those ratios? To be honest, your view doesn’t just require that we reject determinism or a specific indeterministic interpretation of QM, it requires that we reject the intelligibility of physics altogether. If you claim that the future is indeterminate, and that there is no metaphysical probability of any specific result emerging, then how can there even be physics? What can physics possibly say about anything? 
I think it’s clear at this point, given the experimental results of QM, that there at least need to be real, metaphysical probabilities associated with outcomes in the universe. Either that, or something more than that, i.e., a deeper determinism. You say: “Under QM, freedom is not miraculous. Under QM, given the state of the universe at a given time, there is more than one way that universe can be at a later time.” And, according to QM, there are definite probabilities associated with each of those ‘ways.’ You want your agent to exhibit non-deterministic, non-probabilistic behaviors. To that end, you have two options. You can take your agent to be an instance of matter that constitutes an exception to physics, i.e., that does not manifest the probabilistic behavior that all other matter in the universe has been precisely observed to manifest, or you can deny that your agent is an instance of matter. Those are your options. Good luck ;-) You say: “And you haven't sealed the alcoholic's fate, because your manipulation constitutes an extenuating circumstance which would not be present had the alcoholic chosen to expose himself to temptation of his own free will.” I’ve ‘sealed his fate’ in the sense that I’ve guaranteed that he will become morally responsible for something. Do you disagree? If I put him in a situation where he himself freely makes a choice, does the fact that I put him in the situation mean that he is not morally responsible for making the choice? Please answer. That would be a very interesting conclusion.
Paul, You say: “You're underplaying (2). What is really required is supposing that the Self can retain its identity and is flexible enough to transform into something anathema to the current Self.” What is so unreasonable about that assumption? Suppose I give you PCP right now. You experience thoughts and feelings that are “anathema” to your current thoughts and feelings. You manifest behavioral dispositions that are “anathema” to your current behavioral dispositions. What is so unreasonable about the assumption that you can retain your existence through such a process, that you can continue to exist despite having thoughts, feelings and behavioral dispositions that are “anathema” to your current thoughts, feelings, and behavioral dispositions? You say: “That's a very weighty additional assumption, and it's not surprising that it can do a lot of work.” It certainly is surprising that (2) does the work of rendering moral responsibility impossible. Why should we expect such a conclusion to follow from (2)? What does one thing have to do with the other? Suppose, per (2), that a Self is a substance, a center of experience and activity. As a substance, it has certain sensitivities and dispositions. I use technology to change those sensitivities and dispositions. Your claim seems to be this: If the Self does not remain the same Self through the manipulation, then moral responsibility is possible. If the Self does remain the same Self through the manipulation, then moral responsibility is not possible. ??? That’s a total non-sequitur. Explain it. Explain why we should expect it to be true. I’ll recap where I think we are. You claim that moral responsibility is possible despite determinism. Exploiting determinism, I come along and manipulate someone’s brain in a way that guarantees that they will choose some evil behavior. I ask, “Is the manipulated being morally responsible for choosing the evil behavior?” You answer, “That depends. 
If the being pre-manipulation is the numerically same being as the being post-manipulation, then the answer is no. The manipulated being is not morally responsible for choosing the evil behavior. But if the being pre-manipulation is not the numerically same being as the being post-manipulation, that is, if a numerically distinct being emerges from the manipulation, then the answer is yes. The manipulated being is morally responsible for choosing the evil behavior. In other words, whether the manipulated being is morally responsible for choosing the evil behavior hinges not on the nature of the choice itself, but on whether the manipulated being is numerically identical to some being that existed at some prior time.” ??? Sorry Paul, that makes absolutely no sense to me.
Mark, You say: “(D2) S ought to be treated in the way that a P ought to be treated if S is a P I see no significant difference between (D) and (D2). I also see a substantive definition in both cases -- not mere tautologies.” Consider “(D3) S ought to be treated in the way that all members of the class P ought to be treated if S is a member of the class P.” Surely, you will agree that (D3) is a tautology. What do (D) and (D2) add to the obviously tautological (D3)? You say: “According to someone like Galen Strawson, there is the suggestion it should go the other way around: (MR GS) S deserves to be treated as a P if S is a P and S is MR for being a P. If we accept (D) on its own merits, before we begin to consider the question of moral responsibility, we will have powerful reasons to reject a principle like (MR GS).” One problem that I have with your account is that it seems to lead to the absurd conclusion that things like exacto-knives can be morally responsible. Consider: (D EK) K ought to be treated as an exacto-knife ought to be treated if K is an exacto-knife. (FW EK) K has free will if O would be able to reliably apprehend counterfactuals about K by observing K. (D EK) is tautologically true. (FW EK) is true given our (arguably) indeterministic universe. Given your formula “Moral Responsibility = Desert + Free Will” it seems to follow that an exacto-knife can be morally responsible. I think there is a valid challenge here not just to your account of moral responsibility but to all compatibilist accounts. If individuals in a strictly deterministic universe can be morally responsible for things, why can’t atoms, molecules, thermostats, and so on be similarly responsible? Superficially, the compatibilist answer will probably be "Because atoms, molecules, thermostats, and so on are not conscious." 
But that just delays the real question: what do conscious deterministic processes offer over and above unconscious deterministic processes that magically opens the door for moral responsibility?
Paul, You say: “By the way, if you polled 1000 random Christians and carefully explained the compatibility of nonphysicalism with determinism, I bet some of them would endorse the view that each soul has a core unchangeable deterministic character which could include such an aversion.” Is there any reason to take such a view seriously? ;-) You say: “Then what you have shown is that given determinism plus additional assumptions, MR is impossible. Neither I nor any other compatibilist has ever disputed that.” Here are the additional assumptions. (1) The self is a substance, a center of experience and activity. (2) The self can retain its numerical identity as a substance even if its state is changed in behaviorally significant ways. Why would the impossibility of moral responsibility follow from the additional assumptions of (1) and (2)? Isn’t that a bit weird?
Mike, You say: “If the probabilities are epistemic, then QM remains unaffected by the introduction of agent causation.” What would it mean for an outcome to have an ‘epistemic’ probability of occurring, but not a ‘metaphysical’ probability? “If the probabilities are metaphysical, then I still need not deny QM; QM is true as far as it goes, but it's just not the whole story. There are other forces at work, namely those coming from agents. Imagine that you toss a perfectly balanced coin. The metaphysical probability that it will come up heads is .50 if not interfered with, but while it is spinning it is neither heads nor tails. Quantum indeterminacy is like that spinning coin, and agent causation is like a hand that intervenes while the coin is spinning and forces it to come up heads or tails.” What is the hand composed of? Particles? According to QM, those particles have definite probabilities to manifest certain outcomes. As I said earlier, if you want to claim that the hand has no probability of making the coin turn heads or tails, then you will have to deny that QM applies to the particles in the hand. In other words, you will have to posit matter in the universe that constitutes an exception to QM. Either that, or you will have to posit something non-material—a Cartesian soul, for example—that intervenes on the hand. QM probabilities would then apply to the hand only in the absence of such intervention. But if you are willing to take that sort of ‘supernaturalist’ approach, then you never needed QM in the first place. You could have taken the same approach towards Newtonian mechanics. Just posit that free choices are an exception to its laws. You say: “Under Newtonian physics, agent causation is ruled out because it would violate conservation laws. Under QM, this obstacle is removed.” Under QM, your kind of agent causation is ruled out because it violates probabilistic laws.
Of course, your approach is to claim that QM probabilities, if they apply to matter, only apply in situations where an agent does not intervene. But you could have taken the same approach with respect to Newtonian physics. You could have claimed that Newtonian physics, if it applies to matter, only applies in situations where an agent does not intervene. That is the advantage of taking a supernaturalist approach. Nothing constrains you. You can claim whatever you want. In my view, it’s a silly approach. You know you’re going to be wrong, so why bother? You say: “Here's another point to consider. Even if you were right about the probability issues and the personal identity issues, in the case of repeated manipulations the manipulator would be the one who gets the blame. Suppose an alcoholic has resolved to stay sober by avoiding any situation where alcohol is present, and has successfully done so for some time. If you repeatedly use force or guile to steer that alcoholic into situations where he would be confronted with the temptation to drink, and he eventually does drink, you are clearly at fault as much or more than the alcoholic.” OK, so there are other people responsible in addition to the alcoholic. I’m fine with that. The point is that through my manipulation, I will have guaranteed that the alcoholic would become responsible for something. I will have sealed his moral fate. Never mind that I also will have sealed my own moral fate ;-) You say: “Bottom line: the libertarian has no problem shrugging off the Madoff problem.” Look, if the only way for a libertarian to address the problem is to posit a supernatural soul, then I’m satisfied. As far as I’m concerned, that’s a reductio.
Paul, You say: “I agree that S2 is compatible with determinism. But I'm not sure that it follows from the combination of them, that you can manipulate the agent. On a traditional understanding of S2, the Substantial Self just is the source of activity, so it's not clear how you can make it change its tune, short of replacing it. We need more than just S2 + determinism.” I can make it change its tune because it is a part of the universe and the universe is deterministic. If determinism is true, then what the self is going to choose later is set by the state of the universe now. By manipulating the state of the universe now, I can manipulate what the self is going to choose later. Why would my manipulating the state of the universe now necessarily imply my killing the self that exists now and creating an entirely new one? I don’t see how that follows. Now, one could argue that if determinism is true, i.e., if it is true that what the self is going to choose later is set by the state of the universe now, then the self can’t really be said to be choosing anything. But that’s a totally different point, one that is in gross conflict with your compatibilist position. You say: “To that end, suppose traditional soul-beliefs are wrong, and instead of a simple substance, the Self is a complex one, with various properties that undergird action and interact with the environment. Then it may be open to manipulation. But by the same token, the Self loses its immunity to Sorites problems. For the Self is that which underlies the manifest continuity of experience and action, whether that be made of neurons or soul-stuff. But if the soul-stuff that underlies the continuity is complex, and you seriously disrupt the normal mechanisms (or "soulanisms") that serve that continuity, then you push identity of the Self into the indeterminate zone or beyond.” There is no need to make those suppositions. They just introduce new complications. Suppose that self S is a simple substance. 
It is a rule in our universe that if X at t, then S chooses X1 at t1. If Y at t, then S chooses Y1 at t1. If Z at t, then S chooses Z1 at t1. I want S to choose Z1, i.e., to pull the Madoff trigger. So I put the universe in condition Z. Per the rule, S makes the Madoff choice. Why does my putting the universe in condition Z necessarily imply that I have killed S and created a new entity?
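The rule in this scenario amounts to nothing more than a lookup from the universe's condition to S's later choice. A minimal restatement as code (names entirely my own, purely illustrative):

```python
# Determinism, in the sense used here: S's choice at t1 is a pure
# function of the universe's condition at t.
RULE = {"X": "X1", "Y": "Y1", "Z": "Z1"}

def choice_of_S(condition_at_t):
    """What S chooses at t1, given the universe's condition at t."""
    return RULE[condition_at_t]

# The manipulator wants Z1, so he simply sets the condition to Z:
outcome = choice_of_S("Z")  # yields "Z1"
```

Nothing in this mapping says anything about whether S survives the setup, which is the point of the question: setting the input condition is not the same operation as replacing the function.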
Mike, You say: “On this view, free choices don't originate from quantum events; they originate from *agents* (i.e., selves). The significance of QM for this model is simply that it denies the Newtonian principles that dictate only one possible outcome from the state of the universe at any given time.” But QM puts a definite probability on those possible outcomes. Thus, if QM is true, and if free choices map to or control possible outcomes in the physical universe, then they too must be probabilistic. Your options are to either deny that free choices map to or control possible outcomes in the physical universe (in which case they would be irrelevant), or to deny QM (good luck). You say: “I don't see how any consequences for QM follow from my claim that the probability that an agent will choose one course rather than another cannot be defined, but if they do follow, I have no problem with either weird new physics or a supernatural soul. I would much rather accept that there is more in heaven and earth than is dreamed of in our philosophy than throw out moral responsibility or meaningful freedom.” We can explain our intuitions about free will and moral responsibility quite well by reference to evolutionary psychology. There is no need to reject well-tested science or to introduce supernaturalisms. Now, we may not find the explanations to be particularly cozy or consistent with our naïve views of ourselves, but that’s a different matter. The truth doesn’t care about our emotional reactions to it.
Mark, When you say “(D) S ought to be treated as a P if S is a P” what exactly do you mean by treated as a P? My suspicion is that you mean treated in the way that a P ought to be treated. But then (D) reduces to a tautology. Please precisely define the meaning of treated as a P.
Mike, In your paper you say: “Let me begin with the point about probability. Probabilities cannot be assigned to free choices.” If that is the case, then free choices cannot be manifestations of quantum effects. So the question becomes: as a libertarian, where do you plan on getting your indeterminism from? It seems to me that in order to maintain your view, you will have to either deny our best model of physics and introduce one that is even more peculiar, or posit a supernatural soul that can act independently of events in the physical brain. Because if our best model of physics is true, and if our choices are tied to events in the physical brain, then they most definitely have probabilities associated with them.
Paul, You say: “S2 = The self is a substantial center of both experience and activity … Assumption S2 could conceivably be true. But I don't see how manipulation is supposed to work on S2.” You agree that S2 is compatible with determinism, right? In that case, there shouldn’t be any problem seeing how manipulation would work on S2. We gradually change the state of the manipulated agent’s brain, body, and whatever else, and then we let things unfold from there. So, consider the following two conceivably true statements: (1) The universe is deterministic. (2) S2 is true. My question for you: If (1) and (2) turn out to be true, will you abandon your conception of moral responsibility? Will you admit that individuals are not morally responsible for their behaviors?
Murali, You say: “Any theory of moral responsibility that is able to exculpate the coerced should exculpate the manipulated.” True, and any theory of moral responsibility that can exculpate the manipulated should exculpate Madoff, because their choices are identical in every conceivable respect. You say: “Or even if manipulation is not relevantly similar to coercion, there is still no moral responsibility as he cannot reasonably be expected not to push the button.” If he cannot reasonably be expected not to push the button, then how can Madoff reasonably be expected not to push the button? That is the problem, and your response doesn't address it. You say: “There is something wrong with Brian's description of libertarianism. Does free will genuinely mean that there is some probability that a person does otherwise?” If libertarian free will were compatible with there being a zero probability of your ever choosing otherwise, then why would it be incompatible with determinism? That doesn’t make any sense. Also, as a libertarian, you have to remember where your desired indeterminism will have to come from. Ultimately, it will have to come from QM. According to QM, if you have two particles in the exact same quantum state and you measure them, the probability of getting a specific result is necessarily the same for each of them. So, if I put every particle in your brain, body, and environment in the exact same state that the particles in Madoff’s brain, body, and environment were in immediately prior to a bad choice that he made, then the probability that you will make the same bad choice will necessarily be non-zero. We can say that much with certainty because if the probability of the bad choice occurring from that state were zero, then Madoff himself would not have been able to make it, as his choice occurred from the same state.
Paul, (1) Clockwork God v. Hovering God – You stated that the universe governed by the Hovering God has only one agent. You did not specify whether the same is true of the universe governed by a Clockwork God. I will assume that in such a universe your view is that there can be more than one agent. If that assumption is not correct, let me know. We can easily construct the manipulation scenario so that the manipulator becomes like the Clockwork God. Suppose that the universe is relevantly deterministic and that I put you in the exact state that Madoff was in one day prior to his first Ponzi choice. I then go to sleep. Maybe I die in my sleep, maybe not. We don’t know. All that we know is that you are necessarily going to make the choice that he made come the next day. Clearly, I am like the Clockwork God, and therefore we can say that you are an agent in your own right. Assuming that you remain the same self that you were prior to the manipulation, the result is that I was able to seal your moral fate. I was able to ensure, through technological tinkering, that you would become morally responsible for something. That is an unacceptable conclusion. It represents a reductio of the pro-moral-responsibility position. (2) Punishing the Single Agent – If we can justifiably punish each part of the single agent after we break them up, then the implication is that, after the break up, each of them is morally responsible for something. Returning to the manipulation scenario, suppose I manipulate you in a deterministic universe so that you take Madoff's actions. The community then breaks us up to punish us, as I was able to predict it would. If the punishment is justified at the time that it is administered, then the implication is that you—the separated, single agent—are morally responsible for something at that time. It follows that I was able to achieve my goal: I was able to ensure that you would become morally responsible for something.
We see, then, that the “single agent” hypothesis accomplishes nothing. The reductio holds even if we grant that hypothesis as an assumption. (3) The Manipulation Argument Itself – I made the point to Kip, and I’ll make it again in the form of a question. I am not claiming that the self is the numerically same self before the manipulation as after. The “self” is not well-defined on anyone’s analysis—yours or mine—so that would be a silly claim for me to make. What I am doing is having the reader assume certain things about the self that make it possible for the self to retain its numerical sameness despite the manipulation. Specifically, I am having the reader assume that the self is a substantial center of subjectivity that experiences mental content, and that this content can change in drastic ways with the self remaining the numerically same self that it was before the change. Let’s call this assumption AssumptionS. Here is a simple question for you: If AssumptionS turns out to be true, would you abandon your conception of moral responsibility? Would you concede that individuals cannot be morally responsible for their behaviors? If not, then you need to attack the manipulation argument in some other way than by challenging AssumptionS. For, by your own admission, you would hold to the same views on moral responsibility even if AssumptionS were true. As I said to Kip, challenging AssumptionS is an evasion, not a legitimate challenge to the argument.
Kip, You say: "Similarly, if you change my brain slowly into Mother Theresa's, there's a very strong argument that you've killed me and made a new person." OK, then give the argument. If I change MT's brain slowly to match the state that Manson's brain was in prior to a crime, why *must* I conclude that I've killed her and created a numerically different person? What is the problem with assuming, for the sake of argument, that Mother Theresa is a self, a subjective center of experience, and that she can continue to exist despite drastic changes in the content of what she experiences? That's all I'm asking us to assume. If the assumption were somehow incompatible with the existence of moral responsibility, then the compatibilist objection would have merit. But that's not the case. There is absolutely no incompatibility whatsoever between the concept of a self as a subjective center of experience and the concept of moral responsibility, and therefore there is no reason for a compatibilist to refuse to accept, for the sake of argument, the assumptions behind the scenario. You say: “Remember that I'm extremely sympathetic to your view. In fact, we share the same empathy-based, skeptical view on free will. But it's important to recognize the strength of the compatibilist argument based on personal identity. As I've said before, I think it's the best argument the compatibilist has.” But it’s not an argument, it’s an evasion. In proposing the scenario, I’m not claiming that the self retains its numerical identity through the manipulation. None of us knows what a self is, or if such a thing even exists. To facilitate the scenario, I’m asking the reader to make an assumption about the self. Specifically, I’m asking the reader to assume that the self is a substance, a subjective center of experience, and that it can retain its numerical identity as a substance despite drastic changes in its experiential content. 
Not all of us would agree with this assumption, but that doesn’t make it absurd or untenable. It’s certainly possible that the assumption is true, that the self is a substantial center of experience that can endure despite drastic changes in its thoughts, feelings, beliefs, inclinations, dispositions, and so on. If that assumption does in fact turn out to be true, are moral responsibility advocates going to immediately abandon their belief in moral responsibility? Are they going to throw in the towel and say “Oh well, a self can theoretically survive Parks’ manipulation scenario, I guess moral responsibility is therefore impossible.” Of course not! So I find it rather disingenuous that they would refuse to grant the truth of the assumption for the sake of argument.
Murali, You say: "A libertarian can consistently bite the bullet that I am morally blameworthy and still reject the conclusion that technological advances can make me guilty of something." The problem is that technological advances would be able to guarantee that you will become guilty of something. That is a highly problematic conclusion. You say: "It may be highly probable that under many iterations, a manipulated Madoff clone (MMC) would pull the Ponzi scheme at least once, even though in any one iteration, the individual probability would actually be quite low. However, it is not necessarily the case that our MMC will eventually push the button, only highly probable. In that case, technology can never seal any moral destiny, only make certain destinies highly improbable." I'm going to rerun the scenario until you make the bad choice. I will wait as long as that takes. If there is a non-zero probability that you will make the bad choice, then I will win. There will never be a point where you will be able to say that you won, that you completed the exercise without making the choice, because the exercise will not be over yet. It will keep going until you do make the choice ;-) You say: "But that, in itself is not really controversial or problematic at all." No, it's still a problem. My being able to guarantee, to a probability of 99.99999...999%, that you will become guilty of something is most definitely a problem. If the only advantage that the libertarian position offers over the compatibilist position is that .0000...0001% chance, then it does not offer any meaningful advantages.
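The "rerun until you slip" point is just the geometric distribution: assuming independent reruns, each with any fixed nonzero probability of the bad choice, the chance of holding out shrinks exponentially with the number of reruns. A sketch, with purely illustrative numbers:

```python
def prob_never_slips(p_bad, n_reruns):
    """Chance the agent avoids the bad choice in every one of n
    independent reruns, each with probability p_bad of slipping."""
    return (1 - p_bad) ** n_reruns

# Even a one-in-a-million per-run chance is overwhelmed by repetition:
p = 1e-6
after_a_million = prob_never_slips(p, 10**6)          # roughly 0.37
after_a_hundred_million = prob_never_slips(p, 10**8)  # effectively zero
```

So the libertarian's "only highly probable" concession is weaker than it sounds: for any nonzero per-run probability, the manipulator who can rerun at will drives the escape probability as close to zero as he likes.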
Richard, You say: "I share the view that your manipulator has simply replaced the target with a new person." What is the most that I could change without your claiming that I have killed the original person and created a new one? If, for example, I just alter the person's personality (well within the capabilities of modern psychiatry), and I leave the person's memories and external appearance intact, will I have killed the original person and created a new one?
Randy, You say: "One way to change someone's brain is to present her with a cogent argument. Another way, which I suppose would be available to the manipulator in your scenario, would be to skip the argument and just rearrange the neurons and their interactivity. Seems a morally relevant difference to me, one that a theory of moral responsibility should recognize." In what sense would it be a morally relevant difference? Suppose that A and B have brains in some identical state S and that they make an identical choice C. Assuming that neither A nor B is morally responsible for having a brain in state S, why would the specific details of how their brains got to that state matter as far as their moral responsibility for making choice C? Can you explain the difference in terms of your integrated agent-causal theory of free will? When an agent makes a decision, you say that "the decision is caused by her, and it is nondeterministically caused, in an appropriate way, by [her reasons]." Assuming that the agent is not in any way responsible for having the reasons that she has, why would their causal origins matter as far as her responsibility for the decision itself?
Kip, You say: "How can you put someone else's brain into the exact same state as someone else, without killing that person and creating a copy of the other person?" There are conceptualizations of 'self' in which numerical identity is possible through the described change. I am asking the reader to assume one of those conceptualizations for the sake of argument. The only reason this request would be problematic would be if the conceptualizations were somehow incompatible with moral responsibility. I see no reason to think they would be. Do you? You say: "Are you suggesting doing this slowly enough (molecule by molecule) to preserve continuity of consciousness?" Sure. That's one way. If I changed Mother Theresa neuron by neuron, at what point would she cease to exist? Suppose I give her a pill that alters her neurochemistry. The pill slowly takes effect, such that over a period of twelve hours, she gradually turns into a completely unrecognizable person, a violent animal--just like Manson. Does the entity previously described as 'Mother Theresa' die in this process? If so, at what point does she die? What if the change is more benign? She becomes different--more antisocial, more difficult--but nothing like Manson. Would she still die in this process? How much change would be necessary for her to die? You say: "Even in that case, there is a strong argument to be made that the old person is gone and a new person has been created, however gradually." Really? What's the strong argument? "C'mon, it's obviously not her anymore!" is not a strong argument ;-) The truth of the matter is that numerical identity is a mental construct, not a privileged feature of any reality. That is why these kinds of discussions tend to generate so much confusion. Is the Ship of Theseus, with all its wood fully replaced, the same ship that set sail from the harbor? Was the Michael Jackson that just died the same underlying person that sang "I Want You Back" in the 1970s?
Are your Nike basketball shoes the same shoes they were before you popped the air pockets and changed the shoelaces? Good luck with those questions. Reality has no answer for them. Likewise, it has no answer for the question "Would I remain the same underlying person if this or that about me were changed?" A meaningful conception of moral responsibility, however, requires that numerical identity be more than a mental construct. And so my example works inside that assumption: specifically the assumption that agents exist, that they endure through time, and that they can maintain their identity as substances despite drastic changes in their properties. These assumptions do not preclude or undermine the possibility of moral responsibility, so there is no reason (other than evasion) for MR advocates to quibble over them in responding to the scenario.