This is Scott Forschler's Typepad Profile.
Scott Forschler
Recent Activity
I'm afraid I don't quite understand the proposed solution, but in any case the general proposal to rate our moral principles as a function of how they would work in some ideal or semi-ideal world (Kant, Scanlon, etc.) always struck me as rationally unmotivated, since we don't live in such a world (in this I agree w/ the remark Jussi ascribed to Hooker, although I found Hooker's presentation of his view less clear than this simple summary, at least in _Ideal Code_). Such a test may be a /necessary/ condition of our principles being moral, but it is certainly not /sufficient/, since the good and bad effects of additional persons conforming to certain principles are sometimes non-linear. Some principles may work fine in an ideal or semi-ideal world, but awfully in others. What we need are principles that work wherever we happen to be, including ones for the current world, of course--but we must also be prepared to act differently if the number of people following certain principles changes, which in turn changes the effects of our acting on our current principles, just as we should be prepared to make similar changes under other circumstances.

All versions of ideal-world tests seem simply to be proposals that we /exclude/ from consideration any principles which would specify--in their antecedent description of under what circumstances we should perform certain consequent actions, or value certain things, etc.--the class of descriptions including features like "and when the number of persons following similar principles is M..." I have never understood why anyone would want to rule such principles out of bounds in a fundamental moral test. So if you start with an ideal-world test, yes, you'll need some more or less complex way to then try to rule out principles which lead to this result, but this is the long way around. I think we should instead abandon the ideal-world condition and go straight to moral supervenience, requiring that our moral principles be acceptable wherever their antecedent conditions hold, with no restrictions on what kinds of antecedent conditions are to be considered. This makes the test of whether we want *any* number of persons following our principles be the guide, which then forces us to adopt (in many cases) principles which prescribe different actions under different circumstances, including those in which different numbers of people follow just those principles.
It seems to me that this is essentially what our conscience is, or is supposed to be: a reminder of what is ethical, a check on our temptations to be unethical. Except that it is (1) not guaranteed to actually correspond to actual morality in any given individual, and (2) not always heeded. In general, if (1) and (2) were fixed, it seems to me that this would be a good thing, and is not at all incoherent. Though Tristam's comment from Korsgaard, that normativity must be something we could fail to follow, is interesting; it seems in tension with the general Kantian (& Korsgaardian) idea that acting ethically makes you *more* of a person, not less of one (say, on the grounds that your freedom is restricted). I've always been suspicious of the original Kantian idea of a "holy will" which always acts ethically and cannot do otherwise.

At a first pass on reconciling these, I'm tempted to say that, given our instantiation in the physical world, and supervenience upon fortuitous combinations of particles, both we and the "conscience/ethics-checker" mechanism still *could* fail. We need to be aware of the fact that, at the very least, cosmic rays could burst into our brains or our checkers and send the mechanism off its rails, so that we do something unethical. To be a fully ethical being is not merely to have the percentage of one's ethical behavior reach 100% (which could occur by chance), but to be prepared to resist perturbations of one's ethical tendencies by "impulsive" causes, whether internal or external, if and when they arise. Of course, having such a back-up or standing disposition can simply be construed as part of the means by which we increase the percentage of ethical behavior, and can be built into the machine. But we must also endorse the machine being built /in just this way/, rather than merely accept the output of the machine blindly, or because we cannot help doing so. Of course, if the machine is part of us, that helps--or if it can become part of us, in a sense, through our understanding of and endorsement of its workings, as part of our Korsgaardian "practical identity."

I think this is in at least partial agreement with Richard's point that it's not enough to passively accept whatever the machine forces you to do. If you wholeheartedly endorse it via an understanding of why it dictates as it does, knowing that and why its requirements are right, and happily acceding to its correction of your behavior--knowing your moral frailties without it--then we can properly say that you have intentionally done what the machine makes you do, in a sense in which you don't if you resist (with futility) its corrections. I don't think I've fully reconciled the tension here, but it's an interesting challenge to think about.
Commented Feb 8, 2012 on Ethical AutoCorrect at PEA Soup
I answered the survey as I think an actualist would, since that was my understanding of what "objective obligation" is supposed to mean given the definition of it prefacing the scenario, its application to the wire-cutting scenario, and other discussions I've heard of it. Hence I answered that Phil should give C in those cases where, in fact, doing otherwise will result in Pat's death, and "can't say" in all the other cases where whether Pat will get C or P or nothing depends on facts which have yet to be established by the fact of whether Phil gives C or P or nothing. If I'm not mistaken, all the questions fell into one of those two categories. However, I don't think this says anything interesting about my "intuitions," since I have no intuitions regarding "objective obligation." I generally see it used, and used in the definition & example, to mean "X is objectively obliged to A just if X's Aing will actually lead to the best results," so once I'm told what leads to the best results, or am not given enough information to answer that question, I answer accordingly. If anyone thinks it means something other than that, then frankly I'm tempted to conclude that the concept is more confusing and potentially misleading than I already thought it was. This is part of why I think the more useful and morally significant concept is subjective obligation: not only is it based on facts accessible to agents, but it does not duplicate the already perfectly clear phrase "actually produces the best results." Just my 2 cents.
Commented Mar 17, 2011 on Post Survey Wrap Up - The Two Medicines at PEA Soup
>I'm still not getting it. We agree that (1) have an informational reading. Why does that mean (2) and (3) do as well?

I assume this was in response to my last post; in any case it made me realize I still wasn't quite clear on something. Charitable readings of ambiguous claims are made under at least two constraining assumptions: that the claims are intended to be both true and relevant. So if someone asserted (2) and (3) in response to (1), or in an argument such as (1)-(5) purporting to describe a paradox, charity forces us to assume that the "ought" of (2) and (3) is intended to be relevant (let us first try that) to the "ought" of (1); i.e., informational/epistemic. Of course then (2) and (3) are false. Or we could charitably assume (2) and (3) are intended as true, forcing the objective/circumstantial reading, but then they are irrelevant as a challenge to (1). I should never have claimed that we should assume either reading simpliciter; rather, that either reading is possible in itself, and either has some charity behind it, given how (2) and (3) were presented in the various examples; being charitable in both ways simultaneously is impossible; and whichever one we assume, the paradox is dissolved without questioning modus ponens. In that I agree with Janice; I just don't understand what advantage the modal language for explaining this is supposed to give us.
Janice, I think (2) and (3) must have an epistemic reading if they are taken to be talking about the same kind of ought as in (1), which presumably is the moral ought which would govern the action of a rational being; though in such a case they are false. Or, they can be given a non-epistemic reading, where "ought..." means "...will lead to the best outcome," and then they turn out true, but pose no challenge to (1). Sorry if that wasn't quite clear in my verbosity. Arguably we should read assertions charitably as intended to be true in the context of utterance; but I guess I was taking the context of (2) and (3) as "part of an argument posing a problem for (1)"--suggesting the oughts are the same, rather than "people on the scene of the mine urging logical considerations," which suggests different oughts. Though it doesn't matter which reading we assume is intended if the paradox dissolves under either.

In your earlier comment, you say that the talk of "modal bases" can accommodate the ambiguity of "ought" which the subjective-objective distinction captures. You refer to an (unpublished?) K&M paper for a discussion of the advantages of doing so; but I guess I'm asking for a little hint of what these are prior to looking at that paper, something to make it worth my while to explore. You seem to be talking about {circumstantial/informational} readings of "ought" instead of {objective/subjective} readings. The latter terminology is familiar to most of us; so if the former is just another way of saying the latter, but bringing in some complex modal semantics, I'm wondering what should motivate me to consider this way of speaking.

Here's part of what concerns me: Clayton noted that your claim that "(1) is true, roughly, just in case in all the best worlds compatible with our information, we do nothing" seems just false; for we take (1) to be true, yet in the best worlds compatible with our information we do not do nothing. You reply that we can save the modal reading by redefining "best world" to mean, not what we would ordinarily take it to mean, but some function combining the goodness of the world with the probability of our attaining it given certain actions on our part (if I'm reading you right here). [Crucially, of course, this must be *subjective* probability; objective probability lands you in the same soup again.] But then the modal talk seems to take a very long way around to end up saying something that can be, and has often been, expressed in the less misleading language of "maximizing expected consequences."
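To make that last point concrete, here is a minimal expected-value sketch, assuming the figures usually given for the miners case (ten miners all in shaft A or all in shaft B, each equally likely on our evidence; blocking the shaft they are in saves all ten, blocking the other shaft drowns all ten, blocking neither lets exactly one drown):

E(deaths | block A) = 0.5(0) + 0.5(10) = 5
E(deaths | block B) = 0.5(10) + 0.5(0) = 5
E(deaths | block neither) = 0.5(1) + 0.5(1) = 1

Minimizing expected deaths already yields the verdict that, given our information, we ought to block neither shaft; no redefinition of "best worlds" seems needed to reach this result.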
John, There is a contradiction between (1) and (5) because you actually have removed the "if" clause of (2) and (3) by reaching (5), using proof by dilemma. You contrast the "logical" consequence of (5) with the "epistemic" consequence (1). You seem to be implying that the "epistemic" consequence somehow is stronger than the merely "logical" one--and I actually agree with you here, and think that we have no obligation to respond to facts whose existence we are both unaware of and incapable of practically becoming aware of in the requisite time for a decision. The problem remains that if we had evidence supporting (2)-(4), then we would also have "epistemic" reason to accept (5). I take this as a reductio of the claim that we could ever have evidence supporting (2)-(4), and as a pragmatist I take that to be equivalent to saying that they can't all be true. Since (4) is contingent, but obviously a claim which could be true, we must reject (2) or (3); since they're formally equivalent, we should reject both.
I take John Alexander's suggestion--that (2) and (3) have implicit epistemic qualifications on them which make them true--to be the most plausible approach, and am not sure exactly how the contextualist analysis is supposed to be better, or even different, really. It seems to me that Janice implicitly says that (1) has such a qualification when she says that we can only find it true "under an epistemic reading". I take this to just mean that what is really true is "If we have evidence E about the miners, then we ought to block neither shaft--and we have evidence E, so we ought to block neither shaft," where E is the evidence described in MINERS. I would just add similar qualifications to (2) and (3), making them come out true under different evidence conditions. I think this is implicitly what they mean if read as moral judgments, and anyone who insists, like Mickey in Jussi's post, that no, he really believes that we are morally obligated to block A just if the miners are in A, and that we ought to do so now even if we have no evidence about what shaft the miners are in, is simply confusing two senses of "ought"--see below.

I think this is even clearer if we analyze moral ought-judgments, as I think we should, as second-order approvals of norms of responsiveness to evidence, to be evaluated by assessing the total harm and good done by agents following such response norms across all possible worlds in which they could do so. This forces us to clarify what we are really doing as intentional agents. For we never just "block A"--in every circumstance in which we do this, we block A in response to the evidence we are confronted with [one might say "I'll always block A when I have the chance, whether there are even any miners around or any dangers"--but this is just a trivial form of a response norm, and is obviously one we must rationally disapprove of]. Blocking A in response to the actual fact of the miners being in A [apart from our evidence regarding the same] is simply not an action we can perform, and does not describe a response norm which we could approve or disapprove of.

This can take care of the King Henry example also. "Valuing today the knights' non-arrival if we win/lose tomorrow" is not a response norm, for the action of valuing the knights' non-arrival cannot be made in response to evidence of an event which has not yet occurred. The relevant response norm we can consider today is K: "value the knights' non-arrival when anticipating an uncertain battle tomorrow", and we have much reason to disapprove of such a norm. After the battle we may have reason to value part of the result of their non-arrival, but we will still have no reason to approve of anyone following K, including our past selves. This is because another likely part of the result of their non-arrival--losing the battle--is one we greatly disapprove of.

My view is also captured by the "subjective" side of the objective-subjective "ought" distinction. I don't like that language so much because I think it misleadingly suggests that the objective ought is still a moral one in some important sense. For we use "ought" ambiguously: sometimes in the moral sense of approving of a response norm and describing our obligations, other times simply to describe the most favorable possible outcome of some situation. The latter is not a moral judgment, IMO, and both apparent paradox and confusion result from treating it as one.
Doug, I never understood the problem of memory/difficulty of typing to be the problem that Howard-Snyder and Wiland's examples were getting at. So attaching the "securable" qualification to one's original intentions seems completely orthogonal to their main points. If you're trying to distinguish between, say, someone who could make *one* intention and thereby complete the activity as a guaranteed result of this intention, and someone who needs additional (contingent) intentions to do so, this fails, because we're all in the latter condition for most activities. You ask: do you think that it's likely that there is any intention that I could form now such that, were I to form this intention now, I would type out the next King Lear? But I doubt that Shakespeare ever had such an intention which was sufficient to perform the job; he needed others that occurred to him along the way. So would I. Now the *probability* of my being able to form such appropriate sequential intentions is much lower than that of Shakespeare doing so at some point in time, or Don Delillo, Toni Morrison, etc. But it is *possible* (and it is also possible that such masters of the craft would fail; it's a matter of degree, though a huge one).

I take the Howard-Snyder/Wiland cases to show (1) that the differences between logical possibility and what we might call morally practical possibility are so vast, much much vaster than what we see in more mundane examples like pressing a red or green button to stop a bomb, that to make moral attributes a function of the former instead of the latter is more palpably absurd than might previously have been thought; (2) that even if I were to press the right button, win the chess game, etc. by making random motions and hitting on the solution by luck, only the motor actions can be considered truly mine, and not the relevant success; the latter is not something *I* do as an agent, but merely something my bodily actions contribute to in conjunction with highly fortuitous circumstances, and hence not relevant to moral judgment of *my* actions, which are now more clearly seen to be based on intentional responses to the evidence which I have and can assess about my situation, which *must* include awareness of and appropriate responses to my cognitive limitations; and (3) [Wiland's addition] that this gulf appears not just on rare occasions, but constantly and massively.

Now, that said, you can invoke a theory by which you *call* anyone who fails to maximize the logically possible good (copossible with their current physical state, and notably including vastly improbable psychological possibilities) "wrong." But this now appears to be (1) a palpable abuse of the English language, and (2) a palpable abuse of language-independent moral concepts. If anything is true, we are not all morally wrong for failing to do these things at nearly every moment [this shows we are *imperfect*, but to conflate imperfection with wrongness now appears more vividly to be a salient fault of objective consequentialism]. My inability to come up with the next great novel or scientific document is not based on the fact that I couldn't form a single intention to put it all down at once, or that I would have some irresistible compulsion to mistype a crucial character or section part-way through.
It's that the subjectively-known chances of my successfully completing it, while non-zero, are so low that I would now be wrong to form and act on any intentions to try to produce such documents instead of doing other things with far higher subjective expected value. For forming and acting on intentions in response to accessible evidence is what agents actually do; whether they produce good results, write novels, or even succeed in pressing buttons is, strictly speaking, out of their control. They can only reach in certain directions with certain hopes and expectations; to judge them right or wrong based on the results is, strictly speaking, to confuse qualities of the resulting states of affairs with qualities of their actions. Now this is something that deontologists have long accused consequentialists of doing; and I think they were right. What's interesting is that a consequentialist response to this criticism is possible: it is subjective consequentialism.

You concede above that most of us are at most moments of the day not doing what we are, as of that moment, objectively morally required to do. But then you basically agree w/ Howard-Snyder and Wiland about the facts on the ground, so to speak; you just want to use different words than they do. The question then is: is it more, or less, confusing to say that Caesar was equally wrong not to have advanced human civilization by inventing the steam engine as not to have done so by restoring the republic, than to say what normal people would say about this case? I think it is deeply misleading to say such things; we would constantly have to be qualifying the term "wrong" using objective and subjective parameters. We could also say a dog has five legs, four regular ones and a fifth "tail leg." But why on earth should we do this, when ordinary language maps better onto the important real distinctions? We can say all you want to say about "objective wrongness" by using words like "actually bad results." But this has *nothing* to do with moral qualities of human action /except/ insofar as evidence for, anticipation of, etc. such results formed some part of some agent's intentional response to some facts.
I'm joining this late, but reviewing the comments so far, I think I most agree with Mike's view that a generalized Subjective Decision theory would advocate A3 in Normative Uncertainty, as well as A3 in Mine Shafts, SC being a special case of SD. Doug, you responded to this not by discussing SD, but something called "SC-subj"; I didn't follow whether by this you intended to be describing something like SD, or something else, so it doesn't appear to me that this point has been responded to.

I also like James Allen's suggestion that the value/deontic value distinction is untenable. Doug responded to this by saying that "The bearer of value is a state of affairs. A state of affairs can have more or less value. The bearer of deontic value is an act (not a state of affairs)." But states of affairs can be described as those in which some act has been done, or resulting from some act. This is true even if all future causal results of, say, act A and act B are identical, for we can speak of a state of affairs across a range of time, starting with the performance of A and B, and so the "state of affairs" resulting from them is different, even though all later time-slice "states" of the universe are identical. So unless the bare term "value" is restricted in some way so that it is not, in fact, value-all-things-considered, it must *include* deontic value. Then we may ask if deontic value must also mean value-all-things-considered. If not, then it is something less significant than value, and we can ignore it, since "value" includes it and other considerations as well; if it does, the two are identical. Or if "value" is restricted and "deontic" value is value-all-things-considered, then again, SD (generalized SC) tells us to pick A3.

I quite agree that the relevant kind of subjective expectation of value is based on one's evidence (not one's beliefs), as Doug argues for in the Huck Finn case. I simply add to this that the evidence for SC is, I think, fairly massive, when this is properly defined. Hence in any but very convoluted cases, SD immediately leads to SC, which is why SC is a very interesting theory, even though in the strictest sense it is not universally correct, as I suggested in Doug's earlier post several months ago attacking SC.
Commented Apr 27, 2010 on Consequentialism and Uncertainty at PEA Soup
I agree that Bloom's reference to the power of stories doesn't by itself count in favor of reason as against emotion in the formation of ethics. But I suspect a better point could be made from the examples: that stories can highlight for us the salience of some facts as against others, appealing thereby to intrinsically rational aspects of moral judgment. They may thereby make our moral judgments more rational, as when we are led to recognize similarities between ourselves and other beings (other races, genders, species) which we had previously not given due weight. In other cases they can make our judgments less rational, as when they downplay similarities and highlight irrelevant or imagined differences between ourselves and other beings. But even the latter may appeal to a rational attempt to make our moral judgments more coherent, just on the basis of false (though appealing) information.

Joshua: I don't think globalization by itself can save the contact hypothesis. Part of what needs to be explained is why we take some kinds of contact to be morally relevant, and others not. People have been encountering strangers for thousands of years, sometimes helping them but sometimes taking advantage of them. Stories, rational (or pseudo-rational) persuasion, or reflection may explain the difference more than actual physical proximity. I wouldn't equate rational morality with ethical theory, though; the latter may attempt to explain the former, but if ethics is based in reason, it can surely be based in the kind of reason that doesn't require a philosopher to incite us to engage in it. And philosophers can be as good at coming up with false theories of ethics as anyone else.
Commented Apr 14, 2010 on Emotion, reason, and moral convictions at PEA Soup
I've thought a little further about why I'm seeking a clearer definition of "normative fact." The problem is that I believe that subjective moral facts are fundamental, and other kinds of moral facts are derivative from these. Now, Doug announced in his initial post that he simply believes that there are objective moral duties describable by PO, so he believes there are objective moral facts. I do too, in a sense, but would say that objective moral facts simply describe what our subjective duties would be in the counter-factual situation where we knew all the relevant normative and non-normative facts.

But then, why can't an H-theory supporter just postulate the existence of hybrid moral duties? A hybrid moral duty is the one you would have subjectively if you had access to the full normative evidence, but not the full non-normative evidence. H1 could then be a true theory about such facts, and in turn very interesting. Now one could reject the idea that such facts exist; but this would require an argument. If the argument is that only subjective duty satisfies the condition that the right-making features of our actions are accessible to us in a way that can guide our actions, giving us an opportunity to respond to them, then PO theories face the same problem and should be rejected likewise. So in this sense--which I admit I didn't state clearly earlier, so this discussion has been helpful--PS and H1 theories can both be true, with different senses of "permissible" etc. in each. I am tempted to believe that only subjective permissibility is ultimate or "real permissibility", but then that would lead me to treat both PO and H1 theories as kinds of idealizations, and disagree with Doug that H1 theories suffer from some defect that PO theories escape.

Here's a related problem: it is indeed common to distinguish between objective and subjective obligation, facts, etc., and such appeals have been made throughout this discussion. But Doug's initial message distinguished the theories on how they defined "permissibility." He didn't say "subjective permissibility" or "objective permissibility." If he means either, then this is muddled; for surely PO theories do not say that what is subjectively permissible depends upon objective facts, nor do PS theories say that what is objectively permissible depends upon subjective facts. Nor can he mean just one or the other throughout the post. Perhaps he means objectively permissible when talking about PO theories, and subjectively permissible when talking about PS theories. But if such equivocation is acceptable, then again I see no reason why we can't find it useful to introduce a third concept and talk about hybrid-permissibility for H theories.
Doug, I think as I described it, SU could be the true theory of something very interesting: what most normal people should do. Now, show me someone who's been raised in unusual circumstances, with distorted normative evidence, and I might concede that such a person should not follow SU, and acts maximizing expected utility would not always be right for the person to do, because I subscribe to a deeper PS-type theory which absolves him. What would you call a theory that says that the vast majority of normal people should maximize expected utility (and the rest would also have been so obligated, had they not suffered from severely defective normative evidence)? If not SU, we need a new name; and we have been talking at cross-purposes.
I meant *for* all agents, not *about* all agents, in my last post. I apologize for this and other spelling errors recently; I am accumulating evidence that I should preview posts and edit them instead of writing them too quickly between sessions at a conference. This would produce better consequences, which I long ago decided is what I ought to produce. :-)
Consistent w/ my last post, I would say that in the situation you just described, S ought to do Z. Subjective utilitarianism, by saying S should do Y, is strictly speaking incorrect. But it is correct for a large number of other cases, perhaps most, and hence should be developed by conscientious moral philosophers, and its practical adoption should be urged about all agents (along with the relevant evidence supporting the validity of its basic norms).
I agree both with David Faraci's first post and Daniel Elstein's message above. Doug, I'm still a little puzzled about the strong antagonism you suppose must exist between PS and H1 theories. One might say the same thing about Einsteinian and Newtonian physics: they can't both be true. Well, yes, in the sense that they can't both be ultimate, complete descriptions of the universe. But if (say) Einsteinian physics is this (it probably isn't, of course), then this gives us excellent reasons for adopting Newtonian physics for 99% of all applications, reserving the ultimate theory for unusual cases. So when you ask if I am claiming we "should adopt false theory", I say of course, if "adopt" means "use in practice in relevant circumstances, which may be most of the time." If "adopt" meant "act as if it was the ultimate, foundational truth," then no. But we don't need to think of H1 in the second sense to be "interested in developing" such views; the first sense of "adopt" is quite sufficient to motivate developing such theories.
I was trying to think of a good analogy to this; this is a little rough, but try it. One might debate the problem of what kind of bridge to build over a certain river. One might say there is structural uncertainty--what kind of bridge shape would be best to use--and load uncertainty: what vehicle weight it is likely to bear. To some extent, the structure you build limits the load: if you have a two-lane bridge, you can be pretty certain the maximum load it will have to bear is less than if you have four lanes. If you have a toll bridge, you can be pretty sure that the total number of vehicles on the bridge will be limited by the rate of passing the toll gate, limiting the load further. And so on.

Now in general we can have a double-uncertainty theory about the bridge: we ought to build the bridge that is best given the geographical constraints and carries the load safely, and if we don't know both of these, or either, we are correspondingly uncertain what bridge we should build, and would perhaps not be wrong in being safe/conservative in our plans (or building no bridge at all). Call this SE, for subjective engineering. Ultimately that's the best, most rock-solid truth we can assert about how we should build the bridge. But that may not get us very far. So we do a little work and think: well, look, it's not a very wide river, and there are high banks on either side, so a two-point cantilever is obviously the way to go. We could be wrong, but our evidence for this might seem pretty compelling. So we now have a new theory: we should build a cantilever bridge strong enough to handle the likely load. Call this HE1 (hybrid engineering type 1).

Now we have a further question: what is the maximum load we need to support? This is a whole new question, and different answers to it will give you different-sized bridges. We could play it ultra-safe and build a massive bridge, but that may be wasteful; or we could cut corners and end up having one that wears out dangerously soon. We gather evidence and build accordingly; but we might remain much more uncertain about these numbers than about the cantilever design. But we base such actions on HE1, even though this is strictly subordinate to and derivative from the more fundamental SE. We could go on and talk about an objective engineering principle, OE: build the bridge that will handle all actual future traffic safely at optimal cost. But this is pretty useless to us, since we can't know the exact values of the relevant variables; at best we approximate this with the evidence we have. But the evidence about the bridge design may be very clear, the evidence about the load more uncertain. So for practical purposes developing HE1 is a good idea.
As a promoter of both PS and H1 theories, I would argue that the attractiveness of H1 is that there may be reasons for thinking that most of the normative uncertainty can be far more easily resolved than the non-normative uncertainty, and that this is especially true if resolving the moral uncertainty leads us to consequentialism. For under this theory, an infinite number of future non-normative facts may be relevant to our choice; we can only hope for some broad, approximate resolution of these, and must often concede that the evidence for them will vary amongst conscientious agents. However, there might be reasons to think that very few conscientious agents could be absolved of ignorance of the relevant normative facts which should lead them to adopt consequentialism, or at least to have reasons to give consequences great moral weight. If so, it is worth developing an H1 theory, for use by most conscientious agents (or the most conscientious agents!), while conceding that it is at least theoretically possible for some conscientious agents, in unusual circumstances, to be morally permitted to follow a non-consequentialist theory.

Hence I agree that PS theory is ultimately the best for action-guiding; and as a good pragmatist, I would say this also means that the true moral theory is ultimately of this form. But such a true PS theory could rather quickly give moral agents reasons to adopt an H1 theory, only recognizing the possibility of a PO theory as a kind of abstract ideal. It is our obligation to in some way approximate our behavior to what the PO theory would say; but keeping our feet firmly grounded in PS, I would deny that we are actually obligated to do what a PO theory says we should when the relevant evidence is inaccessible to us (as it always will be, for consequentialism at least).
Steve, I wouldn't take "I don't know whether A is what I ought to do" quite literally, or attempt to preserve its apparent conceptual structure, which I think is just slightly confused, though in practice people often enough derive correct implications from it, namely that one should think harder or gather information (if there's time for this). You are correct about the second two claims; I didn't say that the airport situation gave me insufficient info to commit to any action, but to "some", by which I meant some specific action, i.e. leaving at time X. I do have enough information to commit to information-gathering action. However you may also be making the point that "I don't know what to do" doesn't always mean "I ought to gather more information." This is true; sometimes when one must choose between X, Y, etc. it is unclear whether it would be more useful to gather more information, think harder about the information you have, or just commit to X or Y etc. and hope for the best. I don't want to rule out ambiguity, but I think that to the extent that one's evidence about what is best is truly ambiguous (we're not just pretending it is out of self-serving convenience or laziness), then the ambiguously-matched options really are approximately equally right, and we ought, then, to choose between them. One of them may later turn out to have been truly leading to the best consequences, but given what we knew at the time, there was no truth of the matter as to which one option out of those evenly matched in subjective evidence of their goodness was the one we ought to have chosen.
Commented Sep 28, 2009 on Against Actualism at PEA Soup
Steve, I think a purely subjective consequentialism can account for claims like 'I just don't know enough to be able to tell what I ought to do', or the need to ask for advice from better informed people. The second is straightforwardly justified by the fact that we often have (subjectively available) evidence that our current evidence, or our action-guiding beliefs (which may or may not properly correspond to that evidence), is likely to be wrong and could lead to harm if acted on, while the cost of gathering more information (like seeking advice from better-placed persons) is relatively low. And the fact that we have such subjective evidence is an objective fact about us. This is why I relativize our obligations to evidence, not to beliefs; the latter could be made up as we go along, but the former is not entirely up to us. And there's nothing paradoxical about having evidence that one's current evidence about P is inadequate for committing to some action; we have precisely such evidence in many cases, as when I think I was told that I need to pick up somebody coming in from Atlanta at 3, but it could have been 4, and I know I could potentially save myself a lot of trouble by going online to look at the flights.

I take the first statement to be a more roundabout way of saying roughly the same thing, though of course you could also say it in cases where there is no time/opportunity to gather more information. In that case, I take it to mean that your confidence that you will not cause harm is not high, and you hence have grounds for being uncomfortable with some imminent choice. My take on that is that being hesitant and uncomfortable with your decision is often productive and morally good, for it can motivate you to search for more information which might help; but if you really can't get it, and act on uncertain information, you are actually less blameworthy, if a "mistake" is made, than the phrase suggests. Thanks for the Thomson reference; I'll look up what she says on this.
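To illustrate the airport case with a toy calculation (all the numbers here are hypothetical, just to make the structure vivid): suppose my evidence makes the 3 o'clock arrival 60% likely and the 4 o'clock arrival 40% likely, that missing the pickup costs 10 units of trouble, and that checking the flight online costs 1 unit. Acting on my best guess and simply leaving for a 3 o'clock arrival carries 0.4 x 10 = 4 units of expected trouble; checking first costs 1 unit and reduces the expected trouble from a wrong guess to roughly 0. So my subjectively available evidence about my evidence already tells me that the information-gathering act is the one I ought to perform.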
Commented Sep 24, 2009 on Against Actualism at PEA Soup
I'm finding both alternatives highly problematic, and want to suggest a third possibility, or a set of them. Moral obligations are functions neither of what will actually happen, nor of what could happen, but of what our current evidence suggests will happen as a result of our various choices. For consequentialists like myself, this is the view called subjective consequentialism, as opposed to objective consequentialism; I think a parallel distinction could be made in deontology, but I don't know if anyone's systematically discussed this.

Now if we try to reconstruct the example in these terms, the prerequisite for making it remotely plausible that the kidnapper ought to rape the girl is that he has strong evidence that if he doesn't, then he will kill her, but if he does, he will release her afterwards. It might appear that he could have such evidence, say if in past kidnappings he has done one or the other, and never anything else. But really it takes more than that: he would need evidence that he will come under some compulsion to kill her if he fails to rape her, and that it is not possible for him to simply release her unharmed. Now I can easily imagine someone telling himself this, as a way of trying to excuse his behavior. It is much harder to imagine someone actually having serious evidence--as opposed to a self-serving belief--that this is the case. He would need evidence that in fact he will lose control of his agency, his power to choose, coming under the grip of unstoppable compulsions, as a result of some choice that he currently can make freely.

I think that some of the conundrums encountered here come from too-easy reliance on stipulations of the hypothesis, like "we know that if he doesn't A, he will B...". But how can we know this? More to the point, how can the kidnapper know this, or have reliable evidence supporting it? I reject ideal observer theories in part because they lead to counter-intuitive results like the ones being considered here; they leave out the question of whether the agent has reasons to do A or B based on the situation he finds himself in, and attend to the (relevantly irrelevant) question of what an ideal observer has reasons to advise him to do. The only way I can easily imagine the agent having reliable evidence for the results is something like the Dr. Evil example: the agent is assured by some powerful external force which he cannot control that certain results will come from his choices, and certain other alternatives will become unavailable as a result of some choices.

Even then, I think there's something misleading about saying:

>If you aren't going to press the red button, you ought to press the blue one.

I would again ask the antecedent to be recast as "if you have strong evidence that you won't press the red button" in order to make the consequent plausible. But what could such evidence consist in? If the red button is fenced off or taken away, perhaps. But if it is within your power of choice to press the red button, then it seems to me that you cannot possess strong evidence that you will not press it; there is nothing preventing you from doing so. That's not to say that you will press it; you might not (say, on a sudden and unpredictable impulse). But without an antecedent specifying evidence that you will not do so prior to making the choice, I don't find the conditional compelling. If such compelling evidence exists, then I would say yes, you ought to press the blue button, seeing to it that the girl is harmed but not killed, etc.
Then this result seems tragic but not paradoxical.
Commented Sep 19, 2009 on Against Actualism at PEA Soup
I find Korsgaard's regress argument plausible, especially in the form given to it in The Constitution of Agency, p. 316: “even if we know what makes an action good, so long as that is just a piece of knowledge, that knowledge has to be applied in action by way of another sort of norm of action, something like an obligation to do those actions which we know to be good.” In response we may simply postulate the existence of moral rules, and their intrinsic normativity. But both claims beg the relevant questions. “The trouble with that strategy is that it leaves us with two problems, which in the end come to the same thing. First, it does not tell us why there is such a rule. Nor, if this is a different question, does it tell us why we should conform to the rule.”

One way I interpret this as leading directly to the regress is that a moral norm or maxim takes the form of N1: "if conditions C1 are true, then you ought to A." But if the norm is itself defined in terms of some conditions C2 about the natural world, then we need a norm like N2: "If C2, then accept N1." But then we need to seek out the conditions that make N2 true, and the regress is rolling on. We might like to stop the regress by saying: but if conditions C2 are true, then we *do* know enough already to know N1, and hence to do A given C1. But simply asserting that C2 exists and gives us a "reason" to adopt N1, or constitutes the truth of N1, isn't enough, unless we *already* have some knowledge of a norm authoritatively connecting C2 with acceptance of N1. But establishing norms was the initial thing we were trying to do, so assuming such a norm simply begs the question.

I also read these & surrounding passages as saying that the problem with "moral realism" is not so much that it is false, but that it is misguided and misleading. It treats moral knowledge as a simple kind of acquisition of perceivable or intuitable facts. This suggests that the first thing we do, or need to do, is to get our intuitions/moral perceptions straight. But in her constructivist view, the first thing we need to do is make sense out of our ability to choose and value things, that is, of our practical agency. If we choose in certain ways, we encounter a contradiction in the will, which in Sources of Normativity she also describes as a clash with our practical identity as agents. Therefore we can't make complete sense of such choices on their own terms. Now, when this happens and we recognize it, we may interpret this as "having perceived a reason not to do X anymore, to do Y instead," etc. The error of moral realism (or more properly, "substantive realism," for she sometimes calls her own view procedural realism) is to think that that's all there is to it: we perceived something we call a reason, we acted on it, and nothing else happened there. It's not much different from seeing rain and getting the umbrella; we see the rain as wet, and also see its reason-to-bring-an-umbrella property, and we're done.

But Korsgaard is getting at the question: what is that "reason," where does it come from? And her answer, roughly, is that we treat things as reasons by choice, and impose "reasons" upon the world; but that there are objective facts about which kinds of imposed reasons are consistent with a coherent sense of ourselves as agents. We can describe the requirement to treat certain things as reasons to act as norms, and conclude that failure to adopt certain norms constitutes a failure of your attempt to be an agent guided by principled reasons of any kind.
Rejection of certain norms, like the value of respecting rational agency itself (in you or others), turns out to make your agency more or less incoherent. Realists might want to interpret this as moral reality forcing itself in on us, but Korsgaard's picture is that it's more like reality is just sitting there inert, and it's our practical reasoning which is trying to make sense of our attempts to aim at certain goals. The clash of a moral error is not between our mind and the world, but between two parts of our mind, which then prevents us from being a complete agent. We can restore our agency and make sense of what we are doing by selecting different principles/maxims. The corrected principles will indeed tell us to respond to specific worldly conditions in certain ways, treating them as "reasons" to do various things, but it's the criterion of internal consistency that forced us to do that, not some property of "being a reason to A" that in any sense resides in the external world and was there to be perceived. Put another way: reasons are not perceived, but assigned; all the same, some assignments of reasons lead to incoherency/fragmentation of our agency, and this is an objective fact--but it's more a fact about the nature of agency than about the objects we are treating as reasons (substantive realism reverses this). So to restore our full agency we must change our ideas about what counts as a reason for what until we eliminate this internal clash. Just my 2 cents. If anyone thinks I'm misreading Korsgaard here, and importing my own ideas into her views (which I fear I may sometimes do), let me know.
Commented Aug 24, 2009 on Korsgaard on Moral Realism at PEA Soup