This is Toby Ord's Typepad Profile.
Toby Ord
Oxford
Recent Activity
Richard, Thanks for the detailed explanation. I think you have succeeded in giving me a better idea of where you are coming from, but I'm afraid that it is not somewhere I'd like to be. In short, I think that the concept of fittingness has a lot of hidden complexity (and possibly vagueness) and is more complex than 'good', which it is being used to define (and which, along with 'proposition', is all that I require). I think that the conceptual framework that I have is simpler and more elegant than yours, though I'd probably have difficulty convincing you of that, and you probably think the converse. Thanks for explaining it though, and hopefully this comment thread will be a useful place for us to point others who have a similar clash of conceptual frameworks for consequentialism.
OK, good, it looks like this conversation has been useful. So the final thread remaining is the 'goodness as reason to desire' thread, and I think I see some light in that tunnel.

"Isn't it obvious that the inducement of the evil demon doesn't make pain desirable, but rather only makes the state of your desiring pain desirable?"

I might be willing to grant that the demon's threat doesn't make pain desirable, but I'm not sure where the term 'desirable' came from. I would deny that when it is good to desire X, X is thereby desirable (the same goes for when we have reason to desire X). In general I think that 'desirable' is a pretty flexible word which can mean several different things in different contexts and is best avoided. I would say similar things about 'rational'. I'm not sure that 'rational belief' means the same thing as 'belief that you have reason to possess'. Indeed, it seems that you are committed to some kind of substantive connection between reasons and rationality that I would probably deny.

Perhaps it would be useful to say a bit on the topic of rationality. I think that 'rational' has two types of meaning within the type of theory of rationality that I support (i.e. Bayesian belief formation + decision theory). It is applied to belief formation in a sense that ignores consequences and is purely epistemic; then it is applied to outcomes of the reasoning process in a way that takes into account consequences (well, they don't have to be consequences, but let's just say that they are). Now these outcomes of the reasoning process are often called 'actions', but are perhaps better called 'choices' as they can be broader than what we commonly consider as actions. For example, Jeffrey in The Logic of Decision represents them as arbitrary propositions. If so, then it is possible to have beliefs as the output of decision-theoretic reasoning (more accurately: the proposition that you hold the belief).
If so, then there are two types of label 'rational' that can be applied to the holding of beliefs: the Bayesian-belief-formation one and the output-of-decision-theoretic-reasoning one. (I'm not sure if Jeffrey notices this consequence of his theory.) In the case you give, they conflict, meaning that I can't simply answer your questions. Ultimately, I think that it is not Bayesian-belief-formation-rational to believe what you don't have evidence for, but it might be decision-theoretically-rational to do so if the outcome is expected to be good (assuming it is a belief you can voluntarily form).

You seem to hold a roughly analogous position for reasons, but whenever the term would be overloaded (outcome-related practical reasons versus epistemic reasons or the like) you only seem to consider the special type of reasons and not the more general ones. I consider both, so I think there is a sense of 'reason' in which I have reason to believe the truth, another sense in which I have reason to believe what the evidence points to (both of these are epistemic), a third sense in which I have reason to believe what leads to the best outcome, and a fourth sense in which I have reason to believe what leads to the expectably best outcome (both the latter types are consequentialist). I don't think the latter types go away when there is a former type present. I just think that the question 'what do you have reason to believe?' becomes ambiguous and misleading, since there are multiple answers on different senses of 'reason'. If forced to choose, I would ultimately go with the consequentialist ones in every case, as it is more important to lead to more value than to hold a correct (or justified) belief. In practical decision making I *am* forced to choose, since the accurate belief may be different to the utility-producing one, but in conversation I'm happy to recognise that there is something I'd call a reason which is epistemic and not consequentialist. I hope this helps!
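The two labels of 'rational' distinguished above can be put in miniature as follows. This is a hypothetical toy model of my own (the functions, probabilities and utilities are all illustrative assumptions, not anything from the thread): one evaluation of holding a belief is purely epistemic (Bayesian updating on evidence), the other is decision-theoretic (expected value of the proposition that you hold the belief), and the two can come apart.

```python
# Toy sketch (illustrative numbers only): two senses in which
# holding a belief can be called 'rational'.

def epistemic_credence(prior, likelihood_if_true, likelihood_if_false):
    """Bayesian belief formation: posterior probability of a hypothesis
    given the evidence, ignoring consequences entirely."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

def expected_value(probs, utilities):
    """Decision theory applied to a 'choice' in Jeffrey's broad sense --
    here, the proposition that you hold the belief."""
    return sum(p * u for p, u in zip(probs, utilities))

# The evidence tells strongly against the belief being true...
posterior = epistemic_credence(prior=0.5,
                               likelihood_if_true=0.1,
                               likelihood_if_false=0.9)

# ...yet holding the belief leads to an expectably better outcome
# (say, a demon rewards believers).
value_if_held = expected_value([0.9, 0.1], [100.0, -1.0])
value_if_not_held = expected_value([0.9, 0.1], [0.0, 0.0])

# The two labels conflict: epistemically irrational to hold,
# decision-theoretically rational to hold.
print(posterior)                              # low credence
print(value_if_held > value_if_not_held)      # True
```

Nothing hangs on the particular numbers; the point is only that the two evaluations are computed from different inputs (evidence versus outcomes) and so can deliver opposite verdicts on the same belief.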
Richard,

(i) "Do you deny that the world-evaluations offered by an axiology entail local evaluations of particular things as more or less fortunate? Or do you accept this and merely hold that this entailment doesn't prevent the local evaluations from being 'further claims' in some interesting sense?"

I'm not sure which. However, I think that there are many people who might accept an axiology in the world-evaluating sense but would stop there. These people are non-consequentialists, or at least people who are not direct consequentialists (Brad Hooker takes this view for instance). They might agree that one act would lead to a better outcome, but not that it is the better act, or that a motive would lead to the better outcome, but not that it is the better motive. In the case of Hooker, he doesn't think a moral theory assesses motives one way or the other. I'm really quite unsure that we have a substantive disagreement on this topic though, as you agree with the claims made by what I call semi-normative consequentialism; you just think that the evaluative parts are trivial (and thus that Hooker et al are trivially mistaken). This is something to take up with Hooker et al, not with global consequentialists, who are in fact the only people who explicitly agree with you on this point!

(ii) "Do you think that GC makes further (non-evaluative) claims? If so, what are they claims about? Do they have implications for the rational 'correctness' of our responses to the world (and if so which ones?), or do you think that one can make normative claims that have no such implications for us?"

The additional normative claims (rightness claims applied to all evaluands) are made by what I call normative GC (as opposed to semi-normative). Personally I'm pretty untroubled as to whether to accept normative or semi-normative GC. That said, I'm quite partial to scalar consequentialism anyway and thus also scalar GC (which is just the axiology plus the direct evaluations of all evaluands).
I thus don't know exactly what rightness claims are about, or whether they apply to acts or other evaluands, as I'm not very interested in them (which makes me not as good a sparring partner for you on this point as I could be!).

"I trust that you wouldn't have been so tempted to ask why having a benefit from acting can rationalize action, whereas having a benefit from desiring p doesn't rationalize desiring p, but merely desiring to desire p. I take this to be an obvious datum, not an 'odd consequence'!"

Actually, I would have asked this too. Boring though it may sound, I have no idea why you think this and I don't find it obvious at all.
Richard, I am very confused by your explanation of why you treat the reasons-for-acting case so differently to the reasons-for-desiring case. One thing is that you changed the terminology from 'reason to desire' and 'reason to do' into 'reason for desire' and 'reason for action', which is a different part of speech and seems to confuse the issue (the same goes for the introduction of the term 'rational'). My puzzle is why having a benefit from acting gives a reason to act (not a mere reason to desire that we act), whereas having a benefit from desiring doesn't give a reason to desire, but just a reason to desire to desire. This is certainly an odd consequence of your theory and is where it comes apart from GC (and common sense, I think). I understand that one might be able to couch everything in terms of voluntary action and thus focus on the voluntary transitions between states rather than the states themselves. However, I don't know why you would want a theory to do this, when it adds various complications. Of course, this is probably a big topic in the discussion of reasons (which I'm not familiar with) and I'm not certain that a long comment discussion between us is the best way to work it out.

My main point is that I don't think you have deflated GC. I think you have just shown that:

* If you understand rightness and goodness in terms of a conception of reasons in which there is a strong asymmetry between reasons to act and other types of reasons, then GC is deflated.

I am pretty happy to consent to this conditional, but one would need to do a lot of arguing for its premises.

On the matter of GC being more expressive, I think this comes down to your understanding of an axiology. I understand it as a function from states of the world (including the entire future and maybe the past) to some kind of numbers, such that we can talk about different outcomes being better or worse than each other and maybe about degrees of betterness.
To determine the goodness of an entire world, we often break it into the intrinsic values of some of its parts (such as happy people), but it is not always fully separable (for example, distributional effects might get in the way). I am not aware of anyone else who disagrees with this conception of an axiology. Axiology in this sense has no intrinsic connection to consequentialism -- it is a study of the goodness of states of the world, but doesn't imply that we should assess (the instrumental value of) actions in terms of the state of the world which is the outcome of that act, or that we should assess (the instrumental value of) motives in terms of the state of the world which is the outcome of having that motive. To do so is to add something other than an axiology.

AC is often taken as an axiology plus a connection to rightness via outcomes of acts, such as:

(1) an act is right iff it leads to the best outcome

If so, then AC does not yet assess the instrumental value of motives. If you also add something like:

(2) an X is best iff it leads to the best outcome

then we can also evaluate motives, rules, dispositions, etc. and have a form of GC which has rightness for acts only and betterness for all evaluands (we could call this semi-normative GC). You may think that all people who accept AC should accept (2) as well. I think this too, but I recognise that it is a further step, and one that can dramatically change how one views consequentialism in its relationship to virtue ethics and deontology. I also think that Mill, Bentham and Sidgwick accepted (2), so I know it is not new. However, in my dissertation I go to a lot of effort to show that it is quite difficult to spell out (2) in a way that works, but that it can be done and that what follows from it is very important to understanding consequentialism. If any pea soupers are interested, I can send them a copy (just search for my address on Google and email me).
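The structure just described can be sketched schematically. This is a toy model of my own devising (the worlds, evaluands and the additive axiology are all illustrative assumptions, and real axiologies need not be separable in this way): an axiology maps complete world-states to numbers, claim (1) applies that ranking to acts via their outcomes, and claim (2) extends the same move to every kind of evaluand.

```python
# Toy sketch (all data invented for illustration): an axiology plus
# claims (1) and (2) from the text.

# An axiology: a function from complete states of the world to numbers.
# (Here a simple additive one; separability is an extra assumption.)
def axiology(world):
    return sum(world.values())

# The outcome (world-state) that each evaluand would lead to.
outcomes = {
    ("act", "keep promise"):   {"alice": 5, "bob": 5},
    ("act", "break promise"):  {"alice": 9, "bob": 0},
    ("motive", "benevolence"): {"alice": 6, "bob": 6},
    ("motive", "spite"):       {"alice": 2, "bob": 1},
}

def best(kind):
    """Claim (2): an X is best iff it leads to the best outcome.
    Restricted to kind == "act" this is just the outcome-ranking
    behind (1); applied to every kind of evaluand it gives the
    betterness claims of (semi-normative) GC."""
    scores = {name: axiology(w)
              for (k, name), w in outcomes.items() if k == kind}
    return max(scores, key=scores.get)

print(best("act"))     # act leading to the best outcome
print(best("motive"))  # the same evaluation extended to motives
```

The point of the sketch is only that (2) is a genuinely further step: the `axiology` function alone ranks worlds, and nothing in it forces the instrumental evaluation of acts, let alone of motives, rules or dispositions.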
Richard, Thanks for the clarification regarding your claim about rightness as 'reason to do'. In that case, our dispute really is whether rightness is best understood in terms of reasons, and if so, whether the appropriate sense of reasons is one in which we can have reasons to have a certain eye colour, reasons to have a certain character, reasons to love, etc. I personally wouldn't understand rightness in terms of reasons, but even if I did, I think that people talk about many more types of reasons than you allow for. GC lets us take these at face value, whereas you have to describe them as disguised reason claims of a certain restricted set of types. Maybe that is the best way to go, but it doesn't strike me as a strong argument when GC more accurately reflects the surface language, is a simpler theory than AC and is more expressive in important ways (such as its assessment of character, principles, decision procedures, institutions etc).

Regarding goodness as reason to desire, I'm glad to see that you don't think pain would be good in the world I describe, but I can't see how you can avoid it... If someone will kill me unless I sit down, then I have reason to sit down, right? So it seems to me that if someone will kill me unless I desire pain, then I have reason to desire pain. I don't understand how (or why) you treat these cases differently. Or are you saying that in the first case I just have reason to desire to sit down?
Richard, As you know, I disagree with your conclusions, and I hope I can helpfully point out where the disagreement starts:

"I take it that to say what's good is to say what we have reason to desire, whereas to ask about what's right is to ask about what we have reason to do."

I think this goes wrong on two counts. Firstly, I don't see how defining 'good' in this way does justice to what most consequentialists mean by the term. If a demon will cause unending torment unless we all desire mild pain, then we have reason to desire mild pain, and by your definition mild pain is good, even though intuitively it is just that the desiring of pain has become instrumentally good, not that the pain itself has become intrinsically good. Secondly, if you define 'right' as what we have reason to do, then strong versions of global consequentialism (those which apply the term 'right' to all evaluands) are obviously false and you don't even need to continue on to your explanation. This just appears to beg the question against global consequentialists.
"But what about personal beauty, which our evidence suggests is one of our most positional goods?"

I find this quite puzzling. I like to see beauty in architecture, gardens, artwork, and of course people. If offered a deal where I could make everyone else more beautiful while leaving my level of beauty fixed, I would definitely take it (even for my own sake). Doesn't this hint that beauty is not very positional (and indeed that it has positive externalities)? Or are my preferences just very different to the norm?

"And since government spending seems far more positional than income, shall we greatly reduce our unprecedented levels of such spending?"

Why do you think government spending is so positional? What am I missing?
Commented May 18, 2009 on Against Makeup? at Overcoming Bias