This is Richard Yetter Chappell's Typepad Profile.
Richard Yetter Chappell
Princeton, NJ
Recent Activity
I think the restrictions follow quite naturally from thinking about content (e.g. of desires) in terms of possible worlds rather than in terms of words. When thinking about these cases, I ask myself, "What state of the world is this agent aiming to realize?" I imagine presenting the agent with various possible worlds (seen through Kripke's trans-world telescope, so to speak, or perhaps given in some Chalmersian canonical language together with the cognitive capacities to immediately grasp all that follows from the fundamental facts), and asking them, "To what extent is *this* what you're after?" This sort of approach invites us to go beyond "face value" when talking about desires described using natural language. Judging by the other comments up-thread, I'm not being completely idiosyncratic here, but it does appear to be more controversial than I would have expected, so that's interesting.

re: Arntzenius and McCarthy -- Yes, I'm very sympathetic to their approach! I don't really get the force of your response (or the suggestion that it's "question-begging" to assume that the disvalue of pain is linear and continuous). You insist that "Surely a person whose stable preferences dictate that she’ll smoke only a few cigarettes and then quit so as not to endanger her life unduly [under the "stochastic hypothesis" that each cigarette has an equal small chance of triggering lung cancer, and the pleasure from smoking each is independent of how many others are smoked] is rational in light of her ends." This just sounds like an agent who doesn't understand expected value and irrationally ignores low-probability risks, no matter how dire.
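The expected-value point can be made concrete with a toy calculation. All numbers below are hypothetical, chosen only to illustrate the structure of the stochastic hypothesis (independent chances, pleasure independent of count):

```python
# Toy model of the "stochastic hypothesis": hypothetical numbers only.
PLEASURE_PER_CIGARETTE = 1.0   # utility gained from smoking one cigarette
CANCER_DISVALUE = -10_000.0    # utility lost if lung cancer is triggered
P_TRIGGER = 0.001              # independent chance any one cigarette triggers it

def expected_value_of_nth_cigarette(n: int) -> float:
    """Every cigarette has the same expected value: its pleasure plus the
    probability-weighted harm. The count n is deliberately unused, since
    under the stochastic hypothesis each chance is independent."""
    return PLEASURE_PER_CIGARETTE + P_TRIGGER * CANCER_DISVALUE

for n in (1, 5, 100):
    # Each cigarette is net-negative in expectation, regardless of n.
    print(f"cigarette #{n}: EV = {expected_value_of_nth_cigarette(n):+.1f}")
```

With these assumptions, each cigarette is net-negative in expectation, and the verdict is the same for the first cigarette as for the hundredth; so stopping after "a few" has no special rational standing.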
Commented Aug 15, 2016 on Do we have Vague Projects? at PEA Soup
Hi Sergio, thanks for your response. I should clarify that I am skeptical as to whether any of our projects are truly unachievable by means of choosing on the basis of rational pairwise preferences. My linked discussion of the self-torturer case explains why I think the pairwise preferences you ascribe to ST are invariably irrational, for example.

I actually doubt it's possible to have the sort of structure you describe without relying on vague sufficiency-type desires. I interpreted you as holding that ST merely has a coarse-grained desire to lead "a relatively pain-free life", for example, because once you instead consider her situation in terms of competing graded pro tanto desires for more money and less pain, it's provable that later individual increments are net negative in value for ST, whereas you claimed that "in any isolated choice, she must (or at least may) choose to turn the dial".

Your builder example sounds like the cynical book-writer to me. If they only care to do a "good enough" job, to make sense of this I have to ask myself, "good enough for what purpose?" Presumably it's to achieve some kind of social consequence: good enough to avoid getting sued, or to secure a positive recommendation from the customer, or some such. But there's nothing vague about any of that.
Commented Aug 15, 2016 on Do we have Vague Projects? at PEA Soup
Hi Nate - Even if we accept counterfactual indeterminism, I think the desire's satisfaction isn't really vague in the relevant sense (of individual increments making "no difference"), but rather probabilistic. If each additional stone raises the proportion of nearby in-the-hill worlds in which you are successfully guided, then traditional expected value approaches suffice (contra T&R) to give us reasons for each incremental act here.
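To illustrate the probabilistic (rather than vague) structure with made-up numbers, none of which come from the original discussion: if each stone raises the probability of being successfully guided by a fixed increment, standard expected value assigns every stone a positive marginal contribution, so no increment makes "no difference":

```python
# Illustrative sketch with assumed numbers (not from the original post).
SUCCESS_VALUE = 100.0     # assumed value of being successfully guided
P_PER_STONE = 0.002       # assumed rise in success probability per extra stone
COST_PER_STONE = 0.05     # assumed effort cost of placing one stone

def marginal_expected_value() -> float:
    """Expected contribution of one additional stone: the probability
    increment it adds, times the value of success, minus its cost."""
    return P_PER_STONE * SUCCESS_VALUE - COST_PER_STONE

# A positive marginal EV gives a reason for each incremental act,
# even though no single stone guarantees (or is necessary for) success.
print(marginal_expected_value())
```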
Commented Aug 15, 2016 on Do we have Vague Projects? at PEA Soup
I think it's probably clearest to think of the agent's desires here as corresponding to a preference ordering over possible worlds. They prefer the more-restful worlds over the less-restful ones, and there isn't anything vague about that. You can probably translate this into proposition-talk, but I don't see that much of interest hangs on that.
Commented Aug 13, 2016 on Do we have Vague Projects? at PEA Soup
Hi Jamie - thanks, that's an interesting case!

Jussi - I meant the claim in sense #3: it'd just be a bizarre thing to care about. (Not sure what further explanation I can offer if one doesn't share this intuition upon seeing the alternative interpretations of possible desires in this vicinity.)
Commented Aug 12, 2016 on Do we have Vague Projects? at PEA Soup
Andrew - yes, that seems right to me (assuming the agent still believes in something that plays the qualitative role of hair). A third way to accommodate this would be to posit that (most of) our desires fundamentally involve qualitative concepts, which is something I had in the back of my mind when thinking about the cases in the OP. Insofar as they are fleshed out by reference to something like a kind of phenomenal image of what we care about, it won't matter what higher-level concepts or descriptions apply. (Or is this just what you were thinking of with non-conceptual content?)

Hi Tristan - you can think of the question as whether we have (rational) desires with vague contents.
Commented Aug 12, 2016 on Do we have Vague Projects? at PEA Soup
Hi Alex, that sounds interesting. Could you flesh out an example of a borderline case of intentionally causing harm, to help me get a better grip on the idea? (Are deontologists generally on board with the idea that some acts are of indeterminate permissibility?) It may be that deontic goals provide a second class of exception, then. (I'm reminded of this old discussion with Doug Portmore about the case of fairness as rough equality.) Though they at least won't pose any sort of problem for utilitarians, insofar as utilitarians have independent grounds to deny that they are reasonable goals to have.

Kenny - thanks. That sort of open-endedness does seem an important feature of many of our projects, I agree, though as you say it's quite different from the vagueness that T&R discuss.
Commented Aug 11, 2016 on Do we have Vague Projects? at PEA Soup
Tenenbaum and Raffman (2012) claim that "most of our projects and ends are vague." (p.99) But I'm not convinced that any plausibly are. On my own blog, I recently discussed the self-torturer case, and how our interest in avoiding pain... Continue reading
Posted Aug 11, 2016 at PEA Soup
The first of six ESRC-funded workshops exploring issues where the ethics and economics of climate change intersect will be held at Oxford University’s Martin School on 13-14 January 2016. The keynote speakers will be Simon Caney and Partha Dasgupta. We... Continue reading
Posted Nov 9, 2015 at PEA Soup
Picking up on the exchange between Sergio and Jamie, I've written a new blog post arguing that in these very special cases, agents really should treat their future behaviour as a "fixed" natural fact, independent of their present deliberations: http://www.philosophyetc.net/2015/09/deliberative-openness-and-actualism.html
Hi Peter, you write: "So if I know Bloggs wants to be a morally conscientious person I'll most certainly advise him to jump in." I just wanted to flag that it's not always advisable for non-ideal agents to act as an ideal agent would. Some examples discussed here: http://www.philosophyetc.net/2009/01/ignoring-reality-aint-so-ideal-either.html

Having said that, I like your original case and find it much more compelling than the standard objections to actualism (which strike me as clearly silly, at least when we focus on practical issues rather than semantic ones, as Jamie sensibly advises). I wonder if one could justify a hybrid view which is actualist about impartial costs (and hence avoids the disastrous advice that possibilism yields), but possibilist about personal-prerogative-enhanced costs (and hence avoids letting agents "off the hook" simply due to their disposition to later sacrifice more than they need to)?
I actually think there's a pretty strong case to be made that, if anything, blogging is more likely to help than to harm one's career prospects. This is so even if any given person is more likely to dislike than to like what you post. For anyone interested, my argument is here: http://www.philosophyetc.net/2009/03/academic-blogging-pros-and-cons.html
I blogged a response to Brennan's interesting "Skepticism" paper here: http://www.philosophyetc.net/2012/10/unreliable-philosophy.html Three key points: (1) There's no better alternative to philosophical inquiry, if you want philosophical knowledge. (2) We can come to know all sorts of conditional claims, even if we can't be sure which antecedents are true. (3) Some sub-groups, starting from (near enough to) the *right* intuitions, may be objectively reliable (and hence, on various 'externalist' views, secure knowledge) even if there's no "neutral" way to establish who is in this privileged position.
I'm all for clear standards, especially ones emphasizing that a paper shouldn't be rejected just because the referee can think of a possible objection. (If the paper is subject to some very obvious, devastating, and unanswerable objection, then that's another matter. But very few objections are so serious as to suggest that the paper in question is not a valuable contribution to the literature. Rather, they merely suggest that further discussion -- say, in a response piece -- could be fruitful...)

I'm not sure about requiring summaries and "detailed philosophical rationales". There's an obvious trade-off here between providing value to those who submit to the journals, and being able to recruit sufficient referees. Given that journals in general have a much harder time attracting referees than they do attracting submissions, it would seem unwise for them to make the refereeing process any more burdensome than absolutely necessary. And, frankly, I think sometimes a very brief report in response to a clearly unsuitable paper is just fine. I especially think it's fine to reject papers from top-5 journals on grounds of their being insufficiently ambitious / interesting. In such cases, I don't see what would be gained by requiring more than a couple of sentences explaining why this is so.
I would certainly hope that it isn't substituting for other philanthropic ends. If this is *in addition* to one's usual philanthropic budget, then I guess it's all to the good. But given that most of us have only limited moral "willpower", so to speak, this does seem an especially low-priority use to which such moral efforts might be put. For broader discussion of "moral priorities", see: http://www.philosophyetc.net/2015/05/moral-priorities.html
On second thought, (*) seems false even for objectivists. Construct a case where (i) one may permissibly radically impair one's future capacity to do good, say because failure to do so would involve great personal sacrifice, and (ii) if one refrains from so impairing oneself, one foreseeably will in future perform an act that is (a) pitifully inadequate compared to what one could, at that time, instead bring about, and yet (b) is nonetheless much better than the best that one could have done if one had previously impaired one's capacity to do good. In that case, it's morally better to refrain from impairing oneself, even though (i) as a result you will in future act morally horribly wrongly (woefully failing to do as much good as is minimally required in the situation), and (ii) if you had impaired yourself, you would never act wrongly (since the impairment itself is excusable on grounds of avoiding great personal sacrifice, and in future your capacity to do good is so impaired that you always do the best you can, which isn't much).
Hi Peter, fun case! As someone inclined towards "splitting senses", I'm ambivalent about the purported conceptual truth (*). It seems true if I have in mind the objective sense of "wrong", and false otherwise. Does the case show that "the ‘morally wrong’ that is of interest to the morally conscientious person in her deliberations about what to do is not the subjective ‘morally wrong’"? I mean, I don't really think the morally conscientious person should be interested in the 'morally wrong' (de dicto) at all. But for one who thought otherwise, the most I can see them concluding from this is that the subjective moral status of their future actions is deliberatively irrelevant. I'd expect them to still hold that, regarding their current action -- the choice they are currently deliberating over -- its subjective moral status is of greater deliberative import than its objective moral status (for mineshaft-style reasons, say). Does that seem right?
Some friends and I put some thought into a similar idea many years back, but unfortunately never completed the project. But for those interested in the discussion (including various implementation proposals) see: http://philreview.pbworks.com/w/page/16449741/FrontPage It would be great to see something like this finally take off!
Hi David, you might find my old post on 'Epiphenomenal Explanations' of interest: http://www.philosophyetc.net/2011/11/epiphenomenal-explanations.html Also worth flagging: one (very controversial!) way to deny (4) would be to hold that the *intentional content* of our moral attitudes depends in part on the moral properties themselves, and so our moral attitudes -- in contrast to their behavioural and neurological correlates -- are not wholly natural phenomena.
Please add me: Richard Yetter Chappell, University of York
Some thoughts...

(1) It's plausible (and perhaps trivial) that inflicting harms without right is impermissible. But it doesn't seem remotely plausible that violating such negative duties is the only way to act wrongly. (Cue the standard "drowning child in a pond" cases. Better yet, cue a case where you can prevent great harms to others at no cost to yourself. If your theory implies that one can permissibly ignore such opportunities to help others, few will find this credible.)

(2) On the difference between promising vs. conditionally permitting appropriate harms, note that only promising obligates me to avoid triggering the condition. This is just like the difference between laws that forbid X (on pain of a $100 fine) vs. laws that permit X so long as you pay a $100 fee.

(3) I don't think it could be permissible to torture someone merely to extract a meager benefit (say they promised me a penny), even if there was no other way to obtain what was owed. Quite aside from any prudential costs, it just isn't worth it morally. Getting what you're owed is not the most important thing in the world. It can't license such disproportionate suffering.

P.S. I think Seana Shiffrin and others have done some relevant work on promising (and whether its normative status is to be explained as a matter of raising expectations, or transferring rights, etc.) that you might want to look into.
Commented Jan 23, 2010 on Retributive Ethics at Tomkow.com
As a first pass, we may think of Consequentialist moral theories as those that specify the right in terms of the good. But these terms occlude some important structure that can be brought out by further analysis. In particular, I... Continue reading
Posted Dec 16, 2009 at PEA Soup
Thanks all!
Commented Dec 16, 2009 on Welcome Richard Chappell! at PEA Soup