This is Sergio Tenenbaum's Typepad Profile.
Sergio Tenenbaum
Recent Activity
Well, the force of "provable" here depends on which assumptions you need for the "proof"... I just took a very brief look, and it seems to me that you assume that there is a continuous, linear function from increase in pain (it should really be increase in electric current) to decrease in utility. This begs the question (even leaving aside the assumptions you are making about the value of pleasure and how it adds up). Arntzenius and McCarthy propose an orthodox solution that seems to rely on weaker assumptions (again, I just took a brief look at your proposal, so I might be missing something). We do discuss their solution in the paper and try to argue that it doesn't work. I still don't understand these restrictions on human psychology or rational desire; they seem to me unmotivated. Just to give one possible interpretation of the builder (though, again, I don't really see the need to elaborate; the builder could just have a basic desire to build a good house, without having further preferences about how good it is): a builder could think that he was paid to build a good house (not a perfect house, or an extremely excellent one), and care for nothing beyond doing what he was paid to do.
Commented Aug 15, 2016 on Do we have Vague Projects? at PEA Soup
Richard (and commenters), many thanks for starting this thoughtful discussion on some of the topics of our paper. I haven’t had time to talk to Diana, so she is not to blame for anything here. Some of the commenters (especially Tristan and Nate) already said some of the things I wanted to say (in fact, this might be a convoluted version of what Nate explains in a few sentences), but I would like to go into a bit more detail, as this is a great opportunity to clarify some points that might have been unclear in the paper. I think there might be a misunderstanding here about the structure of our view. In explaining why he doubts there are vague projects, Richard claims: “But it strikes me as strange for one's goal to be to reach some vague level of sufficiency. When I imagine writing a book, my preferences here are graded: each incremental improvement in quality is pro tanto desirable; each reduction in time spent is also pro tanto desirable”. Leaving aside a possible optimism about there being a precise way of ordering books from better to worse, the basic thought here is common ground (as it is common ground that the self-torturer prefers less pain over more pain and more money over less money). We did not deny that, all else being equal, we have preferences for better books, less pain, and more money (though a theory of instrumental rationality should allow for agents who are only interested in writing a decent book, or a good enough book, or even just a book, and don’t care about anything beyond that, just as a builder might be interested only in building a house that is good enough, without caring at all whether it is excellent). But this is not the issue; the issue is what our “all-out” attitudes are (or at least should be), since we want to determine what is rational for the agent to choose or do in a particular situation. Richard speaks of the issue of trade-offs between things we find pro tanto desirable as if it were a side issue, but this is wrong.
The pro tanto desires of the self-torturer do not generate a problem for orthodox theory. But given the self-torturer’s (or the book writer’s) attitudes, there is no way of determining how she should trade off between money and relief from pain by means of a preference ordering; and yet the self-torturer (or the book writer) is perfectly rational, or so we argue. Vague projects, just like preferences (as they figure in decision theory), are supposed to be all-out, not pro tanto, attitudes. If I have the project (or the end) of writing a book, and, due to procrastination, akrasia, etc. (rather than some unforeseen circumstance), I don’t write a book without ever abandoning the project, then I (thereby) acted irrationally. But if my book is not as good as it possibly could have been, I have not (thereby) acted irrationally (even if I recognize that this outcome would have been in some respects more desirable); I never undertook the project (or chose the end, or formed the intention) of writing a perfect book. On our view, a theory of instrumental rationality needs not just my preference for writing a (decent) book over not writing a (decent) book; it also needs to take into account that I have a (vague) end of writing a book (in other (solo) work, I just talk about the fact that I am (intentionally) writing a book). As I said, if through procrastination, weakness of will, etc., I end up making it impossible for myself to write a decent book (while not giving up my end), I am exhibiting a form of irrationality, whereas it is not (necessarily) true that I exhibited any form of irrationality if I wrote a decent book but could have written a slightly better one (or if I could have written the same book while spending twenty more seconds playing Pokémon Go). These verdicts about the rationality of the agent cannot be captured by examining only the agent’s preferences.
Of course, one could try to argue that you can do it, or that the self-torturer’s preferences are not coherent, etc., but this is a different point (in the paper, we argued that these attempts fail). In other words, we cannot explain what is rationally permissible or impermissible for agents who are writing books, caught in the self-torturer predicament, or building houses by appealing just to their preferences; this is in part because their preferences are not transitive in such cases, at least when we take them at face value. It is worth noting that we’re not the only ones who think that a single set of preferences cannot represent the predicament of the self-torturer or of agents in situations that exhibit a similar structure. Although the added structure plays a different role in each theory, Gauthier distinguishes between the agent’s vanishing-point and proximate preferences, Bratman argues that we need to appeal to the agent’s intentions, and Andreou distinguishes between given preferences and chosen preferences; none of these authors denies that the self-torturer (or the book writer) has these pro tanto desires. A couple of words on vague projects or ends: in the paper, we do not define a vague project as a project that is described by means of a vague predicate, but in terms of a certain structure. The structure, roughly, is that there will be actions or outcomes that clearly count as achieving the end (or executing the project), some that clearly do not count as such, and that choosing on the basis of our otherwise unproblematic pairwise preferences will invariably prevent us from achieving the end (due to the cumulative effect of these choices). I think that even Kenny’s case, in which we do have a precise blueprint, would be a “vague project” in our sense, at least if we add a few further assumptions about the agent’s preferences.
For even in those cases there would be clear cases of following the blueprint and clear cases of not following it, but also many in-between cases for which it is not determined in advance whether they count as following the blueprint. Such cases could also generate the structure we describe under the rubric “vague projects”. So I’ll stand by the claim, but it’s important to note that this is not a claim about the agent’s pro tanto desires or preferences, but about the agent’s ends: since nearly all such ends or projects leave much about what counts as realizing them indeterminate, they are nearly all vague ends in our sense. And Richard’s question is really whether we have vague pro tanto (rational? fitting?) desires, rather than whether there are vague ends in our sense. I do think that many of our pro tanto desires are vague (though, again, this is not the issue in our paper); Jamie and Alex have given some examples, but there are others as well. Despite Richard’s interesting take on the psychology of the hairline-anxious, I think that some people desire simply not to be bald, just as some people are averse to being old (and I still don’t see what is irrational about such desires); I might have the desire to swim in the lake or to dance (well, dancing, not really...) without caring how well I dance or how fast I swim. And anyone who has seen me swim or dance (fortunately, a very small number of people) must be keenly aware that “swimming” and “dancing” are vague predicates. But this is really a different issue.
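The “vague project” structure described here (clear successes, clear failures, an indeterminate middle, and individually innocuous pairwise choices whose cumulative effect defeats the end) can be sketched in a toy model. This is purely illustrative and not from the paper or the discussion: the quality scale, the thresholds, and the size of each concession are all my assumptions.

```python
# Toy model of the "vague project" structure (all numbers are illustrative
# assumptions): an author makes many small concessions, each individually
# imperceptible and pairwise preferred (a bit more leisure, no noticeable
# loss of quality), yet cumulatively they defeat the end of a decent book.

DECENT = 700    # above this, the book clearly counts as decent
FAILURE = 300   # below this, it clearly does not; in between is indeterminate

def quality(concessions):
    """Book quality after a number of individually negligible concessions."""
    return 1000 - concessions  # each concession shaves one imperceptible unit

def clearly_decent(q):
    return q >= DECENT

def clearly_not_decent(q):
    return q <= FAILURE

# Each single concession changes quality imperceptibly...
assert all(abs(quality(n + 1) - quality(n)) < 5 for n in range(1000))
# ...but making all of them clearly defeats the end:
assert clearly_decent(quality(0))
assert clearly_not_decent(quality(1000))
# And many intermediate outcomes are neither clearly decent nor clearly not:
assert not clearly_decent(quality(500)) and not clearly_not_decent(quality(500))
```

The point of the sketch is only that no single concession marks the boundary: the end supplies a constraint (stop somewhere in or before the indeterminate zone) that the pairwise preferences alone do not.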
Commented Aug 15, 2016 on Do we have Vague Projects? at PEA Soup
The Centre for Ethics is able to accommodate a limited number of visiting professors each year. Although we do not provide salary replacement or support, the Centre is able to provide an office and computer, library privileges and access to...
Posted Jan 6, 2016 at PEA Soup
A conference on the theme “Cultural Diversity & Liberal Democracy: Models, Policies and Practice” will be held at the Glendon School of Public and International Affairs, in Toronto, on April 19-20, 2016. Confirmed keynote speakers include David Miller (Oxford) and...
Posted Sep 14, 2015 at PEA Soup
Manuscript Workshop Announcement: A. J. Julius, “Reconstruction” The University of Toronto Centre for Ethics will be hosting a workshop on A. J. Julius’s manuscript “Reconstruction” on Friday, June 6, from 11 to 6. See below the fold for more details....
Posted May 15, 2014 at PEA Soup
Maxims and MRIs: Kantian Ethics and Empirical Psychology A two-day workshop to be held at the University of Toronto, Centre for Ethics, Toronto, ON, Canada May 9-10, 2014 In recent years, many moral philosophers have drawn inspiration from the exciting...
Posted Apr 6, 2014 at PEA Soup
Hi Doug: No, I think that if you allow preference sets with no upper bound, you will have other cases in which you need to allow that it would be rational to choose counterpreferentially. At any rate, I find it very intuitive that if ST stops at a point that is not too far along in the series she acts rationally (and that if she goes to the end she acts irrationally). And if accepting (R3) commits you to denying this, that seems to me a high cost (though, of course, you might have independent reasons to deny the consistency of the ST scenario as I am describing it).
Hi Doug: I guess I don't understand why we would say that intransitive preferences are rational if we then conclude that an agent who chooses on the basis of such preferences is irrational no matter what she chooses (is this what you are proposing?). If your preferences put you in a position in which whatever you do is irrational, then I would say that they are not the preferences of a rational agent. Generally, when people say that some sets of intransitive preferences are rational, they mean that an agent with such a set of preferences could still choose rationally. I am also not sure why you think that the probabilistic case is more interesting. Of course, you could reproduce the same structure with probabilities, if the probabilities of moving up too far are never large enough to offset the gains of continuing to the next stage.
Hi Doug: Here are the things that ST might intend at N. I assume that this is choice under certainty, so the prospect is just the state of affairs in parentheses.

(1) Stop at N (pain at level N & $X)
(2) Continue at N, then stop at N + 1 (pain at level N + 1, indistinguishable from N, & $X + $100,000)
(3) Continue at N and N + 1, then stop at N + 2 (pain at level N + 2, indistinguishable from N + 1, & $X + $200,000)
. . .

On Quinn's proposal, it is rational to choose (intend to perform) act (1) even though (2) is preferred to (1). This seems to be a straight violation of (R3). If the reply is that if ST really prefers (2), she should choose (2), then, by parity of reasoning, she should choose (3), as she prefers (3) over (2), and so forth. Given that the preferences are intransitive, for every act she intends to perform, there is an act that she prefers over it. Your reply to Chrisoula seemed to assume that the choice must be between only two alternatives, but I don't know why we should restrict the choice set in this manner. I take it that by "following one's intransitive preferences", Chrisoula means that you choose according to preference at each choice node. One could hold that ST has intransitive preferences but that it is perfectly rational for her to go all the way to the last setting (and also to switch back to the first setting and no money if she is later given the option). Or one could hold that intransitive preferences are not rational (or not even possible). But I agree with Chrisoula that if you reject both these views, it will be hard to endorse (R3).
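The intransitive preference pattern in these options can be made vivid with a small sketch. This is my illustrative model, not Quinn's or the commenters': the number of settings (1000), the $100,000 per step, the linear pain scale, and the indistinguishability threshold are all assumptions chosen only to reproduce the structure.

```python
# Toy model of the self-torturer's pairwise preferences (illustrative
# assumptions throughout): more money wins when the pain difference is
# imperceptible; otherwise less pain wins.

THRESHOLD = 5.0  # pain differences below this are imperceptible (assumed)

def outcome(n):
    """Stopping at setting n yields (pain level, money received)."""
    return (1.0 * n, 100_000 * n)  # one pain unit and $100,000 per step

def prefers(a, b):
    """True if stopping at setting a is preferred to stopping at b."""
    pain_a, money_a = outcome(a)
    pain_b, money_b = outcome(b)
    if abs(pain_a - pain_b) < THRESHOLD:
        return money_a > money_b  # indistinguishable pain: take the money
    return pain_a < pain_b        # perceptible difference: avoid the pain

# Each single step up the dial is pairwise preferred...
assert all(prefers(n + 1, n) for n in range(1000))
# ...yet the first setting is preferred to the last: the preferences
# are intransitive, and following them at each node leads to the worst spot.
assert prefers(0, 1000)
```

On this sketch, choosing by pairwise preference at every node takes the agent to setting 1000, even though she prefers setting 0 to it outright, which is the structure the comment describes: for every act she intends, there is another she prefers.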
I don't think you need any such assumption. ST can rationally believe that he is just as likely to stop at N + 1 as he is to stop at N, but this couldn't suffice to make it the case that it is rational to continue, because the argument would generalize and have as a consequence that it is rational to go to the last setting. I have trouble assessing objective probabilities in cases in which we are assuming that ST is rational. If ST is rational, and Quinn is right, the objective probability (OP) that he will stop at N is 1. If we now look at the counterfactual possibility that he doesn't, then I would guess that, insofar as ST is rational and Quinn is right, the OP that he'll stop at N + 1 is again 1 (since this is the closest to the original plan). But you don't even need it to be 1; it could be just arbitrarily high, and either you conclude that it is rational to stop at N or that the only rational place to stop is the last setting. At any rate, many solutions to the ST puzzle conclude that ST must choose counterpreferentially, which would be in violation of (R3). I think these are the only plausible solutions. In general, other solutions will be committed to there being a most preferred outcome in the series (at least for a rational ST), which rejects the initial setup of the puzzle and, to my mind, arbitrarily restricts what can count as a rational set of preferences. Of course, not everyone agrees with me here. But it is at least not obvious that any plausible way of dealing with the ST puzzle and similar cases will be compatible with (R3).
I forgot to say: "relative to this set of options" was just meant to leave open the possibility that if ST somehow got unhooked and then faced the same scenario, it would be perfectly rational for her to choose to stop at a different setting.
Good question! Here is a possibility (but I haven't looked back at Quinn, so I might be completely wrong). Couldn't we say that Quinn takes the lesson of the ST puzzle to be that you cannot read off that X is worse than Y from X's being dispreferred to Y, even when "worse than" is read instrumentally (precisely because preferences can be rational but nontransitive, while "worse than" is transitive)? So let us say that ST decides to stop at N before the whole process begins. If this is the case, then, relative to this set of options (I am understanding 'set of options' aggregatively here, which I don't think is possible on your third interpretation), N + 1 is worse than N, even though N is dispreferred to N + 1. Had ST chosen to stop at N + 1, then N would be worse than N + 1. This depends on a "pick and stick to it" solution (which I believe Quinn favours), but I think it could be adapted to other types of solution.
My colleague Jonathan Weisberg has created an excellent free iPhone app. Among other things, the app allows you to run informal surveys, run intuition polls, etc. I was one of the beta testers and I can say that the interface...
Posted Nov 20, 2012 at PEA Soup