This is Antti Kauppinen's Typepad Profile.
Antti Kauppinen
Recent Activity
Great stuff! This helps me formulate something I've been claiming for a while, namely that there are two kinds of forgiveness. Very roughly, one consists in ceasing to hold negative reactive attitudes towards the agent. This is familiar, but the other form is not equally well-recognized. To put it in your apt terms, it consists in opening up to positive second-personal attitudes of the heart. Crudely, if someone has done something that hurt you deeply, you can cease to resent them (and thus forgive them in the first sense) without yet being open to love and trust toward them (so that in the second sense, you haven't yet forgiven them). Perhaps you could also come to trust someone while still resenting them for what they did. So I think recognizing the diversity of second-personal attitudes helps with understanding the phenomenon of forgiveness (and relatedly, the different forms of blame) as well.
Thanks for the constructive responses to my worry! In a parallel discussion on Facebook, I came to realize that my concern is most at home in some kind of luck egalitarian framework. For Dworkin, blindness is a paradigmatic example of brute bad luck that must be compensated for in terms of additional external resources, to a degree determined by a hypothetical insurance market, to ensure equal respect and concern for all. As it happens, I'm not a luck egalitarian. I'm much more sympathetic to the kind of democratic or social egalitarianism that Elizabeth Anderson, say, defends. According to Anderson, what we owe to each other is something like opportunity to function equally as citizens. Clearly, meeting this goal requires both organizing society so that it accommodates differences between people and providing different kinds of goods and services to people depending on what they individually need in order to participate in social life as equals (I'm trying to capture Hilde's point here, in part). Since this approach doesn't refer to people's being badly off as a reason to provide services to them, it fits much better with a mere-difference view (although it doesn't require it). (This may have been obvious to many all along, but since several people replied to my earlier comment, I figured I'd give some advice to my past self and others like him!)
This is a very interesting challenge to the bad-difference view, which I confess to holding unreflectively (as I suppose most non-disabled people do). In the spirit of Sobelian half-assedness, I'd like to make a few probably naive suggestions. First, I think that the bad-difference view should be formulated in pro tanto terms - if disability as such is bad, it's pro tanto bad. If this is the case, the formulations of the bad-difference view that refer to net loss in well-being don't capture the target. Take the following (ii): "Were society fully accepting of disabled people, it would still be the case that for any given disabled person x and arbitrary nondisabled person y, such that x and y are in relevantly similar personal and socioeconomic circumstances, it is likely that y has a higher level of well-being than x." I think the bad-difference view would be better formulated in something like the following terms: (ii*) Were society fully accepting of disabled people, it would still be the case that for any given disabled person x and arbitrary nondisabled person y, such that x and y are in relevantly similar personal and socioeconomic circumstances, there is a burden on x's well-being that y doesn't have. If true, this would suffice to distinguish between the disabled and the gay, for example. ii* doesn't claim that the badness of disability couldn't be compensated for (for all it says, a disability could be a net benefit in some situations), only that there is something to compensate. This should be easier to argue for than the all-things-considered badness view. I find the arguments in the paper against the inference from the impermissibility of causing disability to the badness of disability convincing. Still, I find the mere-difference view troubling because of its seeming consequences for other people's duties. Clearly, other people shouldn't discriminate against gay or disabled people. But beyond that, it would be absurd to maintain that other people should be taxed to provide support or compensation for gay people, or that there should be insurance against being gay paid from public funds. If being disabled is a mere difference like being gay, the same follows. Yet it seems to me that Eva is right above in saying that social justice requires we provide services and goods to the disabled. How can this be reconciled with a mere-difference view? Is there any argument for it that doesn't presuppose some version of the bad-difference view? (I wouldn't be surprised if there's a straightforward response - apologies for not being familiar enough with the debate.)
And Michael (on the off chance that you might still read this), sorry again for failing to reply to your very helpful suggestions! (I was in the very rare situation of staying in a place without an Internet connection for a week, and eventually forgot about the whole thing.) To be honest, I still don't have much to add. I agree with your point 2. On the first point, I think it matters whether the provocateur's ignorance is culpable or not. And I suspect that in a lot of real life cases of rage from below, the ignorance is indeed culpable - the powerful should (and are in a position to) know that what they're doing to the weak is unjust.
Commented Sep 16, 2014 on On Rage As a Moral Emotion at PEA Soup
Thanks, David! I like Frank's book a lot, though I confess I didn't think of it while writing this. I suppose your point is essential to explaining why we have an emotion with such seemingly counterproductive action tendencies in the first place.
Commented Sep 16, 2014 on On Rage As a Moral Emotion at PEA Soup
OK, everyone, thanks for the great comments! Let me try out a few replies. First, Justin (finally): I hope what I said earlier in response to David helped clarify the relation between anger and rage. On the wife-beater - well, I take it that it's a case of rage from above, in which case it's not warranted according to my sketch. So there's no transfer of responsibility to the rage provocateur (to borrow Michael's nice term). The point about self-control is a good one. Mauno Koivisto, the former president of Finland, used to say "When someone provokes you, you shouldn't be provoked." And maybe that's true, all things considered. What I find intriguing is the following possible combination: Even if B's actions warrant A's being enraged with B, A shouldn't (all things considered, or morally speaking) be enraged with B. However, if A nevertheless is enraged with B, and acts wrongly, B is co-responsible for that harm. Second, Eli (since this picks up on the point), thanks for the clarification. The issue hangs on the exact content of rage, if we take the accuracy line. It's not easy to settle what it is. But I think that if what I say about the dissipation of rage is along the right lines, it can't present its target as the most egregious wrong possible. Presumably some hope for room for action isn't going to reduce our desire to punish the worst possible kind of wrong. But it seems to transform rage. I think that's one argument in favour of thinking that the extreme element of rage doesn't contribute to its content, but only its motivational role. (Side note: this depends in part on what we think is the source of the representational content of an emotion. Jesse Prinz would say, roughly, that emotions represent characteristic triggers. That would support my line, I think. But I think the content derives from phenomenal character. That makes my view harder to defend, but, I think, still defensible.) On justification: I want to say that even if fittingness is a matter of ideal endorsement of manifestation, it's distinct from justification of the action that manifests the emotion. So rage might be fitting, even if acting on it isn't in any way justified. Michael: thanks for your comment, will reply soon!
Commented Aug 28, 2014 on On Rage As a Moral Emotion at PEA Soup
Justin, something funny must have happened with your comment - I'm only seeing it now, and didn't get an email notification. This is just a quick note to emphasize that I didn't ignore you, and will reply as soon as I get a proper chance!
Commented Aug 26, 2014 on On Rage As a Moral Emotion at PEA Soup
Thanks, David! Relating types of anger to relative standing seems promising to me (and a nice way to reconcile contrasting approaches). I don't have a huge problem with thinking of rage as a species of anger at a high pitch - it's certainly closely related. But I'd like to emphasize its distinctive features a bit. Let's first distinguish what we might call animal rage, which is triggered by at least somewhat persistent goal frustration (the f*ing computer froze up again), and interpersonal rage, which has to do with someone's exercise of agency. I'm here interested in the latter. On the input side, frustration certainly seems to play a role especially in rage from below. But it seems to me that it's specifically frustration resulting from the (perhaps tacit) realization that you're in a lose-lose situation as a result of someone's agency (possibly the diffuse agency of many). It needn't be the case that they deliberately got you there, but they didn't pay sufficient attention. Because it's a lose-lose situation, practical reasoning is next to useless - it'll just tell you that you minimize your losses by submission, self-respect be damned. Because of the submission element, I think the idea of being slighted also plays a role in this variety. (I hope I'm not being too specific - I'm thinking, in part, of my experiences of being enraged with college administration...) If this is right about rage from below, there may be more unity with rage from above than you suggest. (Incidentally, I used the Lennon example, as I'm reading Mark Lewisohn's Beatles bio at the moment, and he describes Lennon as being enraged in the kind of circumstance I discuss.) Why is the jealous guy enraged and not just angry? Well, psychologists often link anger with a sense of control. But here there's loss of control, even if not for the same reason as in the case of rage from below. In spite of your repeated efforts, you can't (in your own eyes) impose control to the degree that would satisfy you, yet you can't walk out of the relationship either without emotional loss. (Do we get enraged with someone who doesn't matter to us, except in the animal sense?) So once again you're trapped, with no path to a desirable (or tolerable) situation visible - a lose-lose situation. Or, to vary the example: when things were going well, Hitler got angry with generals who made mistakes. But by the time he was bunker-bound, he got enraged: he couldn't get the generals to do what he wanted them to do (it was, in fact, impossible), yet he couldn't get rid of them, because there was no one else left to do the job. (The famous scene from Der Untergang is of course a great depiction of this: https://www.youtube.com/watch?v=Q8A6V40LLrI. "Das war ein Befehl!" etc. from 1:10 is enraged, not just angry.) So perhaps interpersonal rage is characteristically triggered by (or has as its core relational theme, or whatever) a specific kind of frustration. The difference from ordinary anger might be clearer on the output side. With anger, there's a goal in sight, and a way to get there: make them suffer, and you'll achieve retribution; hit the nail on the head, and they'll see who you really are. But it seems to me that rage isn't so focused. You already know you can't eliminate the blockage and get to the goal. So you do things that are not instrumental to your goals, but at best mimic things that would be instrumental.
You hit the glass ceiling (which you're told doesn't exist), and when you can't smash it, you smash your stupid Worker of the Year award. If it's true that rage motivates expressive or symbolic action, that seems like a clear difference from anger as ordinarily understood. Thanks again for forcing me to think harder about the difference, and for providing useful tools for it!
Commented Aug 26, 2014 on On Rage As a Moral Emotion at PEA Soup
It is not rare to see groups of enraged people engaged in destructive behavior when you turn on the news these days. Such behavior is puzzling when we think of the agents as rational choosers, since it is often obviously... Continue reading
Posted Aug 25, 2014 at PEA Soup
These are very helpful clarifications, Peter! I like the way you characterize the difference between Kantian and Humean positions on the indispensability of feeling, and have nothing further to add on that score. As the discussion is winding down, I feel almost like I'm imposing in adding a few brief responses - I'm not in any way expecting a further rejoinder. On sense of instrumental effectiveness - yes, what I said was at least misleading. I'm sure sensitivity to all those considerations can feed into the sense that something is the thing to do. I guess there's still a line of resistance open, though. It's that at any time, we'll have a number of goals. The lawyer wants to convince the jury, but also not lose future clients, go bankrupt, or cause pain to other people. Having acquired competence in her job, she'll be more or less attuned to the likely effects of her potential actions on the satisfaction of these desires. So some particular way of proceeding will appear as the one likeliest to convince the jury without offending future clients etc. - as the one expected to maximize overall desire-satisfaction. If this is right, the sense of aptness that issues from exercising tacit competence is still fundamentally instrumental. This might not be an accidental feature of an implicitly learnable skill. Finally, a few more words about empathic error signals. If I expected that no one would be hurt by my remark, and I empathize with the person who is stung by it, I do indeed get an error signal. But what I have trouble with is still whether the signal indicates a moral error. If I didn't already think that other people's feelings matter, how could I learn *that* by way of learning that someone unexpectedly feels bad? That is, even if it's true (and it probably is) that we learn from empathy that "the world is full of centers of feeling that work like our own mental life--pains and pleasures that differ in magnitude, acuteness, duration, depth, etc." and that our actions affect them, isn't that still different from learning that it's bad or wrong to cause pain (etc.) to those others, equally real though they may be? Or, to put it differently again, it is indeed a factual error to suppose that my subjective perspective is the only one (or that things only matter to me), but it's a different kind of error to suppose that only my perspective matters or is a source of reasons for me (or anyone). But perhaps I fail to grasp something crucial here (it wouldn't be the first time!).
Thanks for your thorough response, Peter! And apologies for being slow to respond in turn – I’m on a family vacation. I do think that our views are close to each other. My challenges are an attempt to put my finger on just where the difference might lie. (It would be nice if more traditional moral intuitionists who don’t regard intuitions as affective would jump in – but I guess they’re all on summer vacation.) One point on which it now seems to me we’re in agreement is the kind of content that affective intuitions have. What I say in my Humean intuition paper is that the content of the emotional appearances is perhaps best thought of as preconceptual. Although possibly below the level of articulation, the experience differentiates between different ways things might be, and is, I claim, sufficiently closely related to the corresponding proposition (such as thoughts of the form X is wrong) to provide defeasible justification for belief in it. I think something similar is true of perception – the content of the experience only approximates the propositional content of a corresponding belief, but may nevertheless rationalize it. I find it plausible that our conceptual repertoire in part shapes the content of the experience in both cases, which is another reason I resist calling it nonconceptual. (But what I said earlier was too strong.) I also think that our idea of what is involved in exercising moral competence is very similar. I like very much what you say about simulation and how it works (I’ve defended a similar picture in a different context in an earlier paper. I think now that I was really talking about moral competence, although I put the conclusion in different terms.) I suppose my objection, such as it is, is to understanding such competence on the model of tacit competence with empirical matters. I’m sure Bryce is right that our kind of ape cannot do without tacit competence. But being a redneck traditionalist reared on raw Wittgenstein, Kripke, and Husserl, I think thinking about mere possibilities can be revealing. That’s why if my earlier conjecture about in principle dispensability is correct (and it may not be), it suggests there’s something special about moral (and more generally evaluative and normative) intuitions. In this context, it may also be worth noting that a lot of the experiences we describe as something feeling like the right thing to do are experiences of instrumental effectiveness – strictly speaking, they’re experiences of something being (maximally?) conducive to a goal one has. The content could be more precisely explicated in terms of “this will convince the jury” or “this will maximize the profit” or “this will lead to checkmate”. Such experiences are certainly not evaluative in the way that moral intuitions are, and are in my view best not described as intuitions about value in the first place. (This may relate to Josh’s earlier point.) They are, in contrast, very plausibly potentially manifestations of tacit competence. So maybe this is another, related line of resistance – tacit competence concerns instrumental effectiveness, but moral competence has to do with the choice of ends themselves. Finally, the issue of trial-and-error learning. I agree that it is very plausible that change in attitudes towards gay marriage has happened roughly in the way you describe (although consistency-based arguments from universalist norms people already embrace may also have contributed to it). But I don’t quite see a moral error signal in the picture.
As you describe it, many people started out with prejudices regarding what gay people and relationships are like, and how important social recognition is for them, and exposure to such individuals together with some degree of empathic identification helped correct them. But that’s to say people implicitly learned some empirical facts, which resulted in a change in moral intuitions that in part depended on false empirical assumptions. I don’t doubt that trial-and-error learning of this sort – which may happen entirely below the level of consciousness – is possible, but it falls short of implicit moral learning. Could we learn in a similar way that everyone’s well-being matters equally (for example)? That’s the key question I have for the approach. Again, thanks for your engagement, Peter, and thanks to Hille for inviting me to participate in the discussion! I realize I didn't get to address all the points already made, but I may not be able to jump back in before next week.
Thanks to Aaron and Regina for challenging my challenge to Railton (I’ve only met him briefly, so I don’t feel comfortable calling him “Peter” yet!). Let me try a few quick responses. First, on independent access. For a clear case of the kind of contrast I had in mind, consider a mechanic who acquires tacit competence that enables her to TC-intuit what is wrong with an engine on the basis of the sound it makes (I know some people like this). (Some might prefer to talk about perception here, but I don’t think it matters a great deal.) She might not be able to articulate her reasons beyond “Well, it just sounds like the mixture’s too rich”. But in principle, at least, there’s no need for anyone to have any such TC-intuitions. There’s a fact of the matter that can be discovered by observation and reasoning. There’s an intuition-independent access to the facts, and TC-intuitions owe whatever authority they have to approximating the standard set by other methods. This still seems to me significantly disanalogous with the case of moral intuition. I’m not a skeptic about moral intuition, or the possibility of calibrating intuitions on a holistic basis (which doesn’t allow for dispensing with intuitions altogether). To be sure, I haven’t made the case that what I say is true of all TC-intuitions. But it certainly holds for the sort of TC-intuitions that are most prominently studied by empirical psychology, such as the TC-intuitions of nurses, firefighters, and investors. And it’s true of Railton’s lawyer’s TC-intuition that she’ll convince her audience more effectively if she shows emotion rather than cool argument. So: if all TC-intuitions are in principle dispensable, but moral intuitions aren’t (as a whole, even if individual intuitions are), then moral intuitions aren’t TC-intuitions. Second, on feedback. If your moral sense does not conform with the moral sense of those around you, you will indeed most likely get negative feedback from those around you. But the feedback is only a sign of conformity or disconformity, not right or wrong. There’s surely a vast gap between the two! This contrasts with the feedback a nurse gets if her TC-intuition tells her that a baby is well and she isn’t. When the baby’s temperature rises and she keeps crying, the nurse gets information that helps her recalibrate and correct her sense of when a baby is unwell. Disagreement with others, in contrast, doesn’t signal that I was mistaken to begin with. (This is also a worry I have with Regina’s suggestion that we think of morality as a social system of rules – I don’t deny that we can acquire TC-competence with social morality (in the same way as we do with etiquette), but I take it that Railton’s view is more ambitious.)
There is much to like in Railton’s impressive and wide-ranging piece. His responses to psychologists and experimental philosophers, which seem to me to be somewhat independent from his theoretical framework, are particularly insightful. On a theoretical level, I agree with him that intuitions are affective and that they often provide defeasible justification for moral beliefs. But I have worries about two related points: on what exactly intuitions are, and why they provide justification. So, what are intuitions? In what Railton calls the observational sense, intuitions are, roughly, spontaneous and compelling non-doxastic appearances that can directly guide action. This corresponds to the recently popular view of intuitions as quasi-perceptual seemings, a kind of experience that someone may have. Importantly, what distinguishes intuitions from straightforward perceptions is their subject matter: intuitions concern things we can’t perceive, such as something’s being “good or bad, appropriate or inappropriate… reasonable or excessive, beautiful or ugly, and so on”. The reason why we can’t perceive these properties is plausibly that they don’t stand in the appropriate causal relation to our experiences. As I said, something like this is now a common view, and I’ve endorsed it myself (see ‘A Humean Theory of Moral Intuition’, or HTMI for short). What is distinctive, and in my view problematic, about Railton’s view is that he relates intuitions as appearances to the notions of tacit competence and preconditions of conceptual thought. The latter of these, I think, can be quickly dismissed as a model for thinking about moral intuition. Moral intuitions, and philosophical intuitions in general, do have conceptual content: I have the intuition that it is wrong to push the fat man, for example. Whatever exactly a Kantian Anschauung is, it isn’t a propositionally contentful appearance. But this matters little, as what Railton calls the ‘classical model of intuition’ does little work in the argument. The notion of tacit competence, in contrast, is central to his account and response to critics of affective intuition. According to the tacit-competency-based model of intuition, intuitions can be the manifestation of an underlying grasp of rules or generalizable capacities – more broadly, manifestations of a skill. Such competence is tacit, since one cannot, and need not be able to, articulate the underlying principles. Clearly, the sense that something is a good move in chess or that a sentence is ungrammatical or that the audience isn’t with us often fits this picture – our System 1 has been trained to give us reliable guidance about certain subject matters, often in the form of an affective response. There are two main reasons why I think this is a bad model for understanding the authority of moral intuitions. The first is that tacit-competence-based ‘intuitions’ (or TC-intuitions) are (at least in principle) dispensable. Deep Blue doesn’t need chess intuitions, and a linguist can articulate a principle for why a sentence is ungrammatical. An autist can in principle figure out how the jury feels while lacking the relevant mind-reading skills. TC-intuitions are just a convenient heuristic or a shortcut. This is not (in general) the case for moral intuitions. We don’t have independent access to basic moral truths – that’s why we’re so interested in the epistemology of intuitions. The second key disanalogy between moral intuitions and TC-intuitions has to do with the acquisition of tacit competence.
Skills are typically acquired by way of a feedback loop: we try something, which results either in a success or failure signal, and consequently modification of behavior. Or, as Railton puts it in the language of affective neuroscience: The firing rates and interaction patterns in these subsystems are updated through experience via “discrepancy-reduction” learning processes that continuously generate expectations, compare these expectations with actual outcomes, and use this information to produce a neural “teaching signal” that guides forward revision of expectations. To borrow (and simplify) Railton’s famous example, if you have the wrong kind of drink when thirsty, you’ll feel bad, and will try something different the next time, and keep doing so until you hit on something that does the job, which you’ll select again in the future. Through repeated experience, some things come to feel ‘right’ or ‘wrong’ – not morally right or wrong, mind you (see below), but rather the thing to do or the thing to choose, in more neutral language. My claim is that in the case of moral intuitions, there is no right sort of feedback, because we’re not in causal commerce with the normative properties (recall the point about the difference between intuition and perception). Suppose it seems to you that civilians hiding in a UN school deserve to be bombed, because they provide at least moral support for enemy soldiers. Sadly, acting on this intuition won’t result in unambiguous negative feedback – someone might of course be indignant with you, but they might be indignant with you even if you did the right thing, were they prejudiced or partial. If your sense of the best chess move is bad, you’ll lose a lot of games, but if your moral sense is off the rails, you might even end up winning more than losing. (And, as Bryce points out in his comment, I think, there’s a good chance that you’ll get positive feedback from your similarly biased peers.) A further reason to question the link is that TC-intuitions at least typically have a different phenomenology from moral intuitions. Think of the compellingness of moral intuitions: there’s no room for question in our sense that knowingly shelling a school full of refugee children is morally wrong. We feel that the case is closed. The sense that something is the right gift for a friend (to use another of Railton’s examples) isn’t like that. Nor is the flash of liking or disliking that Haidt talks about. (For my positive view, see HTMI.) If affective moral intuitions aren’t based on tacit competence in Railton’s sense – if they’re deeply disanalogous with social or linguistic or chess intuitions – does it follow that the sceptics are right instead? I don’t think so. This is not the place to defend an alternative view. But briefly, on a Humean picture, moral competence (if we want to keep the term) is a matter of adopting the ‘common point of view’ when one reacts to something with moral sentiments. Such competence is not a mere shortcut, nor can it be acquired by trial-and-error implicit learning mechanisms that track mind-independent moral facts.
Thanks, Simon. Here's a very quick reply, without presuming to speak for Scheffler. First, much of Scheffler's focus is on how "we" (people pretty much like him) would likely respond to, say, the global infertility scenario (Children of Men). Our dismay and despair reveals that we as a matter of fact treat the existence of future generations as a condition of value of many activities. It's a further question whether the reaction is justified or rational. But don't you feel the pull of the thought that lots of things, like teaching at a university or doing research in philosophy, would be a lot less appealing if the world was going to end after you die? (Maybe you don't. Susan Wolf says you might initially, but feel differently after a bit of reflection.) Second, I didn't mean to say that the life of the penultimate generation would be fully value-laden, precisely because of the very brief afterlife. So my afterlife condition isn't as minimal as that. It's only if the world ends after what I called my meaning horizon that it makes no difference to my flourishing. Third, on why benefiting future others can contribute to my flourishing even if it doesn't make others flourish: I don't think I said we couldn't contribute to the flourishing of the last generation. It just won't be enough, because whether future people lead a value-laden life is crucially up to them, just like our flourishing is crucially up to us, whatever we've inherited. Think of it in the Aristotelian way: there's a limit in any case to how much we can benefit someone else, since their flourishing is ultimately a matter of their engagement in worthwhile activity. So nothing I can do for you suffices to get you to flourish. But that doesn't mean I can't benefit you (or indeed help you flourish, or contribute to your flourishing), or that benefiting you isn't one of the worthwhile things I can do with my time. In that respect, the penultimate generation isn't in a radically different position from any of us. They, too, can make some contribution for the last generation. But independently of what they do, the last generation's life will be significantly reduced in purpose. (Everyone seems to agree that the last word hasn't been said on just how significantly reduced the value of their activities would be.) I'll return later to the issue of fiction.
Commented Jan 4, 2014 on Why Afterlifism Isn't a Ponzi Scheme at PEA Soup
Samuel Scheffler’s original and provocative Tanner lectures, now published as Death and the Afterlife (OUP 2013), have already stirred discussion about the importance of humanity’s continued survival for the value of our own lives. In a witty and penetrating review... Continue reading
Posted Jan 4, 2014 at PEA Soup
Thanks for participating, Laurie (and everyone else)! The last comment is a useful and intriguing clarification. The idea that certain experiences have value beyond their (broadly speaking) hedonic quality seems quite novel, and I look forward to reading more about it. (My apologies again to all for the snail's pace of this discussion - this turned out to be a week during which I had hardly any time for work, and frequent Internet outages to boot.)
In the original post, I granted Laurie's claim that we can't rationally estimate the phenomenal value of having a child before we have one, and then argued that it's not very important, since there are other, in context more important, prudential values on the basis of which we can potentially make a rational choice. Now, there's one potential objection I didn't discuss, but which comes up in the back-and-forth between Laurie and the Davids. I don't deny that extreme phenomenal values can outweigh non-phenomenal values. If having a child makes you utterly miserable, you may be worse off, even if as a result your relationship to your partner is deepened. This is where I think social scientific evidence is potentially important. Even if it suggests that parents have, on average, lower life-satisfaction (on which more in a moment), it doesn't support believing either that you'll be much happier or unhappier with a child. Indeed, in the happiness literature, it is common to talk about people having a set point of happiness, such that life events tend to cause only temporary departures from the level determined by your long-lasting characteristics. If this is true, it supports my contention that it would be foolish to make the decision to have a child on experience-regarding grounds - it's unlikely to make a dramatic difference anyway. What is more, some specific studies suggest that outcomes that you might think involve miserable experience don't. For example, there's evidence that parents of children with Down's Syndrome actually have higher life satisfaction than parents of typical children. Finally, although there's a popular notion that having children lowers happiness levels, the claim is unlikely to stand up to scrutiny. As readers of Dan Haybron's brilliant work know, happiness is best understood as a multidimensional emotional or affective condition rather than pleasure or life satisfaction judgment. Recent studies have found that parents report more positive emotion and felt meaning in their lives, and that parenting activities are among the most rewarding. Once we leave behind the 'smiley-face' conception of happiness, the data just look different. The issue is far from settled. But again, this is not a big deal: when you're making life choices, you rationally should think of which worthwhile activities and relationships are the best way forward for who you've become, and let your experience sort itself out.
Sven: I do find what you say intuitively appealing. And I think that part of the appeal has to do with the implausibility of directly aiming at a good story when making choices. (I actually talked about "primarily tacitly story-regarding choices" in the original draft of the post, but then dropped the 'tacitly' bit for stylistic reasons.) But I do want to stick to my guns here. Let me start with what I said to Jussi's related concern. I think it is a central aspect of good exercises of agency that they realize or successfully respond to some objective (non-prudential) value. So I think that if getting a college education is good for you, it is in an important part because it involves or results in activities that realize some value. Assuming that knowledge is valuable for its own sake, and that you acquire some important knowledge (understanding?) through education, the activity of studying realizes that value. (I don't know how important this is; I don't really understand the intrinsic value of knowledge or understanding.) Education will also (hopefully) develop your potential so that you can participate in other valuable activities, and (hopefully) improve your ability to tell which activities are valuable. If, on the other hand, college education doesn't improve your understanding, skills, or judgment, its only value lies in the hard-to-predict experience involved. If that were the only good thing about education, I wouldn't want my children to go to college. The same goes, mutatis mutandis, for marrying. I say in another paper that we shouldn't think of personal relationships as something we stand in or have, but as something we live or do. Beyond experience, they are only good for us insofar as they involve activities that constitute a successful response to some objective value. And responding to value is good for us because it contributes to a prudentially good life story, unlike exercises of agency that fail to do so (such as counting blades of grass or watching Seinfeld reruns). To be sure, there's a lot more that needs to be said in favour of this last point, in particular.
A few more (belated) replies. First, Brad: I think you're right that new and unpredictable experiences can change preferences, values, and commitments in unpredictable ways, and that the latter are relevant to welfare. But I don't know what to think of the relevance of future preferences (etc.) to rational choice. After all, they're not your preferences when you make the choice, and if you make another choice, you'll have different future preferences. To take a not-so-implausible example, it may well be that if you choose to have a child, you come to prefer life with children, and if you choose not to have a child, you come to prefer life without children. (If grass is greener on the other side, it could go the opposite way!) It seems you can't grant authority to potential future preferences as such in rational decision-making. What you can do, to be sure, is to assess the value of such preferences in the light of your current values and commitments - you might value the environment, and think that you'll develop environment-friendly preferences if you move to an environmentalist collective. But then you're looking at future preferences from the outside, so to speak, as possible facts about yourself.
(The spam filter is struggling - Laurie's second comment was marked as spam, so I almost missed it altogether. Fortunately, my latest comment also went to the spam folder, so I eventually found both...) Jussi: Let me start by saying that accepting the argument I make in the post doesn't require accepting my particular view of prudential value - which is great, because I don't think anyone else does! In any case, I talk more about the issues you ask about in the Arizona paper I should be writing right now. I emphasize that it is strictly speaking the narratable shape of a life that matters, not a story or narrative that one might tell (perhaps to oneself) about the life. The ingredients of narratable shape are goal-related events (goal-adoptings, positive or negative contributions to goals, and goal-achievements). So as long as you have aims and plans that go beyond instinctive reactions to your environment, your life has a narratable shape (colloquially speaking, you have a story). What I'm working on is proposing criteria for prudential (rather than, say, aesthetic) evaluation of such life stories. They don't require you yourself to think in such terms - though some psychologists, such as Dan McAdams, do claim that people have a 'personal story' of themselves that informs their identity, choices, and well-being. Now, it is entirely fair to ask how this conception of prudential value relates to rational choice, since that's how I frame the issue in the post. At the same time, this isn't something I've worked out yet. I'm a little scared that I may be driven into a Vellemanish position. Let me try out something here. On a Paul-like conception of rationality, to make a rational choice, you need informed estimates of the value of different outcomes and their probability given options – not just any way of forming preferences and probability judgments will do. How to go about forming rational preferences depends on what is actually valuable. We’re focusing on prudential choice, so let’s ignore other considerations. I say what’s good for you is leading a meaningful and happy life, where the former is, roughly, a matter of pursuing objectively valuable goals in a way that builds on your past pursuits and makes use of your abilities. Not all choices will make a difference to this dimension of prudential value, but choices of high-level projects (in the pursuit of which you will undertake other activities) will. So what you’ll have to ask yourself when you’re deciding whether to go to college, for example, is how likely it is that it will in the long run best promote your activities realizing some objective value (where I’m quite liberal about what counts as objectively valuable and as realization), best build on your past projects and existing commitments, and call for the full exercise of your abilities and potential. That’s the kind of information you’ll need to form rational preferences and make rational choices. (You'll also need to estimate how it will influence your affective condition.) Whatever choice you make, you will, de facto, choose to shape your story one way rather than another. But this needn’t, and probably shouldn’t, be at the forefront of your deliberation.
Perhaps (and this is the Vellemanish bit), concern for leading a life you can be proud of and be fittingly fulfilled by is a background motive that guides such deliberation; perhaps without such a motive your life choices won’t be rational, since it will be only by accident that you end up making them on the basis of the right kind of consideration.
David: I struggle to understand that section of the paper. In the section, Paul (L.A.? Laurie?) discusses the possibility of estimating the utility of the outcome of having a child on the basis of the science of subjective well-being. That is in fact the kind of decision procedure that is recommended by Daniel Gilbert, a well-known sceptic about "affective forecasting". For example, having discussed some studies, he says that "This trio of studies suggests that when people are deprived of the information that imagination requires and are thus forced to use others as surrogates, they make remarkably accurate predictions about their future feelings, which suggests that the best way to predict our feelings tomorrow is to see how others are feeling today." (Gilbert 2006, 251). If this works in the case of having children, what you say seems right, even if the value of the outcome hangs on phenomenology. But Laurie's take is that if we look to statistical evidence, we have to ignore our own feelings and preferences (p. 19). That part I don't get. It seems that using this method, what we do is ignore our imaginative projections of what an outcome would be like, because they are likely to be misleading. We still keep our preferences. But perhaps I'm just missing something.
Okay, one toddler down, one baby in mother's arms, so I can start replying to at least some of the comments. Sven: That's an interesting suggestion. It makes me think that there's an analogue to the 'paradox' of hedonism when it comes to narrative value: if you aim at making your story good, you're bound to fail, since the good-making features are projects and relationships that you get involved in for non-instrumental reasons. (I hope that makes sense.) So if you get involved in cancer research because finding a cure for cancer would make your life story good and not because you want to heal the world, even success will count for less. Still, it is my official view that achievements and relationships are non-experientially good for you because and to the extent that they contribute to the meaningfulness of your life. So, I think that if you find the cure for cancer and give it away, you've done something intrinsically valuable, but it's not intrinsically *prudentially* valuable. What's prudentially valuable is that you've successfully exercised your agency and capacities in pursuit of an objectively worthwhile goal. Well, the baby has now migrated to my arms, so I better shut up and return to everyone's great comments tomorrow morning.
It is an interesting fact about many of our most important choices, such as the choice of what kind of education to pursue, whether and whom to marry, and whether to have children – for short, life choices – that... Continue reading
Posted Nov 19, 2013 at PEA Soup
It’s fashionable to call for supplementing traditional economic measures with measures targeting the impact of policies on well-being. Leaving aside worries about measuring well-being and implementing policies, a more basic question remains: should the state be in the business of... Continue reading
Posted Oct 3, 2012 at PEA Soup
Okay, it seems it's going to be hard for me to catch up with all the comments, so I won't even try to respond to everything. I appreciate all the feedback, and hope that the things I do say will be helpful to those I won't get to as well. Chandra: thanks for being so courteous! I think the kind of more sophisticated empirical methods you're using are moving the discussion forward, and I by no means intend to claim that the results couldn't be philosophically significant. I suppose one thing I want to resist is the idea that psychological results, including those that have concepts or intuitions in the psychological sense as their object, are automatically or by default philosophically relevant. (Again, those X-Phiers who are Modest won't claim so either, and I thought they already constituted a majority.) If so, it's better not to label them philosophy. John: surely you wouldn't want to call just anything, or even just any form of intellectual inquiry, philosophy. The reason why we talk of different disciplines is that we take there to be different subject matters to be investigated and different methods for addressing them. I believe that there are questions that no empirical evidence will settle, such as those listed in the post. (That's why logical positivists thought they were meaningless.) What better term than 'philosophy' do we have for inquiry into them, and why should we use it for something that addresses a different subject matter, such as how things actually are or what is nomologically possible? Angel: that's a nice way to put the structure of an argument for x-phi. But first of all, even if all the premises were true, it would establish precisely an indirect role for experimental evidence, via a number of non-experimental premises. After all, to be even more explicit, you could add (P0): Substantive philosophical issues concern knowledge, goodness, causation, necessity, etc. The methodological premises then link that premise to the conclusion, but they don't say that the experimentally discovered facts just are the facts at issue in philosophical questions. Now, the substantive (not merely verbal) methodological debate concerns the truth of your premises. I'm agnostic on (P1), because I can see the arguments on both sides. (Notice, incidentally, the contrast with Ordinary Language Philosophy, which crudely speaking advocates a deflationist view of philosophical problems: philosophical problems just *are* problems about the meaning of expressions like 'knows'. If you're not a deflationist of this type, you have the further challenge of explaining why it is that facts about semantics are importantly relevant to settling substantive issues.) I think (P2) is true, if we read 'ordinary usage' along roughly Kripkensteinian normativist lines. But then (P3) is false (unless a sophisticated form of dispositionalism is true and X-Phi methods are correspondingly adjusted to get at just the right sort of dispositions). Alternatively, if we read ordinary usage in the statistical sense, (P3) is true but (P2) false. (I argue along these lines in my 'Rise and Fall' paper.) I hope that wasn't too telegraphic; the paper I mentioned has a fuller discussion (though it is in some other ways already outdated).
Commented Jul 1, 2011 on A Modest Proposal at Experimental Philosophy