This is Scott Forschler's Typepad Profile.
Scott Forschler
Recent Activity
Well, but we can only let "future findings" determine this if we correctly represent to ourselves and others what those findings are. So it is important to point out when this seems not to have happened, as when you claimed twice that the SciAm article established a relationship between inhibiting/stimulating "concern for one's own future selves" and concern for others, when in fact it only talked about inhibiting or stimulating *a specific brain region*, viz. the TPJ, and the effect of this action on both of these types of concern. We agree that the evidence rules--of course!--but the evidence is simply not what you said it was. If you still can't see this (I did a Ctrl-F search for "inhibit" and "stimul" in the article to make absolutely sure of this myself; please do so yourself), then I have nothing more to say. Peace out.
Marcus, I'm sorry that you feel insulted. For my part, I myself feel a little insulted that you twice cited and linked to a SciAm article which very obviously does not support the specific claims you made about both its own content and its relationship to your views. Did you think that I couldn't read the article and see that you were making false claims about its content? Pointing to some kind of holistic evidentiary support when the individual pieces fail is dangerous. While it is possible that S, T and U together support V, even if none of them in isolation would do so, this is a bad strategy to appeal to when your interpretations of S, T, and U have individually failed. Appeals to holism--like those of certain religious believers who say that everything (morality, the cosmos, biological teleology) supports their views, even though they show misunderstandings of each of the individual evidence areas discussed--may say more about the speaker than about the evidence. Your final link to an earlier blog post must have been typed incorrectly, for "this page cannot be found."
Marcus, I do think that your initial statement that the SciAm article suggests "that prudential concern for one's possible future selves is neurobiologically inseparable from moral concern for others" is much closer to the truth. I'm still not sure that the article *shows* this, as perhaps more detailed neurological examination could identify different ways of affecting one but not the other, or different mechanisms behind each (indeed, as long as we think the mind supervenes on the brain, there must be *some* neurological difference between the two, which are certainly different, albeit very closely related, mental states). But I'll grant that they are very hard to separate and typically co-functioning, noting only that a much more general theory--that they both rest upon the capacity to simulate a self other than the here-and-now one--also predicts exactly this very general result. But again, your follow-up comment about the one simulation causing the other is not so supported. In fact, if the two are truly "inseparable," how could you even test for B causing C as distinct from C causing B?
NO Marcus, the study did *not* confirm your prediction, as I pointed out; it was merely compatible with it. And it is irrelevant that moral theories did not make this prediction (either the one found, or the unconfirmed one you made), just as it is irrelevant to the test of an economic theory that it didn't predict the latest terrorist attack, or to the latest astrology column that it did. Indeed, I would find a theory which "predicted" an event so tangential to its basic subject area, and which offered no specific mechanism for the connection, and especially one which its proponents harped on as a major sign of success, to be a rather desperate move, perhaps implying that the theory had few enough intrinsic merits and had to rely upon something else instead. You point me to a SciAm article that, in your words, shows that affecting "concern for one's future selves has been shown to *cause* lack of concern for others" and vice versa. But Marcus, this demonstrates more than anything else that you are making stuff up in order to defend RAF, and abusing the concept of empirical support outrageously. Because this is NOT what the linked article says, not at all! It says in the very subtitle/byline: "A clever experiment pinpoints the brain region involved in taking the perspective of our future selves **OR** that of others." [my emphasis added so you don't miss it] This is repeatedly made clear in the short article: they found that they can stimulate a given brain region to affect BOTH of these simultaneously, not that affecting one separately somehow causes the other to change. This supports either the view that they are essentially different manifestations of the same underlying capacity, or (if we grant that they are different, and I do not care whether [we say] they are or not) the view that they are in any case both susceptible to manipulation by what is manifestly a third, independent causal factor.
There is nothing here to suggest that we have an independent way of causing the one which in turn causes the other. If you do A, and B and C result, you need to do a lot more than this before you claim that B causes C (and not vice versa, or that--more plausibly--they are both independently caused by A). Thus, the SciAm article shows the EXACT OPPOSITE of what you claimed: it does not support your view, but strongly supports the exact criticisms I gave earlier of how you were reading the other study you placed so much weight upon. Sorry to be a bit blunt here, but this really is a fairly extravagant misreading of the article in question, and if you want to be taken seriously you need to not make such obviously incorrect claims about the findings of such research.
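[The common-cause point can be made concrete with a toy simulation; the numbers and variable names are entirely my own illustration, not from the article. A single manipulated factor A drives both B and C, so B and C correlate almost perfectly even though neither causes the other:]

```python
import random

random.seed(0)
data = []
for _ in range(10_000):
    a = random.gauss(0, 1)         # A: the manipulated factor (e.g., stimulating one region)
    b = a + random.gauss(0, 0.1)   # B: "concern for future selves", driven by A plus noise
    c = a + random.gauss(0, 0.1)   # C: "concern for others", also driven by A plus noise
    data.append((b, c))

def corr(pairs):
    # Pearson correlation, computed by hand
    n = len(pairs)
    mb = sum(b for b, _ in pairs) / n
    mc = sum(c for _, c in pairs) / n
    cov = sum((b - mb) * (c - mc) for b, c in pairs) / n
    vb = sum((b - mb) ** 2 for b, _ in pairs) / n
    vc = sum((c - mc) ** 2 for _, c in pairs) / n
    return cov / (vb * vc) ** 0.5

print(corr(data))  # near 1.0, yet B does not cause C, nor C cause B
```

[Observing that manipulating A moves B and C together, as in this sketch, licenses no inference from B to C at all; that is all the "if you do A, and B and C result" point amounts to.]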
And certainly, it's common knowledge (well, for me anyway) that psychopaths can distinguish agents from non-agents, and can simulate the former enough to predict their future behavior. The issue is whether they *worry* (or more generally, care) about the harmful consequences of their actions to themselves or to others. And again, we had better find that they don't worry about the latter either--for how on earth could that not influence their behavior? The finding that they *also* don't worry about the former is interesting, but again not terribly surprising given their prudential deficiencies, and both predictable (and actually predicted) on non-moral grounds.
I'm afraid I still don't see this. RAF predicts not merely a correlation but--in your own words--that psychopaths harm others *because* they have problems simulating their future selves. The study only finds a correlation. Granted, simulating one's future self and other minds are distinct, and the studies cited don't assess the latter for psychopaths. But in your view, the one (or effective use thereof) leads to the other (or effective use thereof), right? Hence a finding of a deficiency in one but not the other wouldn't support RAF, and you had better hope that psychopaths have the second deficiency as well (we all should, of course; I wouldn't know what to make of a finding that psychopaths can fully represent the harm they do to others and are somehow "able" to guide themselves by this, but in the end just don't--does God intervene to make the guidance fail?) Granted, I'm not aware of any other moral theory which predicts this result; but it is easily predictable from the fact that psychopaths are bad at prudential reasoning. I didn't know this fact, but given that fact, the study results are not surprising. Indeed, the abstract mentions that this idea had been floated before by psychologists, and for this very reason, and the study basically confirms a version of this, differentiating it from the alternative theory that psychopaths are emotionless. Critiquing other moral theories for failing to predict it when non-moral theories already predicted it on the basis of non-moral facts hence seems off-base. All this is aside from the fact, which I noted earlier, that equal concern for *all* your possible future selves, merely on the ground that they are possible, is *not* at all the same as, nor likely to lead to, concern for the interests of all *actual* people (whose interests are a very small subset of the possible ones).
So even if you showed a strong correlation between the two, using more of Mill's methods than are demonstrated here, the absence of a further mechanism to go from the first to the second leaves the connection very implausible, kind of like a correlation between, say, skirt lengths and economic booms or whatever. Even given some correlating evidence and a successful prediction of a crash when skirt lengths change again, one must strongly suspect that this remains either coincidental, or mediated by a third common causal factor, or if more directly connected, connected via some further mechanism more complex than the theory has so far described.
The first study does indeed identify a deficit in using prospective simulations to guide one's behavior. But this is entirely consistent with having a more general incapacity to use simulations, period--including simulations not only of one's future experiences, but of other persons' experiences. It remains possible, then, that it is the latter which more directly causes psychopaths to make choices which are "bad for us," and the former which causes them, unsurprisingly, to also make choices which are "bad for them." I actually didn't know that psychopaths are as bad prudentially as morally (is this all psychopaths, or an important subset?), so this is enlightening information. But still it suggests a more general incapacity for simulations of other minds, including those other minds which your future self might be. Indeed, it wouldn't even help RAF if they were found to have the former incapacity but not the latter, since RAF itself suggests that we identify w/ other minds *via* identifying with our potential future selves, so you either have both or neither. The second set of studies are likewise consistent with all kinds of non-RAF positions. A feeling of power does indeed give us more confidence that we can attain our ends without either the help of others or favorable circumstances (luck, institutions, resources, etc.) As long as results seem to spring more or less magically and effortlessly from our will, we will ignore both kinds of contingencies--and hence act with greater moral and prudential risk. But that hardly means that these risks are in any sense the same, or the former derivable from the latter.
I was certainly imprecise earlier; in saying you eschew universalization, what I meant is that you don't grant it a fundamental role in moral reasoning; you purport to derive it from prudential reasoning regarding one's indeterminate future selves. So in the end you certainly do use it, but it feels like a rabbit coming out of a hat (or blood from a turnip), rather than something that follows logically or is earned by the argument. I'm not familiar with Batson, but don't contest the findings; doubtless we are very selfish creatures when push comes to shove. And yet there's the 10% of the time when we aren't; and of course if the rest of the time we appear to act morally through various meta-stratagems of aligning prudence with morality, including constructing/supporting institutions which help do so, etc., this must take some trouble which requires some motivation as well. The question is, where does *this* motivation, however small or large it looms in our lives, come from? I have trouble seeing how this could arise from the arguments you're making, or how focus on my indefinite possible future selves would increase the motivation. Now if it does, wonderful--let's put it in the water, so to speak. But even if it did, I'm not convinced that you've presented the mechanism by which it would do so. You say you'd like to show that "negotiation with actual people is the most *likely* way to ensure that one doesn't screw over possible future selves". This is precisely what I find logically implausible, given that our future selves are *not* utterly indeterminate, and aren't equivalent to actual people around us anyway. Perhaps pretending that they are, in both cases, would do so. But I'd rather have a theory which didn't involve pretence, which seems unstable to me (and not what you really think you're using anyway). But since you concede that this is indeed an area that needs work, I think we have to leave it there for now.
"you and I have the same goal (of justifying actions to all agents). It is just that I argue--on methodological grounds--that the best way to think about this very notion is instrumental." I think that's right. But I'm still really puzzled as to why you think an instrumental justification is justifiable. The double dilemma I've now mentioned twice--between that approach either (A) biasing towards your actual interests if we use probability, or (B1) being either indeterminate or (B2) radically indiscriminate in what "interests" (mostly held by no actual persons, and often radically opposed to them) are thereby promoted. You reject probability here, which should lead you to B1 or B2; you don't endorse either, I think, but I don't see how you can escape this choice. This is the real nub of my objection to your argument; all the concern over FF and other steps preceding this move really only matter insofar as they shed any light (which frankly they haven't so far, for me) on why you take the later steps that you do. Help me out here--again, I've read the book, but I really don't understand how you get from "plan instrumentally to satisfy all possible interests of all possible people (because I might become or come to care about them)" to "respect the actual interest of actual people, and treat them fairly." If that, or something like it, is what you do. How do you avoid B1 or B2 along the way? I see you trying or wanting to do this; I don't see how you do so, and fear equivocation at one or more steps which I can't quite grasp. Rejecting probability, and rejecting universalizability in favor of purely prudential-instrumental reasoning, both strike me as radical violations of your FF, since we palpably do use both in our moral reasoning, and surely most people will agree when the question is framed properly. 
Indeed, there's a strong analogy between your moves here and one of the most plausible-appearing extant theories for deriving morality from instrumental reasoning alone, namely Gauthier's contractarianism. He begins by implausibly identifying morality and prudence, then argues implausibly that it is impossible to be a rational knave, and so the prudentially rational person will act morally, pretty much in the way prescribed by universalizable principles. Two mistakes to get back to where you could get without any mistakes, IMO. Which is telling. Different details than yours, but similar in telling us that you can get blood from a turnip, as long as by "turnip" we understand something very different from what most people think turnips are; when you compound this by insisting that you're only using "firm foundations" about what everyone understands turnips to be, I am baffled by both moves (picking up the "turnip", and squeezing "blood" out of it). I'm not trying to be mean here, you understand; just trying to show by analogy how the project strikes me, in hopes that you can see the logic of my objection more clearly. "we should *not* attach substantial weight to commonsense assumptions about what "morality" must be to be "morality."" Well, it depends what you mean; I certainly am no Gertian or intuitionist, uncritically accepting common sense. When, e.g., Benatar argues for antinatalism, I think this is worth paying attention to; it's not automatically wrong just because it feels wrong. Both unfamiliar decisions/contexts, and powerful instinctual drives, can bias our moral judgments against sound reasoning. But if you came up with a theory that purple alone is good, or that we should whack people in the head for fun, I would suspect a mistake somewhere.
If you come up instead with a theory that said we should instrumentally satisfy all possible interests, and then said that in practice this means respecting the actual interests of actual people around you--and I *think* you are saying something like that--then I suspect a double mistake. Not in the conclusion, which is quite reasonable, but in the logic of both steps, which seem to first go in what is clearly the wrong direction and then take it back in some way I don't understand in order to reach something that's actually fairly sensible. Now if instead you just took the first step, and actually thought we should respect all possible interests period, even if that steamrolls over many of the actual interests of actual people (just because I *might* one day want to commit genocide, or study Martian persimmons, and so need to plan ahead to facilitate these amongst so many other possibilities, hence doing less than I otherwise could to hold doors open for people or cure cancer, which shouldn't be favored on the merely contingent grounds that this would benefit the actual interests of living people!), then just like "purple is good," I would suspect a serious mistake. Morality could vary a bit from common sense, but not *that* much. :-) Now if I didn't have a logical argument for the approvability of principles that respect all actual interests equally from my own actual point of view, then I would admit my rejection was a little weak; but since I've got that too, I'm not so worried.
BTW, just to be clear: while I say that moral principles must be justifiable to all possible agents, that does not mean that they have to instrumentally accommodate the desires of all such agents. That's why it doesn't run into the problem that I see in your derivation. Because I believe that the kinds of principles which are so justifiable are precisely those requiring each agent to equally respect the interest of all other agents /in her world/. For, plausibly, this is what each agent would approve of any given actual agent doing--for then such agents would respect *her* actual interests as much as their own. And since this will always be true for any possible agent in any possible world, then such principles are justifiable (= are the unique governing standard for principles which the agent can rationally approve of any agent adopting) for any possible agent. But I frankly see no logical mechanism by which your argument can get to a plausible morality, since you focus on instrumental satisfaction of all the possible interests of all your possible future selves. That's either indeterminate, or monstrously different from the actual interests of people in the actual world, unless I'm missing something very crucial in your logic at this point.
Marcus, I have a doubtless much earlier version of your "Unifying the CI" paper, perhaps from ROME in Boulder many years ago...I have very strong logical reasons for thinking that the second formula uses a different kind of universalizability test than the other two (this is explained in my most recent Phil Studies article, which is not yet in a numbered issue but is on their website waiting to be assigned to one). So it will be pretty hard to convince me that they can be unified without radically changing them! As you stated your three psychological hypotheses just now, they actually don't go very far. I already know that instrumental reasoning is very important and looms large in each of our lives...but "dominant"? Well, it depends on what you mean by that. It is pervasive, certainly; but so is universalizable reasoning. Indeed, the two interpenetrate; we try to make our instrumental reasoning universalizable, and often try to bend what counts as universalizable so it fits our instrumental interests (via rationalization & hypocrisy; the tribute vice pays to virtue). So in practice we too often alter each to accommodate the other, though they are distinguishable with effort. But given this obvious fact, I don't know what hypothesis you are proposing. That we would discover that all our "universalizable" reasoning is prudential-instrumental in disguise, or just isn't even happening in the first place? As well ask me what I would conclude if scientists "showed" that the sun was blue, and always had been. I know what it means for the sun to be blue, but to make this consistent with life so far...well, then I don't know what you mean anymore. "(2) we experience the normative force of moral norms as a result of wanting to know the future (viz. the problem of possible future selves)" Well, again I don't know why you think "wanting to know the future" is the same as, or the heart of, the problem of future selves.
I would rather say that the problem of future selves, viewed instrumentally, is the problem of assigning probabilities to various future interests and maximizing expected utility thereby. I'm puzzled why you think this would be dominated by the desire to "know" my future interests. I mean, sure, that would be nice. It would be nice to have the sun and the moon, too. But I don't, so I settle for probabilities, and think that all prudentially rational people do the same. Trying to focus only on what we can "know" forecloses very rational calculations of probabilities...which won't get you to morality unless you apply universalizability, making your behavior justifiable to anyone precisely because it is what you would approve of anyone doing. Put another way, probabilities are among the things we do "know": either you'll take those into account, or you mean something different by saying we should focus only on what we can "know" in the context of instrumental reasoning than I do. In any case, I previously suggested an alternative explanation for why drawing one's attention away from one's current, immediate impulses might lead the mind to universalizability constraints. If you are imagining some much stronger hypothesis, where you could show, somehow, that it leads to morality without going through universalization, then again I am baffled. What if scientists found that, a la Wittgenstein, some of our heads were just full of unorganized sawdust? Or that moral behavior was correlated with thinking of the color purple? Given my reasons for thinking that consciousness and morality are logically connected with quite different things, I simply wouldn't know what to say, and would have to suspect some missing empirical facts here. "(3) that problem leads us to want to justify our actions to all of our possible future selves, etc.?"
That doesn't help either, since, again, justifying our behavior to all our future selves is merely a subset of what is really required--justifying it to all possible agents--which in turn is constituted by justifying it to/for *yourself* given a universalizability constraint (justifying X = approving of X for any agent). But if you don't use universalization, only prudential-instrumental criteria, then justification *only* to your possible future selves would presumably either have some bias towards yourself, or would explode into indeterminacy or another form of immorality. For my future self might become a fascist, or take significant interest in the species of Martian persimmons which will evolve in their new habitats after terraforming. If I have to justify my current behavior to *those*, and all other, future selves by doing *something* to instrumentally satisfy each of their interests, and can't weight this by either the probability of my taking on such interests, or some prior principles for determining which of *those* choices are justifiable, I will either end up completely paralyzed, or I will end up supporting a range of interests very different from the actual interests of the 7 billion people actually inhabiting my known universe, whose interests are only a small subset of the infinitely many possible ones which your theory suggests I must take into account. So again, if you're suggesting that morality could be "empirically" shown to come from *that* kind of instrumental reasoning, I don't know what you mean. It either wouldn't be morality, or there's something missing in the data or explanation.
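[The contrast between weighting future interests by probability and giving every merely possible interest equal standing can be put in toy numbers. These figures, and the three hypothetical future selves, are entirely my own illustration of the point, not anything from the book:]

```python
# Each possible future self: (probability I become it, utility of my present action for it)
futures = {
    "likely self (still roughly me)": (0.98, 5.0),
    "fanatic self":                   (0.01, -100.0),
    "Martian-persimmon botanist":     (0.01, 2.0),
}

# Prudential expected utility: weight each future interest by its probability
eu = sum(p * u for p, u in futures.values())

# "Mere possibility" weighting: every possible self counts equally, probabilities abjured
flat = sum(u for _, u in futures.values()) / len(futures)

print(eu, flat)
```

[Under probability-weighting the action comes out well worth doing; under equal weighting of mere possibilities, the wildly improbable fanatic self dominates the verdict, which is just the paralysis-or-monstrousness worry in miniature.]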
Thanks, Marcus. I think that what you've just written is a very nice summary of the book, and helps me put it all into focus (I have finished it since my first post, BTW, though have to admit that initially I was more confused about some points which your last message helped me with a great deal). I am still unconvinced that FF, as you intend to use it, is anything new. You say: "anyone looking at a readout of the LHC can see resonances, etc." Well, not easily or right away, of course. The judgement "there goes a resonance (or electron, etc.)" is only an empirical and obvious one to people with a great deal of training and theoretical understanding. Now the theory with which such observation-statements are laden is itself backed up by a large number of other empirical statements, and no doubt when you get down to the bottom they can all be ultimately reduced to things that, taken individually, "anyone" can see. But again, this hardly differs from the claims of standard moral theories. Even stark raving intuitionists (God help us!) claim that if you *really* looked at your own intuitions, you would see that their own claims about them (both normative and meta-ethical) are true, that everyone shares them; and if they don't seem true to you, it's because you're looking at them in the wrong way, your judgments are clouded by egoism, false theories, etc. There are books full of such explanations. Now: these explanations might not be adequate; you and I will doubtless disagree with many of them. But the devil's in the details. You can't reject them just by saying "your claims about intuition (or the judgments of ideal observers, rational contractors, the constituents of rational agency, etc.) aren't ones that everyone immediately agrees with." Because you concede that this isn't true for sound physical theories, and isn't required by FF when you specify this more precisely.
I disagree with your suggestion that most other theories don't already completely agree with FF under their own descriptions; they may fail to live up to it, but that's quite another thing. "[FF] does not permit basing theories on what some people 'think' the facts are, when others disagree. It requires basing theories on things we can *all* agree to be facts." But here again you equivocate between whether the crucial question is that others *actually* disagree, or whether they *can* [which I think is the word that really needs emphasis in the second clause] all agree (and, of course, do so /rationally/--not through coercion, etc.) Perhaps this is merely verbal carelessness I should not hold you to; and yet the former really does seem crucial in your quick dismissal of opposing views. "Error-theorists, quasi-realists, Kantians, etc., all base their theories on what their investigators 'think' the facts are, despite the fact that not everyone agrees that they are facts at all." Of course they do; and so do you. There is disagreement over whether a desire for X gives anyone a reason to pursue X; X might be genocide, for instance. Now, your initial principle qualifies that as an "instrumental reason," and when so qualified I will actually agree (I am deeply annoyed by the all-too-common tendency to simply argue about whether or not such-and-such a fact gives an agent a "reason" to act, as if there were only one kind of reason, an assumption which leads to enormous confusion). But then instrumental reasons alone aren't terribly interesting, and almost [again, excepting a subset of philosophers I strongly disagree with] everyone agrees that instrumental reasons by themselves do not ground morality. Again, the question is not whether anyone disagrees on the conclusion, argument, or even the data at first glance; rather, it is whether anyone has a good explanation for why we should accept these after considerable reflection and deeper understanding have been achieved.
But I've largely said all this before. If anyone else is reading this besides the two of us, I would love to hear their input. Even if they haven't read the book, I think that you [Marcus] have made your main argument very clear in the last two messages [and in far less than 105K words!], as I think I have with my misgivings, so a third-party reflection on both would be useful. Your latest post also makes it very clear--and again, impressively succinctly--how you think that psychological predictions can confirm your theory. I have three (probably predictable) misgivings about that. First, psychological correlations, even re-use of neural pathways, by themselves may not confirm that two concepts are logically linked; there might be overlap which is not complete, and the subtle differences between them might be crucial. Recall, e.g., the studies showing that people who find spare change in a phone booth are more generous towards others; this can tell us a lot about the connection between emotions and moral behavior, but probably tells us very little about what moral principles we should follow. Second, additional psychological findings would support alternative theories as well or better; e.g., subtle reminders of moral rules, or of the existence and potential judgments of other people (e.g., painted eyes on a wall near a coffee pot whose users are asked to contribute a quarter when they take a cup) encourage moral behavior. Such findings, at least on their face, give better support to various other theories which give the (possibly imagined) judgments of *others* on your behavior pride of place (ideal observer theory, Darwallian 2nd-personal demands, or my favorite, reflective agency theories) than to your theory, which privileges your own judgments and concerns about your own instrumental success.
Third, it is far from clear that the evidence you point to is best explained by your theory; it often seems to fit alternative theories as well or better. E.g., heightened concern about one's future might simply distance one from one's immediate interests or impulses, encouraging more reflective thinking in general, and hence free the mind from the shackles of the former so that it can consider the interests of others via a universalization principle. Indeed, since it is also possible that I will one day not care about some of the people I currently care about, such radical uncertainty should lead to more immoral behavior if, as you suggest, the mere possibility of having such changed interests gives us reason to respect them, setting aside all questions of their probability; if it doesn't, this would seem to directly undermine rather than support your theory. But again, I've made this last point before; can anyone else weigh in with an opinion here, based on what Marcus and I have said so far?
Your clarification and emphasis on what you said prior to p14 make it clear that FF doesn’t mean theories should rest on things that are obvious and commonsensical (as the definition by itself might suggest), but simply on objective facts—some of which might take considerable instrumentation, effort, patience, and both long-developed technical skill and further theory-laden observations. As long as, once you do that, the results/observations are repeatable by and between persons. OK…but that’s rather trivial. No moral theory taken seriously by contemporary philosophers rejects the idea that we need objective arguments to support it. Even error theorists, quasi-realists, and emotivists think you need *that*, although they might disagree on whether the true theory delivers objective norms or just a set of meta-ethical facts. In any case, if this is all you mean by it, then it seems you had no ground for so quickly dismissing constructivism, Kantianism, and so many other theories on the grounds that there is “controversy” over them. There is controversy over your “grounds” for morality, and indeed over any egocentrically-based moral theory (e.g., contractarianism). So if “controversy” is sufficient to make the foundations not firm, your theory is not firm. OTOH, it is possible that you have erroneously dismissed a good theory because you failed to bring to bear the patience, theoretical understanding, etc. needed to fully understand it—just as some people dismiss QM because they haven't made the effort to understand it. Indeed, you reveal a quite different view of FF when you say my “purple is good” proposal violates it, “as this is not a proposition recognized by virtually all human observers as obviously true.” This fits perfectly your view that FF requires a theory to be non-controversial. But this is radically different from the standard that the theory be based on objective facts of some kind or other (possibly very obscure and hard-to-see facts).
["purple is good" may violate that too, but you can't dismiss it *merely* because it is not immediately obvious to all persons who consider it!] So can you see why I might have been confused about what you meant by FF? You explain it one way when you say it fits QM; you apply it in a very different way when you attack alternative theories and propose your own standards. It seems to me that you equivocate between them. And I think that is not merely my idiosyncratic “seeming” but an objective fact, one visible to anyone reading these two distinct passages in the reply you just wrote. :-)
A briefer comment, based on ch 3 & 4: at crucial points you seem to rely on the satisfaction of being "fair" to your future self & others, regardless of how any of your other interests are satisfied, hence attaining in at least one area of life the "certainty" of "knowing" that you satisfied some of your interests. But this seems to rely, surreptitiously, upon some prior privileging of our interests in moral fairness. I can also *know* that I satisfied some of my immediate interests by, say, eating that ice cream cone or stealing that money *now* (I may not be happy with it later, but I will sure be happy with it in the next minute!--and you can't take *that* phase of satisfaction away). On p120 you rely, as you seem to in many other places, merely upon sentimentalism, more precisely the possible sentimentalism that I *might* take an interest in other people's lives (knowing that I sometimes do already, and others do at various times). But by the same token, I know that I might, sometimes have, and others sometimes have, become a religious or political fanatic, or a psychopath. Why should I then not equally satisfy these possible interests? Again, I don't see how expanding the realm of possible interests infinitely, and abjuring all probabilities, can lead to the results you (and I) want to reach; you seem to be equivocating on which ones you will point to and which ones you brush under the rug. We need principled ways of distinguishing good from bad interests. E.g., Bentham did so via principles like fecundity: helping psychopaths is less fecund for satisfying interests generally than helping charitable people, given the world as we find it. But this requires calculating probabilities, not ignoring them. This focus on certainty reminded me at times of similar moves on the parts of Stoics, as well as Gewirth and to some extent Kant, insofar as they also focused their ethics on things we could be *certain* of, avoiding uncertainties.
It is curious that you didn't seem to address these parallels, which might have helped clarify the basis of your argument, and how you thought it compared to or improved upon these otherwise similar ones.
I'm about 1/3 through the book now, and have looked over some of your comments above. While a few of the reviewer's remarks seem a bit unfair, I am finding myself strongly agreeing with the criticism around the middle of your post above which says that you are using a very controversial set of assumptions about instrumental reasoning, and hence violating your own "firm foundations" principle. Actually I'm not even quite sure what you mean by the latter, despite having read through this part of the book. The definition on p15 is that we should prefer "theories based on common human observation(s)...taken to be obvious, incontrovertible fact..." But what does *this* mean--and why should we accept it? I don't expect a good theory of quantum mechanics to be based on obvious, commonsense facts; to the contrary. Of course we may have separate reasons for thinking that *morality* should be based on commonsense, but this does not hold for theories generally. Furthermore, isn't it also important that the theories be based on facts *related* to what the theory is about, and obviously so? E.g., if my theory of morality (or QM) is that the more purple there is in the world, the better (or that quarks are purple), then while "the presence/absence of purple" in a given region is an obvious, commonsense fact, that doesn't make my theory a good one. Indeed, it is quite obvious that it is not related to either topic. (This may sound /crazy/, but it is actually close to something sometimes proposed for morality: that, e.g., since the commands of God are *utterly* beyond our control, they are objective, giving morality the objectivity we intuitively think it has; but of course, this is a cheap and irrelevant kind of objectivity). Now, if you say it is obvious that we have interests, and reasons to satisfy them--well, sure, with some qualifications which needn't trouble us here. But it is far from obvious that satisfying them is what morality is about.
So if "firm foundations" requires only the former, it's a cheap and misguided principle. If it requires the latter--again, an obvious *relationship* between the "foundation" and what the theory is about--then it is actually quite obvious that instrumentalism is a very bad & inadequate foundation for morality. Now of course you try to show later on that because of some radical uncertainty about our future interests, we have instrumental reasons to be fair to all persons and their interests in order to be fair to those possible future interests of ours. But I am puzzled at many points here. Suppose I (A) find myself to have an unfair advantage over another person (B) today, and take this option. True, tomorrow B (or C) might have such a position over me, and my interests will be slighted if they take that option. But how does my serving B's interests fairly today *count as* treating my interests of tomorrow fairly? For the latter is not identical to B's interests today, just analogous to them, or of the same type. Now, I think there is a good argument for fair treatment here: if I hurt B today, I am implicitly approving of behaviors of this type generally, and hence of my own harm tomorrow on the part of a similarly-situated person. But this is a Kantian/Harean/golden rule-style universalizability argument, not an instrumentalist one. My treating B fairly today neither counts as nor causes my being treated fairly tomorrow. You often say that we don't want to just make it probable that our future interests are satisfied or at least treated fairly, but to *know* that they will be. But I see no sense in which you have shown or could show that we can *know* this. Indeed, it seems radically unknowable; your arguments for radical uncertainty about our future interests make it impossible for any reasoning, let alone instrumental reasoning (and certainly no form of moral reasoning) to deliver this contented knowledge. 
In general, it seems that you're trying to get morality out of instrumentalism by radically changing what "instrumental rationality" means. Not only is this confusing, it would actually lead to very counter-intuitive results. You face a dilemma here. If we base our "instrumental" reasoning on probabilities of what our interests are likely to be and how to satisfy them, then this reasoning will be biased toward ourselves and against others in an immoral fashion, unless we bring in a universalizability criterion of the sort you are trying to avoid. But if we toss out probabilities and try to be totally "safe" by acting in ways that are fair to our future interests *no matter what these are*, without any consideration of what the probabilities of one set of interests versus another are, then we get disastrous results. And you seem to want to do this as you stress that merely the possibility, not probability, of various changes in your interests is enough to generate an obligation to be fair to those possibilities. Setting aside the other problems I raised above, consider that if we do that for *ourselves* then we must do it for all others: I must treat each person I interact with as if /their/ future interests are radically indeterminate, and be fair to all possibilities. Whatever the full range of logically possible interests of all persons could be--and I don't know how we could begin to understand that in any meaningful way--it is surely not identical with the *likely* future interests of people we actually interact with. If we attempt to satisfy the former, we are likely not to do very well at satisfying the latter. In short, by abjuring probability-based reasoning for yourself, you do it for all others as well, tossing out the baby with the bathwater.
The path to morality, I think, is to somehow *include* the *actual* interests of other persons in your reasoning as in some way on a par with *your* actual interests, not to inflate your "interests" to include all possible ones and try to be fair to this entire universe of hypothetical interests, which may have very little to do with the interests of actual people. It is *possible* that you, or everyone, will tomorrow take an interest in counting blades of grass. And if some of them do, perhaps we should not get in their way, or even help them in some cases. But it hardly makes sense to plan *now* for this possibility, or treat it on a par with an interest in world peace or curing cancer. And it is unclear how you can privilege interests like the latter once you abandon reasoning based on the probabilities of people actually having various interests.
How can I find journals that could benefit from additional book or article ms reviewers? I have more free time than most philosophers, and my usual turn-around time for review requests is 1 week for books, 1 day for journal articles. I would love to do more--hook me up!
I have received a number of very helpful comments from reviewers over the years, some of which led to revision, others leading me to at least temporarily abandon the article. I have also been astounded by the gross stupidity of some reviewers. Recently the chief editor at a major journal told me there had been too much published on the topic of my article recently, when my own literature review concluded that there was almost nothing published on that precise topic, ever. When I asked for an example, the editor gave one--which was actually on an only marginally-related topic, and published 50 years ago! Off to the next journal, then. And several times I've gotten comments which reveal that the reviewer only skimmed the article and misunderstood it in elementary ways, sometimes even accusing me of making incorrect assumptions when the falsity of those assumptions was, rather, the precise basis of my criticism of other authors! These experiences are apparently legion. I highly recommend Mark Twain's own amusing experiences on this issue, related in his delightful story here.
I am an independent scholar; between my various research projects I do occasional short book reviews and also review article submissions for one philosophy journal. I almost always submit the former within a week of receiving the book, and the latter the same day I receive the manuscript. So my question is: can anyone else use me? I would love to do more reviews of books or articles, and can promise a quick turn-around time. Given all the complaints (and I have my own) about long delays after submission, I would hope that someone could use more hands. Is there any central place that journals needing reviewers post requests for help?
Scott Forschler is now following The Typepad Team
Apr 19, 2016