This is Wei Dai's Typepad Profile.
Wei Dai
Recent Activity
Right, I should have known that. :) Anyway, I've created a new post on LessWrong to continue the discussion, since it's getting off-topic for this post.
Commented May 7, 2009 on Prefer Peace at Overcoming Bias
Robin, in that paper you wrote: "For example, if you learned that your strong conviction that fleas sing was the result of an experiment, which physically adjusted people’s brains to give them odd beliefs, you might well think it irrational to retain that belief (Talbott, 1990)." Suppose in the future, self-modification technologies allow everyone to modify their beliefs, and people do so in order to gain strategic advantage (or to keep up with their neighbors), and they also modify themselves to not think it irrational to retain such modified beliefs (otherwise they would have wasted their money). Would such a future be abhorrent to you? If so, do you think it can be avoided?
Commented May 7, 2009 on Prefer Peace at Overcoming Bias
Robin, my understanding is that if you take any consistent set of beliefs and observations, you can work backwards and find a prior that rationally gives rise to that set of beliefs under those observations. Given that human beings have a tendency to find and discard inconsistent beliefs, there should have been evolutionary pressure to have consistent beliefs that give good strategic impressions, and the only way to do that is by having certain priors. I do not dispute that we also have beliefs that give good strategic impressions and are inconsistent with our other beliefs, and those can certainly be overcome by more rationality. But the better we get at detecting and fixing inconsistent beliefs, the more evolutionary pressure there will be for having consistent strategic beliefs. What can counteract that?

BTW, Eliezer's idea of achieving cooperation by showing source code, if it works, will probably make this problem even worse. "Leaks" will become more common and the importance of strategic beliefs (and values) will increase. The ability to self-modify in the future will also make it easier to have consistent strategic beliefs, or to create inconsistent ones that can't be discarded.
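To make the "work backwards" step concrete, here is a minimal sketch (the three-hypothesis setup and all the numbers are hypothetical, chosen only for illustration): given any target set of posterior beliefs and the likelihoods of the observations actually made, inverting Bayes' rule recovers a prior that rationally produces exactly those beliefs.

```python
import numpy as np

# Hypothetical setup: three hypotheses and the likelihood of the
# observed data under each (numbers chosen purely for illustration).
likelihood = np.array([0.7, 0.2, 0.1])   # P(observations | H_i)

# The beliefs we want the agent to end up with after the observations.
target_posterior = np.array([0.1, 0.3, 0.6])

# Invert Bayes' rule: posterior ∝ prior * likelihood,
# so prior ∝ posterior / likelihood.
prior = target_posterior / likelihood
prior /= prior.sum()

# Check: updating the reverse-engineered prior on the same observations
# reproduces the target beliefs.
posterior = prior * likelihood
posterior /= posterior.sum()
print(prior)      # the prior that "rationally gives rise to" the beliefs
print(posterior)  # matches target_posterior
```

(The inversion only works when every belief held with positive probability assigns nonzero likelihood to the observations, which is one way of cashing out the consistency requirement.)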
Commented May 7, 2009 on Prefer Peace at Overcoming Bias
It further occurs to me that this view of human beings as leaky agents of our genes can also help explain the "agreeing to disagree" phenomenon. Because we tend to leak our private beliefs in addition to our private preferences, our genes should have constructed us to have different private beliefs than if we weren't leaky, for example by giving us priors that favor beliefs that they "want" us to have, taking into consideration the likelihood that the beliefs will be leaked. Each person will inherit a prior that differs from others', and thus disagreements can be explained by these differing priors. This kind of disagreement can't be solved by a commitment to honesty and rationality, because the disagreeing parties honestly have different beliefs, and both are rational given their priors.

One way out of these dual binds (some conflicts are Pareto-optimal, and some disagreements are rational) is to commit instead to objective notions of truth and morality, ones that are strong enough to say that some of the ultimate values and some of the priors we have now are objectively wrong. But the trend in philosophy seems to be to move away from such objective notions. For example, in Robin's "Efficient Economist's Pledge", he explicitly commits to take people's values as given and disavows preaching on what they should want.
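Here is a toy version of that kind of disagreement (the numbers are hypothetical): two agents who are both honest and both update correctly on the same shared evidence still end up believing different things, because they inherited different priors.

```python
import numpy as np

# Two hypotheses about the world.  Both agents observe the same evidence
# and use the same likelihood model; only their inherited priors differ.
# (All numbers are hypothetical, purely for illustration.)
likelihood = np.array([0.6, 0.4])    # P(shared evidence | H_i)

prior_a = np.array([0.8, 0.2])       # agent A's inherited prior
prior_b = np.array([0.3, 0.7])       # agent B's inherited prior

def update(prior, likelihood):
    """One honest Bayesian update on the shared evidence."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

print(update(prior_a, likelihood))   # ~[0.86, 0.14]
print(update(prior_b, likelihood))   # ~[0.39, 0.61]
# Both agents are honest and perfectly rational given their priors,
# yet they still disagree after seeing exactly the same evidence.
```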
Commented May 6, 2009 on Prefer Peace at Overcoming Bias
"All, I agree and said explicitly that there can be situations where the better-for-all deals can't be created or enforced. But do you really think the urges-to-take-sides I discussed in my post are of that sort?" No, I think your specific examples may be better explained by an ideal for war, which you already hypothesized in your post: "It seems that one of humanity's strongest ideals is actually war, i.e., uncompromising conflict. Game theoretic considerations suggest that such an ideal should exist." And if humanity really does have an ideal for war, in other words, if war is an ultimate value for us, not just an instrumental one, then some of the conflicts that you see as wasteful are in fact the better-for-all deals that you seek. And it's not true that "there is some deal that beats each conflict for each party."
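To see how a conflict can itself be the better-for-all outcome, here is a hypothetical payoff sketch (the numbers are mine, chosen only for illustration): once each side attaches enough terminal value to fighting, no peaceful deal beats the war for both parties.

```python
# Hypothetical payoffs, for illustration only.
material_peace = 10   # material payoff to each side under the best compromise
material_war = 6      # material payoff to each side under conflict
w = 5                 # terminal ("ideal") value each side places on fighting itself

peace_utility = material_peace    # no fighting, so the war ideal goes unserved
war_utility = material_war + w    # material loss, but the ideal is satisfied

print(peace_utility, war_utility) # 10 vs 11: the conflict Pareto-dominates
# If w is large enough, "there is some deal that beats each conflict
# for each party" fails, even though the conflict destroys material value.
```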
Commented May 6, 2009 on Prefer Peace at Overcoming Bias
After writing the above, I realized that the descriptive version of "prefer peace" may not be true either. It may be that our genes "prefer" peace, but they've programmed us to prefer war. Suppose in the "double auction" example I linked to, the buyer and seller don't bid personally, but must program agents with utility functions and let those agents bid for them. But before the bidding, there's an additional round where one agent will reveal its utility function to the other. In this case, the principals should program the agents with utility functions different from their own. To see this, suppose the seller's agent is programmed with U(p) = p - c if a deal occurs, and this is revealed to the buyer's agent; then the buyer's agent will bid c + .01 and the seller's agent will bid c. If the seller wants to make more than a penny's profit, it has to program its agent with a higher c than the actual cost. Similarly, human beings tend to leak information about their private preferences, and therefore our genes should have constructed us with stronger real preferences for conflict than if we could hide our preferences perfectly.
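A rough sketch of this delegation story, with hypothetical numbers: once the seller's agent's utility function is revealed, the buyer's agent only needs to bid a penny above the revealed cost, so the seller's best defense is to program in an inflated cost, at the risk of killing a mutually beneficial deal.

```python
# Hypothetical numbers: the seller's true cost and the buyer's true valuation.
TRUE_COST = 40.00
TRUE_VALUE = 60.00

def outcome(programmed_cost):
    """The buyer's agent sees the seller agent's revealed utility
    U(p) = p - programmed_cost, so it bids a penny above the revealed cost,
    which the seller's agent accepts.  Trade happens only if the buyer
    still values the good above that bid."""
    buyer_bid = programmed_cost + 0.01
    if buyer_bid > TRUE_VALUE:
        return 0.0, 0.0                      # no deal: the inflated cost priced the buyer out
    seller_profit = buyer_bid - TRUE_COST    # real profit, measured against the true cost
    buyer_surplus = TRUE_VALUE - buyer_bid
    return seller_profit, buyer_surplus

# Truthful programming: the seller captures only a penny of the $20 surplus.
print(outcome(TRUE_COST))    # roughly (0.01, 19.99)

# Inflating the programmed cost shifts surplus to the seller...
print(outcome(55.00))        # roughly (15.01, 4.99)

# ...but inflate too far and the mutually beneficial deal disappears.
print(outcome(65.00))        # (0.0, 0.0)
```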
Commented May 6, 2009 on Prefer Peace at Overcoming Bias
The theory of games with incomplete information explains why mutually beneficial deals sometimes don't occur. When one side has private information about its costs and benefits for a compromise (compared to continued conflict), it will act as if its costs are higher and its benefits lower. This way it gets a better deal if a deal does occur, but it also means that sometimes deals don't occur even when both sides could benefit. There's a nice example of this that I still remember from my game theory class, and I dug it up at http://books.google.com/books?id=pFPHKwXro3QC&pg=PA220. Isn't saying "prefer peace" the same thing as telling the seller in this game "bid your true cost" or telling the buyer "bid your true valuation", in which case it seems futile? Or is "prefer peace" supposed to be descriptive rather than prescriptive? In other words, is the point that we all actually prefer peace, even if many act and speak as if they prefer the opposite?
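For anyone who wants to see the effect numerically, here is a rough simulation of the standard linear-equilibrium double auction with uniform values (I'm assuming the textbook Chatterjee-Samuelson setup here; the linked example may differ in its details): both sides shade their bids away from their true numbers, and a large fraction of mutually beneficial trades never happen.

```python
import random

# Bilateral double auction: seller cost c and buyer valuation v are drawn
# uniformly from [0, 1]; trade occurs iff the buyer's bid meets the seller's
# ask.  In the linear Bayesian equilibrium the buyer bids (2/3)v + 1/12 and
# the seller asks (2/3)c + 1/4, so each side shades away from its true number.

def simulate(n=100_000, seed=0):
    rng = random.Random(seed)
    beneficial = 0   # pairs where trade would create surplus (v > c)
    missed = 0       # beneficial pairs where equilibrium bidding kills the trade
    for _ in range(n):
        c, v = rng.random(), rng.random()
        ask = (2 / 3) * c + 1 / 4
        bid = (2 / 3) * v + 1 / 12
        if v > c:
            beneficial += 1
            if bid < ask:     # equivalent to v < c + 1/4
                missed += 1
    return beneficial, missed

beneficial, missed = simulate()
print(f"{missed / beneficial:.1%} of mutually beneficial trades are missed")
# Roughly 7/16 ≈ 44% of the beneficial trades fail, even though both sides
# bid rationally given their private information.
```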
Commented May 6, 2009 on Prefer Peace at Overcoming Bias