August 6, 2010
Here’s the fifth section of the Chapter 2 draft of my book, Finding the Li: Towards a Democracy of Consciousness. This section discusses our evolved drive for “altruistic punishment” and how it may have overcome the “free rider” problem discussed in the previous section, permitting our social intelligence to evolve altruistically as well as competitively.
Imagine you’re sitting alone in a room. In the next room is someone else, whom you don’t know. You’re never going to meet each other. A researcher walks in holding a hundred dollars and tells you that this sum will be split between you and the stranger in the other room. And the good news is, you’re allowed to decide exactly how you want to split it. But there’s a catch. You can only propose one split. The person in the other room will be told the split and can either accept it or reject it. If he accepts it, the money is shared accordingly. If he rejects it, you’ll both get nothing.
Welcome to the ultimatum game. If you’re like most people, you’ll split the hundred dollars down the middle: you keep $50, the other person accepts his $50, and you both come out ahead. Researchers view the ultimatum game as convincing evidence refuting the earlier view of humans as fundamentally self-interested. If that view were correct, then you (“the proposer”) would be more likely to keep $90 and offer $10 to the stranger (“the responder”). The responder would be likely to accept the $10 because, being self-interested, he would be happier with $10 than with nothing. But that’s not what people do. Responders in fact frequently reject offers below $30, and the most popular amount offered by proposers is $50.
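The game’s payoff logic can be sketched in a few lines of code. The `min_acceptable` threshold below is a hypothetical stand-in for a responder’s fairness standard, not a parameter from the research itself:

```python
def ultimatum_round(pot, offer, min_acceptable):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if offer >= min_acceptable:       # responder accepts the split
        return pot - offer, offer
    return 0, 0                       # rejection: both walk away with nothing

# A purely self-interested responder would accept any positive offer:
print(ultimatum_round(100, 10, 1))    # -> (90, 10)
# But real responders often reject offers below about $30:
print(ultimatum_round(100, 10, 30))   # -> (0, 0)
print(ultimatum_round(100, 50, 30))   # -> (50, 50)
```

The rejecting responder’s willingness to take zero rather than $10 is the behavior the self-interest model fails to predict.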
It seems that we humans have a powerfully evolved sense of fairness. So powerful, in fact, that we would rather walk away with nothing than permit someone else to take extreme advantage of us. Researchers call this “altruistic punishment.” But even altruistic punishment is not powerful enough by itself to overcome the free rider problem in human groups. Think back to the Ardi situation. Suppose that sneaky free rider has skulked back to camp and is coming on strongly to one of the females whose partner is out hunting. But another male who stayed home sees what’s going on. What does he do? Does he confront the free rider, possibly risking his own life? Or does he turn away and do nothing? Researchers call this the problem of the “non-punisher.” In a way, someone who lets a free rider get away with things without punishing him is really a free rider too, and deserves to be punished. When researchers model these situations, they find that cooperation can indeed be maintained in sizable groups indefinitely, but only when both free riders and “non-punishers” are punished. Such groups would tend to be more effective than groups of self-interested individuals, and their members would be more likely to pass their genes on to later generations.
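A toy payoff calculation (not the cited authors’ actual model, and with purely illustrative numbers) shows why punishing non-punishers matters. Three strategies play a shared public-goods game: “CP” cooperates and punishes, “C” cooperates but never punishes, and “F” free rides:

```python
def payoffs(group, second_order, contribution=10, multiplier=3.0, fine=8, cost=2):
    """Toy public-goods game with punishment. All parameters are illustrative."""
    n = len(group)
    pot = contribution * sum(s != "F" for s in group)
    share = pot * multiplier / n                  # everyone shares the grown pot
    n_punishers = sum(s == "CP" for s in group)
    out = []
    for i, s in enumerate(group):
        p = share
        if s != "F":
            p -= contribution                     # cooperators pay into the pot
        if s == "F" or (s == "C" and second_order):
            p -= fine * n_punishers               # fined by each punisher
        if s == "CP":
            targets = sum(1 for j, t in enumerate(group)
                          if j != i and (t == "F" or (t == "C" and second_order)))
            p -= cost * targets                   # punishing others is costly
        out.append(p)
    return out

group = ["CP", "CP", "CP", "C", "F"]
# Only free riders punished: the non-punisher quietly out-earns the punishers,
# so punishment itself is undermined.
print(payoffs(group, second_order=False))   # -> [12.0, 12.0, 12.0, 14.0, 0.0]
# Non-punishers punished too: cooperate-and-punish is now the best strategy.
print(payoffs(group, second_order=True))    # -> [10.0, 10.0, 10.0, -10.0, 0.0]
```

Without second-order punishment, non-punishing cooperators do better than punishers, so punishing behavior erodes and free riders eventually prosper; punishing the non-punishers closes that loophole, which is the modeling result described above.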
Thus, the possibility exists that, over thousands of generations, our social intelligence was molded by cooperative group dynamics to evolve an innate sense of fairness, and a drive to punish those who flagrantly break the rules, even if it’s at our own expense. Some researchers have gone so far as to argue that this evolved sense of fairness has led to “the evolutionary success of our species and the moral sentiments that have led people to value freedom, equality, and representative government.”
 Gintis et al., op. cit.
 Fehr and Fischbacher, op. cit.
 Gintis et al., op. cit.