August 6, 2010

Altruistic punishment

Posted in Language and Myth at 4:10 pm by Jeremy

Here’s the fifth section of the Chapter 2 draft of my book, Finding the Li: Towards a Democracy of Consciousness. This section discusses our evolved drive for “altruistic punishment” and how it may have overcome the “free rider” problem discussed in the previous section, permitting our social intelligence to evolve altruistically as well as competitively.

[PREVIOUS SECTION]

Altruistic punishment

Imagine you’re sitting alone in a room.  In the next room is someone else, whom you don’t know.  You’re never going to meet each other.  A researcher walks in holding a hundred dollars and tells you that this sum will be split between you and the stranger in the other room.  And the good news is, you’re allowed to decide exactly how you want to split it.  But there’s a catch.  You can only propose one split.  The person in the other room will be told the split and can either accept it or reject it.  If he accepts it, the money is shared accordingly.  If he rejects it, you’ll both get nothing.

Welcome to the ultimatum game.  If you’re like most people, you’ll decide to split the hundred dollars down the middle: you get $50, the other person will clearly accept his $50, and you’ll both be ahead.  Researchers view the ultimatum game as convincing evidence against the earlier view of humans as fundamentally self-interested.  If that view were right, you (“the proposer”) would be more likely to keep $90 and offer $10 to the stranger (“the responder”).  The responder would be likely to accept the $10 because, being self-interested, he would be happier with $10 than with nothing.  But that’s not what people do.  Responders in fact frequently reject offers below $30, and the most popular amount offered by proposers is $50.[1]

The "ultimatum game" shows that we have a natural sense of fairness

It seems that we humans have a powerfully evolved sense of fairness.  So powerful, in fact, that we would rather walk away with nothing than permit someone else to take extreme advantage of us.  Researchers call this “altruistic punishment.”  But even altruistic punishment is not powerful enough by itself to overcome the free rider problem in human groups.  Think back to the Ardi situation.  Suppose that sneaky free rider has skulked back to camp and is coming on strongly to one of the females whose partner is out hunting.  But another male who stayed home sees what’s going on.  What does he do?  Does he confront the free rider, possibly risking his own life?  Or does he turn away and do nothing?  Researchers call this the problem of the “non-punisher.”  In a way, someone who lets a free rider get away with things without punishing him is really a free rider too, and deserves to be punished.  When researchers model these situations, they find that cooperation can indeed be maintained in sizable groups indefinitely, but only when both free riders and “non-punishers” are punished.[2]  Such groups would tend to be more effective than groups of self-interested individuals, and their members would be more likely to pass their genes on to later generations.
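
The flavor of those models can be captured in a small, self-contained simulation.  The sketch below, with all parameters invented purely for illustration, follows three strategies in a well-mixed population: punishing cooperators, non-punishing cooperators, and free riders.  With “second-order” punishment of non-punishers switched on, cooperation persists; switched off, punishers slowly erode and free riders eventually take over:

    import math

    def run(second_order, generations=500,
            fine=3.0,   # fine a punisher imposes on each target
            cost=1.0,   # cost the punisher pays per target punished
            mu=0.02,    # mutation rate: keeps every strategy in play (drift)
            s=5.0):     # selection strength
        p, c, d = 0.6, 0.2, 0.2  # punishers, non-punishers, free riders
        for _ in range(generations):
            # Payoffs relative to the shared public-goods benefit:
            # cooperating costs 1; free riders are fined by every punisher;
            # with second-order punishment, non-punishers are fined as well.
            pi_d = -fine * p
            pi_c = -1.0 - (fine * p if second_order else 0.0)
            pi_p = -1.0 - cost * d - (cost * c if second_order else 0.0)
            # Exponential replicator update followed by uniform mutation.
            wp = p * math.exp(s * pi_p)
            wc = c * math.exp(s * pi_c)
            wd = d * math.exp(s * pi_d)
            total = wp + wc + wd
            p = (1 - mu) * wp / total + mu / 3
            c = (1 - mu) * wc / total + mu / 3
            d = (1 - mu) * wd / total + mu / 3
        return p + c  # final share of cooperators (punishing or not)

    print(f"cooperation with punishment of non-punishers:    {run(True):.2f}")
    print(f"cooperation without punishment of non-punishers: {run(False):.2f}")

The mutation term plays the role of genetic drift here: it keeps reseeding a trickle of free riders, and punishing them is costly, so non-punishers always do slightly better than punishers unless non-punishing is itself punished.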

Thus, the possibility exists that, over thousands of generations, our social intelligence was molded by cooperative group dynamics to evolve an innate sense of fairness, and a drive to punish those who flagrantly break the rules, even if it’s at our own expense.  Some researchers have gone so far as to argue that this evolved sense of fairness has led to “the evolutionary success of our species and the moral sentiments that have led people to value freedom, equality, and representative government.”[3]

[NEXT SECTION]


[1] Gintis et al., op. cit.

[2] Fehr and Fischbacher, op. cit.

[3] Gintis et al., op. cit.


Comments

  1. Yianni said,

    This is a good read.

    Of course punishment in this context really means “genetic punishment” – that is, any behavior that reduces the likelihood of the genes of the target being passed to the next generation.

    It could, for example, entail giving the children (of the free-loader) lots of lollies, so they have a vastly increased likelihood of diabetes. Doesn’t look like punishment, and it’s not directly targeted at the free-loader, but it works.

    Also, I’d like to point out a small mistake in the article.

    It is generally accepted that group selection doesn’t occur at a rate fast enough to offset individual-level selection.

    So the following line, while true, is deeply misleading:

    “These groups would tend to be more effective than groups of self-interested individuals, and their members would be more likely to pass their genes on to later generations.”

    This gives readers the wrong idea.

    It should read:

    “Any particular group will tend towards consisting of individuals who punish both (i) free-loaders and (ii) non-punishers.”

    In fact, the stable situation might be better described using a recursive definition, as follows:
    1) Free-riders must be punished.
    2) If you don’t punish a free-rider, you yourself are a free-rider (and therefore must be punished).

    In other words, it’s turtles all the way down.

    “Thus, the possibility exists that, over thousands of generations, our social intelligence was molded by cooperative group dynamics to evolve an innate sense of fairness, and a drive to punish those who flagrantly break the rules, even if it’s at our own expense.”

    Despite my criticism, this conclusion is still correct, as far as I can see.

  2. Yianni said,

    Also, a word about the ultimatum game.

    I will show that it almost never occurs in nature.

    Suppose that we evolved in conditions where the ultimatum game actually did arise, and that no one would know who played such a game or what the outcome was, except that each participant would know their own decision.

    Suppose, furthermore, that both participants had a way of determining that they were playing such a game. This is a binary classification test, so we can assign to such a test both a sensitivity and a specificity.

    If the test had high specificity (by which I mean that the test rarely, if ever, told you such a game was occurring when in fact it wasn’t), then human behavior in the ultimatum game would be radically different.

    The humans on the receiving end would become okay with taking any amount of money, because they would not be punished by other group members for taking any particular amount of money. In such cases, all money is good money.

    And the person who chose the division (e.g. $80 for me, $20 for you) would therefore be motivated to give the receiver a very minimal quantity of money, e.g. $1, or $0.001, etc.

    Since this is in fact not what is observed, we conclude:

    The ultimatum game almost never occurs in nature (or at least, it does occur, but we can’t easily know when it’s occurring; in particular, the specificity of our binary test is low under all circumstances).
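
    A quick numeric sketch of that specificity point, with invented numbers, using Bayes’ rule (“positive” means our internal test says “this is a truly anonymous, consequence-free game”):

        def prob_truly_anonymous(prior, sensitivity, specificity):
            """P(truly anonymous | test positive), by Bayes' rule."""
            true_pos = prior * sensitivity
            false_pos = (1 - prior) * (1 - specificity)
            return true_pos / (true_pos + false_pos)

        # Assume truly consequence-free interactions were ancestrally rare.
        prior = 0.01
        print(prob_truly_anonymous(prior, 0.9, specificity=0.99))  # ~0.48
        print(prob_truly_anonymous(prior, 0.9, specificity=0.50))  # ~0.02

    Even a highly specific test leaves the odds below fifty-fifty that no one is watching, so treating every offer as if it might have social consequences is the safer default.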


