May 7, 2010
From Kinship to Capitalism: The Layered Values Hypothesis
Reason and emotions – how do they relate to each other? That’s easy, right? We’ve had over two thousand years of people telling us that our reason controls our emotions. Or at least it tries to. And when the emotions get out of hand… well, that’s when we do things we later regret. Right? No, wrong.
In the past couple of decades, there’s been something of a revolution in the approach to this area of moral psychology. In a paper that’s now become a classic, called “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” psychologist Jonathan Haidt turned the reason/emotion relationship upside down. 
When “we see an action or hear a story…we have an instant feeling of approval or disapproval,” Haidt tells us. This instant feeling is what’s known as our affect-laden response, or moral intuition. Think for a moment about taking a pin and sticking it into the hand of the next person you see. Ouch! You instantly feel that that’s a bad thing to do, without having to reach for a set of values. And reaching for those values, which is known as “moral reasoning,” is now believed to be something we do to rationalize whatever our instant moral intuition told us was right or wrong. Here’s how Haidt contrasts the two:
Moral intuition refers to fast, automatic, and (usually) affect-laden processes in which an evaluative feeling of good-bad or like-dislike (about the actions or character of a person) appears in consciousness without any awareness of having gone through steps of search, weighing evidence, or inferring a conclusion. Moral reasoning, in contrast, is a controlled and “cooler” (less affective) process; it is conscious mental activity that consists of transforming information about people and their actions in order to reach a moral judgment or decision….
Moral reasoning, Haidt believes, is less like a judge and more like an attorney: arguing a case after the action’s already been taken. It’s usually “a post-hoc process in which we search for evidence to support our initial intuitive reaction.” So, for the last two and a half thousand years, moral philosophy has assumed that the rational tail wags the emotional dog; on Haidt’s account, it’s the dog – the emotions, not reason – that does the wagging.
In the same year as Haidt’s revolutionary paper – 2001 – another psychologist, Joshua Greene, reported an experiment that backed up Haidt’s theory and has since become the archetypal example supporting this new view. It’s based on two different moral dilemmas, the “trolley dilemma” and the “footbridge dilemma,” which Greene describes here:
… the trolley dilemma: A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Ought you to turn the trolley in order to save five people at the expense of one? Most people say yes.
Now consider a similar problem, the footbridge dilemma. As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge that spans the tracks, in between the oncoming trolley and the five people. In this scenario, the only way to save the five people is to push this stranger off the bridge, onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Ought you to save the five others by pushing this stranger to his death? Most people say no… 
In both situations, you would be causing the death of one person in order to save five people. So why would you feel OK about doing that in the trolley dilemma, but refuse to push the poor guy to his death in the footbridge dilemma? The answer, Greene suggests, is that the footbridge situation engages your emotions in a way that the trolley situation does not. It’s more “emotionally salient” to grab hold of someone, feel their living body, and push them off a bridge to their death, than it is to simply flick a switch. That emotional salience is what triggers your instantaneous moral intuition, whereas flicking a switch is just a numbers game. One death versus five. One is better. Flick the switch. Now your moral reasoning is running the show, because there’s nothing emotionally salient to kick your moral intuition into gear.
Interestingly, this experiment sheds light on a phenomenon that philosopher Hannah Arendt pointed out several decades ago, which she called the “banality of evil.” She coined this famous phrase in her report on the 1961 trial of the Nazi mass murderer Adolf Eichmann, sometimes referred to as the “architect of the Holocaust.” What struck viewers of the televised trial, held in Israel, was how mild-mannered and bureaucratic this monster appeared. In terms of Haidt’s and Greene’s thesis, the notion of the “banality of evil” can be partially explained by the separation of Eichmann’s moral reasoning (framed by the hateful ideology of Nazism) from his moral intuition, which remained unengaged. If Eichmann had been forced to murder Jews personally, rather than signing orders for their genocide, his moral intuition might have made it much more difficult for him to carry it out.
In the decade since, so many experiments and fMRI investigations of the brain have supported Haidt’s and Greene’s findings that their approach has become the new orthodoxy in moral psychology. So it was with more than a little interest that I read a recent Opinion article in Nature by another well-regarded psychologist, Paul Bloom, arguing against the Haidt/Greene viewpoint. “I predict,” he wrote, “that this theory of morality will be proved wrong in its wholesale rejection of reason. Emotional responses alone cannot explain one of the most interesting aspects of human nature: that morals evolve.”
Bloom’s basic argument – that morals evolve – is a powerful one. He points out that currently we have very “different beliefs about the rights of women, racial minorities and homosexuals” than people did at the end of the 19th century, and even our “intuitions” about the “morality of practices such as slavery, child labour and the abuse of animals for public entertainment” have changed substantially. How do you explain that under the “moral intuition” theory? Bloom suggests that “the role of deliberate persuasion” is what’s missing from Haidt’s theory. He points to the classic novel Uncle Tom’s Cabin, which helped turn public opinion against slavery in the United States, and Peter Singer’s 1975 book Animal Liberation, which served as a powerful catalyst for the animal rights movement. So in Bloom’s view, moral reasoning does more than offer a post-hoc rationalization. It can actually change minds. Not just a few minds, but over generations, millions of minds.
While I agree with Bloom’s point that morals evolve, I think he’s focused on the wrong dynamic in emphasizing “deliberate persuasion.” What Bloom’s missing is the crucial importance of cultural norms. Yes, individual writings can definitely have an impact on those norms, but most people pick up their norms from those around them as they grow up. Imagine Animal Liberation published in the 18th century, or Uncle Tom’s Cabin in Roman times: they would have been too distant from the contemporaneous cultural norms to have had any impact.
In fact, I think that it’s possible to incorporate the dynamic of evolving morality into the Haidt/Greene approach by conceptualizing what I would call a “layered values” hypothesis. In this view, as an individual grows up (mostly as a child but to some degree into adolescence and beyond) significant cultural norms become associated so strongly with emotional valence (good/bad reactions) that they become embedded into the individual’s moral intuition. I call it “layered values” because some of an individual’s core values are intrinsic to being human, whereas other culturally derived values are then layered over this core.
To be fair to Haidt, he already included this idea in his original 2001 paper. “Moral intuitions,” he wrote, are “both innate and enculturated.” He describes how “moral intuitions are developed and shaped as children behave, imitate, and otherwise take part in the practices and custom complexes of their culture,” and he views an individual’s “moral development” as “primarily a matter of the maturation and cultural shaping of endogenous intuitions.”
And in a recent paper, neuroscientists Chadd Funk and Michael Gazzaniga have focused on the way that external, cultural values embed themselves into our moral intuition, emphasizing the role of what they see as an “interpreter” in the left hemisphere of the brain, which acts as a “critical bridge” between our subjective experience and the “ideological infrastructure of society.” Here’s how they describe the process:
Once captured as cultural norms or laws, these ideas feedback through development and learning mechanisms to fine-tune the workings of the underlying neural circuitry… Thus, hard-wired patterns of neural connectivity that establish innate functional modules, like those that foster basic social evaluation in infants, are dynamically sculpted by cultural experience.
Whether we view it as sculpting or layering, it’s the same idea: a core set of human values being shaped by cultural factors. What’s missing, however, in these descriptions is a further analysis of the different types of cultural factors that shape each individual’s moral intuition. What makes one person shudder at gay marriage, while another person seethes with anger at Islamic women wearing a veil? Or to go back to Bloom’s example, why is everyone against slavery now when it was considered a normal part of society in so many cultures in the past? While every individual and every culture has its own idiosyncrasies, I think it’s possible to isolate different sets of what we might think of as “value constellations” that evolved together at different stages of human social evolution, and which explain a great deal of these differences in cultural norms. These value constellations emerged from the major societal and technological characteristics of each stage of social evolution, and once they became embedded in each individual’s moral intuition, they were passed on from generation to generation by the process that Funk and Gazzaniga describe above.
Elsewhere in this blog, I’ve broadly categorized what I see as four distinct stages of societal development with clearly differentiated value constellations: hunter-gatherers, agriculture, monotheism and the scientific age. The table below summarizes these stages in terms of predominant timeframes, major technological innovations and characteristic values.
You can read further descriptions of each stage by clicking here, but right now I’m primarily interested in the value constellations that each stage produces. I think that, by understanding these different layers of value, it’s possible to gain more insight into the conflicts that we frequently find ourselves engaged in. I’m referring to conflicts that arise within ourselves, between us and other people, between one social group and another, and even between different countries. Here is a brief summary of some of the values characteristic of each stage:
Hunter-gatherer: Kinship bonds, fairness, reciprocal generosity, altruism (within the group), aggression (to other groups).
Agriculture: Ownership, social hierarchies, gender inequality, ancestral worship, regional identification.
Monotheism: Immortal soul separate from the body; worship of God as universal law-giver; identification with religious co-affiliates.
Scientific age: Multiple universally applicable values derived from abstract conceptualizations: liberty, reason, democracy, progress, fascism, communism, capitalism.
I’m proposing that, just as an individual is layered with values that embed themselves in her psyche as she grows up, so our modern culture is the result of layers of cultural norms that have themselves evolved through different stages of societal development. As each different stage of development became predominant, the new sets of values were layered over the old, sometimes seamlessly, sometimes causing great conflict.
For an example of seamless layering, it’s easy to see how the agriculture-derived notion of ownership fits well into the scientifically-derived theory of capitalism. On the other hand, clear conflicts arise between agriculturally-derived ancestral worship and monotheistic belief, or between agriculturally-derived values of gender inequality and modern scientific-age values of liberty and democracy. Interestingly, following on from the latter example, you can see a powerful linkage between modern liberal democratic values and some of the hunter-gatherer values of fairness and within-group altruism, which partly explains why some modern thinkers occasionally romanticize hunter-gatherer cultures of the past.
Over the next few blog posts, I’ll explore each of these layers of value constellations, and look at how they accumulated, layer by layer, to form our modern set of values and conflicts. And, as we get to the modern day and the rapidly-changing world that we find ourselves in, we can look at the implications and possibilities for new layers of values in the 21st century and beyond.
In fact, in a 2007 Science paper, “The New Synthesis in Moral Psychology,” Haidt credits his 2001 insights to the “affective revolution” of the 1980s, the increase in research on emotion that followed the “cognitive revolution” of the 1960s and 1970s.
Greene, J., and Haidt, J. (2002). “How (and where) does moral judgment work?” Trends in Cognitive Sciences, 6(12), 517-523.
Haidt, J. (2007). “The New Synthesis in Moral Psychology.” Science, 316, 998-1002.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., and Cohen, J. D. (2001). “An fMRI Investigation of Emotional Engagement in Moral Judgment.” Science, 293, 2105-2108.
Funk, C. M., and Gazzaniga, M. S. (2009). “The functional brain architecture of human morality.” Current Opinion in Neurobiology, 19(6), 678-681.
Bloom, P. (2010). “How do morals change?” Nature, 464, 490.
Haidt, J. (2001). “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment.” Psychological Review, 108(4), 814-834.
Funk, C. M., and Gazzaniga, M. S. (2009), op. cit.