There are a couple of new papers out of Tomasello’s Leipzig lab (Kaiser et al., 2012; Riedl et al., 2012) about chimpanzees’ (and bonobos’) failure to punish defectors in different experimental settings. Kaiser et al. discuss a new experiment confirming the group’s earlier finding (Jensen et al., 2007) that apes act as “rational maximisers” in an Ultimatum Game, i.e. that they accept unfair offers where humans frequently reject them despite the cost to themselves of doing so. Riedl et al. discuss the absence of third-party punishment (while showing that chimpanzees are able to discriminate unfair acts towards themselves and punish them selectively). In both cases, the human behaviours that fail to be replicated in chimpanzees contribute to creating an environment where cooperation is more rewarding than defection.
For example, in the Ultimatum Game, one player, the proposer, may carve up a reward (typically cash) in any way she sees fit. The recipient can either accept, in which case both players get their shares as determined by the proposer, or reject, in which case neither gets anything. When the game is played anonymously and without iterations, the “rational” strategy, in economic terms, for the recipient is to accept any non-zero offer: getting $0.50 out of a $10 pot is better than getting nothing. Knowing this, and expecting the recipient to act rationally, proposers should consistently offer the lowest possible non-zero amount. This is not what happens in humans: when tested in student populations, mean offers typically lie between 40% and 50%, and proposals below 30% are frequently rejected. The rejections are the most interesting point. Given moderately high rejection rates, the proposers’ generous offers turn out to be a rational, reward-maximising strategy: the expected return from a 70:30 partition, with a 1/3 chance that it will be rejected, is lower than the expected return from a 50:50 partition that is certain to be accepted. (Reality is, as usual, a bit more complicated: actual mean offers, as well as rejection rates for low offers, differ considerably across cultures, and the mean offers within each population do not track the respective “income-maximising offer” as calculated from rejection rates particularly well; see Henrich et al. (2005).) The recipients’ willingness to forgo a potential reward when it is perceived as unfair cannot be similarly reduced to self-interest and remains paradoxical. This is why the behaviour has been termed “altruistic punishment”, and it appears to be specific to humans.
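The expected-return argument can be made concrete with a few lines of arithmetic. The figures here (a $10 pot, a 1/3 rejection rate for a 70:30 split) follow the example in the text; as Henrich et al. (2005) show, real rejection rates vary considerably across populations, so treat this as an illustration, not a prediction.

```python
def expected_proposer_return(pot, proposer_share, p_reject):
    """Proposer's expected payoff: her share of the pot if the offer
    is accepted, nothing if it is rejected."""
    return pot * proposer_share * (1 - p_reject)

pot = 10.0
unfair = expected_proposer_return(pot, 0.70, 1 / 3)  # 70:30 split, rejected 1/3 of the time
fair = expected_proposer_return(pot, 0.50, 0.0)      # 50:50 split, always accepted

print(f"unfair offer: {unfair:.2f}, fair offer: {fair:.2f}")
# unfair ≈ 4.67 < fair = 5.00: the "generous" offer maximises expected reward
```

So from the proposer’s side there is no paradox at all; the puzzle lies entirely with the recipients who generate those rejection rates.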
Altruistic punishment creates an environment where cooperation wins, and thus stabilises the population against the intrusion of defectors. We’ve found a way out of the paradox of the evolution of strong pro-sociality (the kind that distinguishes humans from other apes, as Tomasello likes to point out) without invoking group selection or other ill-defined concepts – the social environment of man in the making was such that cooperating was simply more beneficial to the individual, and could thus be selected by standard natural selection.
Or have we? I don’t think so. The paradox of altruistic behaviour – that it benefits the group, but not directly the individual, and should thus be selected against within the group and be unable to reach fixation – extends to altruistic punishment, unless I’m missing something big. If defecting is a winning strategy in cooperative populations, why wouldn’t you expect defecting from punishing defectors to crop up as well? As with first-order altruistic behaviour, the benefit of creating an environment where others will willy-nilly cooperate accrues to the group, while the individual pays a non-trivial cost in forgoing an “unfair” but non-zero reward. It rather seems that we’ve just shifted the locus of the paradox, from “how does cooperation stabilise despite its costs” to “how does a behaviour that makes cooperation pay off stabilise despite its costs”.
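The second-order free-rider problem sketched above amounts to a simple payoff comparison. The numbers below are invented purely for illustration (they do not come from any of the cited papers); the point is only the inequality between the two payoffs.

```python
# Toy payoff comparison for the second-order free-rider problem.
# Everyone in the group enjoys the cooperative environment that punishers
# maintain, but only punishers pay the cost of rejecting unfair offers.

cooperation_benefit = 10.0  # shared payoff from living among cooperators (assumed)
punishment_cost = 2.0       # forgone reward when rejecting an unfair offer (assumed)

payoff_punisher = cooperation_benefit - punishment_cost
payoff_nonpunisher = cooperation_benefit  # free-rides on others' punishment

print(payoff_punisher, payoff_nonpunisher)
# non-punishers outscore punishers within the group, so individual-level
# selection should erode punishment even while the group benefits from it
```

Whatever concrete values one plugs in, as long as punishing costs anything at all, the non-punisher comes out ahead within the group – which is exactly the shifted paradox.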
The Leipzig group likes to believe that humans have a biological predisposition towards pro-social behaviour, including altruistic punishment, that qualitatively distinguishes us from our closest relatives. But as we’ve seen, this extra level doesn’t resolve the paradox as long as we want to maintain that the central unit of selection is the individual. What if it is instead the product of cultural evolution? In cultural evolution, with horizontal transmission enabling phenotypic uniformity within groups beyond what genetic evolution allows, the long-dismissed concept of group selection may actually play a tangible role, and behaviour patterns may be selected for because they’re beneficial for the group without having any direct benefit for the individual. Of course, we have been cultural animals for long enough that, if cultural evolution plus group selection can reliably create groups within which cooperation simply pays off, we may even have evolved a biological predisposition for “hyper-sociality”. But it would be culture that enabled this development, not the other way round.
Henrich, J., et al., 2005: “‘Economic man’ in cross-cultural perspective: behavioral experiments in 15 small-scale societies”, Behavioral and Brain Sciences 28, 795–815. DOI: 10.1017/S0140525X05000142. Free copy: http://authors.library.caltech.edu/2278/1/HENbbs05.pdf
Jensen, Keith, Josep Call and Michael Tomasello, 2007: “Chimpanzees are rational maximizers in an ultimatum game”, Science 318, 107–109. DOI: 10.1126/science.1145850. Free copy: http://wkprc.eva.mpg.de/pdf/2007/Jensen_Call_Tomasello_2007_chimps_ultimatum_game.pdf
Kaiser, Ingrid, Keith Jensen, Josep Call and Michael Tomasello, 2012: “Theft in an ultimatum game: chimpanzees and bonobos are insensitive to unfairness”, Biology Letters, August 15, 2012. DOI: 10.1098/rsbl.2012.0519
Riedl, Katrin, Keith Jensen, Josep Call and Michael Tomasello, 2012: “No third-party punishment in chimpanzees”, PNAS, August 27, 2012. DOI: 10.1073/pnas.1203179109