Dutton and Woodley praise this book often:
I haven’t read it yet, but it sounds interesting. Here’s an excellent review from H Mccartor explaining why:
This is an exciting book because of the explanatory power of the ideas expressed. Group selection as an evolutionary force was largely dismissed on the basis of George C. Williams’s work in the 1960s. Now Wilson not only revives it but suggests how our very human nature has depended upon it.

The problem has been that evolution at the level of the individual rewards self-concern and weeds out altruism. This makes cooperation difficult, because individuals will always do better by avoiding the effort of cooperation while enjoying the fruits of the efforts of others (free-loading). However, when humans evolved culture there was the possibility that non-cooperators could be made to comply by punishment, banishment, etc. This fundamentally changed the name of the game. Groups with norms fostering cooperation and punishing non-cooperators could outcompete less functional ones and, as a consequence, grow and multiply in comparison.

What Wilson suggests is that religion has been a potent method of establishing the norms, motivation and punishments required. Once groups could cooperate effectively, traits that facilitated this would be further selected by genetic evolution, allowing further cultural progress and the development of our “human nature”. He further suggests that the “irrational” beliefs fostered by religion are selected by group selection because they foster more functional groups in the evolutionary-survival sense. He is also clear about the downside of religion: while it fosters within-group function, it also fosters antagonism toward outside groups and individuals.
I don’t agree with the basic premise that religion owes its existence to multi-level selection under the Darwinist model (“You’re just saying that because you’re group-selected for irrational, altruistic religiosity!”), but it is a scientific observation that religious behavior has a strong genetic, heritable component. I’d like to offer a potential explanation for this observation (it’s probably not a new idea, because it’s quite obvious).
If two players play prisoners’ dilemma more than once in succession and they remember previous actions of their opponent and change their strategy accordingly, the game is called iterated prisoners’ dilemma.
The iterated prisoners’ dilemma game is fundamental to some theories of human cooperation and trust. On the assumption that the game can model transactions between two people that require trust, cooperative behavior in populations may be modeled by a multi-player, iterated version of the game. It has, consequently, fascinated many scholars over the years: in 1975, Grofman and Pool estimated the count of scholarly articles devoted to it at over 2,000. The iterated prisoners’ dilemma has also been referred to as the “Peace-War game”.
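As a minimal sketch of how the iterated game works (the strategy and payoff names here are my own illustration, using the standard payoff numbers, not something from the sources above), each strategy is a function of the opponent’s previous moves:

```python
# Row/column payoffs for one round: (payoff to A, payoff to B).
# Standard values: reward 3, temptation 5, sucker 0, punishment 1.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds):
    """Play two strategies against each other; return total scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print(play(tit_for_tat, always_defect, 10))  # -> (9, 14)
```

Because the strategies see the opponent’s history, retaliation becomes possible, which is exactly what distinguishes the iterated game from the one-shot version.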
If the game is played exactly N times and both players know this, then it is always game-theoretically optimal to defect in all rounds; the only possible Nash equilibrium is to always defect. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to retaliate later. Therefore both will defect on the last turn. Thus the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit.
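The base case of that induction can be checked directly (a sketch with the standard payoff ordering T > R > P > S; the numbers are mine): on the known final round, defection earns strictly more whatever the opponent does, and no later round exists in which the opponent could retaliate.

```python
# Payoffs to the row player: T temptation, R reward, P punishment, S sucker.
T, R, P, S = 5, 3, 1, 0
row_payoff = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}

# Whatever the opponent plays on the last turn, defecting pays strictly more.
for opponent_move in ('C', 'D'):
    assert row_payoff[('D', opponent_move)] > row_payoff[('C', opponent_move)]
print("defection strictly dominates on the final round")
```

Once the final round is settled, the same comparison applies to the second-to-last round (since the last is now fixed at mutual defection), and the induction unwinds all the way back to round one.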
If your conscious existence is finite, the game has a known upper limit on its length, and overall defection remains your best strategy. But the idea that the soul is immortal means the game never ends. If God sees your actions and judges them, that potential punishment can serve the same function as reprisal by the other prisoner. This may be generalized to Bruce Schneier’s societal dilemmas. And it matches observed human behavior:
Unlike the standard prisoners’ dilemma, in the iterated prisoners’ dilemma the defection strategy is counter-intuitive and fails badly to predict the behavior of human players. Within standard economic theory, though, this is the only correct answer. The superrational strategy in the iterated prisoners’ dilemma with fixed N is to cooperate against a superrational opponent, and in the limit of large N, experimental results on strategies agree with the superrational version, not the game-theoretic rational one.
For cooperation to emerge between game-theoretically rational players, the total number of rounds N must be random, or at least unknown to the players. In this case ‘always defect’ may no longer be a strictly dominant strategy, only a Nash equilibrium. Among the results shown by Robert Aumann in a 1959 paper: rational players repeatedly interacting in indefinitely long games can sustain the cooperative outcome.
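This sustained cooperation can be illustrated with standard folk-theorem arithmetic (the worked numbers are mine, assuming a per-round continuation probability delta and a grim-trigger opponent who cooperates until the first defection and then defects forever):

```python
# With probability delta the game continues after each round, so expected
# total payoffs are geometric sums. Standard payoffs: T=5, R=3, P=1, S=0.
T, R, P, S = 5, 3, 1, 0

def value_cooperate(delta):
    """Cooperate forever against grim trigger: earn R every round."""
    return R / (1 - delta)

def value_deviate(delta):
    """Defect once (earn T), then face mutual defection (P) forever after."""
    return T + delta * P / (1 - delta)

# Cooperation is sustainable exactly when delta >= (T - R) / (T - P),
# which is 0.5 with these payoffs.
threshold = (T - R) / (T - P)
assert value_cooperate(0.6) > value_deviate(0.6)  # patient enough: cooperate
assert value_cooperate(0.4) < value_deviate(0.4)  # too impatient: defect
```

Under the immortal-soul reading above, delta is effectively 1, so the cooperation condition is always satisfied: the expected future loss from punishment swamps any one-round temptation.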