Cooperation is behavior designed to benefit the group rather than the individual. Because the individual is part of the group, cooperation ultimately benefits the individual as well, which is its purpose.

Why this is important

This is big. Unless we can quickly reach a critical mass of global cooperation there's no way to solve the environmental sustainability problem. Understanding cooperation thus lies at the heart of solving the problem.

Application example

Because cooperation is so critical to understand there's been a mountain of research on the subject. The central question is: How can cooperation in a given social system be increased to the critical amount needed to solve certain problems?

Every once in a while someone takes a huge innovative whack at a problem everyone else has been tinkering with and makes a leap of discovery.


The work of Robert Axelrod, a political scientist at the University of Michigan, is one such leap. His breakthrough research was published in 1984 in the seminal work of cooperation theory, The Evolution of Cooperation.

Theorizing that cooperation could be studied more rigorously by pitting rule-driven computer agents against each other, Axelrod staged a tournament. Anyone could enter. An entry consisted of the set of rules an agent would follow across repeated Prisoner's Dilemma encounters. The agents were identical except for their rules, and each agent was paired with every other agent in turn.

The results were astonishing. It was the simplest rule set, called Tit-For-Tat, that won. Its rules were:

1. When you first encounter an opponent you cooperate. Tit-For-Tat always cooperates on the first move. It's a nice guy.

2. Thereafter, if on the previous move the opponent cooperated, then so do you. If he defected, then so do you. On all but the first move, Tit-For-Tat does exactly what his opponent did on the previous move.
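These two rules are simple enough to sketch in a few lines of code. The following is a minimal illustration, not Axelrod's tournament software; the payoff values (3 each for mutual cooperation, 1 each for mutual defection, 5 for the lone defector and 0 for the lone cooperator) are the ones used in his tournament.

```python
# Tit-For-Tat in an iterated Prisoner's Dilemma: a minimal sketch.
# "C" = cooperate, "D" = defect.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    """A maximally greedy strategy, for comparison."""
    return "D"

def play(strategy_a, strategy_b, rounds):
    """Pair two strategies for a fixed number of rounds; return total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)   # each side sees the opponent's history
        move_b = strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Paired with itself, Tit-For-Tat cooperates every round and both sides score well; paired with a constant defector, it loses only the first round and then matches defection with defection.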

Of the 14 entries, most were greedy, and all except RANDOM were more complex than Tit-For-Tat. Neither greed nor complexity worked. It was the simplest and the "nicest" strategy that won.

Why was this?

It might have been because the field of strategies was still too immature, so Axelrod held a second tournament. This time 62 entries were submitted, including Tit-For-Tat with no changes to its original rules. The results were stunning. Tit-For-Tat won again.

Why was this?

Axelrod theorizes that there are fundamental rules forming the foundation of all social groups. The basic rules are simple: if you will encounter another social agent again, and you can gain more from cooperation than from competition (defection) when you meet this time, then it pays to cooperate. This forms the basic Theory of Cooperation. For a peek at the rest of the theory, see note (1).

Much more can be said about the conditions necessary for cooperation to emerge, based on thousands of games in the two tournaments, theoretical proofs, and corroboration from many real-world examples. For instance, the individuals involved do not have to be rational: The evolutionary process allows successful strategies to thrive, even if the players do not know why or how. Nor do they have to exchange messages or commitments: They do not need words, because their deeds speak for them. Likewise, there is no need to assume trust between the players: The use of reciprocity can be enough to make defection unproductive. Altruism is not needed: Successful strategies can elicit cooperation even from an egoist. Finally, no central authority is needed: Cooperation based on reciprocity can be self-policing.

These are exciting discoveries, because we'd like the solution to the sustainability problem to be "self-policing", with "no need to assume trust between the players", and with "no central authority", since the United Nations is not one (it uses consensus decision making, with a small group holding veto power). Nor can the solution depend on "commitments", since international treaties are so difficult to achieve.

Here's what applies more than anything else to the sustainability problem:

For cooperation to emerge, the interaction must extend over an indefinite (or at least an unknown) number of moves, based on the following logic: Two egoists playing the game once will both be tempted to choose defection since that action does better no matter what action the other player takes. If the game is played a known, finite number of times, the players likewise have no incentive to cooperate on the last move, nor on the next-to-last move since both can anticipate a defection by the other player. Similar reasoning implies that the game will unravel all the way back to mutual defection on the first move. It need not unravel, however, if the players interact an indefinite number of times. And in most settings, the players cannot be sure when the last interaction between them will take place. An indefinite number of interactions, therefore, is a condition under which cooperation can emerge.
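The logic of that passage can be checked with a little arithmetic. In the sketch below, w is the probability that another round occurs (the "shadow of the future"), and the payoffs are illustrative tournament values: 3 for mutual cooperation, 1 for mutual defection, and 5 for the temptation to defect. Against a reciprocating partner such as Tit-For-Tat, cooperating forever beats defecting forever only when w is large enough.

```python
# Expected total payoff against a reciprocating (Tit-For-Tat) partner,
# when each further round occurs with probability w.
# R = mutual cooperation, P = mutual defection, T = temptation to defect.

R, P, T = 3, 1, 5

def cooperate_forever(w):
    # R every round: R * (1 + w + w^2 + ...) = R / (1 - w)
    return R / (1 - w)

def defect_forever(w):
    # T once, then the partner retaliates: mutual defection thereafter.
    return T + w * P / (1 - w)

def cooperation_pays(w):
    return cooperate_forever(w) > defect_forever(w)
```

With these payoffs the break-even point is w = 0.5: below it defection pays, above it cooperation does. A large enough shadow of the future makes cooperation the better payoff, exactly as the quoted passage claims.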

The hallmark of the sustainability problem is the delay, in time and space, between damage to the environment and its consequences. If the negative effects of pollution and natural resource depletion were immediate, people would stop such misbehavior at once. But they don't, because the better payoff is to exploit the environment now and suffer the consequences later. This has made it extraordinarily difficult to reach the widespread agreement necessary to solve the problem.

Axelrod's work on cooperation theory opens a chink in this seemingly insolvable problem. His research, along with that of many others, shows quite clearly that if the interaction "extends over an indefinite number of moves", then the better payoff is cooperation. So why isn't that happening in the real world?

Research shows it's because there are more conditions for cooperation than we've stated. These are too complex to explore here. The nub of the matter is that even though social agents like people, governments, and corporations live on the same planet, their interaction does not extend over an indefinite number of moves. In effect it extends over only a small number of moves, because the further in the future a move occurs, the less it matters.

This is the phenomenon of short term versus long term payoffs. It's why short term profit matters far more than long term profit. The phenomenon is more than a curiosity. It's a fundamental law of behavior, one that could be called the Law of Short Term Payoff Preference. The law states that the agent choosing the shorter term payoff will win the survival-of-the-fittest game over an opponent who chooses a greater payoff over a longer term. The numerical value typically used in such calculations is about a 10% discount rate. That is, the future is discounted 10% every year, so a payoff is worth less the further forward in time it lies. For example, one dollar today is worth about 6 cents in 30 years at a 10% discount rate. In 50 years it's worth about 1 cent.
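The arithmetic behind those figures is standard compound discounting, sketched here:

```python
def present_value(amount, rate, years):
    """Value today of a payoff received `years` from now,
    discounted at `rate` per year (compound discounting)."""
    return amount / (1 + rate) ** years

# One dollar at a 10% annual discount rate:
print(round(present_value(1.0, 0.10, 30), 2))  # 0.06 -> about 6 cents
print(round(present_value(1.0, 0.10, 50), 2))  # 0.01 -> about 1 cent
```

At a 10% discount rate, any payoff more than a few decades away is worth almost nothing today, which is exactly why long term environmental damage carries so little weight in present decisions.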

It follows that:

Since that law cannot be broken,
the sustainability problem is insolvable.

So conventional wisdom goes. But conventional attacks on the sustainability problem have fallen into the same ruts and narrow-mindedness that cooperation researchers before Axelrod did. They could not see that there were other possibilities.

The Law of Short Term Payoff Preference can be broken. Not for all players, but for the one that matters most. When you put the right pieces of the puzzle together, it's really quite simple:

1. In the sustainability problem, the eight thousand pound gorilla is the New Dominant Life Form, also known as the modern large for-profit corporation.

2. The corporate life form is an artificial life form.

3. All artificial life forms were created by Homo sapiens.

4. All artificial life forms follow their goals.

5. These goals were defined and created by their master, Homo sapiens.

6. Therefore these goals can be changed.

7. The goals can be changed from short term profit maximization to long term optimization of quality of life for Homo sapiens.

8. After this the Law of Short Term Payoff Preference no longer applies to the dominant life form in the human system.

9. It's been replaced by the Law of Long Term Preference, which solves the problem because the goals of a social system's dominant agents determine the dominant behavior of the system.

Changing that goal will not be easy, due to monstrously large change resistance. The eight thousand pound gorilla will put up the fight of its life.

But if we can find the various root causes involved, we can win that fight.


(1) Quote from The Evolution of Cooperation, as adapted here.

The Opposite of Cooperation

The opposite of cooperation is competition. A small amount of competition between social agents makes for a healthy social system. It keeps it from degrading and becoming inefficient.

But too much competition destroys a social system every time, whether the system is a couple, a family, an organization, a team, a community, a nation, or a planet.


Why do people perform altruistic deeds, like giving money to charity or helping a stranger, when doing so does not help themselves?

Altruism is behavior that promotes the survival chances of others at a cost to one's own. There seems to be no logical reason for altruistic behavior. It appears to be suicidal. So why is there so much altruism, not just in people but in bee and ant colonies?

This was one of the great puzzles of evolutionary theory, because it seemed to fly in the face of the survival-of-the-fittest rule. If a member of a species behaves in a way that reduces its chances of survival, then its likelihood of reproduction falls and its genes will die out. But that's not what was happening in so many cases. The phenomenon of altruism seemed to discredit the theory of evolution.

Charles Darwin sensed that the reason for altruism was that it benefited a group sharing the same genes. In this passage from The Origin of Species, he wrote:

“This difficulty, though appearing insuperable, is lessened, or, as I believe, disappears, when it is remembered that selection may be applied to the family, as well as to the individual, and may thus gain the desired end. Breeders of cattle wish the flesh and fat to be well marbled together. An animal thus characterized has been slaughtered, but the breeder has gone with confidence to the same stock and has succeeded.”

But what was the exact reason altruism was beneficial? Why did it appear in one place and not another?

The answer did not appear until 1964, when William Hamilton published what came to be known as Hamilton's Rule. It states that if genetic Relatedness times the Benefits to the recipient is greater than the Cost of the behavior to the altruist, then the behavior will be favored, because it maximizes the chances of one's own genes entering the next generation. This is expressed as:

Relatedness x Benefits > Cost

which is the formal form of Hamilton's Rule.
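The rule is easy to express directly. Here is a small sketch; the relatedness values are the standard genetic figures (0.5 for a full sibling, 0.125 for a first cousin), while the benefit and cost numbers are made up for the example.

```python
def altruism_favored(relatedness, benefit, cost):
    """Hamilton's Rule: altruism is favored when r * B > C."""
    return relatedness * benefit > cost

# Sacrificing 1 unit of fitness to give a full sibling (r = 0.5)
# 3 units of benefit is favored: 0.5 * 3 = 1.5 > 1.
print(altruism_favored(0.5, 3, 1))    # True
# The same sacrifice for a first cousin (r = 0.125) is not:
# 0.125 * 3 = 0.375 < 1.
print(altruism_favored(0.125, 3, 1))  # False
```

The rule predicts exactly where altruism should appear: among close kin, where shared genes make the sacrifice pay off genetically, and not among distant relations.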

At first glance it appears possible to apply Hamilton's Rule and related concepts to the sustainability problem, since solving it requires huge amounts of altruistic behavior. However, this line of research is a trap. It commits the Fundamental Attribution Error because it assumes that individual behavior is the source of the problem. It's not. The source is much deeper, at the root cause level.