A student of SDs is immediately presented with the prospect of having to study a variety of scholarly fields to get a complete picture, as suggested in the Introduction. References to the subject can be found in Philosophy, Game Theory, Social Choice Theory, Political Theory, Economic Theory and Behavioral Theory. While this diffusion of research over so many fields may make things difficult for the student of the subject, it also shows the universality of the phenomenon and provides many useful examples.
The first major work related to this phenomenon seems to be Thomas Hobbes' Leviathan, written in 1651. Modern interest seems to center around two concepts, "The Tragedy of the Commons" and "The Prisoner's Dilemma".
In 1968, Garrett Hardin published an essay called "The Tragedy of the Commons" in which the conflict between group interest and self-interest was clearly described. This classic essay shows the hopelessness of population control by using the example of a common pasture shared by the local community, in which access was free and without restrictions.
Each individual realizes that his best interests are served by putting as many of his cattle as possible on the pasture, even though the pasture has reached its carrying capacity and even though it is obvious that if everyone does this, the Commons will totally collapse. This characteristic is shared with the Voter's Paradox phenomenon.
Hardin's insight into this phenomenon has not been surpassed to date. Some of his more interesting observations are:
He states that the problem is a member of a "class of human problems which can be classified as having 'no technical solution'".
"Freedom in a commons brings ruin to us all".
"Conscience is self-eliminating."
Melvin Dresher and Merrill Flood are credited with the first formulation of the "Prisoner's Dilemma" problem. This social model, or game, as it is generally referred to in the literature, has a peculiar payoff matrix. In particular, the payoff is structured such that individuals "playing" the game would fare best, in total, if both cooperate, but each individual can always get a greater reward by defecting. Since it can be assumed that both will use the same logic, the result is that both defect -- an outcome inferior to mutual cooperation.
A typical payoff structure might go like this:
If both parties cooperate, the reward is 3 units each.
If one party cooperates and the other defects, the cooperator receives 0 units and the defector 5.
If both parties defect, the reward is 1 unit each.
An individual playing the game is faced with the realization that his or her best strategy is to defect regardless of the assumed decision of the other person! Put yourself in one prisoner's shoes: What should you do if you assume the other person is going to defect? If you defect you will get a reward of one unit, but if you cooperated you would get only zero units. But what if the other person cooperates? In that case you should still defect, since that pays 5 units versus 3 if you cooperated too. You are better off defecting whatever the other player does -- which makes for an easy decision.
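The dominance argument above can be checked mechanically. Below is a minimal sketch in Python using the payoff units from the text; the dictionary name and move labels ('C' for cooperate, 'D' for defect) are my own illustrative choices, not from the original.

```python
# Payoffs for the Prisoner's Dilemma as described above.
# Key: (my move, other player's move) -> my reward in units.
PAYOFF = {
    ('C', 'C'): 3,  # both cooperate: 3 units each
    ('C', 'D'): 0,  # I cooperate, the other defects: I get nothing
    ('D', 'C'): 5,  # I defect, the other cooperates: I get 5
    ('D', 'D'): 1,  # both defect: 1 unit each
}

# Defection strictly dominates cooperation: whatever the other
# player does, defecting pays me more than cooperating.
for other in ('C', 'D'):
    assert PAYOFF[('D', other)] > PAYOFF[('C', other)]

# Yet the combined reward for mutual cooperation (3 + 3) exceeds
# the combined reward for mutual defection (1 + 1) -- the dilemma.
assert 2 * PAYOFF[('C', 'C')] > 2 * PAYOFF[('D', 'D')]
```

Both assertions pass: each player's individually rational choice (defect) leads to the collectively inferior outcome, which is exactly the conflict between self-interest and group interest described above.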
An excellent description of this phenomenon is in the book The Evolution of Cooperation, written by Robert Axelrod.