VIII. Is Altruism Self-Defeating?
In the foregoing chapters I have suggested that the theory of altruism provides the best justification for the voluntary provision of public goods. Now I want to ask: is altruism always advantageous? Is it always more efficient than egoism? Would we all be better off if we were all altruists? Or is altruism, like egoism, sometimes self-defeating?
There are more and less interesting ways for a theory to be self-defeating. It is not so interesting that a theory is self-defeating due to a person's ignorance of the facts. Suppose Mary's aim is to be healthy. She eats vegetables because she believes vegetables are good for her health. But in fact, because vegetables are laced with pesticides, eating them harms Mary's health. Mary's action is self-defeating, but not in an interesting way.
A theory would be self-defeating in only a slightly more interesting way if a person had a hard time following the theory. Mary knows that she should get at least seven hours of sleep, but when she tries to get seven hours, she worries so much that she never does. She would get more sleep if she tried less hard.
In sum, the pursuit of a goal is not self-defeating in an interesting sense if the failure to achieve the goal results from lack of information or lack of psychological ability. Let us stipulate that interesting instances of self-defeating goals are ones in which an individual can reasonably expect not to achieve a certain goal by aiming at it, although she is able to do what the theory prescribes and does not have any relevant false beliefs.
Does this leave any way for a goal to be interestingly self-defeating? There are at least two possibilities. First, a goal might be conceptually self-defeating: for example, to have as a goal at a certain time not having any goals at that time. Second, while an individual acting alone might not act self-defeatingly in the interesting sense, she, together with one or more others, might act in ways that defeat the purpose of their actions. Everyone in a group adopting a certain goal might be, to use Parfit's phrase, collectively self-defeating.
A paradigm example is the rational pursuit of self-interest in a prisoner's dilemma situation. Each player is able to do what self-interest commands and yet each reasonably expects not to achieve the best possible outcome as a result. In these situations, egoism is collectively self-defeating.
Is altruism ever collectively self-defeating in the interesting sense? I will examine six examples that seem to suggest it is. I will argue that, except in one instance, they do not show how altruism can be self-defeating in the interesting sense. In some of the examples, partial altruists experience the same frustration as egoists because they are not altruistic enough, or because they are excessively altruistic ("excessive altruism" is concern for others greater than what perfect impartiality would prescribe). I do not consider such results embarrassing for altruists. In another instance, the cause of the trouble is lack of information. However, in one situation everyone would be better off if everyone were less altruistic, and not because of excessive altruism, psychological inability, or lack of information. Even in this one rare set of circumstances, however, everyone would do best if everyone were perfectly impartial in their altruism (or close enough). I will conclude that altruism is not significantly self-defeating.
The six examples are: the Prisoner's Dilemma, Voting, Spouse Selection, the Bequest, Repeated Sharing, and the Market. Only the Bequest example presents a slight embarrassment, which a retreat to utilitarianism or near utilitarianism effectively removes.
The Prisoner's Dilemma.
It is sometimes said that prisoner's dilemmas apply to altruists as well as to egoists. Imagine, for example, that each of the two players has a sick mother. Each needs extra money to help pay for an expensive operation. The payoffs are not prison terms, but money. Each player is motivated by her altruistic concern for her own mother, and this motivation causes each of them to defect. When both defect each has less money for her mother than if both had cooperated.
This example shows that we need to refine what we mean by altruism. If altruism is just taking into account, to some extent, the welfare of someone other than oneself, then of course it can sometimes be self-defeating. If the mothers were playing against each other, and they were rational and self-interested, then their self-interest would be collectively self-defeating. The example simply substitutes the mothers' daughters for the mothers. Each daughter simply acts, as it were, as her mother's representative in the game. But what if the mothers themselves were altruistic and, let's say, knew about each other's situation? If each mother cared as much about the other as about herself, she would instruct her daughter to cooperate.
Let's define altruism, not as regard for the welfare of someone other than oneself, but as regard for the welfare of the person or persons with whom one is interacting (or for their welfare and the welfare of the people they would benefit). If this sort of altruism is strong enough in a prisoner's dilemma situation, then the players will not defect and their altruism will not be self-defeating. Our troubles are not completely over, however.
If the players' altruism is too strong, if each cares more about the other person than about herself, then this too will allow for prisoner's dilemma type situations. These people are excessively altruistic. We skirt this problem by stipulating that we are not interested in the fate of excessive altruists. What is important is whether there are situations in which having a level of altruism closer to perfect impartiality is worse for overall welfare than having a level of altruism that is further away from perfect impartiality.
Here is a simple argument that there can never be situations that are prisoner's dilemmas for utilitarians with full information. Two utilitarians will always value an outcome in the same way, if they know the same facts about the outcome. Hence, each cell in any normal form game will have identical utilities for each player. This excludes the possibility of a prisoner's dilemma game between utilitarians. Consider the diagram below.

                            Column Chooser
                        Cooperate     Defect
   Row      Cooperate       a            b
   Chooser  Defect          c            d
A prisoner's dilemma requires that both players prefer a to d and that Row Chooser prefers c to a and d to b, while Column Chooser prefers b to a and d to c. If both players evaluate outcomes in the same way, this leads to an inconsistency. The players prefer a to d, c to a and d to c. Hence, informed utilitarians will never have to play each other in a prisoner's dilemma. In other words, they will never suffer if they are in a situation with outcomes that would make their circumstances a prisoner's dilemma game for egoists.
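A quick brute-force check makes the inconsistency vivid. The sketch below (an illustration of the argument, not part of the original text) enumerates every strict ranking of four common utility values and confirms that none satisfies all of the preference conditions a prisoner's dilemma requires:

```python
from itertools import permutations

def is_pd_with_common_utilities(a, b, c, d):
    """A prisoner's dilemma requires: both players prefer a to d, Row
    prefers c to a and d to b, and Column prefers b to a and d to c.
    If both players assign the SAME utility to each outcome, all five
    comparisons must hold for a single set of four numbers."""
    return a > d and c > a and d > b and b > a and d > c

# Try every strict ranking of four distinct utility values.
counterexamples = [
    perm for perm in permutations([1, 2, 3, 4])
    if is_pd_with_common_utilities(*perm)
]
assert counterexamples == []  # no ranking works: a > d, d > c, c > a is a cycle
```

The empty result is just the cycle in the text made mechanical: common utilities would have to satisfy a > d, d > c, and c > a at once.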
This argument, of course, does not prove that all altruists are as blessed as utilitarians when it comes to situations that lead people into prisoner's dilemmas. However, altruism never traps someone in a prisoner's dilemma who would not be so trapped if she were less impartial in her concern for human welfare. People who are more impartial more closely imitate utilitarians and hence are less likely to fall into the prisoner's dilemma trap. If we ignore excessive altruism, this implies that altruists will never do worse than egoists in situations that, for egoists, are prisoner's dilemmas, and they will often do better.
Voting.
If a situation is self-defeating for utilitarians, then it is self-defeating for altruists. Voting seems to present a paradox for utilitarians and hence for altruists. The paradox is different from Condorcet's voting paradox. It is rather that utilitarianism, and hence altruism, encourages too many people to vote. Everyone's following the utilitarian principle leads to a loss of overall utility. How does this happen?
Consider the utilitarian's method for deciding when to vote. (Ignore the benefits associated with the margin of victory and other secondary considerations. Imagine, for example, that the exact count is never published, that no one will see you going to the polls, that you will immediately forget that you ever voted, etc.) To determine whether to vote, the utilitarian should take the total benefit of victory, multiply it by the probability of a tied election, and subtract the cost of going to the polls. If the result is positive, she should vote; if it is negative, she should not. Call this method Parfit's rule. Following Parfit's rule is not always efficient. Voting for a superior candidate may not be worth the trouble, once one considers the total inconvenience of everyone going to the polls--not a trivial amount when the numbers are large.
The following example illustrates the difficulty. Suppose each voter expects only 10 other voters to go to the polls. Each believes that the turnout is a random draw from a population equally divided between supporters of one candidate and supporters of the other. Everyone agrees, however, that the difference between the two candidates is small. All concede that the superior candidate will improve each eligible voter's life by only 10 units, while the cost of voting is 27 units. Each person chooses whether to vote independently of every other person's choice. Parfit's rule is to vote if the following figure is positive:
(benefit x number benefitted x probability of deciding an election) - cost to individual of voting
The probability of deciding an election, P, is the same as the probability of a tied election. It is difficult to estimate the probability of a tied election, but one method models it on the probability of tossing a (possibly biased) coin n times and getting exactly 50% heads. We can estimate this probability using the formula:

P = [n! / ((n/2)! (n/2)!)] x p^(n/2) x q^(n/2)

where n = the number of people who will vote, p = the probability that a voter will vote for the superior candidate, and q = the probability that a voter will vote for the inferior candidate (q = 1 - p).
If n = 10 and p = q = .5 then P = about .246. Filling in Parfit's rule we arrive at:
(10 x 11 x .246) - 27 = .06
So everyone votes, which costs 11 x -27 = -297 units--considerably more than the advantage provided by having a superior candidate, which all would concede is no more than 110 units.
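The arithmetic can be checked directly. The sketch below computes the tie probability from the binomial formula and fills in Parfit's rule; the unrounded tie probability (about .2461) gives a net figure of about .07 rather than the .06 obtained from the rounded .246, but the sign, which is all that matters, is the same:

```python
from math import comb

def tie_probability(n, p):
    """Probability that n voters split exactly 50-50, modeled as n tosses
    of a coin that favors the superior candidate with probability p."""
    return comb(n, n // 2) * p ** (n // 2) * (1 - p) ** (n // 2)

P = tie_probability(10, 0.5)        # 252/1024, about .246
net = 10 * 11 * P - 27              # Parfit's rule: positive, so each votes
total_cost = 11 * 27                # 297 units if all eleven go to the polls
total_benefit = 10 * 11             # at most 110 units from the better candidate

assert round(P, 3) == 0.246
assert net > 0                      # each utilitarian votes...
assert total_cost > total_benefit   # ...yet universal voting is a net loss
```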
Do we want to object to this method of estimating the probability of a tied election, on the grounds that it gives too high a value? With a lower probability of a tied election, the utilitarian principle will not recommend voting, and hence won't be self-defeating. I don't think we want to defend the utilitarian in this way. She needs a generous method, one which gives a high enough probability to breaking a tie, in order to recommend voting in large elections.
What is responsible for the loss of utility in such instances? I would suggest that it is not the rule that is at fault, but the fact that people are not able to communicate with one another before voting. If communication were free, the 11 voters would coordinate with each other to avoid having too many people going to the polls. (Of course, if communication were free, there would be no need to vote anyway, since everyone would know the outcome of every election ahead of time.) It should not be too surprising that ignorance should lead to a loss of utility. Although Schelling has taught us how coordination is sometimes feasible without communication, obviously, it is not always feasible. When it is not feasible, people pay a social cost. So altruists may defeat themselves in this instance only because of ignorance of the facts, which we agreed earlier not to consider to be an interesting form of self-defeat.
The next three examples all involve pairs of individuals.
Spouse Selection.
Imagine two groups of people, say males and females, in which each member of one group seeks to pair with a member of the other group. If I am a male, will I always do better by selecting an altruistic female? Not if I am unhappy and also altruistic. For if I select an altruistic partner, my unhappiness will depress my partner and thereby further depress me. On the other hand, if I select someone indifferent to my happiness, she will be happier than a more concerned partner would be, and so I too will be better off. This seems to show that in some circumstances people would prefer a state in which everyone was less altruistic.
My reply appeals to a distinction that is sometimes made between sympathy and commitment or between public and private preferences. Altruism need not take the form of being sadder when others are sad. An altruist may simply choose to make others happier when it is in her power to do so and the cost in happiness elsewhere is not too great. We should view the exercise of this choice as the expression of a preference, but the satisfaction of this preference does not make the person who has it any better off. It makes someone else better off. In the above example, my own unhappiness does not diminish the happiness of my altruistic partner, although she prefers that I be happier rather than sadder, better rather than worse off.
It might be said that dividing one's mind in this way is psychologically unrealistic. It is hard to care about someone else's welfare without having that person's fortune affect one's own well-being. If so, however, the altruist simply lacks the psychological ability to achieve her goal. Earlier we stipulated that no interesting form of self-defeating preferences would depend on a psychological flaw. If an altruist were able to wish her spouse well without in any way sharing the spouse's fate, then altruism would not be self-defeating when it comes to spouse selection.
Doesn't the fulfillment of a person's desire necessarily make the person better off, all else being equal? In order to sustain the defense I am advocating here, I must deny that fulfilling a person's desire per se improves her welfare. Such a denial is reasonable as long as one does not make the mistake of analyzing welfare purely in terms of desire satisfaction. An appropriate model of welfare would be the objective list approach (which may include the satisfaction of certain kinds of desires).
The Bequest.
Our next example involves savings and a bequest. It is the only circumstance of which I am aware that demonstrates how altruism can be self-defeating. Imagine that two brothers have common knowledge of the following facts. The younger brother is selfish, but will live longer than his older brother. The siblings take turns making decisions about their fixed individual resources. During the first period, the older brother chooses how much of his private resources to spend and how much to save. During the second period the younger makes a similar decision about his resources. During the third period the older brother makes a final decision about what he will spend. After the third period, the older brother dies, bequeathing the remainder of his resources to his younger brother. Naturally, the younger brother, with no close kin, consumes all of the resources available to him.
Bernheim and Stark have shown that there is a certain range of altruistic concern such that, if the older brother's altruism falls within that range, both brothers would do better if the older brother were less, rather than more, altruistic. To see how the trap works, consider each person's choice, starting from the last.
Being partially altruistic, the older brother may choose not to consume all of his resources during the third period, in order to have some resources to bequeath to his brother. He will prefer to do this if the younger brother is particularly low on resources. The younger brother knows this, so during the second period, when deciding how much to save for later, he will have an incentive to consume more of his resources to prompt his caring brother to make a larger bequest. The older brother knows how his younger brother will act, so to protect himself from the younger brother's strategic overconsumption, he consumes more on his first choice. Had the older brother been completely egoistic, he would have consumed an optimal amount (say, half) of his resources during each interval, as would his younger brother. His altruism leads him to save inefficiently, making himself worse off and his brother no better off than he would be under mutual egoism.
A numerical example may help to illustrate the point. Suppose the older, altruistic brother aims to maximize a weighted sum of his own and his brother's welfare according to the formula:

Ue(Ce, Cy) = (1 - I) Ve(Ce) + I Vy(Cy)

where Ue is the elder brother's utility as a function of the consumption of both brothers, Ve the elder brother's felicity as a function of his consumption, Ce the elder brother's consumption, and I his degree of altruism. For simplicity suppose that the younger brother is perfectly selfish. His utility is given by:

Uy(Cy) = Vy(Cy)

where Uy is the younger brother's utility and Vy his felicity as a function of his consumption Cy.
Assume that I = .56, each brother's felicity is the natural logarithm of consumption, and each brother starts out with 100 units. To simplify matters even further, assume binary choices throughout. During the first period, the older brother has a choice between consuming 50% or 60% of his resources; during the second period the younger brother likewise has a choice between consuming 50% or 60%; and during the third period the older brother has a choice between 60% or 100%.
Hence, the older brother's utility is:

Ue = (1 - I)[ln(Ce1) + ln(Ce2)] + I[ln(Cy1) + ln(Cy2)]

and his younger brother's is:

Uy = ln(Cy1) + ln(Cy2)

where Ce1 is the elder brother's consumption during the first period, Cy1 is the younger brother's consumption during the second period, Ce2 is the elder brother's consumption during the third period, and Cy2 is the younger brother's consumption after benefiting from the older brother's death and bequest.
One can model this family situation as a game in extensive form. (See the figure at the end of this chapter.)
To illustrate how we calculate the payoffs: if the elder brother decides to consume 60% of his resources during both periods, while the younger brother consumes only half during the second period, then the elder brother will first consume 60 units, the younger brother 50, then the elder 24 and finally the younger 66. Plugging these numbers into the above formula results in a payoff of about 7.737 and 8.102 for the older and younger brother respectively.
The double lines of the figure represent the preferred choice at each node of the game. The single connected double line traces the path prescribed by backwards induction, the path that perfectly rational players with common knowledge of the payoffs would travel in finite games of perfect information without chance moves.
The reasoning of backwards induction is simple. Consider all of the choices the player making the last move might face. Suppose that she selects the better alternative in each of them. Next, examine the alternatives facing the player making the second to last move. Assume that the player making the last move acts rationally. What alternatives will the player prefer on the penultimate move? This procedure is repeated until one reaches the first move.
Backwards induction shows that the first player will decide to consume 60% of his resources on the first move, although he would prefer, and the second player would not mind, the alternative in which both players consume 50% during each period. Unfortunately, this option is precluded by the strategic behavior of the players.
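The figure's payoffs and the backward-induction path can be reproduced mechanically. The following sketch applies the formulas above (I = .56, logarithmic felicity, 100 units each) and then solves the game from the last move back:

```python
from math import log

I = 0.56  # the elder brother's degree of altruism

def payoffs(f1, f2, f3):
    """Utilities for the path on which the elder consumes fraction f1 of
    his 100 units in period 1, the younger fraction f2 of his 100 units
    in period 2, and the elder fraction f3 of his remaining resources in
    period 3; whatever is left is bequeathed to the younger brother."""
    ce1 = 100 * f1
    ce2 = (100 - ce1) * f3
    bequest = (100 - ce1) - ce2
    cy1 = 100 * f2
    cy2 = (100 - cy1) + bequest
    uy = log(cy1) + log(cy2)
    ue = (1 - I) * (log(ce1) + log(ce2)) + I * uy
    return ue, uy

def solve():
    """Backward induction over the three binary choices."""
    plans = {}
    for f1 in (0.5, 0.6):
        replies = {}
        for f2 in (0.5, 0.6):
            # Last move: the elder picks the final fraction maximizing Ue.
            f3 = max((0.6, 1.0), key=lambda f: payoffs(f1, f2, f)[0])
            replies[f2] = (f3, payoffs(f1, f2, f3))
        # Second move: the selfish younger brother maximizes Uy.
        f2 = max(replies, key=lambda f: replies[f][1][1])
        plans[f1] = (f2,) + replies[f2]
    # First move: the elder maximizes Ue.
    f1 = max(plans, key=lambda f: plans[f][2][0])
    f2, f3, (ue, uy) = plans[f1]
    return (f1, f2, f3), (round(ue, 3), round(uy, 3))

# Spot-check a payoff from the text: (.6, .5, .6) gives 7.737 and 8.102.
assert tuple(round(x, 3) for x in payoffs(0.6, 0.5, 0.6)) == (7.737, 8.102)

path, (ue, uy) = solve()
assert path == (0.6, 0.5, 1.0)     # the elder over-consumes at the start
assert (ue, uy) == (7.806, 7.824)  # the figure's backwards-induction payoffs
```

Note that the elder's equilibrium utility (7.806) is below the 7.824 he would get if both brothers simply consumed half each period, exactly the trap described above.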
Earlier we agreed to ignore self-defeating situations that take advantage of a person's excessive altruism (see the section on the Prisoner's Dilemma). The level of altruism in our numerical example (.56) is a bit excessive. However, one can obtain the same result for levels of altruism below .5 if we adjust the felicity function for the younger brother. If the younger brother's felicity function is the natural logarithm of the square of his consumption, then an I of .39 will allow us to construct an extensive form game with the same strategic properties. Inflating the younger brother's felicity function is not a surreptitious way of preserving excessive altruism. Just imagine that the younger brother has a Midas touch which squares any resources he receives.
There is no modification of the example that produces the same result if both individuals have levels of altruism equal to .5. That is to say, the example doesn't work for a pair of utilitarians. Furthermore, levels of altruism sufficiently close to perfect impartiality will allow people to avoid such altruistic traps. While this example is striking, it would seem to occur only rarely, and it does not embarrass all altruists, but only those partial altruists who stray too far from the utilitarian ideal. A retreat to utilitarianism would be at odds with the recommendations of the previous chapter, so we must hope that the example remains as rare as it seems to be.
Repeated Sharing.
Imagine that two neighbors cultivate different fruits in their private orchards. Mary grows apples while Anne grows oranges. Mary and Anne alternate visiting each other once a week. When Mary visits she has the opportunity to bring Anne some apples. Likewise, Anne has the opportunity to bring Mary oranges. Of course, while each prefers having more to less of her privately grown fruit (none of which can be sold commercially), each experiences declining marginal utility from increasing amounts of fruit of one type.
If Mary and Anne are perfect egoists, each will give the other an optimal amount of fruit when she visits. Mutual suspicion induces cooperation. If Mary is tempted not to bring a full basket of apples on any one visit, Anne can threaten to forever hoard her oranges. This threat is credible because, being an egoist, Anne doesn't care if Mary never sees another orange in her life.
Compare how Mary and Anne would behave if each were partial altruists. If on a single occasion, Mary is stingy with her gift, Anne could threaten never to give Mary another orange, but that threat would not be credible. Anne cares too much about Mary to eternally deprive her in this way. She would rather give her a small number of oranges occasionally than see her without any oranges at all. Being altruistic, Anne would prefer to help Mary to some extent, however uncooperative Mary's behavior. Mary's ability to credibly threaten Anne is likewise diminished. But neither is so altruistic that she will give the other as much without the threat of retaliation as she would with such a threat of retaliation. Altruism diminishes the egoistic incentive each has to give to the other. It seems both would be better off if each were less altruistic.
That is the argument for saying that altruism is self-defeating in the situation of Repeated Sharing. The example is intriguing, but it ignores other possible threats and strategies. While mutual hoarding is a Nash equilibrium in the one-shot stage game between egoists, the threat of eternal punishment, refusing ever to make another gift, is not the only strategy available in the iterated game. Other threats are more credible.
Each player might choose to punish defection by following a strategy of Tit-For-Tat, leaving exactly as much for the other player as the other player left on the previous round. The justification would be that adherence to this strategy of punishment will deter defection and, in the long run, best serve each player. No matter how many times this threat fails to provoke cooperation, Mary or Anne might continue to make it in the hope of convincing her neighbor of her sincerity and hence eliciting future cooperation. Remember that with an infinitely repeated game, the cost of any finite period of defection can always be outweighed by the benefit of influencing an infinite number of future choices.
Parents who know how altruism can erode the credibility of their threats are also familiar with this strategy of altruistic self-discipline. They seem to use it when they punish their children, saying, "This is going to hurt me more than it hurts you." Nothing keeps Mary or Anne from using such a conditional strategy. Intuitively, it is quite likely to succeed in producing just the outcome desired, one which is best for everyone concerned. Further, a pair of Tit-For-Tat strategies will be in Nash equilibrium, indeed in subgame perfect equilibrium. That is, given the strategy of the other player, no other strategy will provide a higher payoff, not just for the game as a whole, but also for every subgame in the game tree.
One might argue that this strategy is not compatible with the motivations of an altruist. However, I don't see why an altruist, as defined in our earlier section on the Prisoner's Dilemma (someone who cares to some extent about the welfare of others, in addition to his own welfare) would not pursue this strategy for the reasons given above. Such a strategy seems available to anyone, regardless of her level of altruism.
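A toy simulation illustrates why the conditional strategy serves both players, under assumed payoffs: suppose a unit of fruit given costs the giver 1 unit of utility but is worth 2 to the recipient, reflecting the declining marginal utility of one's own crop. These numbers are illustrative, not from the text:

```python
def tit_for_tat(my_gifts, their_gifts):
    """Bring a full basket (10 units) first, then mirror the neighbor's
    last gift, exactly the punishment strategy described above."""
    return 10 if not their_gifts else their_gifts[-1]

def always_hoard(my_gifts, their_gifts):
    """Never give anything."""
    return 0

def play(strategy_a, strategy_b, rounds=10):
    """Repeated gift exchange. Assumed payoffs: each unit given costs the
    giver 1 but is worth 2 to the recipient (gains from trade)."""
    gifts_a, gifts_b = [], []
    pay_a = pay_b = 0
    for _ in range(rounds):
        a = strategy_a(gifts_a, gifts_b)
        b = strategy_b(gifts_b, gifts_a)
        pay_a += 2 * b - a
        pay_b += 2 * a - b
        gifts_a.append(a)
        gifts_b.append(b)
    return pay_a, pay_b

both_tft = play(tit_for_tat, tit_for_tat)    # full cooperation every round
exploited = play(tit_for_tat, always_hoard)  # one bad round, then no trade
assert both_tft == (100, 100)
assert sum(both_tft) > sum(exploited)        # mutual Tit-For-Tat serves both best
```

A defector gains once and then loses the benefit of every future exchange, which is why, in an indefinitely repeated game, the conditional strategy deters defection.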
The Market.
I trace our last circumstance to Bernard Mandeville. In The Fable of the Bees Mandeville argues that private vices make for public benefits: that pride and self-love are conducive to a thriving and prosperous society, and that the pursuit of virtue leads to a sluggish economy and general poverty. Adam Smith's famous metaphor of an invisible hand suggests a similar argument. It is a common apology for capitalism to argue that a competitive free market economy, based on individual property rights and personal incentives, better serves the general welfare than a command economy based on state ownership of production and appeals to collective interest. Behind these assertions is the notion that social planning and the attempt to guide human action towards the common good are self-defeating. We would do better to give free rein to greed and selfishness. Describing the hive at its apogee of villainy and prosperity, Mandeville writes, "Thus every Part was full of Vice, Yet the whole Mass a Paradise".
Modern economists make similar claims for the superiority of selfishness, though their arguments are not always as impressionistic as Mandeville's. They attribute one of the primary advantages of a market economy to the information-bearing sensitivity of the price system. A great deal of information contributes to the price of a commodity or service, and to the price a laborer charges for his work. Countless factors determine the levels of supply and demand, including the preferences of consumers and laborers. One of the problems with central planning is the loss of this information.
If the problem is just one of information, then we have what earlier we said was not an interesting form of self-defeat. However, even so, there are some simple corrections which would save a society of altruists from economic ruin.
If people were altruistic, an agency of the government, or an independent firm, could conduct a survey to collect the needed information, since people would lack the strong egoistic incentives to conceal their true preferences. In principle, an agency could use this information to coordinate optimal systems of production and distribution. However, collecting this information and utilizing it properly would be inordinately expensive. Fortunately, there is a much more plausible alternative.
If people were altruistic, they would be able to retain the advantages of the price system, while avoiding a Pareto loss due to free-rider problems, by adopting certain context-sensitive habits. In most trading situations, altruists would try to mimic the behavior of egoists. Each person would attempt to obtain the best bargain she could in private commercial transactions. Each would act as if she were an egoist for the purpose of setting prices and running an efficient economy. However, being an altruist at heart, each would be willing to depart from the game of imitating egoists when it came to public goods.
People wouldn't need to depart too dramatically from the conceit of being egoists. Even with public goods, it would be best if they did not, in order to set efficient prices and wages within the public goods economy. The only departure required is for the beneficiary of a public good to pretend that unless she pays the price demanded by a supplier of a public good, the supplier will not provide her with the good. For public goods that people consume voluntarily, each person would agree not to consume the good unless she paid the price set by the supplier. For those goods people cannot help but consume, such as clean air, each person would be willing to pay the price set by the supplier if the good was worth that price to her, that is, if she would be willing to pay the price in the hypothetical situation in which she could not consume the good unless she paid for it.
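The rule just described can be made concrete with a small sketch. The valuations, the posted price, and the provision cost below are all hypothetical; the point is only that altruists following the pay-if-it-is-worth-it convention fund the good, while egoistic free riders on a nonexcludable good do not:

```python
def revenue(valuations, price, altruists):
    """Each altruist pays the posted price exactly when the good is worth
    at least that price to her (the hypothetical-exclusion test in the
    text); an egoist free-rides on a nonexcludable good and pays nothing."""
    if not altruists:
        return 0
    return sum(price for v in valuations if v >= price)

valuations = [30, 25, 22, 5]    # hypothetical per-person valuations of clean air
price, provision_cost = 20, 60  # hypothetical posted price and cost of supply

assert revenue(valuations, price, altruists=False) == 0  # free riding: unfunded
assert revenue(valuations, price, altruists=True) >= provision_cost  # funded
```

Only the three people who value the good at or above the price pay, so the supplier collects 60 units, enough to cover the cost, and no one pays more than the good is worth to her.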
In this fashion, a community of altruists would do much better than a community of egoists, for they would have the advantages of a free market, without the cost of free-riding. The more altruistic the members of the community, the more efficient the outcome of the system.
I conclude that we have found only one instance in which altruism is genuinely self-defeating. It does not apply to utilitarians or nearly perfect altruists and seems to be a very special case.
THE BEQUEST GAME

   Elder's 1st-    Younger's 2nd-    Elder's 3rd-     Elder's    Younger's
   period share    period share      period share     utility    utility
       .5              .5                .6            7.788       8.161
       .5              .5                 1            7.824       7.823
       .5              .6                .6            7.803       8.189
       .5              .6                 1            7.801       7.783
       .6              .5                .6            7.737       8.102
       .6              .5                 1            7.806       7.824   *
       .6              .6                .6            7.747       8.120
       .6              .6                 1            7.783       7.783

Figure 1. The Bequest Game. Each row is one path through the extensive form game tree; the fractions are the shares of available resources consumed at each move. In the original figure, double lines marked the preferred choice at each node; the starred row is the path prescribed by backwards induction.