II. Can Egoists Take Care of Themselves?

 

 Preliminaries

 

Suppose I only care about my own welfare. I see some litter on the ground, but I don't pick it up. It's not quite worth it to me to make the effort, since I dislike carrying around garbage more than I dislike seeing the litter (ignore for now the fact that I also don't like to encourage litterers or be exploited by them when I pick up the mess they leave behind). If the choice were between everyone (including me) picking up litter on the one hand and no one picking up litter on the other, I would feel differently. I would prefer everyone picking up litter to no one doing so. I might favor a local ordinance that required everyone to collect a pound of litter once a year (were it not for the obvious impracticalities of such a proposal). Suppose many others were inclined to pick up litter voluntarily without being required to do so by law. If they were, I would still prefer not to follow their example, being the self-interested person that I am. I would rather free ride on the good deeds of my neighbors. If everyone is like me, no one will pick up litter and we will all be the worse for it.

 

The problem that concerns us arises because when I pick up litter I benefit not only myself but others as well. When others collect bits of garbage, they benefit me. All in all, the benefit to everyone of not having litter is greater than the cost to everyone of cleaning it up (or so we're supposing). When I pick up litter individually, I bear the full cost of my action but enjoy only a part of the full benefit. At that price, it's not worth the trouble. This example illustrates what is commonly called "the free-rider problem".

 

Normally, we don't think we are obligated to pick up litter. The sorts of situations on which we will focus--what I will call the "core examples"--are instances in which there is no obligation to perform certain actions, although the actions are intuitively reasonable and worthwhile. The central cases of interest, or core examples, involve individuals making small donations to large organizations that provide valuable public goods. In order not to mislead or distract the reader with an example involving obligation, I have discussed picking up litter. However, I could just as easily have given the more familiar example of not littering in the first place. The problem is the same, although in the example of not littering there is the added dimension that one is expected not to litter. Unfortunately, I offer no theory to explain why there is an obligation in the one instance and not the other. My view, for which I will not attempt to argue here, is that this is mostly a matter of convention and historical accident. We may criticize or seek to preserve conventions on consequentialist grounds. As with any practice, what we can practically expect from people will depend on the cost of change and the limits to human perfectibility. Whether we ought to continue to distinguish the good of not littering from the good of picking up litter, and to maintain similar omission/commission distinctions, is not one of our concerns.

 

In this chapter I wish to explore whether there is really a problem here for people who are purely self-interested. I will argue that there is, that rational self-interest alone does not justify taking actions such as picking up litter. However, we intuitively think that picking up litter is at least reasonable and worthwhile, even "a good thing to do" and not irrational or stupid.

 

A fair question is, "Why consider what purely self-interested people would do?" We are supposed to be interested in arguments that decent rational people would find persuasive. Egoists are fringe characters, so why bother reflecting on their predicament?

 

The answer is that while decent people do not ruthlessly pursue their own interest, they are certainly capable of doing themselves a favor, so long as it does not conflict with their moral principles. If an action advances a person's interests, that serves as a presumptive reason for the person to perform the action. It is not a presumptive reason, even for the most honorable person, not to perform the action. Hence, if we could show that purely self-interested people will adequately provide public goods for themselves, then we could motivate people who are rational and decent to contribute to public goods by simply appealing to their rational self-interest, their prudence. We are able to appeal to their moral character if we need to, but we may not need to. This chapter discusses whether rational self-interest is enough and I shall argue that it is not.

 

What do I mean by someone who is "purely self-interested" or a "rational egoist"? I intend these phrases to refer to people who are solely concerned with advancing their own welfare (although they may be enlightened about how best to do this). We find a difficulty with this commonsense understanding of "egoist" when we try to be specific about a person's welfare or good. As Allan Gibbard remarks in a discussion of human motivation:

 

The notion of a person's good is vague, and different specifications of what constitutes a person's good will give different content to the theory that each person pursues his own good. Pleasure and pain seem obvious cases of personal goods and ills, and money makes a good surrogate. Common sense is puzzled, though, when it turns to other things people want, such as fame, being valued, being worth valuing, or being father to a line of kings.

 

In the first sections of this chapter, I will assume that egoists are not motivated by a desire for prestige, the respect of others, or being father to a line of kings. These motivations, such as wanting others to have a favorable opinion of oneself, are not ignored, merely postponed. A later section of this chapter is devoted to discussing them.

 

It is easy to show the destructive results of free-riding in many particular examples, both real and imagined. However, I need to show more than this: I need to show that for egoists, given the constraints of practical circumstances, there is no way out--no clever scheme or consideration that would motivate an enlightened egoist to contribute. Showing this turns out not to be as easy as one might expect on first acquaintance with the free-rider problem. There are several arrangements which would allow for adequate provision and many considerations which would motivate the egoist to contribute voluntarily. I intend to show that these possible arrangements are impractical or apply in only restricted circumstances and that none of the proposed considerations will motivate egoists in certain core examples.

 

To show that egoists fall into a self-destructive trap, I do not need to demonstrate that they will give nothing voluntarily to provide public goods, only that they will not give enough. They will give less than is optimal (using a measure of efficiency to be defined). Even if I prove that provision will be insufficient, my overall strategy leaves me with a further complication. I think perfectly optimal provision is beyond the reach of all but saintly utilitarians. So my claim will be that, in general, the level of provision that egoists can justify is too low. Achieving levels less than optimal but better than zero, egoists perform in a fashion qualitatively similar to a group of ordinary individuals, only quantitatively much worse. Although ordinary people cannot justify ideal levels of provision, they can justify more than what egoists would provide. What egoists achieve when they act in large groups is not moderate provision, but hopelessly poor provision. You and I give moderately, whereas egoists give disastrously little. What is too little and what is a moderate amount is a matter of judgement. The claim I wish to defend is that (when gathered in large groups) egoists will tend to give a tiny amount while ordinary folk will give significantly more.

 

In the Introduction I defined public goods as having two properties. A good is non-excludable if it is too costly or infeasible to provide it to some and not others. A good is non-rivalrous if one person's enjoyment of the good does not interfere with another's. The use of the term "public good" varies from author to author. Some authors have in mind the first property, some the second, some both. I will use the term "public good" to mean a good that is non-excludable. This is the sense of "public good" that is most relevant to the free-rider problem.

 

Before we begin an examination of non-excludability, it might be helpful to repeat and extend the list of assumptions I am making in this chapter, especially in the next two sections. I assume that self-interested people only care about their own welfare, narrowly defined. They lack envy and spite. They have no pride, do not care about the opinion of others and do not think that the welfare of anyone else, not even their children, directly affects their own well-being (which does not preclude thinking that the well-being of others may instrumentally affect their own).

 

Rational egoists have preferences that satisfy the standard axioms of utility theory. They are consistent utility maximizers. Not only that, but in interactions involving more than one person, egoists act strategically, preferring Nash equilibria where feasible. In a Nash equilibrium, each person's action is a best reply to the actions of the others. Furthermore, rationality and Nash behavior are common knowledge. Each knows that the other participants prefer Nash equilibria, knows that they know this, and so on.

 

The population of egoists we will consider is homogeneous. Their income, wealth, personal characteristics, expectations about the future, tastes and preferences, etc., are the same. As with the assumption of narrow self-interest, we will want to lift this supposition at certain times, but when it is not explicitly lifted, it is implicitly assumed.

 

Finally, I assume there are no sudden steps either in the production function of a good or in anyone's ability to contribute. The public goods we are considering are not lumpy. There are no thresholds for total contributions below which no amount of a public good can be produced. Output of a public good is always a continuous, concave function of the donors' input. More of a public good is always better, while the improvement from an additional unit diminishes with greater production. Nor are there transaction costs, which would also induce thresholds. No one is deterred by the inconvenience of writing and mailing a check, for example.
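To fix ideas, the smoothness assumptions can be written compactly. The notation is mine, introduced only for this restatement: g_i is person i's contribution and f converts total contributions into units of the public good.

```latex
% A compact restatement of the assumptions just listed (notation assumed):
G = f\Big(\sum_i g_i\Big), \qquad f(0)=0, \qquad f' > 0, \qquad f'' < 0
% No thresholds and no lumps: f is defined and smooth at every level of giving,
% so any contribution, however small, produces some of the good.
```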

 

I will first make a general case for the inadequacy of egoistic provision, then consider six ways egoists might still do a decent job of providing public goods voluntarily: through contingent agreements, during repeated games, with selective incentives, by gaining prestige or other intangible benefits, by diagnostic or quasi-causal reasoning, or by appealing to more than a single motive to justify voluntary provision in diverse settings. I shall argue that none of these considerations are adequate to avoid free-rider problems, at least not for our core examples: contributing relatively small amounts of money, without fanfare, to large organizations.

 

 The Problem of Non-excludable Goods

 

The free-rider problem and the problem of externalities are closely related, if not identical. Without the threat of exclusion, people will tend to take advantage of other people's efforts at providing a public good. If they are self-interested, they will neglect to consider the external effects their own provision has for others. This is not to suggest that they will contribute nothing towards a public good, but that what they contribute will be much too small. The amount will be Pareto inefficient, since all would prefer that all give more. A state of affairs is Pareto superior to another state of affairs if and only if at least one person prefers the first state of affairs to the second and no one prefers the second to the first. A state of affairs (or outcome) is Pareto efficient if and only if there is no state of affairs that is Pareto superior to it.
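The two definitions can be restated compactly. The symbols are mine, introduced only for this purpose: read x ≻_i y as "person i prefers state x to state y".

```latex
% Pareto superiority and efficiency in assumed notation:
x \text{ is Pareto superior to } y \iff
    \exists i\,(x \succ_i y) \;\text{and}\; \neg\exists j\,(y \succ_j x)
x \text{ is Pareto efficient} \iff
    \neg\exists z\,(z \text{ is Pareto superior to } x)
```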

 

Consider a simple situation consisting of two people (A and B), each with increasing, strictly concave utility functions for two goods (each person prefers more of a good to less, but the marginal value of a good diminishes as more of it is consumed). One good is excludable and the other is not. A and B cannot communicate, but the utility functions of both individuals are common knowledge, and (recall) these functions are the same for A and B. Each must decide how much of her resources to spend on the private good and how much on the public good. They make this choice only once.

 

The situation I have described is essentially a game, that is, the outcome for each person depends, not only on his own action, but also on the action of the other person. In this game, there is only one point (a point here represents a pair of actions: how much each person will contribute to the public good) where the players are in Nash equilibrium, that is, where each person's action is a best reply to the action of the other person. Call the point of Nash equilibrium the "equilibrium point" and the amount the person contributes at the equilibrium point her "equilibrium contribution".

 

The reasoning which leads to a unique equilibrium point is as follows. How much each person contributes to the nonexcludable good will depend on how much she believes the other person will contribute. The more the other person contributes, the smaller is one's optimal contribution.

 

Suppose A expects B to give nothing. Then A will give a1, the amount that he prefers to give if B gives nothing. B knows A's utility function and thus knows that if A expects B to give nothing A will give a1. However large a1 is, B does a bit better by giving some amount, however small, say b1. So b1 is the amount that B prefers to give if he expects A to give a1. But now A, who knows B's utility function, expects B not to give nothing, but b1. If A expects B to give b1, A prefers to give a bit less than a1, say, a2. But now B can expect that A will give a2, which is less than a1. So B will increase his contribution to b2, the amount he prefers to give if A gives a2.

 

This process of adjustment continues until A and B give amounts such that each, knowing what the other is giving, has no reason to change her contribution. The figure below depicts the reaction of each person to the contribution of the other. The y-axis represents A's hypothetical contribution and the x-axis represents B's hypothetical contribution. Line A represents A's utility-maximizing contribution, given B's contribution. Likewise, line B represents B's optimal contribution, given A's expenditure. Only at point e will neither A nor B adjust her contribution in reaction to the hypothetical contribution of the other person. Were B to begin with the assumption that A will give nothing, the two would adjust their expectations in the same fashion until they arrived at e from the opposite direction.

 

Figure 1.
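The adjustment process just described can be made concrete with a small numerical sketch. The utility function and the numbers are illustrative assumptions of mine, not the text's model: each person has a budget of 10 and a log (Cobb-Douglas) utility over private consumption and the total amount of the public good.

```python
import math

W = 10.0      # each person's budget (assumed)
ALPHA = 0.5   # assumed taste for the public good in a log utility

def utility(own_gift, other_gift, w=W, a=ALPHA):
    """u = (1-a)*ln(private consumption) + a*ln(total public good)."""
    return (1 - a) * math.log(w - own_gift) + a * math.log(own_gift + other_gift)

def best_reply(other_gift, w=W, a=ALPHA):
    """Maximizing u over one's own gift gives a*w - (1-a)*other_gift, clamped to [0, w]."""
    return min(w, max(0.0, a * w - (1 - a) * other_gift))

# A starts by assuming B gives nothing; each then revises in turn.
a_gift, b_gift = best_reply(0.0), 0.0
for _ in range(50):
    b_gift = best_reply(a_gift)
    a_gift = best_reply(b_gift)
print(round(a_gift, 3), round(b_gift, 3))   # both converge to the equilibrium gift, ~3.333

# Contrast with a matching agreement (discussed next): each picks the gift she
# would most like both to make, i.e. the g that maximizes utility(g, g).
matching = max((i / 100 for i in range(1, 1000)), key=lambda g: utility(g, g))
print(round(matching, 3))                   # ~5.0, well above the equilibrium gift
```

With these assumed numbers the best replies settle at about 3.33 apiece, while under a matching agreement each would choose to give 5; the rest of the section is about that gap.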

So far we have only described how A and B will arrive at a unique equilibrium point. We have not shown that the equilibrium point is inefficient. This is straightforward. Suppose B were to offer to match A's contribution. A would spend a bit more than her equilibrium contribution on the non-excludable good, since the marginal benefit of spending on the non-excludable good would rise. Likewise for B. This implies that each prefers a state in which both A and B spend more than their equilibrium contributions on the non-excludable good to the state in which each spends just the equilibrium amount. Hence the equilibrium is Pareto inferior to this other arrangement. Alternatively, we may allow the players to reach an agreement under the condition that winners must compensate losers so that all have the same amount in the end. In such circumstances both players would prefer to maximize the sum of their benefits minus costs. This mechanism introduces a complication, the incentive to bargain, which we shall ignore for now.

 

A third way to see the inefficiency is simply to compare the equilibrium point with the point at which the sum of marginal costs equals the sum of marginal benefits. Note that this measure of inefficiency goes beyond the Pareto criterion in supposing interpersonal comparisons of utility (necessary if summing is to be meaningful). I have no problem with a measure of inefficiency different from Pareto's and will even suggest rejecting it in Chapter Nine. The third method is especially useful for estimating the relative size of the inefficiency, as is the second arrangement described above (the agreement with compensation). "Optimal contribution" will refer to the amount a person would give under any of these three conditions. The three will generally be the same amount; where they differ, the phrase will refer to the third measure.
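One standard way to cash out this third measure, under assumptions of my own (a constant marginal cost c of producing the good and a benefit function v_i for each person), is the familiar condition for efficient provision:

```latex
% Efficient total provision G* equates the SUM of individual marginal benefits
% with the marginal cost of another unit (notation and constant cost assumed):
\sum_i v_i'(G^{*}) = c
% In equilibrium, by contrast, each contributor equates only her OWN marginal
% benefit with that cost, which is why provision falls short.
```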

 

We have considered a two-person economy. The results remain qualitatively the same for larger groups, but they become quantitatively more dramatic. As more and more people join the group, each person's equilibrium contribution diminishes. As group size approaches infinity, each individual's equilibrium contribution approaches zero. On the other hand, as group size increases each person's optimal contribution increases. Imagine that every new member offers to match an individual's contribution. As more members join, that individual has a greater incentive to raise her contribution. Alternatively, as group size increases so does the amount each person must give so that the sum of everyone's marginal costs equals the sum of everyone's marginal benefits.
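A short sketch shows how fast the gap grows. The quasi-linear utility below (each person keeps her unspent money and enjoys b·√G from the public good, with budgets assumed never to bind) is an assumption of mine chosen because it gives closed-form answers; only the qualitative pattern matters.

```python
B = 2.0  # assumed strength of the taste for the public good

def symmetric_nash_gift(n, b=B):
    # Each person equates her own marginal cost (1) with her own marginal
    # benefit b / (2*sqrt(G)); the equilibrium total is therefore (b/2)**2,
    # shared equally here among the n contributors.
    return (b / 2) ** 2 / n

def symmetric_optimal_gift(n, b=B):
    # Efficiency equates the marginal cost (1) with the SUM of marginal
    # benefits n*b / (2*sqrt(G)); the optimal total is therefore (n*b/2)**2.
    return (n * b / 2) ** 2 / n

for n in (1, 2, 10, 100, 10_000):
    print(n, symmetric_nash_gift(n), symmetric_optimal_gift(n))
# n        equilibrium gift    optimal gift
# 1        1.0                 1.0
# 2        0.5                 2.0
# 10       0.1                 10.0
# 100      0.01                100.0
# 10000    0.0001              10000.0
```

On these assumptions the per-person equilibrium gift shrinks toward zero while the per-person optimal gift grows with the group, and total equilibrium provision never exceeds (b/2)² no matter how many people join, a bound of the kind discussed below.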

 

The size of this inefficiency, the gap between what egoists will contribute and the optimal amount to contribute, can be quite remarkable. To get a feel for the magnitude, compare how much you alone would be willing to give to a public radio or TV station simply to improve the quality of the broadcasting for yourself with how much you would be willing to give if a million other people were to match your contribution.

 

Hence the difference between the equilibrium contribution and the optimal contribution grows dramatically with group size.

 

What about the total amount of the public good? Howard Margolis (1982, p.20) asserts that "the total amount likely to be raised from an arbitrarily large number of [contributors], taking account of feedback effects (technically, under a Cournot equilibrium), will never reach twice the amount that the most generous single [contributor] alone would contribute". This statement applies when the marginal propensity to give is low. Andreoni has proved the weaker claim that while total contributions continuously increase with group size, there is some amount they never exceed, no matter how large the group is. This result complements the simple examples we have already used to show how, as group size increases, total contributions fall further and further behind the Pareto efficient amount.

 

This completes the basic argument that rational egoists will under provide non-excludable goods. Now we shall explore a few considerations that might lead rational egoists to provide higher levels of provision than one might otherwise expect. None of these considerations, in the end, is sufficient to lift egoists out of the free rider trap.

 

 Conditional Cooperation

 

People may use conditional (or "contingent") agreements to sidestep the problem posed by non-excludability if the conditions are just right. Conditional agreements promote voluntary provision by allowing each person to agree to pay for a public good only on the condition that others do so as well. If not everyone agrees, no payments are made and the good is not produced. So each person has an incentive to contribute, because unanimous participation is required. With common knowledge of everyone's actions, payment is assured once universal agreement is achieved.

 

The conditions that must be met for conditional agreements to succeed are very demanding. There must be cheap (or cheap enough) communication and a way to resolve a difficult (n-person) bargaining problem. We cannot realistically meet these demanding requirements with today's technology. It is worth noting, however, that the theoretical possibility of successful conditional agreements shows that voluntary provision among egoists is not inherently impossible, only practically unattainable. Unattainability in the real world of today is a sufficient indictment for our purposes.

 

It may seem pointless to proceed to explain in detail how a project would succeed, given certain assumptions, after having rejected those assumptions from the start as impractical. However, it is surprising and instructive that although the assumptions are not feasible with today's technology, they are sufficiently few and simple that they might become more widely satisfied at some future time. In some peripheral instances, we can find ways to realize the stipulated requirements, and in these situations conditional agreements may prove a useful means of solving the free-rider problem. Were we able to devise ways to satisfy the assumptions generally, especially for our core examples, that would be a great accomplishment, freeing us from the predation of free-riders while appealing only to human motivations that are robust, strong, and pervasive. Unfortunately for us, but fortunately for the thesis I am defending, the required technology remains beyond our immediate grasp. What follows is a systematic discussion of the assumptions necessary for successful conditional cooperation.

 

 Known or homogeneous preferences

 

In addition to the two special assumptions just mentioned, a pair of assumptions introduced in the first section of the chapter are particularly important: that everyone have common knowledge of everyone's preferences and that everyone have the same preferences. We may weaken the former assumption so as not to require that everyone know everyone else's preferences, but require instead that at least one person (the right person) knows everyone's preferences. Either homogeneous or known preferences will suffice--we do not need both--but one or the other is necessary (although we will want to add some minor qualifications here and there).

 

Consider first the instance in which a producer of a public good (or an entrepreneurial broker) knows each person's preferences. The producer could offer each person a conditional agreement which charged each consumer the maximal amount she would be willing to pay for the public good. Known preferences keep consumers from misrepresenting their preferences in order to gain a more favorable contract. They also prevent consumers from exploiting the fact that when a person's preferences for a public good are weak, the producer must lower the price stipulated for that consumer so that she does not refuse the contract. With known preferences, no one can free ride or misrepresent the most she would be willing to pay for a good, and everyone is assured of either the public good at an efficient price or a refund.

 

Several economists have proposed ways to discover consumer preferences for public goods. We may think of these schemes as voting mechanisms which will allow people to determine an equitable tax for the governmental provision of public goods. The tax is equitable in that these voting schemes ensure that everyone approves of the tax. Alternatively, we may think of the various proposals as techniques which allow for an efficient decentralized market for public goods, i.e. private provision. None of the proposed schemes meet our initial conditions, however. In particular, they fail to rule out strategic behavior among participants. Consider, for example, the following procedure.

 

Each person states a proposed bid and a proposed quantity of a public good. The bid is a price that the person is willing to pay to provide the public good (say, $20) and the quantity is how much he would like to be provided (say, 24 hours of daily radio programming costing $2 million). A person's actual payment is not her bid, but the total cost of producing a certain amount of the public good minus the sum of everyone else's bids. After each round of bidding, each person learns the average of everyone's proposed amounts and what her payment would be if the average were produced. Bidding ends when everyone proposes the same amount and the payments exactly equal the cost of producing that amount of the good.
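Here is a toy version of a single round of that procedure. The cost function, the bids, and the proposed quantities are all invented for illustration.

```python
def cost(quantity, unit_cost=2.0):
    """Assumed production cost: e.g. $2 (million) per unit of programming."""
    return unit_cost * quantity

def round_report(bids, proposals):
    """bids[i] is what person i says she is willing to pay; proposals[i] is how
    much of the good she would like produced.  Returns the average proposal and
    each person's provisional payment: the cost of the average quantity minus
    the sum of the OTHER people's bids."""
    avg_q = sum(proposals) / len(proposals)
    total_bid = sum(bids)
    payments = [cost(avg_q) - (total_bid - b_i) for b_i in bids]
    return avg_q, payments

bids      = [0.8, 0.7, 0.6]   # stated willingness to pay
proposals = [1.0, 1.5, 0.5]   # desired quantities
avg_q, payments = round_report(bids, proposals)
print(avg_q, [round(p, 2) for p in payments])   # 1.0 [0.7, 0.6, 0.5]
# Each payment is computed from the OTHERS' bids, so no one lowers her own
# payment directly by understating -- but understating does push the others'
# payments up, the strategic loophole discussed next.
```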

 

The calculation of each person's payment does not depend on her own bid, so in this respect the procedure gives her no direct incentive to understate her willingness to pay. Nevertheless, a person may state bids lower than her true willingness to pay in an attempt to get others to raise their bids to cover the cost of production and so end the bidding process. So this procedure, like all the others proposed, fails to eliminate strategic behavior. Hence there is no reliable way to get self-interested rational consumers to reveal their preferences for a public good honestly.

 

Now consider a population of people with homogeneous preferences. If everyone has the same preferences, then a producer could set a single price and provide the good only if everyone pledges to pay that price, contingent on everyone else making a similar pledge. If even one person refuses, the producer closes shop and no one has to send in their checks. No one is tempted to free ride and no one risks paying for nothing, or paying for someone else's free ride. Let's take a moment to see exactly how this works.

 

How does a producer know what amount of a public good to produce if she is ignorant of the strength of everyone's preferences? The stronger everyone's preferences for a good, the more units of the good a producer should provide (or else the better the quality of each unit). For every strength of preference there is an amount of the good that maximizes the net of benefits over costs from spending on that particular good rather than on some other desirable item. As it happens, a producer who knows that everyone's preferences are the same need not know independently the strength of those preferences. If she knows the former, then she can determine the latter.

 

A producer can gauge the strength of everyone's preference for a public good, and hence determine just how much of the good to produce, by starting off with an attractive offer to provide a small amount of the good at a competitive price. The amount each has to pay for the good is small, marginal desire is high, and individual benefit exceeds individual cost, so everyone accepts the offer. The producer then offers to provide more and more of the good (or the same amount, but of better and better quality) for a higher price, until everyone declines the offer. Or she can start off making an offer to produce so large a quantity of the good at such a high price that everyone will decline. She may then incrementally offer to provide less and less for pledges of smaller and smaller amounts until everyone accepts.
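Here is a sketch of the ascending version of this procedure, under assumptions of mine: a particular (hidden) valuation curve shared by all consumers, a constant unit cost of production, and consumers who accept an increment whenever it is worth its share of the cost.

```python
N = 1000          # number of identical consumers (assumed)
UNIT_COST = 90.0  # assumed cost of producing one unit of the good
STEP = 1.0        # size of each incremental offer

def value(q):
    """Each consumer's benefit from q units: an illustrative concave valuation
    that the producer cannot observe directly."""
    return 5.0 * q - 0.02 * q ** 2

def accepts(q, extra_quantity, extra_price):
    """A consumer agrees to a further increment iff it is worth its price to her."""
    return value(q + extra_quantity) - value(q) >= extra_price

quantity, price_per_person = 0.0, 0.0
while accepts(quantity, STEP, UNIT_COST * STEP / N):
    quantity += STEP
    price_per_person += UNIT_COST * STEP / N
print(quantity, round(price_per_person, 2))   # about 123.0 and 11.07
# The producer stops near the quantity where each identical consumer's marginal
# benefit just covers her share of the marginal cost -- without ever being told
# the valuation function.
```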

 

(For simplicity, we assume no collusion on the part of consumers to make the producer think everyone's preferences for the good are less strong than they really are. Even if people colluded, it would only have the effect of allowing consumers to extract a better deal from the producer. Collusion would not prevent provision. We also assume competition and no collusion among producers, which insures the lowest possible price for production.)

 

If the producer knows that every preference is the same, and there is no collusion, no one will be able to cheat: cheaters will risk identification by refusing deals others accept. Even if everyone tries to cheat, they will probably not be able to coordinate their actions so as to hold out for the same offer. Some cheaters will be less greedy than others. So producers will know that all those looking for a better deal than the worst deal accepted by a consumer are cheating. Still, some people may hope that everyone will be pretty greedy and try to gain a contract as good as the contract accepted by the least greedy cheater. To discourage this, a producer could simply threaten not to provide the good to anyone caught cheating.

 

Having either a homogeneous population or known preferences avoids another potential problem, that of the "honest holdout". There is no spoiler whose indifference towards a good leads him to reject a conditional offer that others prefer to accept. Either everyone responds in the same way to an offer, or the price is tailored to each person's preferences. People who care not at all for the good get it for free. People who care very little get it at a price they can't refuse.

 

I will mention one final problem relating to the condition of homogeneous preferences. If a producer or entrepreneur does not know the preferences of each person, she must at least know the size of the population. Otherwise, she will not be able to recognize universal acceptance of an offer. Unless she knows the population size she will not be able to take advantage of similar preferences.

 

If she knows the population size and knows that everyone in the population shares a certain minimum desire for the public good, then she can use this information to gain universal agreement. For example, if an entrepreneur knows that exactly ten thousand people have rented or purchased docks along the coast of Maine, then she may offer to maintain the region's lighthouses only if ten thousand individuals subscribe to her service. Having been contacted by the entrepreneur, each dock user knows that if he doesn't subscribe, the entrepreneur will not, or at least will be much less likely to, fulfill her quota. (To avoid issues of fairness, suppose that the only users of Maine lighthouses are users of Maine docks.) Unless the entrepreneur receives her quota of subscribers, the lighthouses will not be maintained, or at least maintenance will be unlikely.

 

Knowing how many people enjoy a public good is not an easy condition to meet. First, there is often a problem of defining sufficient use. If someone listens only five minutes a week, should a conditional agreement include him among the users of the radio station? Presumably, this problem can be solved by pricing proportional to the consumer's known desire for the good. Secondly, there is a much more serious problem of privacy. A radio station, for example, would not be able to know exactly who listens to its broadcasts without violating the listeners' right to privacy. This is not a minor impediment. With regard to many public goods it is nearly as difficult to know who uses the good as it is to know the preferences of the users. With regard to other public goods, such as the use of a park or national defense, tracking who uses the good would not seriously compromise anyone's privacy. For these goods, knowing everyone's true preferences is the more demanding assumption.

 

 Cheap communication

 

In addition to the standing assumption of homogeneous or known preferences, successful conditional agreements require that communication among all the participants be not only possible but sufficiently cheap. If communication were free or cheap enough, then after some discussion everyone might agree to a conditional arrangement which would result in an efficient total level of provision.

 

Alternatively, an entrepreneur could save a group from having to engage in extended discussion while providing effectively the same result. As discussed above, the entrepreneur would offer to provide a certain amount of a public good on the condition that each person pay the entrepreneur a certain price. If one person refuses to pay, the entrepreneur does not provide the good and no one else has to pay. Either method removes the independence of each person's action. Either method provides each person with the incentive to cooperate in order to elicit the cooperation of others.

 

Relying on an entrepreneur is bound to be less costly than direct communication between participants. So communication need not be almost free, only cheap enough to allow an entrepreneur to explain her offer and obtain consent.

 

A lucrative opportunity will attract more than one entrepreneur, creating competition for the market. Suppose two entrepreneurs offer slightly different deals. A consumer might think that it's safe to make a conditional agreement with both of them. After all, if one of the entrepreneurs is not able to convince every other consumer to enlist, a subscriber will not have to hand over any money. However, what if both entrepreneurs succeed in getting everyone to enlist? Then each consumer will have to pay both suppliers, which would double the cost for the same service.

 

Consumers can get around this problem in one of two ways. They can monitor the relative success of each entrepreneur in enlisting subscribers or they can include a special condition in their agreement with the entrepreneur. If more than one entrepreneur gets everyone to agree (conditionally), a procedure is undertaken to select just one. This complicates the situation somewhat. Every entrepreneur must include in her contract the same procedure, or compatible procedures, for settling who gets to provide the good. Otherwise, there may be a conflict. How the issue is settled will probably be the object of bargaining between consumers. Some consumers may prefer a majority vote while others may prefer a decision procedure which gives more influence to certain constituents--say, those people who have the greatest desire for the good.

 

Today the task of explaining a relatively complex contract to a large population would be quite costly. That may not be true forever. The cost of communication has been dropping very quickly over the last few decades, due to advances in information technology. If the cost of communication were the only major obstacle to private provision of public goods, one could look forward to more widespread private provision in the future. Larger and larger groups would be able to initiate private arrangements for the provision of public goods.

 

 Solution to the bargaining problem

 

Known or homogeneous preferences and cheap enough communication are not the only requirements for efficient private provision. People would fail to consent to a conditional contract if they were unable to agree on how much each should pay. A very serious bargaining problem remains.

 

There are many ways to distribute the cost of paying for a public good, even among people with the same preferences. While it might seem that each person ought to pay the same amount, their ability to pay or bargaining power may differ. People who earn less may want to pay an equal portion of their income, while higher wage earners may want to pay an equal amount. Even assuming identical incomes, there is no reason to believe that everyone will eventually give up their hopes for a better bargain and reach an agreement. Some people may try to hold out for a more favorable deal in which they pay less and others pay more. Each person may just be too stubborn to ever agree to a contract that is not partial to him. Remember, we are talking about egoists here.

 

Recent work in non-cooperative bargaining theory may be close to solving the problem of how egoists will divide the costs of provision. If waiting is costly, then consumers have an incentive to agree to a subscription rate schedule sooner rather than later. If bargaining is conducted in an orderly, predictable fashion, within a fixed structure, then in many instances, rational consumers with known preferences will reach definite agreements and we can state what those agreements will be. The problem of deciding who pays how much towards supplying a public good is very similar to a simple game of division. Under the right conditions, the game of division has a unique solution.

 

Imagine a situation in which 50 people must agree on the distribution of $50. Suppose that each person is able to communicate a proposal to everyone else for their approval. If all approve, the proposal passes. Imagine, for example, that everyone can see a large blackboard on which the proposals and the results of voting are written. All other means of communicating are too expensive to use. People line up to write proposals on the board. Everyone votes immediately on each proposal as it appears. After making a proposal, one must wait until everyone else has had a turn before offering a new proposal. Finally, suppose that a delay of one period (the defeat of one proposal) costs $1, which is subtracted from the original $50. So, for example, after the first delay, the group has only $49 to divide.

 

Careful analysis of this game shows that everyone will agree to the very first proposal and that first proposal will give $1 to each of the fifty players.

 

Consider what happens if 49 proposals fail. The fiftieth person in line (call her Person 50) proposes a division of the last remaining dollar. She will be able to capture most of the money by offering a deal which gives each person a tiny fraction of one cent. They must accept or get nothing. If 48 proposals fail, Person 49 will realize that Person 50 can expect to get $1 or nearly that amount. So Person 49 will offer Person 50 $1 and each of the others a tiny fraction of one cent, leaving for herself $1 (or an amount arbitrarily close to $1). Likewise, Person 48 will realize that Person 49 and Person 50 can reasonably expect to make very close to $1 each. So Person 48 will propose that she, Person 49 and Person 50 each receive $1 and next to nothing for the others.

 

The same argument applies all the way back to Person 1. This reasoning, which is commonly called "backwards induction", leads to the conclusion that Person 1 will propose that each player receive roughly $1 and everyone will agree.
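The backwards induction can be checked mechanically. The sketch below assumes, as is standard, that a voter accepts any proposal giving her at least what she would get by waiting, and it treats the "tiny fractions of one cent" as zero.

```python
def equilibrium_proposal(n=50, pot=50.0, delay_cost=1.0):
    """Backward induction for the unanimity division game just described."""
    continuation = [0.0] * n          # what each person gets if all remaining proposals fail
    proposal = None
    for t in range(n - 1, -1, -1):    # work backwards from the last proposer in line
        pot_t = pot - t * delay_cost  # money left when this person gets to propose
        offer = list(continuation)    # give every responder exactly her continuation value
        offer[t] = pot_t - (sum(continuation) - continuation[t])  # proposer keeps the rest
        proposal = offer
        continuation = offer          # becomes the continuation value for the previous stage
    return proposal

shares = equilibrium_proposal()
print(shares[0], shares[-1], sum(shares))   # 1.0 1.0 50.0 -- roughly $1 apiece, agreed at once
```

On these assumptions Person 1's opening proposal gives each of the fifty players $1, everyone accepts, and no delay costs are ever incurred.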

 

Note that the above game assumes a cost to delay, common knowledge about everyone's preferences, knowledge of the size of the group, and a rigid structure in which bargaining takes place (unanimity voting rule, no jumping ahead in line, etc.). These conditions are not universal. The bargaining problem has different solutions or no known solutions under different conditions. The point of the example is to show that while the problem is not trivial, there are conditions under which people will mutually agree to a unique solution, with the outcome a foregone conclusion. If there is cheap communication, common knowledge about rationality, and homogeneous or known preferences, then the bargaining problem is not always an insurmountable obstacle.

 

While large populations rarely meet these conditions of homogeneous or known preferences, cheap (enough) communication, and invulnerability to bargaining impasses, small groups within the larger population may sometimes satisfy these conditions. If so, people in intimate groups that communicate cheaply, know each other well and are able to reach agreements easily, will manage to offer each other conditional agreements that increase the level of funding for public goods. These small groups will improve efficiency somewhat. However, these groups are likely to be a small fraction of a much larger population. If so, the gains they achieve over equilibrium levels will be dwarfed by the far greater gains that could be achieved were each to spend the optimal amount. Imagine, for example, the difference between everyone in the country offering to match your contribution to reduce the deficit and a few of your friends doing so.

 

 Iterated Games

 

So far we have considered conditions under which people will contingently agree to pay for a one-time provision of a public good. An example of a one-time provision of a public good would be contributing towards the purchase of a wetland to protect an underground aquifer that provides a large community with drinking water. The members of the community need to purchase the wetland only once to protect their drinking water. The benefits of preserving the wetland would be a public good if, say, the water company were not allowed to impose a surcharge on households to raise enough money to buy the wetland. If households needed to band together to protect their water source, this one-time purchase would be a public good.

 

Most providers of a public good, however, do not provide the good once for an eternity. Rather, they provide the good for a period of time and then require further payment to provide the good for another period. This allows consumers to avoid having to make an agreement contingent on what others are going to do during the same period. Instead, consumers may pursue a strategy in which they pay for provision for one period contingent on how others have behaved during earlier periods.

 

Generally, it is easier to discover how others have behaved at prior times than to discover their commitments at the present time. Someone can learn about the level of cooperation during a previous round simply by observing the resulting extent of provision. This allows for a margin of interdependence between one person's action at an earlier time and other people's actions at later times.

 

I will argue in the next chapter that one may usefully model the voluntary provision of a public good as an n-person prisoner's dilemma game. If we consider the dynamic situation in which each person makes her choice, we will be tempted to conclude that we are really examining an iterated prisoner's dilemma game. Unlike in one-shot prisoner's dilemma games, defection is not the obvious choice in iterated games. Some theorists have exploited the rationality of cooperation in the iterated prisoner's dilemma game to argue that people can voluntarily provide public goods.

 

There are several reasons why rational egoists will still fail to provide public goods even in ongoing situations. The first and most serious problem is that there are very many equilibrium strategies for iterated prisoner's dilemma games. Suppose you and I are playing an infinitely iterated prisoner's dilemma game. If I think your strategy is to cooperate on the first round and to mimic my previous action on every subsequent round (Tit-For-Tat), my best strategy is to do the same. Tit-For-Tat is my best response to Tit-For-Tat. Likewise, if you think I will play Tit-For-Tat, then Tit-For-Tat is also your best strategy. If we both follow this strategy, then we will both cooperate on every round, assuming there is no noise which causes one of us to misunderstand the other person's previous move.

 

Two people pursuing a Tit-For-Tat strategy, unfortunately, is not the only equilibrium. Suppose I think you will defect the first two rounds and will play Tit-For-Tat thereafter unless I defected on one of the first two rounds, in which case you will defect every round thereafter. My best response is to cooperate on the first two rounds and to play Tit-For-Tat every subsequent round. If this is my strategy, then your best response is, as above, to defect on the first two rounds and play Tit-For-Tat subsequently.
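Both profiles can be checked by simulation. The stage payoffs, the 0.9 discount factor, and the precise way the informal strategies are spelled out (in particular, treating the third round as a fresh start for Tit-For-Tat) are assumptions of mine; a long finite horizon stands in for infinite play.

```python
C, D = "C", "D"
PAYOFF = {(C, C): (3, 3), (C, D): (0, 5), (D, C): (5, 0), (D, D): (1, 1)}  # assumed stage payoffs
DELTA = 0.9     # assumed discount factor
HORIZON = 200   # long finite horizon standing in for infinite play

def tit_for_tat(own_hist, other_hist):
    return C if not other_hist else other_hist[-1]

def exploit_then_tft(own_hist, other_hist):
    """Defect in rounds 1-2; defect forever if the other side defected then;
    otherwise mimic the other side's previous move from round 3 on."""
    if len(own_hist) < 2:
        return D
    if D in other_hist[:2]:
        return D
    return other_hist[-1]

def acquiesce_then_tft(own_hist, other_hist):
    """Cooperate in rounds 1-3, then mimic the other side's previous move."""
    if len(own_hist) <= 2:
        return C
    return other_hist[-1]

def play(strat_a, strat_b, rounds=HORIZON, delta=DELTA):
    hist_a, hist_b, pay_a, pay_b = [], [], 0.0, 0.0
    for t in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        ua, ub = PAYOFF[(a, b)]
        pay_a += delta ** t * ua
        pay_b += delta ** t * ub
        hist_a.append(a)
        hist_b.append(b)
    return round(pay_a, 1), round(pay_b, 1)

print(play(tit_for_tat, tit_for_tat))               # (30.0, 30.0): mutual cooperation
print(play(acquiesce_then_tft, exploit_then_tft))   # (24.3, 33.8): the asymmetric profile
always_defect = lambda own, other: D
print(play(always_defect, exploit_then_tft))        # (10.0, 10.0): early defection is punished
```

On these assumptions both profiles sustain cooperation (after the second round in the asymmetric case), and a player who defects early against the punishing strategy does strictly worse, which is the sense in which neither equilibrium unravels.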

 

The situation is substantially the same for n-person prisoner's dilemma games, and indeed for any iterated non-cooperative game with an infinite horizon, provided future payoffs are not discounted too heavily or the probability that the game will end after the current round of play is low enough.

 

In addition to the problem of multiple equilibria, people acting in ongoing situations face the same obstacles as people in static situations. Unknown heterogeneous preferences, information costs and bargaining problems will still impede efficient outcomes. For example, unless everyone's preferences for a public good are known, contributors will not be able to punish defection in a heterogeneous group. A non-contributor might be someone who is genuinely indifferent to provision or someone who is attempting to free ride. Where contributing is not a binary choice, people may quibble over price. These problems are not solved by being able to punish people for past behavior.

 

Hence, rational egoists are unlikely to provide public goods in ongoing situations which allow people to support the good on the condition that others have supported the good previously.

 

 

 Privileged Groups and Selective Incentives

 

In his classic work on collective action, Mancur Olson proposed at least two reasons why rational egoists might contribute to public goods. Some people may care enough about the good to make each person's contribution worthwhile (Olson would say that sets of people with such members belong to a "privileged group"), or people may contribute in order to receive certain special advantages that they could not otherwise enjoy ("selective incentives" encourage contributing).

 

Our discussion of an equilibrium contribution has already shown how people might find it worthwhile to contribute something to a public good. Clearly, however, there are instances in which either no one supports a public good or only a small number of those who benefit from the good support it. To justify the actions which lead to these outcomes, we need only lift a couple of our assumptions so as to allow for the possibility of a threshold and a heterogeneous population.

 

Recall the game of two players and two goods. Suppose that the players prefer not to give any amount unless the other person gives at least a certain amount. Start-up costs are sufficiently high that each person prefers no public good to shouldering the expense of initial production. Or suppose that the transaction costs of giving are too high to make small contributions worthwhile. This situation is illustrated in Figure 2. The equilibrium in this game has each person giving zero.

 

Figure 2.

There might be a threshold level which is less destructive of provision. Consider the reaction curves in Figure 3. This game has two equilibrium points: zero and e. Such a situation illustrates a shortcoming of the Nash equilibrium concept: every non-cooperative n-person game has at least one equilibrium, but not necessarily just one. Figure 3 represents an Assurance Game. If there is common knowledge of each person's rationality and preferences, then both will choose to contribute amount e. However, if there is some doubt about the other party's rationality, or if there is not common knowledge, as when A thinks B doubts A's rationality, then both may choose to give nothing rather than e. They will do so if one of the pair begins her reasoning with the assumption that the other will give either very little or very much. (If A supposes B will give a lot, A will give a little. B will then give nothing and then so will A.)

Figure 3. 
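A two-by-two stand-in makes the two-equilibrium structure explicit. The payoff numbers are invented for illustration; "give e" abbreviates contributing the equilibrium amount of Figure 3 and "give 0" contributing nothing.

```python
ACTIONS = ["give e", "give 0"]
# payoffs[(i, j)] = (A's payoff, B's payoff) when A plays ACTIONS[i] and B plays ACTIONS[j]
payoffs = {
    (0, 0): (4, 4),   # both contribute e
    (0, 1): (0, 3),   # A contributes alone and is left below the threshold
    (1, 0): (3, 0),
    (1, 1): (2, 2),   # neither contributes
}

def pure_nash(payoffs, n_actions=2):
    """Return every pair of actions from which neither player gains by deviating."""
    equilibria = []
    for i in range(n_actions):
        for j in range(n_actions):
            a_best = all(payoffs[(i, j)][0] >= payoffs[(k, j)][0] for k in range(n_actions))
            b_best = all(payoffs[(i, j)][1] >= payoffs[(i, k)][1] for k in range(n_actions))
            if a_best and b_best:
                equilibria.append((ACTIONS[i], ACTIONS[j]))
    return equilibria

print(pure_nash(payoffs))   # [('give e', 'give e'), ('give 0', 'give 0')] -- two equilibria
```

With common knowledge of rationality and preferences the better equilibrium can be selected, but a seed of doubt is enough to make "give 0" the cautious choice, just as the text says.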

These examples explain how many groups may fail to provide even a small amount of a public good: the threshold level suppresses each person's contribution. What about the actual free-rider situations in which some people contribute to the good, allowing others not just to give less, but to give nothing at all, that is, to ride completely free? These are situations in which people's tastes differ and some people with stronger desires for a good drive the contributions of other people with weaker desires to zero.

 

Again, consider the simplified two person situation depicted in Figure 4.

 

Figure 4.

Suppose A gives nothing. Then B will give b1. If B gives b1, A gives a1. If A gives a1, B gives b2, which is less. If B gives b2, A gives a2, which is more. The less B gives, the more A gives until B gives nothing and A gives e, the equilibrium amount. Olson would say about this situation that A and B belong to a privileged group.

 

Threshold levels explain why some public goods are never provided. Wide differences in preferences explain why some people never contribute, while others carry the full burden. While in principle it is easy to justify a gift of any amount by supposing that it constitutes an equilibrium contribution, in reality, when someone is giving to a medium-sized or large organization, her equilibrium contribution is likely to be exceedingly small, certainly less than the average actual gift. Were everyone to give no more than her equilibrium contribution, organizations providing public goods to sizeable populations would not do very well.

 

How then can an egoistic theory justify the significant amounts that people give to sizeable organizations? Olson argues that support for medium-sized and large groups must have some motive other than the direct benefit each person receives from the organization having a little more to spend on producing the public good. Olson suggests that indirect benefits or selective incentives are also at work. Selective incentives are really the private component of "impure public goods", goods that jointly provide a public and a private product. Many organizations offer special privileges to their members. Some unions are closed shops; professional associations offer publications, education, training, and invitations to restricted gatherings; radio stations offer mugs; and museums offer tote bags with their logos on them.

 

Rational egoists will no doubt pay to possess the private good and incidentally support the public good in the process. This works best when the provider of the impure public good has a monopoly on the private good component. If people will pay to meet other donors to a radio station, then a radio station can attract members by offering banquet invitations, or even by promising to sell its membership list to a firm that will organize the banquet.

 

However, there is a limit to the number and value of the goods and services over which an organization can exploit its monopoly power. Where the provider of the impure public good does not have monopoly power, i.e., where there is free entry, a profit-making firm will be able to offer consumers the private good component at a reduced cost (ignoring the competitive advantage a non-profit firm has by not paying taxes).

 

Furthermore, people support organizations that provide a pure public good but few or no selective incentives (e.g., the Red Cross), and they support organizations without collecting the incentive (as do contributors to public radio or TV who decline the premium available to them). In addition, it is unlikely that organizations that offer special opportunities or privileges would receive as much revenue if all they were offering were the prizes. Considering the large amount of voluntary giving and the small volume of merchandise, it is implausible that people value the goods and services that highly.

 

 

 Prestige, Pleasure and Guilt

 

When people voluntarily support public goods they receive some benefits that are not tangible. They may gain the respect of their peers or derive a special pleasure from their virtuous action. If they don't contribute, they may have to pay an intangible cost in the form of diminished respect or painful feelings of guilt. One motive rational egoists might have to contribute above the equilibrium amount is to gain these kinds of rewards and avoid these kinds of punishments. I want to first consider the quest for prestige and then the concern for internal feelings.

 

The desire for the favorable opinion of others or for a good reputation would motivate people in small groups to support public goods. With small groups and face-to-face interaction, people often observe each other's actions and form opinions about those who do and do not contribute to projects that benefit them. When someone acts in a way that has serious consequences, or the chance of serious consequences, for each member of the group, each member will tend to hold a strong opinion about that act and the person who performs it. The desire to avoid unfavorable opinions, and to earn favorable ones, will encourage cooperation and discourage free-riding.

 

The desire for prestige will not be so successful in motivating people in large groups. In these instances, action is often anonymous. Admittedly, friends or family sometimes learn about these actions, either incidentally or by the agent's own disclosure. It might seem that if the free-rider cares about the opinion of a few people, even if they are a small portion of the population, she will contribute. But why should even a few egoists strongly disapprove of her action? When the group is large, even if the action cannot be or is not hidden, the people who learn about the defector's action are not significantly affected by it. If the degree of approval or disapproval is relative to the harm or benefit to the observer, then the sanction is likely to be so light as to have a very small effect on the person observed.

 

If the observer is a rational egoist, what reason does she have to judge the action of another person except in terms of how that action affects her own welfare? We can imagine why an egoist might care about the approval of another egoist, especially if they belong to the same small group: the other person may come to be in a position to help or hinder the first person. But why would the observer hold an opinion about actions that have little or no effect on her? It would be odd for a rational egoist to develop a set of evaluations that disapprove of actions that have little or no effect on her well-being.

 

Forming an evaluation might be less odd if the observer interpreted the act of contributing as an act of generosity and inferred from this disposition that the person might benefit the observer on other occasions. But really, if the donor wished to have this effect, she could achieve it more directly by demonstrating her generosity, not just to people or causes in general, but to that person in particular, or to the donor's other friends and acquaintances. In so far as the donor's resources are limited, giving resources to benefit a very large group leaves less to spend on the smaller group, those people in the best position to return the favor, or to acknowledge it. The observer should actually disapprove of undiscriminating generosity. She ought to give the highest respect to those who are most likely to benefit her and discourage others from excessive self-interest or harmfully broad sympathies.

 

Supporting a public good that benefits a large number of people might signal other qualities of the donor besides her universal generosity. It might signal great wealth, for example. The practice, common in some cultures, of sponsoring expensive celebrations or, more dramatically, the "potlatches" of the Indians of the northern Pacific coast (such as the Kwakiutl) seems to have this effect. If there were more advantages than disadvantages to having others think one is wealthy, this would be a good reason to give. However, small donations, such as any ordinary person can afford, do not signal such wealth. Note that even if the rich were more likely to give small amounts than people of modest means, as long as a contributor is more likely to be of modest means than rich (i.e. most contributors are not rich), being a contributor would not effectively signal being rich. Yet small donations are precisely what characterize our core examples, such as public radio and TV.

 

Of course, if the rational egoist lives among people who are themselves not perfectly rational and who do tend to think better of a person for actions that have hardly any effect on their own individual welfare, then the person might reasonably be motivated to contribute toward a public good to protect or enhance her reputation. However, in this project I am concerned with ideally rational creatures. This assumption applies to the agent whose actions we are evaluating and to all of the participating players. Among a population of perfectly rational agents, the practice of observers forming favorable opinions of agents whose actions do not significantly improve the observers' welfare would not be stable.

 

If egoists are not necessarily motivated by a concern for prestige, perhaps they are encouraged to provide public goods to large groups by internal rewards and punishments. They might find pleasure in contributing and feel guilty and uncomfortable were they not to contribute. This suggestion runs into a number of problems, only one of which is decisive.

 

If the pleasure is a byproduct of a moral disposition--if it is, for example, the pleasure someone finds in doing a morally good act--then it would not be accessible to the egoist. The person would first have to be other than an egoist to derive such pleasure. However, the pleasure taken in performing "good" actions, or the guilt felt after failing to perform them, is not just a byproduct of a moral disposition. Even egoists may be subject to these psychological states. These reactions are a product, not of their self-interest, but of other causes.

 

How might such dispositions arise among egoists, and how might they be sustained? Consider a genetic explanation first. Sociobiologists have found evidence for altruism towards close kin among birds, mammals, fish and insects. It might be that innate mechanisms are responsible for the pleasure of giving and the pain of withholding aid even in humans. However, even if we were to accept the leap of inference from other animal species to humans, an evolutionary genetic explanation would still suggest only a disposition in favor of close kin, or small groups at best. Evolutionary forces would likely select against broader dispositions providing internal rewards or punishments that encourage sacrifices for the sake of large groups.

 

Perhaps the internal dispositions are the effect of culture, transmitted from person to person through interaction and learning. If this is the case, then even large groups of egoists might develop emotional dispositions which provide incentives to cooperate in the provision of a public good.

 

However, as long as the members of a group remain egoists, the effect of cultural transmission will be very precarious. Egoists have a higher-order desire to advance their own interests. They will perceive their first-order desires to gain certain kinds of pleasures and avoid certain kinds of pains as excessively costly. They will seek to free themselves from these expensive tastes and to acquire emotional reactions more compatible with their fundamental concern for themselves. If first-order desires are malleable (as we must concede they often are), then rational egoists will eventually reshape their characters to enhance their ability to promote their own self-interest. This is the decisive difficulty with the internal sanctions hypothesis: the theory of internal rewards ignores the degree to which people can reform themselves.

 

 Quasi-Causation

 

Imagine that you have travelled long and far and finally have reached the end of the universe. There you see someone who looks and acts exactly like you. You think you are looking in a mirror, but you're not. When you move your right hand, this other fellow (call him your "double") moves his right hand. When you wink, he winks. If you want to kill two birds, you need only throw one stone because your double will throw one too. The reason your double seems to imitate you is that his internal states and external circumstances are just like yours. With enough experience, you come to rely on having your actions mirrored by your double. You do not cause him to behave the way he does, but--we shall say--you quasi-cause his behavior (and he yours).

 

Suppose you were to play a prisoner's dilemma game against your double. Would you cooperate or defect? If you want to perform the action with the greatest expected utility, then you will cooperate. You can expect to do better if you cooperate than if you defect. Now, we know from the analysis of Newcomb's Problem that maximizing expected utility is not always wise, at least according to those of us who would choose both boxes. So instead of performing the action with the greatest expected utility, we might choose the action whose causal consequences have the greatest expected utility. While you will not cause your double to defect if you defect, you will quasi-cause him to do so. If you are particularly taken with quasi-causation, and mistake quasi-causation for causation, you will cooperate.
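
 

The contrast between the two decision rules can be made concrete with a small sketch. The payoff numbers below are hypothetical and merely exhibit the standard prisoner's dilemma ordering (temptation > reward > punishment > sucker's payoff); this is an illustration, not the author's formalism.

    # My payoff as a function of (my act, my double's act); illustrative values.
    PAYOFF = {
        ("cooperate", "cooperate"): 3,   # reward
        ("cooperate", "defect"):    0,   # sucker's payoff
        ("defect",    "cooperate"): 5,   # temptation
        ("defect",    "defect"):    1,   # punishment
    }

    def evidential_eu(my_act):
        # The double mirrors me (near) perfectly, so conditional on my act,
        # his act is the same act; expected utility tracks this correlation.
        return PAYOFF[(my_act, my_act)]

    def causal_eu(my_act, p_double_cooperates):
        # My act has no causal influence on the double; he will do whatever
        # he is going to do, with some fixed probability of cooperating.
        return (p_double_cooperates * PAYOFF[(my_act, "cooperate")]
                + (1 - p_double_cooperates) * PAYOFF[(my_act, "defect")])

    # Evidential reasoning favors cooperation: 3 versus 1.
    print(evidential_eu("cooperate"), evidential_eu("defect"))

    # Causal reasoning favors defection no matter what the double will do.
    for p in (0.0, 0.5, 1.0):
        print(p, causal_eu("cooperate", p), causal_eu("defect", p))

The evidential calculation, which conditions on the correlation with the double, recommends cooperation; the causal calculation, which holds the double's behavior fixed, recommends defection for every value of p. Quasi-causal reasoning amounts to letting the first calculation do the work that only the second can legitimately do.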

 

If you are like most people, you don't need to travel to the end of the universe to reveal a weakness for quasi-causal reasoning. Thinking about your double at the end of the universe just makes quasi-causal reasoning seem more plausible. But quasi-causal reasoning is a mistake, as the same kind of reasoning in circumstances closer to home demonstrates. Quasi-causal reasoning is no different from confusing actions that are diagnostic with actions that are causally efficacious. You don't need to believe you have a double to make this mistake. Calvinists, for example, are sometimes said never to waver from the path of righteousness, since wavering would reveal a soul predisposed to sin, and hence destined to damnation in hell.

 

Tell subjects that a strong heart and good health make a person more (or less) tolerant of cold water after a period of exercise, and they will become correspondingly more (or less) inclined to endure the cold water after exercising. The subjects adjust their tolerance although they are told, and would readily concede, that greater or lesser tolerance is only a sign of good health, not a cause of it.

 

Quasi-causal reasoning, or deceptive diagnosis, tempts us in still other contexts besides science fiction scenarios, Calvinism and medical testing. In the next chapter I will argue that the voluntary provision of a public good, including voting, is a prisoner's dilemma. If an egoist thought that others were sufficiently like her in relevant ways, and favored quasi-causal reasoning, she might choose to make a voluntary donation or to vote.

 

As I think the medical testing experiment shows, this kind of reasoning is fallacious, whether the situation involves tolerance for cold water, giving money to public radio or voting. When you are deliberating whether to vote or not, what matters is the real causal effect of your voting. Likewise for contributing. Presumably, others will act as they are going to act, whether you vote/contribute or not. The fact that your action may be a statistical indicator of what others are going to do should not affect your deliberation.

 

So however egoists might be tempted to act, it would not be rational for them to let quasi-causation influence their deliberation. Causation, not quasi-causation, is the appropriate concept for decision making.

 

 Patchwork Theory

 

Financial contributions to non-profit organizations in the U.S. totaled over $122 billion in 1990. Donated labor was roughly equal in value. While we are not primarily concerned to explain these empirical facts, we are concerned to offer normative arguments in favor of this kind of action. We have seen that no one simple egoistic argument can justify every kind of contribution to a public good. However, a combination of several different egoistic arguments might justify most of them. What remains may then be dismissed as either unjustifiable or justified on some other basis, but in either case too rare and meager to be significant or worthy of our attention.

 

A patchwork of egoistic justifications might look something like this. Almost half of all contributions go to religious organizations. The fear of God and the prospect of heavenly rewards are sufficient to motivate a believer on purely self-interested grounds. (Depending on the character of the deity, giving for self-interested reasons might be self-defeating. For the sake of the argument, we will assume otherwise.) Other contributions go to small local community organizations, for which the free-rider problem--while still present--is less severe than for large organizations. Many contributions are not made anonymously, thus providing prestige for the donor. Finally, some contributions may be made in exchange for selective incentives over which an organization has monopoly control. These four factors taken together--religion, size, prestige, and premiums--justify most of the vast sums that are voluntarily donated each year. The proponent of egoism may not be able to justify every possible donation, but he can dismiss the remainder as relatively insignificant.

 

There is a bad objection to this argument. We might think that principles of simplicity apply to competing normative justifications, just as they do to competing explanatory theories. One normative justification is better than another if it is simpler. The bad objection states that the patchwork theory is too complex, more complex than rival non-egoistic theories. However, while the patchwork theory might seem complex, at a fundamental level it is in fact quite simple. The theory appeals only to each person's self-interest. The complexity enters when the theory necessarily takes account of various empirical facts. This combination of simplicity in fundamental principles and modest complexity in the application of those principles is a suitable, not an objectionable, mix for an applied normative theory.

 

A better objection to the patchwork theory raises a more direct challenge. It disputes the theory's claim to justify, one way or another, the bulk of voluntary contributions. None of the justifications the patchwork theory employs justifies modest but significant contributions to large organizations. Conditional agreements, selective incentives, prestige and guilt all fail to justify contributions which are barely recognized and serve large constituencies. When these justifications succeed, they succeed only with small groups. Buttressing this challenge would require data showing that a significant portion of all giving comes in small amounts and goes to large organizations. I do not have the numbers to support this claim. Instead, I would merely point to the many large non-profit organizations which receive much of their income from individual contributors, organizations such as the American Cancer Society, American Heart Association, Amnesty International, Muscular Dystrophy Association, Nature Conservancy, Oxfam America, Save the Children Federation, and various museums, hospitals and public broadcasting stations with large budgets. Numerous individuals give small donations to keep these organizations active. Together their contributions sum to levels significant enough to warrant inquiry into the normative arguments that justify this behavior.

 

Hence, so long as we focus on the provision of public goods to large non-religious constituencies, rational self-interest will not be able to justify voluntary provision.

 

