VII. Partial Altruism and Two Competitors

 

 

I have argued that none of the approaches we have considered so far adequately justifies the voluntary provision of a public good in all of the relevant contexts. Now I want to present and explore a kind of theory that does justify voluntary provision: altruism.

 

Altruists are people who value not only their own welfare, but the welfare of others. Their utility is a function of both their own welfare and the welfare of others. One kind of altruism is utilitarianism. Utilitarians give equal weight to each person's welfare. Utilitarianism solves our fundamental problem: it justifies the voluntary provision of public goods. I will briefly show how utilitarianism succeeds in justifying two central examples of voluntary provision: financial donations and voting. Many objections have been raised against utilitarianism as a moral theory. I do not intend to consider all of them, or even all of the standard ones. Instead I will focus on those objections that especially pertain to the circumstances of charitable giving. Some of the objections fail or are left unresolved, while others are sustained.

 

After criticizing utilitarianism, I will offer an alternative altruistic theory which avoids all of the previous objections. The theory, which I will call "partial altruism", allows one to include the welfare of others in one's utility function, but at a discount. After discussing the advantages of this theory over utilitarianism, I will defend it against two prominent objections. Finally, I introduce, and argue against, a third, non-consequentialist, altruistic theory.

 

 Utilitarian justification of voluntary provision

 

We have seen in the first chapter that egoists will not provide themselves with efficient levels of a public good because each will ignore the benefit her contribution has on her fellow contributors. Utilitarians do not ignore these effects, but are instead very sensitive to them. Hence, while a small contribution to a large organization, such as WNYC or Channel 13 in New York City, will make only a small difference, or perhaps only a small probability of a small difference, in what the organization produces, the utilitarian will consider not just the improvement in her own welfare, but the potential improvement in the welfare of every other consumer. A large group of beneficiaries counterbalances the small size or probability of the benefit. There is no free riding because all externalities are considered in the utilitarian's utility function.

 

The utilitarian can justify contributions that are divided among so many people as to make what one might call an "imperceptible" difference to each person, by appealing to the increased probability of making certain perceptual judgements or having a certain physical condition obtain. There are a million donors; each pours a cup of water into a large container from which a million thirsty Bedouin each takes a cup of water to drink. No Bedouin can unmistakably tell the difference made by any individual contributor. But pouring a cup of water into the receptacle has a small probability of making a small difference, both to what each Bedouin perceives and, more importantly, to the Bedouin's physical condition. This small probability of a small difference is multiplied over a large number of people. Altogether, it is significant.
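
The arithmetic deserves to be made explicit. Here is a minimal sketch (in Python, with every number assumed purely for illustration) of how a tiny chance of a tiny benefit, spread over enough people, sums to a real expected benefit:

```python
# Assumed numbers: each cup has a one-in-a-million chance of making a
# one-unit welfare difference to each of a million Bedouin.
n_beneficiaries = 1_000_000
p_difference = 1e-6     # chance the cup changes what one Bedouin gets
welfare_gain = 1.0      # size of that difference, in welfare units

expected_gain = n_beneficiaries * p_difference * welfare_gain
print(expected_gain)    # 1.0: a full unit of expected welfare, not zero
```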

 

One might ask why someone should give to an organization with substantial administrative costs, such as Oxfam America, rather than give directly to individuals. The answer is the same answer one should give to the question, why pay someone to manufacture a shirt for you when you could sew one yourself? There are economies of scale and advantages to the division of labor. Just as the Arrow shirt company can make a shirt more efficiently, so too Oxfam America can relieve poverty more efficiently than someone who attempts to do this alone. Poverty relief is not simply a matter of transferring money from rich individuals to poor ones. One can be more effective by giving to organizations which fund model projects, allow for assessment and accountability, build networks that promote the dissemination of information, solve coordination problems (for donors and recipients), preserve an institutional memory, and hire informed, trained professionals to manage the operation intelligently.

 

One type of voluntary provision is voting, where the good provided is the selection of the superior candidate or the better answer to a ballot question. A utilitarian will vote in spite of the slim chance that his vote will decide the election. The large benefit a better leader will have, due to the large number of people his decisions will affect, overcomes the small probability of any effect at all, or at least enough to justify the inconvenience of going to the polls. I have not yet discussed the voting situation in detail, so I will take a moment to do so now.

 

One might object that not all elections are close. In some contests, voters reasonably believe that one party is a clear favorite. If two candidates are not evenly matched in popularity, the probability of a tied election becomes "infinitesimally small". The probability declines exceedingly fast as the candidates become less evenly matched, so that even for a population as small as a million people, if the general population favors one candidate 55 to 45 over the other, the estimated probability of a tied election is 10^-2185. To say that such a probability is small is an understatement. There are only about 10^79 electrons and protons in the observable universe.
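
The figure can be checked with a simple binomial model, which I offer only as an illustrative reconstruction: a million voters, each independently favoring the trailing candidate with probability 0.45, and a tie meaning an exact 50-50 split. Working in log space avoids numerical underflow:

```python
import math

N = 1_000_000   # electorate size (assumed even)
p = 0.45        # chance a given voter backs the trailing candidate

# P(exact tie) = C(N, N/2) * p^(N/2) * (1-p)^(N/2), computed in log10.
log10_tie = ((math.lgamma(N + 1) - 2 * math.lgamma(N // 2 + 1))
             / math.log(10)
             + (N // 2) * (math.log10(p) + math.log10(1 - p)))
print(f"P(tie) is roughly 10^{log10_tie:.1f}")   # about 10^-2185
```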

 

While a utilitarian might have a chance of deciding the outcome of a close election, lopsided elections are very different. Here she must appeal to other considerations, such as having an effect on how the election is perceived, publicly supporting the democratic process, enhancing her motivation to stay informed about future candidates, etc. One might object that a single vote will also have no significant effect on most of these other considerations, such as how people view an election. Not many people ever learn of the exact results. But the margin of victory may have a significant effect. In fact, it will have various effects with various probabilities. The bigger, more significant the effect, the smaller the probability that a single vote will induce that effect. But the probability is still there. While the public may be less interested in exact results, the people for whom the results have the most significance, campaign analysts and candidates, pay closer attention. Affecting the perceptions of this group is not a trivial consequence.

 

Hence, only in certain special circumstances, such as elections with clear favorites, where the margin of victory is unimportant, where high voter turnout is not needed to encourage good citizenship, where there are no compelling private motivational considerations or other secondary factors, will the utilitarian have to concede that some people have no compelling reason to vote.

 

If in those special circumstances, utilitarianism recommended that everyone simply not vote, then this would present a problem of instability. If one reflective person believed that all the other voters were going to follow the utilitarian recommendation not to vote, then she would expect her vote to be decisive. The utilitarian principle would recommend to this reflective person that she vote. But if everyone else followed the same reasoning they too would expect everyone else not to vote and hence vote themselves. If a still more reflective citizen believed this, then the utilitarian principle would again recommend not voting. An infinite regress of alternating recommendations seems to follow from all-or-nothing advice to either vote or not vote.

 

Fortunately, a group of utilitarians with mutual knowledge of each other's preferences will be able to reach a stable equilibrium by adopting a mixed strategy. To follow a mixed strategy, each citizen uses a random device to decide whether or not to vote. Just how high she would set the probability of voting would depend on how lopsided she believed popular opinion to be. Nash's 1951 theorem for non-cooperative games assures us that every finite non-cooperative game has at least one equilibrium, provided mixed strategies are allowed. The voting situation is essentially a non-cooperative game.
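
To see how such an equilibrium might be computed, consider a toy participation game in which every detail is my own assumption for the sake of illustration: n supporters of one candidate each vote with probability q against a fixed opposition turnout of m votes, voting costs c, and a decisive vote is worth B. At equilibrium, the expected benefit of voting just offsets its cost:

```python
from math import comb

def pivot_prob(n_others, q, m):
    """P(exactly m of the other n_others supporters vote), i.e. the
    chance that one extra vote breaks what would otherwise be a tie."""
    return comb(n_others, m) * q**m * (1 - q)**(n_others - m)

def equilibrium_q(n, m, benefit, cost):
    """Bisect on the high-turnout branch (where pivot_prob falls as q
    rises) for the q* at which voting's expected benefit equals its
    cost; parameters are chosen so voting pays at the branch's start."""
    lo, hi = m / (n - 1), 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if pivot_prob(n - 1, mid, m) * benefit > cost:
            lo = mid            # voting still pays; turnout can rise
        else:
            hi = mid
    return (lo + hi) / 2

print(equilibrium_q(n=101, m=50, benefit=1000.0, cost=1.0))   # ~0.64
```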

 

The utilitarian defense of voting is successful if voters are statistically reliable choosers of the superior candidate. However, one might reasonably doubt that voters have the ability to recognize the better candidate. There are three issues here: i) are voters reliable choosers of the better candidate? ii) do they believe they are reliable choosers? and iii) are voters rational to believe they are reliable choosers?

 

Voters need not know who is the superior candidate for them to believe they are voting for the superior candidate and for them to be justified in their belief. After all, in a close election it must be the case that almost half of the voters have erred, maybe even the majority.

 

Nor need voters actually believe they are reliable choosers for them to be in a position that would make such a belief rational. Some people confess to having little confidence in their own comparative judgement of candidates. This is probably excessive modesty. Whatever someone believes about her own fallibility, if she is rational and inquiring, she most likely knows enough, or has access to enough information, to allow her to make a reliable judgement. It may be rational for someone in her position to have some measure of confidence in her own judgement, whether or not she herself has such confidence.

 

In fact, most voters believe in their own judgement about the candidates. Further, if they choose to inform themselves reasonably well, they are justified in holding a belief about who is the superior candidate. It is easy to argue for the rationality of a modest level of self-confidence on the part of voters. To have a propensity to pick the better candidate one only needs to do better than chance. Most voters have access to some useful information about the candidates (their voting record, record of public service, speeches, performance in debates and interviews, position papers on various issues, the opinion of other well-informed people, etc.). One must suppose that all of this information is totally worthless to conclude that no voter is ever justified in having more confidence in her own judgement than in the flipping of a coin. I conclude that well-informed voters are justified in believing they are voting for the superior candidate.
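
The aggregate force of a bare edge over chance can be illustrated with a Condorcet-style calculation (the reliability figures below are assumptions, not estimates): if each voter is independently right with probability just over one half, the majority is right far more often, and increasingly so as the electorate grows:

```python
from math import lgamma, exp, log

def majority_correct(n, p):
    """P(a majority of n voters picks the better candidate when each is
    independently right with probability p); binomial tail in log space."""
    log_choose = lambda n, k: lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return sum(exp(log_choose(n, k) + k * log(p) + (n - k) * log(1 - p))
               for k in range(n // 2 + 1, n + 1))

print(majority_correct(1_001, 0.51))     # ~0.74 from a slim 51% edge
print(majority_correct(100_001, 0.51))   # ~1.0: near-certain majority
```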

 

The difference over chance, however, may be quite small. If the reliability of voters is small, then they should take this into account when calculating whether to vote. Given the potential benefits, I suspect that the reliability of voters is sufficiently high to justify their trip to the polls. However, arguing for a particular estimate of voter reliability is beyond the scope of this dissertation. We conclude that utilitarians can usually justify their contributions to large organizations and their voting behavior.

 

 Objections to Utilitarianism

 

We know from experimental results that most people free ride to some extent. Is this consistent with utilitarian theory? In idealized circumstances, utilitarians will contribute amounts that are Pareto efficient. This would preclude any degree of free riding. However, it is possible to argue that most of the experimental situations in which people free ride are not ideally suited to eliciting full contributions from utilitarians. First, the subjects lack mutual knowledge about one another's preferences; second, each may be more concerned with her own payoff because each believes that she alone is a utilitarian and so is more likely than others to use her winnings to impartially advance human good; in effect, she thinks there are alternative uses of her money that will do more good than giving the money to the other players in the experimental game; third, each may fear that by cooperating fully she encourages the exploitative behavior of the unscrupulous free riders, which in the long run will hurt humanity. Finally, the utilitarian may justifiably point out that actual human behavior does not impugn the adequacy of utilitarianism as a normative moral theory.

 

The utilitarian may make similar replies to explain why she does not use her tax form to reduce the federal debt (an example discussed in the previous chapter). She may believe there are better uses of her money or that by contributing she is encouraging those who free ride on the generosity of naive or overly trusting donors.

 

So the utilitarian can justify both giving voluntarily to a public good and not giving to every public good or not giving fully to some. Still, utilitarian morality seems too demanding if we look beyond donations for public goods. Consider the utilitarian's opportunities to contribute to organizations that serve more purely charitable purposes, where donors realize little or no benefit from the work of the charity. Utilitarianism would require most people who live in a prosperous northern country to make severe sacrifices in their personal lives in order to prevent hunger, famine, disease and desperate poverty among people in less prosperous southern countries. The utilitarian may counter this objection in two possible ways: by showing that expenditures on oneself are, contrary to appearances, the best one can do to promote happiness for all, or by recognizing that the theory is a moral ideal, at which we ought to aim, however unsuccessful we may be in achieving perfection. Both of these replies have merit.

 

By not immolating oneself through a radical alteration of one's expenditures, personal habits, career plans and social relationships, one may be able to do more good for others in the long run, especially if one has children and breeds an indefinitely long line of future utilitarians. Further, failure to live up to a demanding moral code is no argument against the code itself, especially if one sincerely tries to meet its demands, but fails through weakness of will. Just how adequate these replies are depends on delicate practical judgements. How comparatively beneficial can a conventional life be? To what degree is the utilitarian's failure to live up to her own standards beyond her control, and to what degree is it merely backsliding or hypocrisy? However we answer these questions, there are still other objections to consider.

 

Consistent utilitarians ought to be relatively reckless with their investments, more reckless, in fact, than a prudent person would be with her own money. Consider the rational egoist who invests in the stock market. A dollar means more to her if she is poor than if she is rich. As a result, if she has a choice between two stocks, both with the same expected return, she will prefer the less volatile stock, the one with less expected variance in its performance. For example, suppose stock A has an expected return of $10 per share, but has a 50% chance of returning $20 per share and a 50% chance of returning zero dollars per share. Stock B has the same expected return, but a 50% chance of returning $12 per share and a 50% chance of returning $8 per share. An egoist for whom utility is a strictly concave function of money will prefer stock B.

 

Compare the rational utilitarian. A dollar does not mean less to him when he is rich than when he is poor. He can easily skirt the decline in the marginal utility of his money simply by dividing his earnings among many people. If he earns more, he will want to give more people the same amount of money, not the same number of people more money. (I am here assuming no transaction costs.) In effect, his utility function for money is a straight line, or nearly so. (See figures below.)

 

[Figures 1 and 2: utility as a function of money, strictly concave for the egoist and a straight line, or nearly so, for the utilitarian.]

Rational utilitarians ought to be daring gamblers and aggressive investors. They should be willing to seek out the investment opportunity with the highest expected return and risk everything on that one investment. My bet is that few utilitarians are so indifferent to risk.
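
The contrast can be verified with a line or two of arithmetic; the utility functions below are assumptions chosen only to illustrate it (any strictly concave function would rank the stocks the same way, by Jensen's inequality):

```python
import math

# Two stocks with equal expected return ($10/share) but different risk.
stocks = {"A": [(0.5, 20.0), (0.5, 0.0)],
          "B": [(0.5, 12.0), (0.5, 8.0)]}

def expected_utility(lottery, u):
    return sum(p * u(m) for p, m in lottery)

# Risk-averse egoist: strictly concave utility, e.g. u(m) = sqrt(m).
print({s: round(expected_utility(l, math.sqrt), 2) for s, l in stocks.items()})
# {'A': 2.24, 'B': 3.15} -> the egoist prefers the less volatile B.

# Utilitarian who divides earnings among many beneficiaries: utility
# (nearly) linear in money, so only the expected return matters.
print({s: expected_utility(l, lambda m: m) for s, l in stocks.items()})
# {'A': 10.0, 'B': 10.0} -> indifferent here; any edge in expected
# return would send her to the riskier stock.
```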

 

Even foundations that have no purpose other than promoting public welfare do not manage their portfolios in this way. In fact, they tend to be quite conservative investors. No doubt, this is due, in part, to foundations' perception that they need a steady source of income to fund projects that extend over many years. If a foundation were to give a lump sum to a grant recipient at the start of a project, the recipient might fail to budget her expenses properly and overspend in the early stages of a project's life. However, the foundation could avoid this problem by using the profits from a core of highly risky investments to buy separate annuities for individual projects. The annuity would then pay out a fixed sum at regular intervals to cover project expenses. When the income from the risky investments is down, fewer (or no) annuities are bought. When income is up, more are purchased to fund more projects. An annuity enforces budgetary discipline in the funding of an individual project, while the central holdings of the foundation remain invested to maximize return. Few, if any, foundations operate this way. Utilitarians should recommend a change.

 

One might argue that this is effectively how the philanthropic process as a whole works, only with the foundation's capital assuming the role of an annuity in the above two-tier arrangement. The philanthropist takes daring risks while she is building her estate, investing in ventures with the best expected returns, and then, if these investments pay off, is able to endow a foundation that will pay for a certain number of projects each year. The more successful the initial investments, the more money is available for the endowment and the more projects will get funded. The foundation, like the project-related annuity, merely serves the purpose of preserving budgetary discipline, rather than capital accumulation or growth.

 

The issue now turns on how the relevant individual ought to behave. Should a utilitarian building an estate maximize her return? Or is it rational for altruists to be risk averse, just as egoists are? Utilitarianism renders risk aversion morally impermissible. If we think risk aversion is not impermissible, then we must reject utilitarianism.

 

The next objection to utilitarianism is that it is incompatible with a diminishing concern for others as their welfare improves. Suppose that I have great wealth and you have very little. If we make some additional assumptions, such as supposing we have similar physical and psychological constitutions, then we may conclude that an extra dollar will bring you more welfare than an extra dollar will bring me. Hence, by utilitarian principles, I ought to give you my money until our assets are equal. But now suppose that while I have more money than you do, you too have great wealth (and we are alone in the universe). If I am a utilitarian I will be no less inclined to share my wealth with you. Utilitarianism doesn't allow a wealthy donor to regulate her generosity as the welfare of her beneficiaries improves. Utilitarianism requires one to maximize human welfare no matter how well off everyone already is. Diminishing concern for others as their welfare improves seems a more plausible moral requirement.

 

One can successfully defend the theory by pointing out, first, that if the poorer person is closer to the richer person in income, her welfare will also be closer, so there will be less of a welfare gain from equality and hence less urgency or importance attached to equalizing incomes.

 

Secondly, if welfare is a concave function of money, then an income difference between two rich people will result in less of a welfare difference than an equal income difference between two poor people. Equalizing incomes will not improve total welfare as much between the richer as between the poorer pair. Hence, the utilitarian will be less concerned about establishing equality for people who are rich, but still poorer than the very rich. Diminishing concern for the poorest as the poor do better is reflected in the utilitarian's diminishing concern for overall welfare improvements as the absolute size of these improvements decreases. Thus, this particular objection fails.

 

Finally, because utilitarianism prescribes maximizing human welfare, it cannot prescribe what to do in certain fanciful situations where maximization is impossible because no choice open to the agent provides the maximum expected utility. Suppose that you are a utilitarian who wishes to endow a trust that will benefit humanity. You believe your trust will survive forever and so will humanity. You expect the trust to grow in real terms at some fixed annual rate (gaining a little bit each year, even taking inflation into account), but you also expect the world's population, and with it, human suffering, to grow at a constant rate at least as great. Humanity's needs will always outstrip what the money in the trust can do to alleviate these needs. You choose to have all the earnings reinvested back in the trust. Your only remaining decision is to select a date at which to terminate the trust and distribute the accumulated capital. Whatever date you select for liquidating the trust, you could have waited another year and relieved more suffering. (Specifying the date is no problem: however far into the future, you are clever at inventing a notation which allows you to denote that date in an economical way. If required, you and others have superhuman powers.)

 

The utilitarian may attempt one of at least three possible replies. The first is, admittedly, hardly utilitarian. If the donor slightly favors earlier over later generations, then a maximum exists and the donor will naturally select the one date that maximizes utility. But such a preference would violate Bentham's sacred principle that each person count for one and no more than one, that is, the principle of giving each person's welfare equal consideration. In some riddles involving future generations the question arises whether a utilitarian calculus should count the welfare of people still unborn. Do we have to consider the claims of people who do not yet exist and are merely hypothetical, or are we permitted to take actions that result in less total welfare, but higher average welfare because some people are not born? To avoid this complication, we stipulate that the same number of, or even the same, people are born, whatever decision is made about the trust. So long as there is no question about who is born, it seems clearly incompatible with the heart of utilitarianism to introduce a bias in favor of some people over others.

 

A second reply is to claim that there is no possible world in which an agent could rationally attach probability one to the proposition that humanity will continue forever.

 

We may grant that no rational agent could believe with a probability of one that the human race will continue forever. However, it is possible to imagine the following situation. For any date the donor might pick to liquidate the fund, she attaches sufficient probability to the continuation of human suffering for another year, and expects sufficient appreciation in the trust's value over that year, that the expected utility of waiting one more year is greater than the expected utility of an earlier liquidation of the trust. The donor does not believe with complete certainty that the world and human suffering will last forever. But she reasonably thinks this is a possibility. That is all she need concede for her to rationally expect that the growth in the fund's assets during any given year will be large enough to offset the slight probability of a solar explosion, a decline in the growth of human suffering, or some other occurrence that would limit the usefulness of the trust at the end of the year.
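
The donor's reasoning is easy to make concrete with assumed numbers, say a 99% chance each year that the trust remains useful and 5% real growth, with relief taken to be proportional to the fund:

```python
p_useful = 0.99   # assumed chance the trust and the need persist a year
growth = 1.05     # assumed real growth of the fund per year

value_now = 1.0                              # relief if liquidated today
value_wait = p_useful * growth * value_now   # expected relief if she waits
print(value_wait)   # 1.0395 > 1.0, and this holds at every date: always wait
```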

 

The third reply involves a significant, but still modest revision to the usual utilitarian principle. One may allow for a measure of satisficing by suggesting the following rule: choose the action which maximizes expected human welfare whenever possible; when this is not possible, do enough to advance human welfare. This principle would allow one to pick some date for terminating the trust, even though no date is the best one.

 

Unfortunately, this principle is vague, since it does not specify what is doing "enough" to advance human welfare under the special circumstances. Further, it seems vulnerable to criticism on the grounds that it makes inferior recommendations.

 

In sum, the objection based on diminishing marginal concern fails, the objection based on excessive free riding is inconclusive and the objections based on justifiable risk aversion and the charitable trust example succeed.

 

 Partial Altruism

 

Our theory must solve all of the above difficulties while still justifying significant voluntary provision. There is a simple way to accomplish this. Let each person's utility be a function of her own welfare and the welfare of others. More exactly, let her utility be the sum of her own welfare and a strictly concave function of the sum of everyone else's welfare. I will call this theory "partial altruism", although it involves more than just a partial concern for others. This allows us to explain voluntary provision, limited generosity, risk aversion, diminishing concern for others as others are better off, and selection of a date in the charitable trust example.

 

An example of a partial altruist's utility function would be this:

 

U = x + I log(y1 + y2 + ... + yn)

where x is the agent's consumption (say, as measured in dollars), yi is person i's consumption, and I is a measure of the person's degree of altruism. Hence while the person's utility is an increasing function of every other person's consumption, the consumption of others is summed and the log is taken of the sum before being multiplied by a constant.

 

Partial altruism justifies some voluntary provision by using the very same arguments available to utilitarianism. The partial altruism that I am proposing is thoroughly consequentialist and shares with utilitarianism the ability to consider small differences and small chances that affect, or might affect, large numbers of people. Its ability to justify voluntary donations is impaired, however, relative to utilitarianism. In fact, there is only a quantitative difference between the level of provision which egoists will sustain and the level that partial altruists will sustain. The contributions of partial altruists, unlike the utilitarian's, are not Pareto optimal. But this in-between level of generosity is what an adequate moral theory ought to prescribe, otherwise it demands more than we think is required. Recall that in experimental situations, subjects fail to give Pareto efficient amounts. The theory of partial altruism avoids the problem of prescribing an immoderate sacrifice (say, to relieve poverty in Somalia) by allowing for less than perfect impartiality. The theory also solves the charitable trust and the altruistic risk problem by discounting the welfare of others using a nonlinear utility function.

 

Helping one more person matters less to the altruist as more and more people are helped, as more and more people are better off. In theory, if the interest rate of the charitable trust increases faster than concern for others diminishes, the problem reappears. However, our original charitable trust problem only required a fixed rate of interest (just large enough to compensate for continued linear growth in human suffering, diminishing probabilities associated with survival of the trust, etc). Exponential interest rate growth seems too fantastic to take seriously.

 

The proposed version of partial altruism justifies risk aversion by supposing that each person ought to value the welfare of others less as others become better off. Need a theory of partial altruism take this step? Couldn't one justify risk aversion simply by giving extra weight to one's own welfare? The argument I gave to suggest that a utilitarian would seek the highest expected return on an investment, regardless of risk, relied on the utilitarian's ability to divide her earnings among many people. However, if one cared more about oneself than others and if the marginal value of expenditure on oneself diminishes with increased expenditure, then this alone would justify some level of risk aversion. In such an instance, higher returns would lead to spending more money on at least one person: oneself. One's level of risk aversion would then resemble that of an egoist with a similar personal welfare function, the more so, the less altruistic one became. So there would seem to be no need to introduce a further transformation of the welfare of others other than the simple discount multiplier. Why require that one's utility be some (increasing) concave function of the welfare of others? Why bother, for example, taking the log of the sum of everyone else's consumption? One could just sum the log of each person's consumption and multiply the sum by a fraction (e.g. .01).

 

There are two reasons. First, we might want a moral theory that gives predominant weight to the welfare of others, while still preserving levels of risk aversion similar to that of egoists. A utility function that simply discounts the welfare of others links greater altruism with less aversion to risk. The declining marginal utility of money will matter very little to someone who has a simple weighted sums utility function and is so altruistic that she is close to being a utilitarian. On the other hand, it will matter much more if the person with such a utility function has so little altruism that she is close to being an egoist. It seems inappropriate that risk aversion should vary in this way with a person's (or a moral theory's) level of altruism.

 

Some might regard this as a realistic relationship. Others might wish to espouse a single level of altruism for everyone (rather than some form of altruistic relativism), but argue that in order to match ordinary human intuitions, we need to propose a normative theory that is minimally altruistic (recall that average voluntary donations to charity amount to less than 3% of annual income). This gives us a good measure of risk aversion, perhaps just the right amount, so there is no difficulty. However, risk aversion with a high level of altruism does not seem so implausible that it should be ruled out from the start without further consideration.

 

Second, one may be averse to risk not only in investing one's money, but also in distributing it. Simple discounting will allow for some risk aversion when investing, but not when giving. Consider an altruist who must choose how much to give to two separate public goods: a public garden and a 4th of July fireworks display. The altruist is not sure about the effect of either contribution. If the summer is wet, the garden will flourish, but the fireworks will fizzle. If the summer is dry, there will be a bright fireworks display, but a dead garden. Suppose that fireworks bring more happiness than gardens, but people are pretty sure of a wet summer. Suppose, further, that even discounting for the high probability of rain, and hence no fireworks, the expected happiness of buying fireworks is higher than that of planting a garden. Would the altruist automatically fund the fireworks?

 

I would suggest that a reasonable altruist ought to be concerned with more than just maximizing expected happiness for others. She should avoid the risk that others will not be helped at all, or helped less. Given that forecasters predict rain, if she pays for a garden she is more likely to help others than if she pays for fireworks. She should reasonably prefer a garden to fireworks, although this does not maximize expected utility and would not be the choice of a partial altruist whose utility function discounted the welfare of others with a simple weighted sums procedure.
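
A worked version of the example, with all numbers assumed: a wet summer has probability 0.7, a garden yields 10 units of happiness if the summer is wet and nothing otherwise, and fireworks yield 40 if it is dry and nothing otherwise:

```python
import math

p_wet = 0.7
options = {"garden":    [(p_wet, 10.0), (1 - p_wet, 0.0)],
           "fireworks": [(p_wet, 0.0),  (1 - p_wet, 40.0)]}

def value(outcomes, transform=lambda w: w):
    return sum(p * transform(w) for p, w in outcomes)

for name, o in options.items():
    print(name,
          round(value(o), 2),                             # raw expectation
          round(value(o, lambda w: math.log(1 + w)), 2))  # concave transform
# Raw expectation: fireworks beat the garden, 12.0 to 7.0.
# Under log(1 + w): the garden wins, 1.68 to 1.11, which is the
# altruistically risk-averse choice described above.
```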

 

Compare the basic insight of modern portfolio theory. Investors care not only about expected return, but also risk (or the expected variance of their return). Hence if an investor is considering two stocks A and B, where A has a high expected return and a large expected variance, while B has a slightly lower expected return, but a smaller expected variance, the rational investor will not necessarily choose to invest all of her money in A, the stock with the higher expected return. The rational investor will want to choose a portfolio that is efficient (lies on the efficiency frontier of all feasible portfolios). Efficient portfolios maximize expected return for a given expected variance (or, alternatively, minimize variance for a given return). Investing all of her money in stock A will be efficient, but may exceed an investor's tolerance for risk.

 

Note that even if stock B satisfies the investor's desire to minimize variance (risk), it would probably not be advisable for her to invest all of her money in stock B. If the performances of A and B are not perfectly correlated, such a portfolio would not be efficient. By diversifying her investment between A and B, the investor can maintain the same degree of expected portfolio variance, while improving her expected return.
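
The two-asset arithmetic behind this claim runs as follows, with every figure assumed for illustration; when the returns are uncorrelated, a roughly 62/38 mix of A and B carries the same variance as holding B alone, but a higher expected return:

```python
muA, sdA = 0.12, 0.30   # assumed: higher expected return, higher risk
muB, sdB = 0.10, 0.20
rho = 0.0               # assumed correlation between the two returns

def portfolio(w):       # weight w in stock A, the rest in stock B
    mean = w * muA + (1 - w) * muB
    var = (w * sdA)**2 + ((1 - w) * sdB)**2 + 2 * w * (1 - w) * rho * sdA * sdB
    return round(mean, 4), round(var, 4)

print(portfolio(0.0))     # all in B: (0.10, 0.04)
print(portfolio(8 / 13))  # the mix: (0.1123, 0.04), same risk, more return
```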

 

Likewise, in the above example, planting a garden does not provide an altruist with the greatest security against being of no help at all. Since the advantages of buying explosives and planting seeds are negatively correlated, an altruist can increase her expectation of helping others without increasing her risk by dividing her contribution between the two projects, buying a few seeds and a few rockets. She need only find the right proportion of seeds to rockets to meet her desired level of altruistic risk. This illustrates the fact that some ways of spending money on charitable projects will be inefficient in the sense that there exists an alternative distribution with as much expected happiness, but less risk, or more expected happiness, but no more risk. We may think of the altruist as like an investor selecting an efficient portfolio of stocks. The benefits to human welfare and the attendant risks will be much harder to predict, but in principle the portfolio selection problem remains the same.
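
Continuing the garden and fireworks sketch from above (same assumed payoffs, with each half-sized project yielding half the happiness), the split can be checked directly:

```python
import math

p_wet = 0.7
mixed = [(p_wet, 5.0), (1 - p_wet, 20.0)]   # half seeds, half rockets

raw = sum(p * w for p, w in mixed)
transformed = sum(p * math.log(1 + w) for p, w in mixed)
print(raw, round(transformed, 2))
# Raw expectation 9.5 trails pure fireworks (12.0), but under the
# concave transform the split scores 2.17, beating both the pure
# garden (1.68) and pure fireworks (1.11): more expected help, less risk.
```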

 

The analogy does not apply if the altruist's utility is given by the simple weighted sums model. Only if the altruist has an increasing concave function applied to the sum of other people's consumption will she care about the amount of expected variation in the outcome of her risky donations. Only then will she have what we might properly call "altruistic risk aversion".

 

Altruistic risk aversion obviously has practical implications for what donors do with their money. Since most non-profit organizations fund more than a single project, giving to these organizations provides less risk than giving to one person or funding a single project. Many non-profit charities, like investment mutual funds, allow contributors to diversify their contributions over a large set of projects. This is another advantage organizations have over individuals as competitors for charitable funds. However, since most organizations specialize in a particular field, the success of different projects is still highly correlated. Even local chapters of the United Way do not fully diversify their expenditures, but usually limit them to organizations and enterprises within the chapter's geographic area. (One could argue that diversification is achieved through coordination, if one looks at the system from a national perspective. Nonetheless, the United Way's expenditures are still limited to organizations within the U.S.)

 

While an altruistic risk averse donor may believe that famine relief and development aid does the most to improve human happiness, she should avoid giving all of her money to fight world poverty. Perhaps relief organizations are mismanaged, corrupt, or misguided. Perhaps foreign aid is counterproductive, fostering dependency, encouraging overpopulation or strengthening the rule of harmful governments. The donor reduces her risk by supporting other kinds of charity, the more diverse, the less risk.

 

Should one be averse to altruistic risk? Clearly, if one fears such risks, one should diversify, and no doubt many, perhaps most, donors do give to more than one charity. Should they? An investor who reduces her investment risk by moving to another point along the efficiency frontier sacrifices expected return. A donor who wishes to reduce her altruistic risk by moving along an analogous efficiency frontier sacrifices expected human welfare. Is this a mistake or a betrayal of altruism? No more, I suggest, than risk averse egoists are making a mistake in not maximizing their expected monetary return. Unfortunately, everyone suffers somewhat from both forms of risk aversion. Financial risk aversion slightly distorts capital markets and altruistic risk aversion slightly distorts non-profit sectors. In the latter instance, the result is less improvement to human happiness. If every altruist ignored altruistic risk, more good would get done overall; and because the failures of many independently chosen projects would be largely uncorrelated, there would be little chance of widespread failure. That is, the collective risk would be minute.

 

This is simply the price we pay for individual rationality. Imagine a world of only a lone altruist betting on a single project. Since there is only one project, there is no chance, much less expectation, that if this project fails the success of another project will compensate for the loss. Altruistic risk aversion looks quite appealing in such a world. In the real world, with many altruists, but each acting independently, the appeal to the individual altruist remains the same. In both instances, what should count is the marginal consequences of the individual's action. In the real world the marginal consequences of the individual's action may be just the same as they would be if the individual were all alone.

 

Each altruist is right not to put all of her fragile gifts in one basket. She may justifiably want to avoid the risk that little good will come of her well-intentioned action. This is not because she cares about being the originator of beneficial effects. It is because she cares about the marginal difference her individual actions would make. The altruist's caution is not only observable common practice, but is also individually justifiable common sense.

 

 Objections to Partial Altruism

 

The critics of altruistic utility functions have argued that no matter how strong one's concern for the welfare of others, with an increasing population, assuming each person's action is independent of everyone else's, with each making Nash conjectures about the behavior of others, the total supply of a public good will fall further and further behind optimal levels. One economist has proved a theorem which implies that total provision will never exceed a certain finite amount. Another claims that it will never exceed twice the amount that the most generous contributor would give alone.

 

These conclusions ignore the fact that as n, the size of the population, increases, utilitarians and partial altruists will increase their contributions because these contributions benefit more people.

 

This fact is hidden by the standard way in which economists model altruistic motivation for public goods. The usual formula for an economy of one private and one public good is:

 

Ui = Ui(xi, G)

where xi is i's purchase of a private good x, G is the total of everyone's contribution to the public good, and Ui is a function of xi and G which represents i's total utility. Each person tries to maximize her utility subject to the budget constraint, wi = xi + gi, where gi is i's contribution to the public good and wi is i's income or budget.

 

This formulation ignores the fact that the person's utility changes as n increases: the altruist gains more utility as the number of people who benefit from a public good increases.

 

Note that the altruist needn't calculate human welfare in the way a classical utilitarian does to be concerned about the number of people who are served by a public good. She may still be primarily concerned with average human welfare. One should be careful here to distinguish between the world's population and the population of people who benefit from a public good. As the number of beneficiaries rises, the average welfare of the world's population increases. So an altruist whose utility increases when a public good benefits more people can still object to population growth which diminishes average human welfare. For convenience, we shall not try to model the altruist's fundamental utility function, but only that aspect pertaining to public goods.

 

A better model of the altruist's utility would be this:

 

Ui = Ui(xi, yi)

 

where yi = f(n,G), that is, yi is an increasing function of n and G. Using this formula, one can prove that for some functions f, G does not approach a finite level as n increases, but is unbounded, in contrast to the theorems mentioned above. Hence, altruists are quite capable of generating any amount of total revenue for a public good.
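
To illustrate (the functional forms and numbers here are my own assumptions, not drawn from the theorems in question), take the standard specification Ui = log(xi) + a log(G). The symmetric Nash condition a(wi - gi) = n gi gives G = naw/(n + a), which approaches the finite bound aw. If instead the altruism term scales with the number of beneficiaries, Ui = log(xi) + a n log(G), a crude stand-in for yi = f(n,G), the same calculation gives G = naw/(1 + a), which grows without bound:

```python
# Symmetric Nash totals under the two toy specifications above
# (all functional forms and numbers are illustrative assumptions).
a, w = 0.5, 100.0        # altruism weight and per-person budget

for n in (10, 1_000, 100_000):
    G_standard = n * a * w / (n + a)    # approaches a*w = 50 and stops
    G_sensitive = n * a * w / (1 + a)   # grows without bound in n
    print(n, round(G_standard, 2), round(G_sensitive, 1))
```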

 

Unfortunately, there is a complication for this defense. The reply succeeds for some versions of partial altruism, particularly utilitarianism, but I have not been able to find a utility function that models altruistic risk aversion and for which G is unbounded. If no such function exists, one faces a choice between accepting the practical consequences of this objection (suppliers of public goods will be constrained by small, finite revenues) or embracing utilitarianism, the strong form of altruism. Both are live, although--if my earlier attack on utilitarianism is correct--significantly wounded alternatives. Sometimes weak alternatives are still the best available.

 

A second argument against partial altruism claims that the supply of public goods will remain constant, regardless of outside contributions. If outside support increases, people will reduce their contributions to compensate for the increase of support from other sources. Critics point out that spending is not neutral in the way that partial altruism predicts. Probably the best evidence that people do not in fact neutralize the effect of exogenous support derives from studies of the effect of government subsidies to privately supported public goods. Government contributions do "crowd out" private contributions, but only partially. Why is the crowding out only partial?

 

The neutral effect of exogenous support presupposes a number of assumptions which may not hold in actual circumstances. People may not know the exact level of government support; government support may actually increase their belief in the effectiveness of a program or, for some reason, encourage them to raise their level of altruism; or government support may be conditional on private contributions (as with matching grants). Finally, it may happen that donors' preferences--their level of altruism--or their income and wealth will change over time. The rise of the welfare state in the U.S. did not completely crowd out private contributions to relieve poverty. This is in part because Americans have become wealthier since the 1930s and, as a result, have been willing to give more money.

 

Is partial altruism wrong to recommend to contributors that they adjust their donations so as to neutralize the effect of exogenous revenue? I think not. Imagine that someone enrolls in a charitable payroll deduction program. Every month her company deducts 5% of her income and donates it to the United Way. Suppose the government then taxes her 5% and (implausibly) gives all of it to the United Way. They do this by declaring that her 5% donation is now the U.S. government's donation. Critics of neutrality must believe that the donor ought to make an additional contribution to the United Way. Such a recommendation is normatively unpalatable.

 

Alternatively, imagine that Sue is about to give $100 to Oxfam America. John informs her that he too is going to give Oxfam $100. Before, Sue was ignorant of John's intention. Implicitly, she thought John would not give Oxfam as much as a penny. Would it be rational for Sue to now give significantly less than $100? Not if the news about John fails to alter Sue's estimate of Oxfam's revenues. Revising her estimate in the light of this new information is a very complex task, which Sue may fail to undertake because of the costs (time and effort) involved. Even if Sue ignores computational costs and attempts an accurate reappraisal, one thing is clear: John's contribution will not have a simple linear effect on Sue's contribution. It might if the news of John's contribution increases Sue's estimate of Oxfam's revenues by exactly $100. But it will not.

 

It is difficult to know precisely what effect John's news will have on Sue. If Sue were perfectly rational in her use of information, she would probably have derived her original estimate of Oxfam's revenue from a variety of sources, carefully balancing their reliability. She may have information about the organization's revenue from prior years, know about general giving trends or have reasons to think that Oxfam will enjoy a year of especially high contributions. In addition, she may have beliefs about which of her friends or acquaintances are giving to the organization. She must treat this last source of information as something like a sampling experiment with a potentially biased sample. If last year she believed two of her friends were going to give $100 each, whereas this year, with John's contribution, the total will be $300, she will want to slightly raise her estimate of the organization's income. But the reliability of this data for estimating the real level of contributions is going to be very poor. When the results of this "experiment" are weighed with other sources of information, such as Oxfam's income from previous years, the effect is likely to be quite small. News of John's contribution is unlikely to force Sue to raise her (probably vague) estimate by an equal amount. The statistical significance of this information about John is just too small.
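
The smallness of the effect can be quantified with a toy normal-normal update, every number assumed: Sue's prior about Oxfam's revenue is fairly tight, while the "what my friends are giving" signal is extremely noisy, so John's $100 barely moves her estimate:

```python
prior_sd = 1_000_000.0    # Sue's prior uncertainty about total revenue
signal_sd = 5_000_000.0   # noise in the tiny, possibly biased sample
                          # of friends' giving intentions

# Standard normal-normal updating: the posterior mean moves toward the
# signal in proportion to prior_var / (prior_var + signal_var).
weight = prior_sd**2 / (prior_sd**2 + signal_sd**2)
print(round(weight * 100.0, 2))   # ~3.85: a $100 shift in the signal
                                  # moves her estimate by a few dollars
```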

 

Thus, the neutrality objection fails because it ignores how rational donors, especially donors with limited processing capacity, digest relevant information.

 

A third possible objection to partial altruism is an objection to other versions of altruism as well. This objection points out that in many instances, especially in the prime example of public broadcasting, people prefer to support services from which they derive an actual benefit. Listeners to a particular public radio station give money to that station and not to public broadcasting in general or to stations serving more needy populations in other geographic regions. It might at first seem that theories of reciprocity and fairness better explain people's motives for giving in instances of this kind.

 

While this objection may have some merit against theories of impartial altruism, partial altruism is different by being not only less strong, but also less impartial. Listeners to one broadcasting station may direct their altruistic concern towards that particular station and not any other. Is this justifiable? It is if the listener knows much more about one station than another and can have more confidence in the reliability of her judgement of the station's quality. In special circumstances, such as when the donor has intimate knowledge of another station of much better quality, serving a more needy and larger population, then perhaps a bias in favor of the station to which the donor listens would not be justified. At least, it would not be justified simply on the grounds of superior confidence in the value of the local station. However, partial altruism might still be justified on other grounds: having received a benefit may be justification enough for directing one's altruism toward particular institutions. Hence, however much they may challenge impartial altruistic theories, such as utilitarianism, special relationships do not pose a genuine problem for partial altruism.

 

 Hybrid Altruism

 

The two theories we have considered so far attend only to the beneficial effects of a person's actions. The theories recommend an action on the basis of how much the action promotes people's welfare (or how much it is expected to do so, relative to other available actions). They are consequentialist in this respect. Alternatively, one might propose an altruistic theory that recommends actions according to the kinds of actions they are, not their effects. Such a theory encourages a person to be generous, not to obtain the results of a generous act. If the application of this second sort of theory to a particular situation suggests that one person should give money to another, the basis for the recommendation is not that the recipient should have more money, but that the donor should engage in the activity of giving away money.

 

One might describe the first sort of theory (including utilitarianism and partial altruism) as "goods altruism", and describe the second sort of theory as "participation altruism". Goods altruism is the preference that other people be better off (lead better lives, gain resources, etc.); participation altruism is the preference for making other people better off (improving other people's lives, giving resources away, etc.). Goods altruism is agent neutral; participation altruism is agent relative. Goods altruism is only concerned with the good being done, whereas participation altruism is sensitive to who is doing the good deed. Participation altruists want to be the ones doing the good deeds.

 

Participation altruism has an obvious flaw. It is insensitive to the needs of recipients, giving opportunities and the price of aid. It contradicts the intuition that the greater the need, the better the giving opportunity, or the cheaper it is to help someone, the more generous a donor should be with her assistance. Goods altruism is able to avoid these problems. On the other hand, if one accepted the objections to goods altruism of the previous section (limits to growth and neutrality), one would be tempted to conclude that goods altruism is also flawed. Participation altruism seems to be able to avoid these pitfalls. Perhaps the disadvantages of each form of altruism can be avoided by developing a hybrid theory.

 

One may wish to combine goods altruism and participation altruism into one theory. One way to do this is to recommend that we act on two principles of choice, one which maximizes the individual's utility and the other which maximizes social (or the group's) utility. We use an allocation rule to determine the influence of each component. The allocation rule decides, for example, how much money to spend on oneself and how much to spend on the group. Once an allocation is made, however, the decision concerning how to spend the allocated resource rests with a particular principle of choice. Resources allocated to the self, for example, are spent so as to maximize the agent's own welfare. Resources allocated to the group are spent so as to maximize the group's interest. More exactly, the allocation rule states that, all else being equal, a person ought to be more inclined to spend a dollar on social utility the more social utility that dollar will buy. On the other hand, a person ought to be less inclined to devote a dollar to social utility the more he has already spent on social utility.

 

One might say that the theory calls for behaving as if one were a hierarchically arranged company or government. Money is appropriated for two offices (or departments), with different mandates, subject to a budget constraint. Headquarters allocates money to one office, S, and directs S to use this money to maximize the person's own welfare. Headquarters also allocates money to another office, G, and directs G to use this money to maximize social (or the group's) welfare. (If one prefers, one may imagine that S and G are not offices with different missions, but persons, or parts of a person, with different motivations.) What allocation is made depends, in part, on the relative price of goods that serve the group's interest and goods that serve the self's interest. If helping the group were to become cheaper, for example, the allocation principle would inflate G's share. But this is not the only consideration. The history of past expenditures also affects the allocation decision. We shall call this theory "Hybrid Altruism".
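
A toy rendering of the allocation rule may help fix ideas; every functional form here is my own assumption rather than part of the theory. The weight on the group's claim to the next dollar falls as past spending on the group accumulates, and a dollar goes to office G only while its weighted social value still beats its private value:

```python
def allocate(budget, social_value, private_value=1.0):
    """Spend the budget dollar by dollar between office S (self) and
    office G (group), per the two features of the allocation rule."""
    spent_on_group = 0.0
    for _ in range(int(budget)):
        weight = 1.0 / (1.0 + spent_on_group)  # fades with past giving
        if weight * social_value > private_value:
            spent_on_group += 1.0              # dollar goes to office G
        # otherwise the dollar goes to office S
    return spent_on_group

print(allocate(100, social_value=20.0))   # 19.0 dollars to the group
print(allocate(100, social_value=40.0))   # 39.0: cheaper social utility
                                          # (more good per dollar) draws more
```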

 

The point I will now attempt to argue is that insofar as Hybrid Altruism shares some of the characteristics of goods altruism, it must also share the same fate as goods altruism. It is inconsistent to try to criticize goods altruism and at the same time defend Hybrid Altruism. If goods altruism alone can't justify contributions towards public goods, Hybrid Altruism won't be able to do so either. The department devoted to maximizing the group's welfare, G, must face the very same problems that a pure goods altruist faces.

 

Consider the neutrality objection. Imagine, for example, that there is an exogenous change in the funding of two non-profit organizations. Suppose the government decides to shift a certain subsidy from one of the organizations to the other. Hybrid Altruism would have the total funding for the two organizations remain the same. If G was supporting both organizations, G will shift its contributions drastically to compensate as best it can for the government's action. If the sum of G's contributions to the two organizations is large enough, G will compensate for the government's action completely. If not, then G's action together with that of other contributors will bring about the same result (assuming that, altogether, everyone's contribution to the two organizations is large enough to compensate for the government's action).

 

I have tried to defuse the neutrality objection by explaining what additional considerations would prevent rational donors from neutralizing the effects of exogenous spending, and by identifying circumstances in which neutrality would plausibly occur. Hence, people's actual behavior is both consistent with goods altruism and consistent with our intuitions about rational action. But if I am wrong about this, and goods altruism prescribes a neutrality that our reflective intuitions reject, then this equally refutes Hybrid Altruism. For Hybrid Altruism also prescribes a neutrality that would be contrary to what we observe of human behavior and contrary to our intuitions about rational behavior. If the objections were sustained, then government spending would affect, and should affect, the incomes of charitable organizations in ways prohibited by Hybrid Altruism.

 

If the limited growth objection were persuasive, it too would apply to the Hybrid model. Suppose G must choose between giving to, say, a single needy person and giving to a large organization that is also widely supported by others. This choice mirrors the choice an individual faces between giving to a public good and spending resources on himself. If the total revenue of the large organization is bounded in the one instance, it is bounded in the other. One need only assume that G finds the single individual worthy of some fraction of its resources just as we must assume that the partial altruist finds himself worthy of some fraction of his own resources.

 

Margolis contends that if people were goods altruists, then large organizations would never raise more than twice the amount that the most generous contributor would be willing to donate in isolation. As long as there are some private goods which can compete with large organizations for the support of altruists, the same conclusion must describe donors who follow the principles of Hybrid Altruism. Spending on the private altruistic good is analogous to spending on oneself and the same equilibrium model will describe both situations. The equilibrium model does not care who or what is designated as the private good. The core idea behind the model is that as the donor population grows, the incentive to free ride increases, causing each person to give less to the charitable organization and spend more on the private good. More people are giving to the public good, but each gives less. The net effect is that total revenue to the organization rises, but never exceeds a certain finite amount.

 

Just as the goods altruist is increasingly tempted to free ride on others as the donor population grows, similarly, the Hybrid Altruist is increasingly tempted to let others help the organization, while she provides for the private good. (Both the public and private good are good for the group). Not all causes worthy of altruistic support are public goods and in our example a single needy person allows the Hybrid Altruist to maximize the group's welfare by contributing to a private good (the needy person's welfare). If the argument applies to goods altruists in general, it applies equally to G, the department that acts like a goods altruist.

 

Suppose we were to obtain consistency by retracting the two objections to goods altruism. What then would be wrong with Hybrid Altruism? There are several problems.

 

First, if we retract the criticism of goods altruism, we have no reason to prefer Hybrid Altruism over the competition. In fact, because goods altruism is considerably simpler than the Hybrid model, one ought to prefer goods altruism.

 

Second, the allocation principle only looks to the past and not to the future. If someone expects a particularly good altruistic opportunity to arise in the future, the allocation principle ties her hands. She will be most generous to organizations that solicit her first. This is because the principle has no mechanism for budgeting over time. It simply reviews each opportunity as the opportunity arrives. If the donor has already contributed, then she is less inclined to give again. If she has not yet given, she is more inclined to give. That is how the Hybrid donor manages to be relatively indifferent to the support that an organization receives from other donors. The contributions of other donors hardly affect the Hybrid donor's disposition to give away a portion of her total resources. As long as she has only given away a little so far, she has a strong bias in favor of giving. She starts out responsive to solicitations, but her receptivity diminishes with each additional gift. This characteristic favors those lucky enough to approach the donor first.

 

On the other hand, if the theory allows the donor to anticipate future opportunities, then, in principle at least, the donor need only make one big decision early in his life. Such a decision will specify how to behave under any set of circumstances. It will specify what to do at any point in time that requires a decision (like a strategy in a normal form game). But when this person makes her one big decision, she will be indistinguishable in her behavior from a goods altruist. She will apply the allocation rule only once, before she has spent any money. Later decisions will simply be carrying out the earlier decision in the light of new information. So either the theory foolishly ignores the future or else fails to distinguish itself from a simpler partial altruistic theory.

 

Third, Hybrid Altruism is decidedly not consequentialist. In this respect, it might more properly belong to the group of theories discussed in the previous chapter on fairness. The following example highlights this difference, to the detriment of Hybrid Altruism, if my intuitions are correct. Imagine the same person at two different times. One year, she earns an income of $20,000 and has already given $1,000 to the United Way when a representative from Oxfam solicits her. The next year she earns an income of $19,000 and has not given any money away that year. Suppose that when Oxfam solicits her again the second year the organization is in exactly the same position as the year before. Assuming the slate is wiped clean each year, the allocation rule recommends giving much less to Oxfam the first year and much more the second year.

 

But this is misguided. During each year, at the time of Oxfam's solicitation, the person has $19,000 to spend as she chooses. By hypothesis, a gift to Oxfam will do as much good the first year as the second. What does it matter whether she has only $19,000 to spend because she gave away $1,000 or because she was paid $1,000 less than the prior year? How she arrived at her situation, her past, seems irrelevant. But what she can do to affect the future is not. She is in exactly the same position with regard to the future each year. The need for help and her ability to pay are exactly the same. Hence, her response to Oxfam's request ought to be exactly the same.

 

Hybrid Altruism recommends actions reminiscent of the common mistakes people make with regard to sunk costs, but leading to behavior that moves in exactly the opposite direction. People who have paid more for season tickets to the theater are more likely to attend than people who are no less eager to attend, but paid less. People don't want to "waste" their earlier investment, or appear to have made a previous mistake. So they commonly "throw good money after bad". Hybrid Altruists do the opposite. Having already spent money for one purpose, they are prejudiced against spending more money in the same way. The Hybrid Altruist divides the world into two kinds of expenditures (expenditures on the self and expenditures on the group) and avoids repeating one kind of expenditure, even if she independently believes such repetition would be worthwhile. She would rather spend her money less well, in order to spend it differently. The person who pays heed to sunk costs would rather spend her money less well, in order to spend it no differently.

 

Fourth, the theory cannot explain diversification in individual giving. G will distribute its allotted resources so as to maximize group benefit. Since G aims solely at maximum expected benefit, it has no incentive to diversify. Yet we ought to prescribe diversification as a prudent hedge against altruistic risk. One could overcome this objection by incorporating risk aversion into G's utility function. But the result would be an entirely different theory.

 

Hence, there are several problems with Hybrid Altruism.

 

A final quick summary may be helpful. Utilitarianism justifies voluntary provision, but it arguably demands heroic personal sacrifices. It seems to have an adequate reply to the charge that its concern for the welfare of the least well off is indifferent to their prosperity, but it cannot solve the charitable trust problem. Finally, it fails to justify cautious investing (or diversified giving). Our sketch of partial altruism justifies voluntary contributions along the lines of utilitarianism, but avoids these four other problems. Two prominent objections to partial altruism do not fully succeed (although one--the limited growth objection--may succeed better against partial altruism than utilitarianism). However, a rival version of altruism is subject to many serious objections.

 


 
