Wired for Culture: Origins of the Human Social Mind
Author: Mark Pagel
Another possibility avoids all these problems and doesn’t require any notions of strong reciprocity or putting our self-interest aside. It is that rejecting the offer signals your disapproval and sends out a message that you are not someone to be trifled with. This is just what we expect of an emotion—an expectation for fairness—that has evolved to watch out for our interests. But wait, to whom are you sending this message in the ultimatum game? You have been told the experiment is anonymous, that you will never meet the donor. Maybe, but is that how people really feel in these experiments? You can be told the exchange is anonymous and that you will never encounter the person again, but that doesn’t mean you can simply switch off the normal emotions that natural selection has created in us for ensuring we are not taken advantage of in reciprocal exchanges. The experimenters who conduct these studies are, in effect, asking their volunteers to leave behind at the door all of their evolved psychology for long-term relations. Robert Trivers in his “Reciprocal Altruism Thirty Years Later” put it more witheringly, saying, “you can be aware you are in a movie theatre watching a perfectly harmless horror film and still be scared to death…. I know of no species, humans included, that leaves any part of its biology at the laboratory door; not prior experiences, nor natural proclivities, nor ongoing physiology, nor arms and legs or whatever.”
In the real world, few interactions are of the sort concocted in the ultimatum games. Our psychology is the psychology of repeated interactions, and in that context, turning down a low offer sends a message to the person you are dealing with—and to any others who might witness or hear of the event—not to try to take advantage of you in the future. Turning the offer down might cost you something now, but it pays its way as an investment in future interactions (this is why punishment is effective in tit for tat: it reins in cheats). What might look spiteful is actually a way of improving your longer-term prospects. Emotions that guide this sort of behavior are important for a species like us that lives in small social groups in which people live a long time, and can therefore be expected to see each other repeatedly. The experimental situation of the ultimatum game, perhaps unwittingly, elicits the sense of having an audience precisely because volunteers are told the exchange is anonymous. It seems not to occur to the experimenters who run the ultimatum and other related games that the mere fact of telling someone their actions are anonymous is a refutation of that statement!
Someone is watching.
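The parenthetical point above, that punishment in tit for tat reins in cheats, can be made concrete with a short simulation. The code and payoff values below are my own illustration using the standard prisoner’s dilemma numbers, not anything from the book:

```python
# A minimal sketch of repeated play: tit for tat punishes a persistent
# cheat after losing only the first round.

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the opponent's last move."""
    # Standard payoffs: both cooperate = 3 each; both defect = 1 each;
    # a lone defector gets 5 while the exploited cooperator gets 0.
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        pa, pb = payoff[(move_a, move_b)]
        score_a += pa
        score_b += pb
        last_a, last_b = move_a, move_b
    return score_a, score_b

def tit_for_tat(opponent_last):
    # Cooperate first, then copy whatever the opponent did last time.
    return "C" if opponent_last is None else opponent_last

def always_defect(_):
    return "D"

# Against a cheat, tit for tat is exploited exactly once, then retaliates:
# the cheat collects 5 in round one and only 1 per round thereafter.
tft_score, cheat_score = play(tit_for_tat, always_defect)
```

The cheat still ends up slightly ahead of this one partner, but earns far less than two cooperators earn with each other, which is the sense in which retaliation makes cheating a losing long-term strategy.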
To return to Basu’s example, why do we pay taxi drivers? One obvious reason that separates these people from ultimatum gamers is that the drivers have earned their payment. Riders know this, and know that it will make drivers more tenacious about getting their payment. Our disposition to act fairly is also most acutely switched on in face-to-face exchanges because we have come to expect reciprocity in our dealings. But that disposition is not one of behaving altruistically because this makes groups strong or because it is the right thing to do. Rather, it is an emotion that ensures we are not taken advantage of, and in any exchange we know that the other person is having the same thoughts as us. Indeed, no one should assume that taxi drivers do their jobs for you, or that you have their interests in mind. Not too many years ago the taxi rank at the airport of one of Southern California’s major cities was a marketplace, not a highly regulated economy with fixed fares. Someone wishing for a ride could walk among the drivers, bargaining for the best offer. Under these circumstances, the law of supply and demand is at its efficient best. When there were lots of drivers but few passengers, fares came down. But if you were to show up when only one taxi was in the rank, you could be charged more or less whatever amount the driver could get away with.
When the artificial trappings of the ultimatum games are removed, our supposed strong reciprocity fades. The economist John List got people at a baseball trading card convention to approach card dealers and ask them for the best card they could buy for $20. In another situation, they had $65 to spend. Dealers consistently took advantage of the buyers, selling them cards worth well below those amounts. Revealingly, though, it was especially the dealers from out of town—and thus unlikely to encounter the buyer again—who cheated buyers the most. List also produced a variant of the Dictator game in which, in addition to being told they could offer whatever they wanted, donors were given the option of taking some money from the other player. List’s hunch was that this removed some of the “demand” to give money (although List acknowledged that it might also have created an expectation to take it). His hunch proved correct. Donations by Dictators fell to less than half of what they were in the conventional setting and 20 percent of the Dictators took money.
TRUST AND THE DIFFUSION OF COOPERATION
OUR ATTACHMENT to fairness and justice has its origins in our self-interest, and we have seen that we can respond violently when we think justice is imperiled by selfish behavior. Trivers reminds us that victims feel the sense of injustice far more strongly than do bystanders and they feel it far more than do the perpetrators. People normally raise the issue of “fair play” when they are losing. “Envy,” says Trivers, “is a trivial emotion compared to our sense of injustice. To give one possible example, you do not tie explosives to yourself to kill others because you are envious of what they have, but you may do so if these others and their behavior represent an injustice being visited upon you and yours.”
Still, the remarkable feature of human cooperation is that, as in the case of a suicide bomber, it is not restricted to reciprocal relations between pairs of people. Many, perhaps most, of our day-to-day interactions are reciprocal—such as when we buy a loaf of bread—but our cooperation routinely balloons in complexity well beyond exchanges between pairs of people. If most altruism and cooperation in animals can be arrayed into two levels—that driven by helping relatives and that which prospers from direct reciprocity or exchange between two parties—there is a third level to human cooperation that is diffuse, symbolic, and artfully indirect. On a day-to-day basis, we act in ways that cannot possibly be directly reciprocated, such as when we routinely and unself-consciously hold doors for people, form lines obediently and admonish those who don’t, help the weak, elderly, or the disabled, return items of value, aid people in distress, pay taxes, and give to charities.
If our helping, sharing, and altruism stopped at these acts, we might be happy to let it go there as a charming peculiarity of our nature, something born of our ability to transcend our biological existence, to be empathic and to understand others’ needs. But while the cost of returning a wallet or holding a door may be small, helping someone in distress might not be. As we have seen, our style of help moves beyond the eusociality of the social insects to the ultra-sociality seen only in our species: the most vivid and outré form of our altruism comes to us in battlefield accounts of soldiers who fall on a grenade, charge a machine-gun nest, help others to safety under fire, or fly an airplane Kamikaze-style into an enemy ship. Few of these will have left offspring to regale with stories of their heroism, or if they have, those offspring will suddenly find themselves without one of their parents.
The next chapter asks how this kind of strange selfless behavior can arise in a Darwinian world in which it is the survivors who float to the top. We have seen here that there are some who think we are guided by a psychology of doing what is best for the group, even at cost to ourselves. But an unusual idea from genetics discussed in the next chapter shows how costly acts ranging from so-called honor killings to a disposition to suicide can, remarkably, be understood as self-interested behavior.
CHAPTER 6
Green Beards and the Reputation Marketplace
That human society is a marketplace in which reputations are bought and sold
FOR MOST PEOPLE the sight of their nation’s flag being raised, the sound of their national anthem being played, watching their nation compete against others in international events, or the loss of one of their soldiers in battle causes a familiar emotion. We often call it “nationalism,” a diffuse and warm pride in one’s country or people, and a tendency to feel an affinity toward them that we do not always or so easily extend to others from different nations or societies. It has been the great and sometimes terrible achievement of human societies to create the conditions that make people share this sense. It can get us unwittingly to practice a kind of cultural nepotism that disposes us in the right circumstances to treat other members of our nation or group as a special and limited kind of relative, willing to be more helpful and trusting than we would normally be toward others. It is the emotion of encountering a stranger on holiday in a foreign land and finding them to be from your country. But it is also the emotion that gets the people of one nation to cheer while those of another suffer from a deadly act of terrorism.
These are all descriptions of our ultra-social nature, a nature that sees us acting altruistically toward others, especially other members of our societies, without expecting that help to be directly returned. That altruism ranges from simple acts such as holding doors and giving up seats on trains, to volunteering your time and contributing to charities, but also to risking your life in war for people you might not know and are not even related to. None of these acts is directly reciprocated, or not necessarily so, and the risks of exploitation by others who do not share your altruism far exceed those of simple reciprocal exchanges between two people. Our altruistic dispositions are so strong that they even extend to helping other species. What other animal would ever put in the time and effort to save tigers, or adopt an abandoned dog, or call out the fire brigade to rescue a cat up a tree?
Where does this sense of cultural altruism come from; why do we feel it so strongly and so naturally? How do I know whom to extend these feelings to and in what circumstances, and why am I more likely to trust someone from my own group even if I have never met them? How could it ever be in your interest to risk your life in war? We saw the outlines of an answer to these questions in Chapter 2, and here we will see just how easy it can be to get this cultural altruism to evolve.
GREEN BEARDS, VENTURE CAPITALISTS, AND GOOD SAMARITANS
IN THIS chapter we will adopt an approach that uses thought experiments and hypothetical scenarios to understand our evolved psychology. Evolutionary biologists often use such an approach in an attempt to simplify what can seem like overwhelmingly complicated situations, such as our public behavior. The risk is that the simplifications can seem to remove any realism from the examples. But the rewards of this thought experiment approach are that it often returns insights that we hadn’t expected, or ones that fit so well with how we actually behave that we think the simplifications have captured something fundamental about our underlying dispositions and motivations.
We want to think about how a disposition to behave altruistically toward one another could evolve among an imaginary group of people initially lacking that disposition. This imaginary group of people could represent our “state of nature” before we learned how to cooperate with people outside of our immediate families. To see how a disposition toward cooperation might evolve in such a group, consider that a gene, an idea, or just an emotion arose in one or even a few of them that caused them to help people whom they thought were “helpers” like themselves. It could be as simple as just some good feeling that you get from helping these people. It is not a disposition to help any one individual in particular, or even to expect him or her to help you in return. It is a disposition to help people whom you think are helpers like you. You don’t need to know why you have this disposition, or where it comes from; it could simply be something you feel.
Let’s assume the value of the help the altruists provide exceeds the costs to them of providing it. Maybe this is something as simple as allowing someone to shelter in your house during a lightning storm. It costs you almost nothing, but it could save that person’s life. If people with this disposition can identify each other and then behave as we have assumed, two things will follow. One is that people carrying the disposition will be more likely to survive and prosper from the mutual help they provide, and this will make it more likely that the gene or idea survives and can get passed on to someone else. If it is a gene, it will spread in the usual way as people who carry it will produce more offspring. If it is an idea, it could spread by people observing others and then copying this successful strategy—it would then spread by social learning. The second thing we can expect is that as this tendency to help becomes widespread in this group, cooperation will come to take on the diffuse character that we recognize in some of our own actions. We could even think of the emotion as something akin to modern feelings of nationalism, but at this stage we might think of it as a kind of tribalism, a partiality toward others in your group, or others like you. Being among these people but not being a helper would be like being at a party without anyone noticing you.
It is an idea of such simplicity that we must wonder if it could lead to anything of importance in real social life. In fact, William Hamilton anticipated our imaginary scenario in 1964. Hamilton imagined a gene that had three simultaneous effects: it causes its bearer to have some sort of recognizable external marker; the gene grants the ability to recognize others with the marker; and it gets its carriers to target assistance toward others who have it. Richard Dawkins later named these hypothetical genes greenbeard genes as a vivid way of calling to mind the mechanism by which such genes might recognize copies of themselves in other bodies. The idea is that those with the gene produce a conspicuous marker that allows those carrying the gene easily to spot each other and then direct their assistance toward them, and only toward them. We needn’t take the green beard literally; it is simply any kind of conspicuous marker; maybe ginger hair or blue eyes, misshapen ears, or the wearing of a particular kind of hat.
It is tempting to caricature the greenbeard idea as little more than an amusing anecdote. And it is fair to say that greenbeard genes have long been regarded as fanciful playthings dreamt up by theoreticians. The charge against them is that they require an implausible combination of three effects from a single gene. Why should a gene that produces some conspicuous marker also be linked to an ability to recognize others with it, and then to behave altruistically toward them? There is no reason. If the gene did produce more than one effect, it is just as likely to be for a taste for lemonade or a preference for cloudy skies as for helping others. If it is easy for us to imagine a gene having all three properties, this might be because we have minds that have already managed to understand the power of cooperation. But for such a system to arise in other animals that lack our sophistication, the single gene would have to cause all three effects at once.
In fact, some remarkable evidence lends plausibility to the idea of greenbeard genes, and hints at parallels to human behavior. There is a species of fire ant (Solenopsis invicta) in which workers carry a gene that causes them to kill queens in their nests, but only queens who don’t carry a copy of this gene. Most ant species have just a single queen, but in this species there can sometimes be more than one. The ants recognize queens that lack this particular gene because those queens produce a chemical that appears on their outer surface. Queens that do have the gene don’t produce this chemical. Workers who carry the gene recognize the chemical, and then attack and kill the queens that produce it: a queen’s green beard is her death warrant. These same workers avoid attacking queens that don’t produce the chemical.
This is just the reverse of the usual greenbeard story, but the mechanism is the same. Workers carrying a particular genetic trait are able to recognize queens that don’t carry it, and execute them. Their behavior is an act of altruism toward their brothers and sisters, who will make up a sizable proportion of the workers in the nest. It is an act of altruism because by killing the queens who display the chemical, the killers ensure that those queens will not produce offspring that would compete with their siblings.
Whatever one thinks of this example, it is difficult to avoid the suggestion of parallels to human xenophobia and bigotry—the ants direct their hostility toward someone else solely on the basis of some identifiable external marker or characteristic. Another instance of a greenbeard gene gets those with an external feature to help each other. Among the single-celled amoebae or slime molds, individuals normally live a solitary existence. But at times of food shortages they form into towers of many thousands of individual cells. Amoebae near the top of the tower form part of the fruiting body or spore cells that will reproduce and form the next generation, but the others in the tower will die. Only a lucky few get to reproduce so this means that, for most of the amoebae, building the tower is an altruistic act. But in one amoeba species the altruism is more focused. Some individuals carry a gene that makes a protein that is expressed on their outer surface. This protein causes them to stick to other amoebae that express the same protein, but not to amoebae that lack it. Experiments show that by sticking to each other, these amoebae exclude other amoebae that aren’t sticky, and this means the sticky ones are more likely to get into the fruiting body at the top. The gene for this sticky protein simultaneously fulfills the roles of producing a marker, recognizing those with it, and then assisting them. It creates a prejudice to favor those who are like you.
Still, our lives are not as simple and rule-bound as those of ants and slime molds. Our social lives are complicated: we miss things or mistakenly help selfish people, we blunder into a situation not knowing what to do, and people try to deceive us—impostors, con men, liars, and other tricksters are always lurking around looking to take advantage of someone’s good nature. Can this greenbeard altruism still evolve? Evolutionary biologists often study questions such as this, about the success or failure of various strategies, using mathematics to represent the interactions among groups of imaginary players who suffer or enjoy imaginary outcomes. These mathematical models describe different strategies that players can adopt, and then ask how these strategies fare in competition with one another. The strategies can even be altered to see how this affects their success. I have studied just such a model for this case and it turns out we can make mistakes, or simply fail sometimes to provide help, and still the greenbeard altruism gene will prosper so long as two things are true. One is that the altruists need to help each other at least some of the time; and the second is that altruists (the greenbeards) help other altruists more often than they help selfish or non-cooperative people.
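The two conditions can be illustrated with a toy payoff calculation. The function, payoff numbers, and probabilities below are my own invention, a sketch of the kind of model being described rather than the model itself:

```python
# A toy sketch of the greenbeard conditions: altruism pays when altruists
# help each other at least sometimes (p > 0) and help fellow altruists
# more often than they mistakenly help the selfish (p > q).
# All values here are illustrative assumptions, not the book's model.

def average_payoffs(p_help_altruist, p_help_selfish,
                    frac_altruists=0.5, benefit=3.0, cost=1.0):
    """Expected payoff per interaction for an altruist and a selfish player.

    p_help_altruist: chance an altruist helps another altruist (p)
    p_help_selfish:  chance an altruist mistakenly helps a selfish player (q)
    """
    a, s = frac_altruists, 1.0 - frac_altruists
    # An altruist receives help only from other altruists, and pays the
    # cost of helping whenever it helps anyone.
    altruist = (a * p_help_altruist * benefit
                - (a * p_help_altruist + s * p_help_selfish) * cost)
    # A selfish player receives help from altruists but never pays a cost.
    selfish = a * p_help_selfish * benefit
    return altruist, selfish

# Helping fellow altruists often, the selfish only rarely: altruism wins
# even though the altruists make occasional mistakes.
a1, s1 = average_payoffs(p_help_altruist=0.8, p_help_selfish=0.1)
# Helping everyone indiscriminately: the selfish free-ride and do better.
a2, s2 = average_payoffs(p_help_altruist=0.8, p_help_selfish=0.8)
```

With these assumed numbers, discriminating altruists out-earn the selfish (0.75 versus 0.15 per interaction), while indiscriminate helpers are overtaken by the free riders they subsidize, matching the two conditions stated above.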
This makes sense. If altruists help each other even just some of the time, but avoid helping non-altruists or non-cooperators, collectively the altruists will be better off than the non-altruistic or selfish players who never help each other. Avoiding selfish or uncooperative people is important because helping a non-altruist means helping a competitor to the fledgling mutual aid society. This raises the possibility that natural selection has favored in us a heightened sensitivity to detecting what we might think of as social cheats or free riders, people who might take advantage of others’ goodwill without intending to return it—they are the enemies of the mutual aid societies. Remarkably, the evolutionary psychologists Leda Cosmides and John Tooby have proposed just this capability in studies of human cooperation. Here is one of their examples. You walk into a bar and want to find out who is following the rule of having to be over the age of eighteen to buy an alcoholic drink. There are people of all ages in the bar; some have drinks and some don’t. Which of these people should you check to work out who is following the rule? Should you target young people or people with drinks? Most of us instinctively realize that it is people with drinks—and especially young ones—who can potentially break the rule. You could find out the ages of all the people without drinks, but this alone would tell you nothing about whether any of them individually is likely or not to follow the rule. And this is Tooby and Cosmides’s point: without even thinking about it, our brains instinctively know how to detect the cheats among us.
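The logic of the bar example can be written out as a simple check. The code is my own hypothetical illustration of that logic, not anything from Cosmides and Tooby’s experiments:

```python
# A sketch of cheat detection in the bar example: only people whose known
# facts leave open a violation of "alcohol only if over eighteen" are
# worth checking. None means that fact about the person is unknown.

def must_check(person):
    """True if the person's known facts leave open a rule violation."""
    age = person.get("age")            # None: age unknown
    drinking = person.get("drinking")  # None: drink status unknown
    if drinking is False:
        return False                   # no drink: cannot break the rule
    if age is not None and age >= 18:
        return False                   # old enough: cannot break the rule
    # Drinking (or possibly drinking) and not known to be of age.
    return True

patrons = [
    {"drinking": True,  "age": None},  # has a drink, age unknown: check
    {"drinking": False, "age": None},  # no drink: irrelevant
    {"drinking": None,  "age": 16},    # underage, drink unknown: check
    {"drinking": None,  "age": 40},    # of age: irrelevant
]
flagged = [must_check(p) for p in patrons]  # [True, False, True, False]
```

Exactly as the text says, the drinkers of unknown age and the underage patrons get checked, while the ages of the non-drinkers are uninformative.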
The rules that guide our cooperative behaviors toward others lead to an important “principle of information,” which if followed not only makes altruism possible but profitable to the altruists. It says that if we know enough about someone, we can make a decision about whether to cooperate; but if we don’t, it is better not to cooperate, because you might just be helping a selfish person. We can think of the principle this way. Let’s allow i (for information) to stand for a number that can range between 0 and 1. An i of 1 says you are certain the other person is an altruist and an i of 0 says you have no confidence at all. Then the principle of information says our actions should be guided by this rule: that i multiplied by the benefit you provide to someone else must exceed the cost to you. This can be written as i × b > c. When we are certain the other person is a cooperator (i is 1), we should help if the help we provide more than pays for the costs to us of giving it. This rule makes sense: only when the benefits the altruists dole out to each other make up for the costs of providing them do they prosper as a group. The action might be something like the example I gave earlier of providing shelter to someone during a storm, or you might give someone some information or hold a door for a weak or disabled person. These actions cost you little but can be of big help to the other person.
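The rule i × b > c translates directly into a one-line decision function. The numerical values below are invented for illustration:

```python
# A direct sketch of the "principle of information": help only when your
# confidence that the other person is an altruist, weighted by the
# benefit you would confer, outweighs your cost.

def should_help(confidence, benefit, cost):
    """confidence (i) ranges from 0 to 1; benefit (b) and cost (c) are >= 0."""
    return confidence * benefit > cost

# Sheltering a stranger in a storm: large benefit, trivial cost, so even
# weak confidence justifies helping (0.2 * 50 = 10 > 1).
should_help(confidence=0.2, benefit=50.0, cost=1.0)   # True
# With no information at all (i = 0), no benefit is large enough.
should_help(confidence=0.0, benefit=50.0, cost=1.0)   # False
```

The storm-shelter case shows why low-cost, high-benefit acts dominate everyday altruism: they satisfy the inequality even when we know almost nothing about the stranger.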