Wired for Culture: Origins of the Human Social Mind

Author: Mark Pagel

Students of the Great Apes might complain this explanation for our consciousness could equally apply to the apes’ complex social circumstances, and we should also grant them consciousness. Perhaps we should, but even so, there might be two differences between us and the Great Apes that challenge this objection. One is that our social world is even more complex than that of a Great Ape, including social exchange and the extended forms of cooperation we have seen in earlier chapters. A computational state that keeps the “I” centerstage might be particularly valuable for reminding our subconscious minds to put our social system to best use. But the other is even more fundamental: our minds have discovered language. We alone have a symbolic code for translating our subconscious thoughts from whatever form they might take into the same audible (or tactile) language that we use to communicate with others. It might not be an accident that for most of us consciousness is expressed in our native language. Perhaps it is this aspect of our virtual sixth sense that tips our awareness over into something we can label as “I” or “me.”

TRUTH AND THE DIFFICULTY OF KNOWING WHAT TO DO

THE WORD “truth” is heavily laden with difficult philosophical baggage, but as a shorthand we can take it colloquially to mean knowing the right answer, or knowing what really happened in some situation, or knowing the best course of action, or the best solution to some problem. If we take this as a working definition of truth, then we probably have precious little access to it. The American baseball player and coach Casey Stengel famously advised: “Never make predictions, especially about the future.” It is good advice. In the 1950s, the president of IBM, Thomas Watson, Jr., is reported to have said, “I think there’s a world market for about five computers.” Ken Olson, president of Digital Equipment Corporation in 1977, believed that “There is no reason anyone would want a computer in their home.” It is rumored that one publishing executive returned a manuscript to J. K. Rowling, saying that “children just aren’t interested in witches and wizards anymore,” and that an MGM internal memo about The Wizard of Oz said, “That rainbow song’s no good. Take it out.”

For most of us, much of everyday life is a series of easy decisions that we think we know how to make. But for many of the most important things we do, and most of the important decisions we need to make, we don’t have and might not even be able to acquire the information we need to be confident of making the right or best decision. It might also be that our best action depends on what others do. Should I fight those people who live in the next valley and who keep stealing my sheep? What lure should I use on my fishing line in this stream? Is that snake poisonous or is it one of those that just looks like a poisonous one? Is that berry edible? How much should I offer for a house I am thinking of buying? Should I pay more or less than I am into my pension fund? What is the best car for me? Which computer should I buy? How strict should I be with my children? Should I invest in that stock or buy a government bond? Which is the best airline? Should I marry this person?

An amusing but potentially serious manifestation of not knowing what to do or how to behave is called “collective ignorance.” You are in a crowded elevator that comes to a halt between floors. Maybe it is just a temporary problem, but maybe not. What should you do? Not wishing to appear foolish or anxious, you look to others for clues. But of course the others are in the same position as you and they are looking to you for the same clues. The result is that everyone inadvertently sends the message to do nothing and you all stand there in silence. It is the position we all occasionally find ourselves in when a fire alarm goes off, a subway train comes grinding to a halt between stations, or, in a big city, we pass someone lying in the street. Should we help them, or are they just some drunk passed out from their own exuberance? But collective ignorance is also why stock markets can rise and fall with such exciting or jarring urgency—few investors know what to do so they just follow what others are doing.

These are questions about whether we should copy others or try to figure out best solutions on our own. As the most intelligent species on the planet, we might think that not only can we work out good solutions, but that doing so rather than relying on others is our best strategy. A simple thought experiment posed by Alan Rogers leads to a different and surprising conclusion. Rogers asks us to imagine a group of people who live in a constantly changing environment such that new problems continually arise that require new solutions. Over time, these people—we can call them innovators—work out solutions for surviving and reproducing on their own. This takes time and effort, and they occasionally make mistakes. But they can be expected to maintain a more or less steady level of health and well-being as their innovations just keep up with changes to their environment.

But now imagine that someone is introduced to this group who merely imitates or copies these innovators. This imitator or social learner would not have to spend the time and energy trying to work out solutions to problems posed by the environment, and would not suffer the inevitable losses of making the occasional error. This tells us that a social learner who copies what others do, introduced into an environment of innovators, would survive and prosper better than the innovators. Over time, the imitators will therefore increase in number until at some point the population of people is made up mostly of them. But now consider what happens. Once imitators become common, they will frequently copy each other. This is fine so long as it works, but mistakes in copying will creep in, and the imitators will have no way to correct them. The environment will also continue to change. So now the imitators will begin to suffer losses and ill-health because they are employing obsolete solutions.
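Rogers’s argument can be made concrete with a toy simulation. The sketch below is not from the book: the population size, the rate of environmental change, the cost of innovating, the payoff for a correct solution, and the mutation rate are all illustrative assumptions. In typical runs the imitators spread quickly while they are rare, then their share levels off well short of the whole population as copied solutions go stale, which is exactly the dynamic described above.

```python
# A minimal sketch of Alan Rogers's innovator/imitator thought experiment.
# All parameter values here are illustrative assumptions, chosen only to
# show the qualitative dynamic: imitators prosper while rare, but lose
# their edge as they crowd out the innovators they depend on.

import random

GENERATIONS = 500
POP_SIZE = 1000
ENV_CHANGE_RATE = 0.1    # chance per generation that the environment shifts
INNOVATION_COST = 0.2    # effort an innovator spends working out a solution
BENEFIT = 1.0            # payoff for holding a currently correct solution
BASELINE = 1.0           # baseline fitness for everyone
MUTATION_RATE = 0.01     # rare strategy switches keep both types in play


def fitness(ind, env):
    """Correct beliefs pay off; innovating costs effort."""
    f = BASELINE + (BENEFIT if ind["belief"] == env else 0.0)
    if ind["type"] == "innovator":
        f -= INNOVATION_COST
    return f


env = 0  # current state of the environment, encoded as an integer
pop = [{"type": "innovator", "belief": 0} for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    if random.random() < ENV_CHANGE_RATE:
        env += 1  # the world changes and old solutions go stale

    # Innovators work out the current correct solution; imitators copy a
    # belief sampled from the population before this round's updating.
    prev_beliefs = [ind["belief"] for ind in pop]
    for ind in pop:
        ind["belief"] = env if ind["type"] == "innovator" else random.choice(prev_beliefs)

    # Reproduce in proportion to fitness, with occasional strategy "mutations".
    weights = [fitness(ind, env) for ind in pop]
    parents = random.choices(pop, weights=weights, k=POP_SIZE)
    pop = []
    for p in parents:
        t = p["type"]
        if random.random() < MUTATION_RATE:
            t = "imitator" if t == "innovator" else "innovator"
        pop.append({"type": t, "belief": p["belief"]})

    if gen % 50 == 0:
        share = sum(ind["type"] == "imitator" for ind in pop) / POP_SIZE
        avg_fit = sum(weights) / POP_SIZE
        print(f"gen {gen:3d}: imitator share {share:.2f}, mean fitness {avg_fit:.2f}")
```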

We learn from this that neither all innovation nor all copying will ever take over in society: in the language of Chapter 3, neither innovation nor copying is an evolutionarily stable strategy. There is also a hint of something we have seen before: that only a handful of innovators is needed. But if this is true, and most of us do copy others, whom should we copy, and how much? Perhaps your neighbor has just purchased a new car; you have been thinking of buying one as well. Should you get the same one? Kevin Laland posed these questions more formally in a computer tournament organized to understand social learning. Elizabeth Pennisi in Science describes how people were asked: “Suppose you find yourself in an unfamiliar environment where you don’t know how to get food, avoid predators, or travel from A to B. Would you invest time working out what to do on your own, or observe other individuals and copy them? If you copy, who would you copy? The first individual you see? The most common behavior? Do you always copy, or do so selectively? What would you do?”

A young boy I put this question to replied by saying he would copy the most overweight people. There is something to this, especially in the evolutionary setting of being a hunter-gatherer. If body weight is an indication that you have been good at getting food, maybe you are doing something right. It was just this logic that we used to speculate on the meaning of the Venus statues. But Laland wanted to know how we decide whom to copy when we only have access to what others are doing. Entrants to his tournament had to write a computer program that would somehow juggle the alternatives of someone trying to innovate or work out for themselves the best course of action, versus copying or imitating others, and if the latter, whom to imitate. The computer programs operated in a kind of in silico social environment in which they could “see” the choices that other programs had made, and thus what behaviors they were displaying. These programs then competed against each other inside a large supercomputer.

Competing strategies ranged from those that relied strongly on innovation to those that always copied others. Startlingly, the winning strategy in Laland’s tournament exclusively copied others—it never innovated! By comparison, a strategy that relied almost exclusively on innovation finished ninety-fifth out of one hundred contenders. This is a result that flies in the face of all expectations, but the strategy of always copying works for two very simple but profound reasons. One is that when others around us make decisions and act on them, they have little choice but to demonstrate the best strategy in their repertoire: when you do something, you will typically do what you think is in your best interest. This presents imitators with a set of alternatives from which the truly bad ones have probably already been filtered out. The second is that by virtue of being alive and available to copy, those whom we imitate are survivors, and so what they are doing must be reasonably good.
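A drastically simplified toy in the spirit of Laland’s tournament (not a reconstruction of it, and every number and strategy below is an illustrative assumption) shows the first of these reasons at work: copiers pick up whatever other agents are currently exploiting, and what an agent exploits is already the best thing it knows.

```python
# A toy comparison of pure innovators and pure copiers in a drifting
# environment. This is a sketch in the spirit of the social-learning
# tournament, not the tournament itself; options, payoffs, and the two
# fixed strategies are all invented for illustration.

import random

N_OPTIONS = 100        # behaviours available in the environment
ROUNDS = 200           # alternating learning and exploiting rounds
CHANGE_PROB = 0.05     # chance per round that one option's payoff changes
N_AGENTS = 30          # half pure innovators, half pure copiers

payoffs = [random.expovariate(1.0) for _ in range(N_OPTIONS)]  # skewed payoffs


class Agent:
    def __init__(self, strategy):
        self.strategy = strategy                   # "innovate" or "copy"
        first = random.randrange(N_OPTIONS)
        self.repertoire = {first: payoffs[first]}  # option -> last known payoff
        self.last_played = first                   # what others can see us doing
        self.total = 0.0

    def best_known(self):
        return max(self.repertoire, key=self.repertoire.get)


agents = [Agent("innovate") for _ in range(N_AGENTS // 2)] + \
         [Agent("copy") for _ in range(N_AGENTS // 2)]

for t in range(ROUNDS):
    if random.random() < CHANGE_PROB:              # the environment drifts
        payoffs[random.randrange(N_OPTIONS)] = random.expovariate(1.0)

    snapshot = [a.last_played for a in agents]     # behaviours on display

    for a in agents:
        if t % 2 == 0:                             # learning round
            if a.strategy == "innovate":
                opt = random.randrange(N_OPTIONS)              # try something at random
            else:
                opt = snapshot[random.randrange(N_AGENTS)]     # copy a demonstrator
            a.repertoire[opt] = payoffs[opt]
        else:                                      # exploiting round
            opt = a.best_known()
            a.total += payoffs[opt]
            a.repertoire[opt] = payoffs[opt]       # refresh the estimate
            a.last_played = opt

for s in ("innovate", "copy"):
    avg = sum(a.total for a in agents if a.strategy == s) / (N_AGENTS // 2)
    print(f"{s:8s}: average lifetime payoff {avg:.1f}")
```

In most runs the pure copiers end up with the higher average payoff even though they never explore the environment themselves; their advantage rests entirely on the innovators quietly feeding discoveries into the pool of observable behavior.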

Remarkably, it matters less exactly whom you copy, or precisely what you copy, than that you copy at all rather than try to innovate. Laland’s computer tournament, therefore, also lays bare the social implications of learning from others. Our ability to copy and imitate is why our culture can accumulate knowledge and technology. But the winning strategy in the tournament acted like a social parasite, plagiarizing the hard-won knowledge and strategies of others, and thereby avoiding any of the costs of having to try out new ideas on its own. Indeed, its parasitical nature was revealed when Laland ran the winning strategy alone and it performed badly. Just as Alan Rogers’s thought experiment would have led us to expect, if no one is innovating, then copiers will end up copying each other, and this will mean that many bad strategies will be copied and maintained.

We have seen this before: social learning—imitation and copying—is visual theft. It is unavoidably steeped in conflict and cooperation because knowledge itself becomes a valuable commodity that might otherwise grant an advantage to the person you visually steal it from. If I can perform some behavior that you wish to learn, I might wish to hide it or even modify it in your presence, or perhaps trade it for some of your knowledge. For your part, you might wish to conceal your interest, act deceptively or furtively, hoping that I will let down my guard. We see these conflicts of interest—and the deceptions they produce—manifesting themselves in patent applications and patent law, industrial and even national espionage, and outright theft. But we also see them in the reluctance, for example, to share old family recipes, reveal where our favorite fishing spot is, where to find the best mushrooms, or what bait we use to catch fish. Deception, competition, and exploitation are built into us because most of us rely on copying others most of the time.

Even when we have access to the so-called facts, we often misuse them, and this too might be because copying has played an important role throughout our history. We know that we are highly susceptible to contagion, false beliefs, neuroses—especially medical and psychological—and conspiracy theories. Why we should be so susceptible is surprising, because our brains have surely evolved to judge risks, to assess likelihoods or probabilities, to defend our minds against undue worry, and to infer what others are thinking. But our minds probably evolved to make these judgments drawing on the experiences of small groups of people—most probably, throughout our history, the small number of people in our tribe. The trouble is that now we are often confronted with vastly more information about risks, from newspapers, radio, or the Internet, and yet we don’t always make the best use of it.

We misuse it because our brains assume that the rate at which these things come to our attention from all over the world is the same as the rate in our local area. It is a case of doing bad mathematics. In the past, my assessment of the risk of being blown up by a terrorist, or of getting swine flu, or of my child being snatched by a pedophile on the way to school, was calculated from averaging the input of information I received mainly from my small local group, because these were the people I spoke to or heard from, and these were the people whose actions affected me. What the Internet does—and what mass communication does more generally—is to sample those inputs from the 6.8 billion people on Earth. But without my being aware of it, my brain is still considering that the inputs arose from my local community, because that is the case its assessment circuits were built for.

The bad mathematics occurs because my brain assumes a small denominator (the bottom number in a fraction, and here that number is the number of people in my village), but it is using the inputs from the whole world as its numerator (the top number of a fraction). The answer it produces to the question of how likely something is to happen is, then, way too big. So, when I hear every day of children being snatched, my brain gives me the wrong answer to the question of risk: it has divided a big number (the children snatched all over the world) by a small number (the tribe). Call this the “Madeleine McCann effect.” We all witnessed months of coverage of this sad case of a kidnapping of a young girl in Portugal that occurred in 2007—as of this writing still unresolved. Although the worry this caused in the rest of us is trivial compared to what the McCanns have suffered, it was probably largely misplaced. But even knowing this, it is hard to shake the feeling that our children are at risk, and this just shows us how deep are the biases in our decision making.
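A back-of-the-envelope calculation makes the distortion concrete. The figures below are invented round numbers (only the 6.8 billion comes from the text, and the village size and case count are assumptions); the point is simply that the inflation factor equals the world population divided by the size of the local group.

```python
# The "wrong denominator" error, with made-up round numbers for illustration.

WORLD_POPULATION = 6_800_000_000    # the figure used in the text
VILLAGE_POPULATION = 150            # an assumed small local group
REPORTED_CASES_PER_YEAR = 1_000     # hypothetical worldwide reports we hear about

true_rate = REPORTED_CASES_PER_YEAR / WORLD_POPULATION    # right denominator
felt_rate = REPORTED_CASES_PER_YEAR / VILLAGE_POPULATION  # the brain's denominator

print(f"rate with the right denominator: about 1 case per "
      f"{WORLD_POPULATION // REPORTED_CASES_PER_YEAR:,} people per year")
print(f"rate the brain 'feels': about {felt_rate:.1f} cases per person per year")
print(f"inflation factor: {felt_rate / true_rate:,.0f} "
      f"(= world population / village size)")
```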

The effects of the bad mathematics don’t stop with judging risks. Doing the mathematics wrong means that contagion can leap across the Internet. Contagion arises when people perceive that the numerator (input from the Internet) grows far more rapidly than the denominator (the village or tribe). Our tendency to copy others just reinforces this perception. Once contagion starts on the Internet, everyone’s copying means that the bad mathematics makes it explode. The same happens with conspiracy theories: if it seems everyone is talking about something, it must be true!

But this is just the wrong denominator again, because in fact “most” people are not talking about that thing; it is just that the ones who are choose to appear on the Internet (or radio phone-ins, etc.). Neuroses and false beliefs are buttressed: we all worry about our health, and in the past we would look around us and find that no one else was worrying or ill. But consult the Internet and you might find tens of thousands—maybe more—of people worrying, and they’ve even developed Web sites to talk about their worry. The 2009 swine flu pandemic turned out to be a damp squib, but you wouldn’t have known that from the frenzy at the time.
