Sex at Dawn: The Prehistoric Origins of Modern Sexuality
Authors: Christopher Ryan, Cacilda Jethá
Tags: #Non-Fiction, #Sociology, #Psychology, #Science, #Social Science, #History
Homo Economicus
We have a greed, with which we have agreed…
“Society,” by EDDIE VEDDER
Many economists have forgotten (or never understood) that their central organizing principle, Homo economicus (a.k.a. economic man), is a myth rooted in assumptions about human nature, not a bedrock truth upon which to base a durable economic philosophy. When John Stuart Mill proposed what he admitted to be “an arbitrary definition of man, as a being who inevitably does that by which he may obtain the greatest amount of necessaries, conveniences, and luxuries, with the smallest quantity of labour and physical self-denial,”3 it’s doubtful he expected his “arbitrary definition” to delimit economic thought for centuries. Recall Rousseau’s words: “If I had had to choose my place of birth, I would have chosen a state in which everyone knew everyone else, so that neither the obscure tactics of vice nor the modesty of virtue could have escaped public scrutiny and judgment.” Those who proclaim that greed is simply part of human nature too often leave context unmentioned. Yes, greed is part of human nature. But so is shame. And so is generosity (and not just toward genetic relatives). When economists base their models on their fantasies of an “economic man” motivated only by self-interest, they forget community—the all-important web of meaning we spin around each other—the inescapable context within which anything truly human has taken place.
One of the most cited thought experiments in game theory and economics is called The Prisoner’s Dilemma. It presents such an elegant and simple model of reciprocity that some scientists refer to it as “the E. coli of social psychology.” Here’s how it works: Imagine that two suspects are arrested, but the police don’t have enough evidence for a conviction. After the prisoners are separated, each gets the same offer: If you testify against your partner and he remains silent, you’ll go free and he’ll get the full ten-year sentence. If he fesses up but you don’t, you’ll do the time while he walks free. If neither of you talks, you’ll both get six months. If you both talk, you’ll both do five years. Each prisoner must choose to snitch or remain silent. Each is told the other won’t know about his decision. How will the prisoners respond?
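The incentive structure above can be made concrete with a short sketch. The sentences are taken directly from the setup; the function name `best_response` and the move labels are illustrative choices, not part of the original.

```python
# Sentences (years) from the setup above: (my_move, partner_move) -> my sentence.
# Lower is better. Moves: "talk" (betray) or "silent" (cooperate).
SENTENCE = {
    ("talk", "silent"): 0.0,    # I testify, partner stays quiet: I walk free
    ("silent", "talk"): 10.0,   # partner testifies, I stay quiet: full ten years
    ("silent", "silent"): 0.5,  # both quiet: six months each
    ("talk", "talk"): 5.0,      # both testify: five years each
}

def best_response(partner_move):
    """Return the move that minimizes my sentence, given the partner's move."""
    return min(["talk", "silent"], key=lambda my: SENTENCE[(my, partner_move)])

# Whatever the partner does, talking is strictly better -- a dominant strategy:
for partner in ("silent", "talk"):
    print(partner, "->", best_response(partner))  # both lines print "... -> talk"
```

This is why, in the one-shot game, betrayal looks inevitable on paper: each prisoner's best response is to talk regardless of what the other does.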
In the classic form of the game, participants almost always betray one another, as each sees the benefit of quick betrayal: talk first, and walk away free. But take that theoretical conclusion to a prison anywhere in the world and ask what happens to “rats.” Theory finally caught up to reality when scientists decided to let players gain experience with the game and see whether their behavior changed over time. As Robert Axelrod explains in The Evolution of Cooperation, players soon learned that they had a better chance if they kept quiet and assumed that their partner would do the same. If their partner talked, he acquired a bad reputation and was punished, in a “tit-for-tat” pattern. Over time, those players with the more altruistic approach flourished, while those who acted only in their individual short-term interest met serious problems—a shiv in the shower, maybe.
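The repeated version of the game can be simulated in a few lines. The point values below are the conventional ones used in Axelrod-style tournaments (an assumption, not figures from the text), and the strategy and function names are illustrative.

```python
# Iterated Prisoner's Dilemma sketch. Payoffs are the conventional
# tournament values (points, higher is better) -- an assumed convention:
PAYOFF = {  # (my_move, their_move) -> my points; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    """Return total scores for two strategies over repeated rounds."""
    hist_a, hist_b = [], []  # each side's record of the *opponent's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two tit-for-tat players lock into cooperation (3 points per round each),
# while two habitual defectors earn only 1 point per round:
print(play(tit_for_tat, tit_for_tat))      # (600, 600)
print(play(always_defect, always_defect))  # (200, 200)
```

Over many rounds, the cooperative pair accumulates three times the payoff of the mutual defectors, which is the mechanical core of Axelrod's finding that reputation and reciprocity change the game's logic.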
The classic interpretation of the experiment took another blow when psychologist Gregory S. Berns and his colleagues decided to wire female players up to an MRI machine. Berns et al. were expecting to find that subjects would react most strongly to being cheated—when one tried to cooperate and the other “snitched.” But that’s not what they found. “The results really surprised us,” Berns told Natalie Angier of The New York Times. The brain responded most energetically to acts of cooperation: “The brightest signals arose in cooperative alliances and in those neighborhoods of the brain already known to respond to desserts, pictures of pretty faces, money, cocaine and any number of licit and illicit delights.”4
Analyzing the brain scans, Berns and his team found that when the women cooperated, two parts of the brain, both responsive to dopamine, were activated: the anteroventral striatum and the orbitofrontal cortex. Both regions are involved in impulse control, compulsive behavior, and reward processing. Though surprised by what his team found, Berns found comfort in it. “It’s reassuring,” he said. “In some ways, it says that we’re wired to cooperate with each other.”
The Tragedy of the Commons
First published in the prestigious journal Science in 1968, biologist Garrett Hardin’s paper “The Tragedy of the Commons” is one of the most reprinted articles ever to appear in a scientific journal. The authors of a recent World Bank Discussion Paper called it “the dominant paradigm within which social scientists assess natural resource issues,” while anthropologist G. N. Appell says the paper “has been embraced as a sacred text by scholars and professionals.”5
Well into the 1800s, much of rural England was considered commons—property owned by the king but available to everyone—like the open range in the western United States before the advent of barbed-wire fencing. Using the English commons as his model, Hardin purported to show what happens when a resource is communally owned. He reasoned that in “a pasture open to all … each herdsman will try to keep as many cattle as possible.” Though destructive to the pasture, the herdsman’s selfishness makes good economic sense from his personal perspective. Hardin wrote, “The rational herdsman [will conclude] that the only sensible course for him to pursue is to add another animal to his herd.” This is the only rational choice because all will share the cost of the degradation to the land from overgrazing, while the profit gained from additional animals will be his alone. Since each individual herdsman will come to the same conclusion, the common ground will inevitably be overgrazed. “Freedom in a commons,” Hardin concluded, “brings ruin to us all.” Like Malthus’s thoughts on population growth relative to agricultural capacity, Hardin’s argument was a hit because (1) it features an A+B=C simplicity that appears to be inarguably correct; and (2) it is useful in justifying seemingly heartless decisions by entrenched powers. Malthus’s essay, for example, was often cited by British business and political leaders to explain their inaction in the face of widespread poverty in Britain, including the famine of the 1840s in which several million Irish people starved to death (and millions more fled to the United States). Hardin’s articulation of the folly of communal ownership has provided cover repeatedly to those arguing for the privatization of government services and the conquest of native lands.
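Hardin's herdsman logic is, at bottom, a piece of arithmetic. A minimal sketch, with assumed illustrative numbers (the gain per animal and the shared degradation cost are not from the text):

```python
# Hardin's herdsman arithmetic with illustrative (assumed) numbers:
# each added animal earns its owner +1 unit, while the overgrazing it
# causes costs -1.5 units, shared equally among all the herdsmen.
def net_gain_to_adder(num_herdsmen, gain=1.0, shared_cost=1.5):
    """Private payoff to the one herdsman who adds one more animal."""
    return gain - shared_cost / num_herdsmen

# With 10 herdsmen, the adder nets +0.85 even though the group as a
# whole loses 0.5 -- so each "rational" herdsman keeps adding animals.
print(net_gain_to_adder(10))  # 0.85
print(net_gain_to_adder(1))   # -0.5: a sole owner bears the full cost and stops
```

The sign flip between the two cases is the whole argument: when the cost is shared but the gain is private, adding animals pays; when one owner internalizes both, it doesn't.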
One other thing Hardin’s elegant argument has in common with that of Malthus: it collapses on contact with reality.
As Canadian author Ian Angus explains, “Hardin simply ignored what actually happens in a real commons: self-regulation by the communities involved.” Hardin missed the fact that in small rural communities where population density is low enough that each of the herdsmen knows the others (the actual case in the historical English commons and in ancestral foraging societies), any individual who tries to game the system is quickly found out and punished. Nobel Prize–winning economist Elinor Ostrom’s studies of commons management in small-scale communities led her to conclude that “all communities have some form of monitoring to guard against cheating or using more than a fair share of the resource.”6
Despite how it’s been spun by economists and others arguing against local resource management, the real tragedy of the commons doesn’t pose a threat to resources controlled by small groups of interdependent individuals. Forget the commons. We need to confront the tragedies of the open seas, skies, rivers, and forests. Fisheries around the world are collapsing because no one has the authority, power, and motivation to stop international fleets from strip-mining waters everybody (and thus, nobody) owns. Toxins from Chinese smokestacks burning illegally mined Russian coal lodge in Korean lungs, while American cars burning Venezuelan petroleum melt glaciers in Greenland.
What allows these chain-linked tragedies is the absence of local, personal shame. The false certainty that comes from applying Malthusian economics, the prisoner’s dilemma, and the tragedy of the commons to pre-agricultural societies requires that we ignore the fine-grain contours of life in small-scale communities where nobody “could have escaped public scrutiny and judgment,” in Rousseau’s words. These tragedies become inevitable only when the group size exceeds our species’ capacity for keeping track of one another, a point that’s come to be known as Dunbar’s number.
In primate communities, size definitely matters.
Noticing the importance of grooming behavior in social primates, British anthropologist Robin Dunbar plotted overall group size against the neocortical development of the brain.
Using this correlation, he predicted that humans start losing track of who’s doing what to whom when group size hits about 150 individuals. In Dunbar’s words, “The limit imposed by neocortical processing capacity is simply on the number of individuals with whom a stable inter-personal relationship can be maintained.”7 Other anthropologists had arrived at the same number by observing that when group sizes grew much beyond that, they tended to split into two smaller groups.
Writing several years before Dunbar’s paper was published in 1992, Marvin Harris noted, “With 50 people per band or 150 per village, everybody knew everybody else intimately, so that the bonding of reciprocal exchange could hold people together. People gave with the expectation of taking and took with the expectation of giving.”8 Recent authors, including Malcolm Gladwell in his best-selling The Tipping Point, have popularized the idea of 150 being a limit to organically functioning groups.
Having evolved in small, intimate bands where everybody knows our name, human beings aren’t very good at dealing with the dubious freedoms conferred by anonymity. When communities grow beyond the point where every individual has at least a passing acquaintance with everyone else, our behavior changes, our choices shift, and our sense of the possible and of the acceptable grows ever more abstract.
The same argument can be made concerning the tragic misunderstanding of human nature that underlies communism: community ownership doesn’t work in large-scale societies where people operate in anonymity. In The Power of Scale, anthropologist John Bodley wrote: “The size of human societies and cultures matters because larger societies will naturally have more concentrated social power. Larger societies will be less democratic than smaller societies, and they will have an unequal distribution of risks and rewards.”9 Right, because the bigger the society is, the less functional shame becomes. When the Berlin Wall came down, jubilant capitalists announced that the essential flaw of communism had been its failure to account for human nature.
Well, yes and no. Marx’s fatal error was his failure to appreciate the importance of context. Human nature functions one way in the context of intimate, interdependent societies, but set loose in anonymity, we become a different creature. Neither beast is more nor less human.
Dreams of Perpetual Progress
He is a barbarian, and thinks that the customs of his tribe and island are the laws of nature.
GEORGE BERNARD SHAW, Caesar in Caesar and Cleopatra, Act II
Were we really born in the best possible time and place? Or is ours a random moment in infinity—just another among uncountable moments, each with its compensating pleasures and disappointments? Perhaps you find it absurd to even entertain such a question, to assume there’s any choice in the matter. But there is. We all have a psychological tendency to view our own experience as standard, to see our community as The People, to believe—perhaps subconsciously—that we are the chosen ones, God is on our side, and our team deserves to win. To see the present in the most flattering light, we paint the past in blood-red hues of suffering and terror.
Hobbes has been scratching this persistent psychological itch for several centuries now.
It is a common mistake to assume that evolution is a process of improvement, that evolving organisms are progressing toward some final, perfected state. But they, and we, are not.
An evolving society or organism simply adapts over the generations to changing conditions. While these modifications may be immediately beneficial, they are not really improvements because external conditions never stop shifting. This error underlies the assumption that here and now is obviously better than there and then.
Three and a half centuries later, scientists still quote Hobbes, telling us how lucky we are to live after the rise of the state, to have avoided the universal suffering of our barbaric past. It’s deeply comforting to think we’re the lucky ones, but let’s ask the forbidden question: How lucky are we really?
Ancient Poverty or Assumed Affluence?
Prehistoric humans did not habitually store food, but this doesn’t mean they lived in chronic hunger. Studies of prehistoric human bones and teeth show ancient human life was marked by episodic fasts and feasts, but prolonged periods of starvation were rare. How do we know our ancestors weren’t living at the brink of starvation?
When children and adolescents don’t get adequate nutrition for as little as a week, growth slows in the long bones in their arms and legs. When their nutritional intake recovers and the bones begin to grow again, the density of the new bone growth differs from before the interruption. X-rays reveal these telltale lines in ancient bones, known as Harris lines.10
Periods of more prolonged malnutrition leave signs on the teeth known as hypoplasias—discolored bands and small pits in the enamel surface, which can still be seen many centuries later in fossilized remains. Archaeologists find fewer Harris lines and dental hypoplasias in the remains of prehistoric hunter-gatherer populations than they do in the skeletons of settled populations who lived in villages dependent on cultivation for their food supply. Being highly mobile, hunter-gatherers were unlikely to suffer from prolonged starvation since in most cases, they could simply move to areas where conditions were better.