War: What is it good for?

That place is nothing less than a new stage in our evolution. Beginning more than a hundred thousand years ago, the struggle for survival in a harsh, ice-age world created conditions in which freakish mutants with big brains—us—could outcompete and replace all earlier kinds of protohumans. This happened even though the protohumans that got replaced had themselves created the mutants, by having sex, which produced random genetic variations, some of which flourished under the relentless pressure of natural selection. It seems unlikely that protohumans wanted to create monsters that would drive them into extinction, but, evolution being what it is, they had no choice in the matter.

As ye sow, so shall ye reap; and now, a thousand centuries on, we are doing something rather similar to what the protohumans did, but doing it faster, through cultural rather than biological evolution. In our struggle for survival in a crowded, warming world, we are creating new kinds of freakish mutants with big brains, using machines to merge our unimproved, individual, merely biological minds into some sort of superorganism. What we are making is, in a way, the ultimate open-access order, breaking down every barrier between individuals. Age, sex, race, class, language, education, you name it—all will be dissolved in the superorganism.

Maybe the process will only go as far as sharing thoughts, memories, and personalities (Nicolelis's guess). Or maybe it will reach the point that individuality and physical bodies no longer mean much (Kurzweil's guess). Or maybe it will go even further, and what we condescendingly call “artificial intelligence” will completely supplant ineffective, old-fashioned animal intelligence. We cannot know, but if long-term history is any guide, we have to suspect that one way or another the mutants—the new version of us—will replace the old us as completely as the old us replaced Neanderthals.

Once again, it seems that there is no new thing under the sun. Brain-to-brain interfacing is just the latest chapter in an ancient story. Two billion years ago, bacteria began merging to produce simple cells. Another 300 million years after that, simple cells began merging into more complex ones, and after another 900 million years complex cells began merging into multicelled animals. At each stage, simpler organisms gave up some functions—some of their freedom, in a sense—in order to become more specialized parts of a bigger, more complex being. Bacteria lost bacterianess but gained cellness; cells lost cellness but gained animality and ultimately consciousness; and now, perhaps, we are about to lose our individual animality as we become part of something as far removed from Homo sapiens as we are from our ancestral cells.

The consequences for the game of death are, to put it mildly, enormous. Two thousand years ago, the Roman historian Livy told a story about a time when his city had been bitterly divided. The poor, he said, had risen up against the rich, calling them parasites. As tensions mounted, Menenius Agrippa, a prominent senator, entered the rebels' camp to make peace. “Once upon a time,” Agrippa told them, “the parts of the human body did not all agree as they do now, but each had its own ideas.” The stomach, the other organs felt, did nothing all day but grow fat from their efforts, “and so,” Agrippa said, “they made a plot that the hand would not carry food to the mouth, nor would the mouth accept anything it was given, nor would the teeth chew. But while the angry organs tried to subdue the stomach, the whole body wasted away.” The rebels got the point.

The further that brain-to-brain interfacing goes, the more Agrippa's parable will become reality. It might even push the payoffs from violence right down to zero. Should that come to pass, then the Beast, along with our basic animalness, will go extinct, and it will make no more sense for merged intelligences to solve disagreements violently (whatever “disagreements” and “violently” might then mean) than it does for me to cut off my nose to spite my face.

Or perhaps that is not what will happen. If the analogy between cells merging to create bodies and minds merging to create a superorganism is a good one, conflict might just evolve into new forms. Our own bodies, after all, are scenes of unceasing struggle. Pregnant women compete with their unborn babies for blood and the sugar it carries. If the mother succeeds too well, the fetus suffers damage or death; if the fetus succeeds too well, the mother may succumb to preeclampsia or gestational diabetes, potentially killing both parent and child. A superorganism may face similar conflicts, perhaps over which part of it gets access to the most energy.

About one person in forty currently also has fights going on inside his or her cells, where so-called B chromosomes feed off the body's chemicals but refuse to participate in swapping genes, and about one person in five hundred has cancer, with some cells refusing to stop replicating, regardless of the cost to the rest of the body. To protect ourselves from these scourges and against viruses that invade us from outside, our bodies have evolved multiple lines of microscopic defense. A superorganism may have to do something similar, perhaps even producing the equivalents of antibodies that can kill intruders or parts of its own body that go rogue. After all, as most of us have learned to our cost, machines are just as vulnerable to viruses as animals are.

There is plenty to speculate about. What we can be sure of, though, is that brain-to-brain interfacing and merging through our machines are accelerating. The old rules, by which we have been playing the game of death for a hundred thousand years, are reaching their own culminating point, and we are entering an entirely new endgame of death. If we play it badly, there is almost no limit to the horrors we can inflict on ourselves. But if we play it well, before the end of the twenty-first century the age-old dream of a world without war might become reality.

The Endgame of Death

“Everything in war is very simple,” said Clausewitz, “but the simplest thing is difficult.” So it will be in the endgame of death. Playing it well will be simple—but also terrifyingly difficult.

What makes the endgame simple is that once we know where “there” is and what war is good for, it is fairly obvious how—in theory—we get there from here. I have suggested that “there” is the computerization of everything, and that what war is good for is creating Leviathans and ultimately globocops that keep the peace by raising the costs of violence to prohibitive levels. From these premises, the conclusion seems to follow that the world needs a globocop, ready to use force to keep the peace until the computerization of everything makes globocops unnecessary. The only alternative to a globocop is a rerun of the script of the 1870s–1910s, but this time with nuclear weapons. And since the United States is the only plausible candidate for the job of globocop, it remains, as Abraham Lincoln said a century and a half ago, “the last best hope of earth.” If the United States fails, the whole world fails.

As I write, in 2013, a great debate is under way in American policy circles, between those who believe the superpower should “lean forward” and those who urge it to “pull back.” Leaning forward, say its supporters, means sticking to “a grand strategy of actively managing global security and promoting the liberal economic order that has served the United States exceptionally well for the past six decades,” while pullers-back argue that “it is time to abandon the United States' hegemonic strategy and replace it with one of restraint … giving up on global reform and sticking to protecting narrow national security interests … [which] would help preserve the country's prosperity and security over the long run.”

Long-term history suggests that both camps are right—or at least half-right. The United States must lean forward and then pull back. As we saw in Chapter 4, when fifteenth-century Europeans launched their Five Hundred Years' War on the rest of the world, it was old-fashioned imperialists who led the charge, plundering and taxing the people they conquered. The success of the Five Hundred Years' War, however, produced societies so big that old-style imperialism passed its culminating point. By the eighteenth century, open-access orders that managed to get the invisible hand and the invisible fist working together were generating much more wealth and power than traditional kinds of empires. The result was the rise of the world's first globocop—only for its success at implementing and managing a worldwide open-access order to generate such rich, powerful rivals that the British system soon passed its own culminating point.

The result of this, as we saw in Chapter 5, was a storm of steel and the rise of a much more powerful American globocop. Now, the new globocop's success is moving the world toward what I have called the ultimate open-access order, in which the invisible hand may have no need for the invisible fist. That will mark the culminating point not just for the American globocop, but for all globocops. Right now, the United States is the indispensable nation, and it must lean forward, but as it approaches the culminating point of globocoppery, the United States will need to pull back. The Pax Americana will yield to a Pax Technologica (a phrase I borrow from the futurists Ayesha and Parag Khanna), and we will no longer need a globocop.

Everything, then, is very simple—until we start asking the kinds of questions that immediately occur to security analysts. At that point, we see just how difficult the simplest things can be. We cannot just wish away humanity's defense dilemmas by applying science. In fact, it would seem that merging with machines is itself the most destabilizing of all the tectonic shifts, game changers, and black swans considered in this chapter, because the process will be so uneven.

As I type these words, I am sitting just fifteen miles (as the crow flies) from San Jose, the heart of California's Silicon Valley. The newest neighbor to move onto my road up in the Santa Cruz Mountains is an engineer working on Google Glass; when I commute to my own workplace, I fairly often pass self-driving cars (which tend to stick to the posted speed limits). But if I lived in Congo or Niger, which tied for last place in the most recent (2013) United Nations Human Development Report, I doubt that I would have such neighbors or see such vehicles. San Jose is one of the world's richest and safest cities; Kinshasa, one of its poorest and most dangerous. And not surprisingly, places that are already safe and rich (especially San Jose) are moving toward the computerization of everything faster than those that are not.

Open-access orders thrive on inclusion, because the bigger their markets and the greater their freedoms, the better the system works. Because of this, technologists tend to be confident that over the medium to long term, the computerization of everything will break down barriers, making the world fairer. However, throughout history, early adopters—whether of farming, Leviathan, or fossil fuels—have always had the advantage over those who follow later. Open-access orders do not incorporate everyone on equal terms, nor is everyone equally enthusiastic about being incorporated. In the eighteenth century, the Europeans who colonized America brought Africans into the Atlantic open-access order primarily as slaves; in the nineteenth, industrialized Europeans and Americans frequently used guns to force other Africans and Asians into larger markets.

It is hard to imagine such crude kinds of bullying resurfacing in the twenty-first century (rich northerners scanning poor southerners' brains at gunpoint?), but in the short run computerization is likely to widen the gap between the First World and the rest. In the next decade or two it may cause more, not less, conflict as it dislocates economies and adds to the sense of injustice that already inspires Islamist violence. More terrorism, Boer Wars, and state failures may be looming.

Nor will the disruptive effects of brain-to-brain interfacing be limited to the poor South. The rather modest amount of computerization that the world's wealthiest countries have seen since the 1980s has already increased their inequality. Over the medium to long term, merging through machines should make this kind of distinction meaningless, but if—as seems quite possible—a narrow elite of wealth and talent leads the way in brain-to-brain interfacing, in the shorter term the new technocrats might come to tower over everyone else in ways that today's 1 percent can only dream of.

There is a story, admittedly of doubtful veracity, that the novelist F. Scott Fitzgerald once announced at a party that “the rich are different from you and me”—only for Ernest Hemingway to come back with the immortal put-down “Yes, they have more money.” Now, however, Fitzgerald is about to get his revenge. Over the next few decades, a new kind of rich really will become different from the rest of us.

Just how different is every bit as disputed as anything else in the prediction business, but for my money you cannot beat the imaginative account by the nanotechnologist turned novelist (and adviser to the National Intelligence Council) Ramez Naam. In Nexus, the only work of fiction I have ever encountered that comes with an appendix on bioengineering, Naam tells us that the 2036 edition of The Oxford English Dictionary will include some unfamiliar words. One is “transhuman,” defined as “a human being whose capabilities have been enhanced such that they now exceed normal human maxima in one or more important dimensions.” Another is “posthuman,” meaning “a being which has been so radically transformed by technology that it has gone beyond transhuman status and can no longer be considered human at all.” Transhumans, according to Naam's OED, are “an incremental step in human evolution,” while posthumans are “the next major leap in human evolution.”

Naam's novel is set in 2040, and at that point, he suggests, rich countries will have not just plenty of transhumans but also the first few posthumans. He imagines growing conflicts. Idealistic, highly educated elite youths maneuver to give everyone the chance to tune in to posthumanity, turn on, and drop out; a conservative American globocop tries to control the technology and protect old ways of being human; and rising rivals—particularly China—try to exploit posthumans for strategic advantage. In the sequel, Crux, terrorists get in on the act too, using merged minds for political murder. The world edges toward war, and much blood is shed, by and from humans of all kinds.
