The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection
Michael Harris
• • • • •
I often feel as though I’m living through a moment of authenticity wobble. Depending on the person I’m talking to—a youth or a senior citizen—my sense of what’s authentic, what’s real, flips back and forth. My own perception of the authentic is caught in the sloshing middle. Perhaps that means I’m less authentic than those who came before and those who came after. I dispute both claims. For my peers and me, this confusion is all around us, an ambient fog, though we don’t often name it. We look up symptoms on Mayoclinic.org but indulge in “natural” medicine; we refuse to obey any church’s laws, yet we want to hold on to some idea of spirituality and read our Eckhart Tolle; we hunch plaintively over our cell phones for much of the year and then barrel into the desert for a week of bacchanalian ecstasy at the Burning Man festival. One friend of mine, who is more addicted to his phone than most, once a month visits a secret traveling sauna located inside a specially outfitted van. He gets naked with a bunch of like-minded men and women and chats about life inside the van’s superheated cabin; then he tugs his clothes back on and reenters his digital life. I’m at the point where I won’t call one experience authentic and the other inauthentic. We are learning to embrace both worlds as real, learning to accept the origin and aura of things that rain down mysteriously from the clouds.
A prime example is the Google Books project, which has already scanned tens of millions of titles with the ultimate goal of democratizing human knowledge at an unprecedented scale—the new technology needs the old one (briefly) for content; the old one needs the new (forever) to be seen by a larger audience. Screen resolution and printout resolution are now high enough that digital versions satisfy researchers, who no longer need to look at original manuscripts (unless they’re hungry for first-person anecdotes). A real Latin copy of Copernicus’s De revolutionibus, for example, waits for us in the stacks of Jagiellonian University in Kraków; but it, like a fourth-century version of Virgil’s work or a twelfth-century version of Euclid’s, is handily available in your living room (sans airfare). It’s thrilling: our sudden and undreamed-of access to magazine spreads as they appeared in the pages of Popular Science in the 1920s, or copies of Boccaccio’s Decameron as they appeared in the nineteenth century. The old, white-gloved sacredness of the manuscript is rendered moot in the face of such accessibility. Literary critic Stephan Füssel has argued that this means the “precious old book and the new medium have thus formed an impressive, invaluable symbiosis.” I would only add: For now.
One authenticity must eventually replace the other. But first, this wobble, this sense of two authenticities overlaid, a kind of bargaining.
When Gutenberg published his Bible, he took great pains to please his readers’ sense of the authentic by matching his printed work to earlier scribal versions. John Man describes in The Gutenberg Revolution an intense labor, geared toward creating a kind of überversion of a handmade Bible rather than something entirely new. Authenticity, or an entrenched idea of authenticity, was key: Three punch cutters worked for four months to create all the punches that would do the printing, painstakingly copying them from a handwritten Bible to replicate the texture of a human touch. (His Bible’s 1,282 pages also incorporated accents that scribes had used to indicate short forms of words.) Although paper was readily available—and he did print his Bible on paper—he also imported five thousand calfskins in order to print around thirty “authentic” vellum copies. Gutenberg’s Bible, a manufactured masterpiece, claimed traditional authenticity even as it began to rub out that which came before. Yet first came that fascinating moment of flux: In the late fifteenth century, scribal culture and print culture were coexisting, with handwritten manuscripts being copied from printed books just as printed books were copied from scribal ones. The old “authentic” artifact and the new “fake” artifact—for a moment in time—informed each other.
• • • • •
When we step away from earlier, “more authentic” relations, it makes sense that we also fetishize the earlier “real.” Sherry Turkle argues that, in fact, our culture of electronic simulation has so enamored us that the very idea of authenticity is “for us what sex was for the Victorians—threat and obsession, taboo and fascination.” (One can imagine future citizens sneaking into underground clubs where they “actually touch each other.”) When I walk through the chic neighborhoods of London or Montreal—when I look through the shops that young, moneyed folk are obsessed by—it is this notion of ironic “authenticity,” the refolking of life, that seems always to be on offer. A highly marketable Mumford & Sons–ization. Young men buy “old-fashioned” jars of mustache wax, and young women buy 1950s-style summer dresses. At bars, the “old-fashioned” is one of the most popular cocktails, and hipster youths flock to the Ace hotel chain, where iPhone-toting customers are granted access to record players and delightfully archaic photo booths.
The fascination with the authentic tin of biscuits or vintage baseball cap remains, of course, the exception that proves the rule. The drive toward the inauthentic still propels the majority of our lives. When aren’t we caught up in a simulacrum? Millions of us present fantasy versions of ourselves—skinnier, richer avatars—in the virtual world of Second Life (while our First Life bodies waste away in plush easy chairs). Some even watch live feeds of other people playing video games on Twitch.tv (hundreds of thousands will watch a single person play Grand Theft Auto and send cash donations to their favorite players). Meanwhile, in Japan, a robotic seal called Paro offers comfort to the abandoned residents of nursing homes; and the photo- and video-sharing site Instagram is less interested in recording reality and more interested in pouring it through sepia filters. The coup de grâce: Advances in the field of teledildonics promise us virtual sex with absentee partners. All in all, it seems the safety of our abstracted, cyborg lives is far more pleasing than the haptic symphony of raw reality. Digital life is a place where we can maintain confident—if technically less authentic—versions of ourselves.
It’s also a perfect place to shirk certain larger goals. The psychologist Geoffrey Miller, pondering why we haven’t yet come across any alien species, decided that they were probably all addicted to video games and had thus been brought to an extreme state of apathy—the exploratory opposite of the heroes in Star Trek, who spend all their time seeking out “new life and new civilizations.” The aliens “forget to send radio signals or colonize space,” he wrote in Seed magazine, because they’re too busy
with runaway consumerism and virtual-reality narcissism. They don’t need Sentinels to enslave them in a Matrix; they do it to themselves, just as we are doing today. . . . They become like a self-stimulating rat, pressing a bar to deliver electricity to its brain’s ventral tegmental area, which stimulates its nucleus accumbens to release dopamine, which feels . . . ever so good.
Wouldn’t it make sense to shunt authentic tasks like child rearing, space exploration, or the protection of the environment to one side while pursuing augmented variations on the same theme?
• • • • •
Our devotion to the new authenticity of digital experience—the realness of the patently incorporeal—becomes painfully apparent in moments of technological failure. Wi-Fi dies at a café and a fleet of bloggers will choke as though the oxygen level just dropped.
Mostly these strangulations are brief enough that they don’t cut us off in any significant way from our new reality. The realness of our digital lives is firm. The breach was just a hiccup. But how invincible, really, is our new reality, our gossamer web?
In 1909, E. M. Forster published a smart little story called “The Machine Stops,” in which the web does drop away. In Forster’s vision of the future, humans live below the surface of the earth, happily isolated in hexagonal rooms like bees in a massive hive. They each know thousands of people but are disgusted by the thought of physical interaction (shades of social media). People communicate through “plates” (they Skype, essentially), and all human connection is conducted through the technological grace of what’s simply called the Machine, a massive networked piece of technology that supplies each person with pacifying entertainment and engaging electronic connections with other people. The Machine does not transmit “nuances of expression,” but gives “a general idea of people” that’s “good enough for all practical purposes.” When “speaking-tubes” deliver too many messages (e-mail), people can turn on an isolation mode, but they’re then flooded by anxious messages the moment they return. Year by year, humans become more enamored of the Machine, eventually developing a pseudoreligion around it in what Forster terms a “delirium of acquiescence.”
Humans are warned off of authentic experience. “First-hand ideas do not really exist,” one advanced thinker proclaims. “They are but the physical impressions produced by love and fear, and on this gross foundation who could erect a philosophy? Let your ideas be second-hand, and if possible tenth-hand, for then they will be far removed from that disturbing element—direct observation.” Inevitably, though, the virtuosic Machine begins to fall apart, and with it the very walls of their micromanaged underground society.
Author Jaron Lanier recalls Forster’s story as a message of hope, a fantasy where mankind casts off its shackles (or has those shackles forced off, anyway). “At the end of the story . . . ,” Lanier recounts, “survivors straggle outside to revel in the authenticity of reality. ‘The Sun!’ they cry, amazed at luminous depths of beauty that could not have been imagined.”
But in fact Lanier is misremembering here. The underground citizens of Forster’s story do not climb out from the Machine’s clutches and discover the sun. The air above is toxic to them, and when the Machine dies, Forster’s heroes are buried alive, catching a glimpse of “the untainted sky” only as rubble crashes down and kills them. There’s no revelation; it’s a cold, dark death for all of them. The final words spoken in the story are not the euphoric ones remembered by Lanier. The last words anyone speaks are, “Humanity has learned its lesson.” Forster is describing a reverse Gutenberg moment. An undoing of the future.
Our own Machine has been similarly threatened before, though we were far less reliant on communication technologies then. On September 1, 1859, a storm on the surface of our usually benevolent sun released an enormous megaflare, a particle stream that hurtled our way at four million miles per hour. The Carrington Event (named for Richard Carrington, who saw the flare first) cast green and copper curtains of aurora borealis as far south as Cuba. By one report, the aurorae lit up so brightly in the Rocky Mountains that miners were woken from their sleep and, at one a.m., believed it was morning. The effect must have been gorgeous, to be sure. But this single whip from the sun had devastating effects on the planet’s fledgling electrical systems. Some telegraph stations burst into flame.
Pete Riley, a scientist at Predictive Science in San Diego, published an article in Space Weather in 2012 stating that our chances of experiencing such a storm in the next decade are about 12 percent. That’s a one in eight chance of a massive digital dismantling. If it doesn’t happen soon, it’ll happen eventually. Great Britain’s Royal Academy of Engineering has pegged the chance of a Carrington-type event within the next two centuries at about 95 percent.
Such an event almost took place in the summer of 2012, actually, and involved a particle stream larger than we imagine the original Carrington Event to have been. But it just missed the earth, shooting harmlessly over our heads (over the top of a STEREO spacecraft, in fact). When we are hit, at any rate, we won’t be able to save ourselves with some missile defense system meant for meteors; no missile could halt the wraithlike progress of a megaflare.
What will happen, exactly? Electricity grids will fail; some satellites will break down; aircraft passengers will be exposed to cancer-causing radiation; electronic equipment will malfunction; for a few days, global navigation satellite systems will be inoperable; cellular and emergency communication networks may fail; the earth’s atmosphere will expand, creating a drag on satellites in low earth orbit; satellite communication and high-frequency communication (used by long-distance aircraft) will probably not work for days.
I daydream about a latter-day Carrington Event weirdly often, actually. (It’s pleasant to have something truly morbid to fix on while sitting on a subway, and if Milton isn’t doing the trick, then I switch to other celestial damnations.) Joseph Weizenbaum, the creator of ELIZA whom we met in chapter 3, was able to notice even in the mid-1970s how computers had become as essential to human life as our most basic tools: If extracted from us cyborgs, “much of the modern industrialized and militarized world would be thrown into great confusion and possibly utter chaos.” I imagine our transportation and communication systems crashing to a halt, our banks and governments freezing or, worse, misfiring commands. I imagine our refrigeration systems failing and, with them, all our stores of perishable food. Entire power grids blinking off. GPS systems becoming fuzzy to the point of fouling precise military actions. A team of scientists from Atmospheric and Environmental Research estimated that such an event would cost the United States alone up to $2.6 trillion in damage and would take as long as a decade to recover from.