The End of Absence: Reclaiming What We've Lost in a World of Constant Connection
Author: Michael Harris
It’s only after the book is laid down, and I’ve quietly showered and shaved, that I realize I haven’t checked my e-mail today. The thought of that duty comes down on me like an anvil.
Instead, I lie back on the sofa and think some more about my favorite reader, Milton—about his own anxieties around reading. By the mid-1650s, he had suffered that larger removal from the crowds: he had lost his vision entirely and could not read at all—at least not with his own eyes. From within this new solitude, he worried that he could no longer meet his potential. One sonnet, written shortly after the loss of his vision, begins:
When I consider how my light is spent,
Ere half my days, in this dark world and wide,
And that one Talent which is death to hide
Lodged with me useless . . .
Yet from that position, in the greatest of caves, his talent did not waste away at all. Instead, he began producing his greatest work. The epic Paradise Lost, a totemic feat of concentration, was dictated to aides, including his three daughters.
Milton already knew, after all, the great value in removing himself from the rush of the world, so perhaps those anxieties around his blindness never had a hope of dominating his mind. I, on the other hand, and all my peers, must make a constant study of concentration itself. I slot my ragged War and Peace back on the shelf. It left its marks on me the same way I left my marks on it (I feel awake as a man dragged across barnacles on the bottom of some ocean). I think: This is where I was most alive, most happy. How did I go from loving that absence to being tortured by it? How can I learn to love that absence again?
Forgetting used to be a failing, a waste, a sign of senility. Now it takes effort. It may be as important as remembering.
—James Gleick, The Information
Henry Molaison was born in the winter of 1926 and grew up to be a well-groomed, dark-featured Connecticut man whose brain had a habit of flooding with masses of electrical activity. Since a bicycle accident at the age of seven, he had suffered from debilitating epilepsy and was visited by a steady string of tonic-clonic seizures (grand mal seizures). Each seizure may have begun with an “aura” phase, in which he would have experienced an intense foreboding and his senses would have felt out of whack. This would shortly be followed by a “tonic” phase, in which Molaison’s skeletal muscles would seize up and he would collapse to the ground and lose consciousness (here he may have begun moaning or screaming, too). This would then be followed by the “clonic” phase, in which his body was racked by convulsions as though electrocuted by an unseen tormentor: his eyes would roll back and his tongue would be lacerated as his jaw ground away senselessly. In Molaison’s extreme case, this would sometimes occur ten times a day or more. Twenty years of standard treatment produced no relief.
By the summer of 1953, Molaison was a twenty-seven-year-old man with no hope of an ordinary, or even almost ordinary, life. Desperate for a cure, he eventually consulted William Beecher Scoville, a neurosurgeon at Connecticut’s Hartford Hospital, who suggested that the patient’s hippocampus be surgically removed. And so, on August 25 of that year, Scoville’s plan was carried out, with two significant results: Molaison’s seizures did subside; and the patient also lost his ability to form new memories.
Stories in the newspaper, what he had for lunch, discussions with loved ones, all these pieces of ordinary information that quietly build for most of us an understanding of the world, our very idea of ourselves, fell immediately through the floor of Molaison’s memory and into an abyss. While memories formed in childhood remained intact, Molaison could hold nothing new in the banks of his perforated brain.
Shortly after the operation, a young experimental psychologist named Brenda Milner, then working at McGill University in Montreal, was invited to study the strangely altered patient. Milner had begun reporting on similar cases and was keen to oblige. The meeting would change the course of her life—but also that of the budding field of neuroscience. For the next thirty years, she would visit with Henry Molaison, taking him through a constant series of tests and examinations. At first, there seemed to be little to learn from these tests, since Molaison simply failed to remember anything. For that matter, he never remembered Brenda Milner. How strange for the researcher, to walk through the door at each meeting and introduce herself. To steadily build her portfolio on Molaison and come to think of him as a friend (as she did) while he accrued no records at all of their encounters.
Then one test, seemingly simple, changed everything. Milner brought Molaison to a table and placed before him a piece of paper on which was drawn a perfect star. She then blocked Molaison’s direct view of the paper but allowed him to see the star reflected in a mirror that was placed across the table. Handing him a pen, Milner asked Molaison to reach around the shield and draw the star over the original image, watching his progress in the mirror. The task is difficult for anyone since the mirror plays with our perceptions of right and left, forward and backward. Like anyone, Molaison did a poor job of it.
But Milner had Molaison practice. For three days she returned and had him complete the task thirty times over. And then she noticed an extraordinary change. Although Molaison retained no memory of the exercise (it remained as novel a thing as Milner herself), he nevertheless improved in his ability to trace the star. Finally, he completed the task without error and, looking up at Milner, said in his slow voice, “That’s surprising. I thought this was going to be difficult. But it looks as though I’ve done it quite well.” With remarkable simplicity, Milner had revealed that there is more than one system in the brain capable of producing new memories. Molaison may have lost the memory system of his hippocampus, which would have allowed him to remember completing the task the day before, but some other system was still creating muscle memory, motor memory.
After Milner’s work with Molaison, a dissociation was established, showing that our brains do not merely house all memories in a single stationary filing cabinet; nor is memory (as was often believed in the 1950s) a vague function of the whole brain. Human memory, we began to understand, is no simple storage device. It exists as a dynamic, a series of systems with information constantly moving between them. And changing.
• • • • •
The more I made room in my life for absence, for solitude, for silence, the more I had time for my memories. Like Henry Molaison, though, and like everyone else, I have a faulty memory, full of holes. I do not enjoy any high-definition recall of infanthood or any revelations about my adolescent years. Rather, the memories that visit me are clouded, obscure. My memories all look broken and untrustworthy. Besides, the really useful stuff—names and dates and facts and figures—is scattered.
I’m baffled by my brain’s ineptitude (and deeply impatient with it). The birthdays of co-workers, the career maneuverings of friends, may slip away, revealing me as the unthinking asshole I always thought I was. Some days I forget if I’ve eaten breakfast; I will literally need to check for a bowl in the sink to confirm whether I need to feed myself. (Yet I remember the flavor of ice cream my childhood buddy once threw a tantrum over—Tiger.)
I’ve come to think of my memory lapses in terms of a barren mental landscape. What I want is a city full of memories that I can walk through, full of details I can note and well-stocked shops I can peruse. What I have instead is a desert by Dalí, composed mainly of whistling empty space and the occasional melting clock.
I survive on digital cues—my phone is an able secretary, prompting me with reminders and calendars and notes about names of husbands and wives covertly inserted on “contact cards”—but the facade crumbles quickly around Kenny, who knows me better. “How can you ask whether I’ve seen that movie?” he said the other day. “We watched it together. Last week.” I will also regularly forget the names and occupations of friends he’s introduced me to multiple times. Kenny sees my lack of memory as a sign that I simply don’t care. And, indeed, it’s hard to distinguish between forgetting something and not caring about it. If I cared, surely I would remember.
Does off-loaded digital memory count as caring? I only “know” Kenny’s phone number in the sense that I know how to recall it on my phone. And much of my supposed knowledge exists in an equally abstracted state. I find myself living, much of the time, as a happy conduit for information rather than a receptacle. I don’t hold the information myself and am happy enough to let it reside in a digital state, where I can always get at it if I need to. As King Thamus foretold, I feel all-knowing, but I’m really only managing the illusion of knowledge. Meanwhile, the law of neuroplasticity tells me that each use of a technological memory aid leaves me less able to store information myself. The physicist Haim Harari has written on this diminishing role of factual information in human thinking and wonders what consequences it might have:
The Internet allows us to know fewer facts, since we can be sure they are always literally at our fingertips. . . . But we should not forget that often in the scientific discovery process the greatest challenges are to ask the right question rather than answer a well-posed question and to correlate facts that no one thought of connecting. The existence of many available facts somewhere in the infinite ocean of the Internet is no help in such an endeavor.
Others argue that future generations will learn to make new connections with facts that aren’t held in their heads, that dematerialized knowledge can still lead to innovation. As we inevitably off-load media content to the cloud—storing our books, our television programs, our videos of the trip to Taiwan, and photos of Grandma’s ninetieth birthday, all on a nameless server—can we happily dematerialize our mind’s stores, too?
Perhaps we should side with philosopher Lewis Mumford, who insisted in The Myth of the Machine that “information retrieving,” however expedient, is simply no substitute for the possession of knowledge accrued through personal and direct labor.
Author Clive Thompson wondered about this when he came across recent research suggesting that we remember fewer and fewer facts these days—of three thousand people polled by neuroscientist Ian Robertson, the young were less able to recall basic personal information (a full one-third, for example, didn’t know their own phone numbers). “I’ve almost given up making an effort to remember anything,” he admitted in the pages of Wired, “because I can instantly retrieve the information online.” Thompson even harnesses Wikipedia in real time, while speaking on the phone, and uses the information stored there to support his arguments. Admittedly I do the same; and I’ve noticed plenty of colleagues doing it, too. A person just feels smart when buttressing a phone conversation with Google-sourced knowledge and shooting it across the line as though it were innately understood. “My point,” says Thompson, “is that the cyborg future is here. Almost without noticing it, we’ve outsourced important peripheral brain functions to the silicon around us. And frankly, I kind of like it. I feel much smarter when I’m using the Internet as a mental plug-in during my daily chitchat. . . . You could argue that by off-loading data onto silicon, we free our own gray matter for more germanely ‘human’ tasks like brainstorming and daydreaming.”
Thompson is, of course, not the first to wonder whether he wouldn’t be a happier and freer person without the bother of storing mundane information in his head. In a certain sense, we have always enjoyed off-loaded memories—and I’m not talking just about history books here. Any long-term relationship (between co-workers or family or friends) involves such a memory system, with individuals each storing a portion of the group’s information. Members of the group then remain aware of where the memories they don’t personally hold are stored, giving them access to a larger pool of knowledge than they could ever hold themselves. This group memory is called “transactive memory.” It allows me to access wisdom from older co-workers or simply access backups of memories I ought to hold myself but have temporarily misplaced (“When’s Mom’s birthday?”). We constantly judge whether information will be available from an external source in the future, and if it will be, we are willing to forget.
A team of psychologists has reported in Science on the degree to which search engines have ramped up our dependence on transactive memory. When a group of volunteers was ushered through a number of memory tests, the researchers found that increased availability of online technologies led participants to recall where information was kept (the nodes of transactive memory systems) while relying less on the original information itself. The signpost becomes dominant; the fact that it points to drops away. “Where” is prioritized and “what” is forgotten.
“We are becoming symbiotic with our computer tools,” the report concludes, “growing into interconnected systems that remember less by knowing information than by knowing where the information can be found.” Importantly, this relationship is not a new “problem,” but an extension of that original tendency to off-load and network memory.
It may be no more than nostalgia at this point, however, to wish we were less dependent on our gadgets. We have become dependent on them to the same degree we are dependent on all the knowledge we gain from our friends and co-workers. . . . The experience of losing our Internet connection becomes more and more like losing a friend. We must remain plugged in to know what Google knows.
As long as transactive memory systems are loose and cumbersome—when it’s a bit of a hassle to access the memory inside another person, say—we remain beholden to the memories in our own heads. But increased ease of access (the Google-ization of memory searching) leads to a commensurate increase in transactive memory. Absolute searchability allows absolute amnesia.
• • • • •
What is the value of possessing a memory, after all, as long as the information itself is always at hand? Seneca raised that question two millennia ago when he told the story of a rich man named Calvisius Sabinus. Poor Sabinus (like poor me) was unable to recall even the most famous figures in history—Ulysses, Achilles, Priam—and, like Clive Thompson over at Wired, he was ready to use “a mental plug-in.” Google was a ways off, so he had to improvise an alternative.
Since what Sabinus wanted was to appear learned among the wealthy and cultured men he socialized with, his priority was access to knowledge rather than personal possession of facts. So he purchased slaves (at the cost of one hundred thousand sesterces apiece) to hold the information for him. One slave memorized the work of Homer; another knew all of Hesiod; nine more slaves were assigned to the lyric poets. Sabinus had constructed for himself a ramshackle, flesh-and-blood search engine, allowing him to access poetry he could not remember himself and then repeat it to his guests as though it had sprung from his own memory: