Brain Bugs

Author: Dean Buonomano

But today we live in a world that the first Homo sapiens would not recognize. As a species, we traveled through time from a world without names and numbers to one largely based on names and numbers; from one in which obtaining food was of foremost concern to one in which too much food is a common cause of potentially fatal health problems; from a time in which supernatural beliefs were the only way to “explain” the unknown to one in which the world can largely be explained through science. Yet we are still running essentially the same neural operating system. Although we currently inhabit a time and place we were not programmed to live in, the set of instructions written down in our DNA on how to build a brain is the same as it was 100,000 years ago. Which raises the question: to what extent is the neural operating system established by evolution well tuned for the digital, predator-free, sugar-abundant, special-effects-filled, antibiotic-laden, media-saturated, densely populated world we have managed to build for ourselves?

 

As we will see over the next chapters, our brain bugs range from the innocuous to those that have dramatic effects on our lives. The associative architecture of the brain contributes to false memories, and to the ease with which politicians and companies manipulate our behavior and beliefs. Our feeble numerical skills and distorted sense of time contribute to our propensity to make ill-advised personal financial decisions, and to poor health and environmental policies. Our innate propensity to fear those different from us clouds our judgment and influences not only who we vote for but whether we go to war. Our seemingly inherent predisposition to engage in supernatural beliefs often overrides the more rationally inclined parts of the brain, sometimes with tragic results.

In some instances these bugs are self-evident; in most cases, however, the brain does not flaunt its flaws. Like a parent that carefully filters the information her child is exposed to, the brain edits and censors much of the the information it feeds to the conscious mind. In the same fashion that your brain likely edited out the extra “the” from the previous sentence, we are generally blissfully unaware of the arbitrary and irrational factors that govern our decisions and behaviors. By exposing the brain’s flaws we are better able to exploit our natural strengths and to recognize our failings so we can focus on how to best remedy them. Exploring our cognitive limitations and mental blind spots is also simply part of our quest for self-knowledge. For, in the words of the great Spanish neuroscientist Santiago Ramón y Cajal, “As long as the brain is a mystery, the universe—the reflection of the structure of the brain—will also be a mystery.”

1
The Memory Web

I’ve been in Canada, opening for Miles Davis. I mean…Kilometers Davis. I’ve paraphrased this joke from the comedian Zach Galifianakis. Getting it is greatly facilitated by making two associations, kilometers/miles and Canada/kilometers. One might unconsciously or consciously recall that, unlike the United States, Canada uses the metric system, hence the substitution of “kilometers” for “miles,” or, in this case, “Miles.” One of the many elusive ingredients of humor is the use of segues and associations that make sense, but are unexpected.[1]

Another rule of thumb in the world of comedy is the return to a recent theme. Late-night TV show hosts and stand-up comedians often joke about a topic or person, and a few minutes later refer back to that topic or person, in a different, unexpected context to humorous effect. The same reference, however, would be entirely unfunny if it had not just been touched upon.

But what does humor tell us about how the brain works? It reveals two fundamental points about human memory and cognition, both of which can also be demonstrated unhumorously in the following manner:

Answer the first two questions below out loud, and then blurt out the first thing that pops into your mind in response to sentence 3:

1. What continent is Kenya in?

2. What are the two opposing colors in the game of chess?

3. Name any animal.

Roughly 20 percent of people answer “zebra” to sentence 3, and about 50 percent respond with an animal from Africa.[2] But, when asked to name an animal out of the blue, less than 1 percent of people will answer “zebra.” In other words, by directing your attention to Africa and the colors black and white, it is possible to manipulate your answer. As with comedy routines, this example offers two crucial insights about memory and the human mind that will be recurring themes in this book. First, knowledge is stored in an associative manner: related concepts (zebra/Africa, kilometers/miles) are linked to each other. Second, thinking of one concept somehow “spreads” to other related concepts, making them more likely to be recalled. Together, these two facts explain why thinking of Africa makes it more likely that “zebra” will pop into mind if you are next asked to think of any animal. This unconscious and automatic phenomenon is known as priming. And as one psychologist has put it, “priming affects everything we do from the time we wake up until the time we go back to sleep; even then it may affect our dreams.”[3]

Before we go on to blame the associative nature of memory for our propensity to confuse related concepts and make decisions that are subject to capricious and irrational influences, let’s explore what memories are made of.

SEMANTIC MEMORY

Until the mid-twentieth century, memory was often studied as if it were a single unitary phenomenon. We know now that there are two broad types of memory. Knowledge of an address, telephone number, and the capital of India are examples of what is known as declarative or explicit memory. As the name implies, declarative memories are accessible to conscious recollection and verbal description: if someone does not know the capital of India we can tell him that it is New Delhi. By contrast, attempts to tell someone how to ride a bike, recognize a face, or juggle flaming torches are not unlike trying to explain calculus to a cat. Riding a bike, recognizing faces, and juggling are examples of nondeclarative or implicit memories.

The existence of these two independent memory systems within our brains can be appreciated by introspection. For example, I have memorized my phone number and can easily pass it along to someone by saying the sequence of digits. The PIN of my bank account is also a sequence of digits, but because I do not generally give this number out and mostly use it by typing it on a number pad, I have been known to “forget” the actual number on the rare occasions I do need to write it down. Yet I still know it, as I am able to type it into the keypad—indeed, I can pretend to type it and figure out the number. The phone number is stored explicitly in declarative memory; the “forgotten” PIN is stored implicitly as a motor pattern in nondeclarative memory.

You may have trouble answering the question, What key is to the left of the letter E on your computer keyboard? Assuming you know how to type, your brain knows very well which keys are beside each other, but it may not be inclined to tell you. But if you mimic the movements while you pretend to type wobble, you can probably figure it out. The layout of the keyboard is stored in nondeclarative memory, unless you have explicitly memorized the arrangement of the keys, in which case it is also stored in declarative memory. Both declarative and nondeclarative forms of memory are divided into further subtypes, but I will focus primarily on a type of declarative memory, termed semantic memory, used to store most of our knowledge of meaning and facts, including that zebras live in Africa, that Bacchus is the god of wine, and that if your host offers you Rocky Mountain oysters he is handing you bull testicles.

How exactly is this type of information stored in your brain? Few questions are more profound. Anyone who has witnessed the slow and inexorable vaporization of the very soul of someone with Alzheimer’s disease appreciates that the essence of our character and memories are inextricably connected. For this reason the question of how memories are stored in the brain is one of the holy grails of neuroscience. Once again, I draw upon our knowledge of computers for comparison.

Memory requires a storage mechanism, some sort of modification of a physical medium, such as punching holes in old-fashioned computer cards, burning a microscopic dot in a DVD, or charging or discharging transistors in a flash drive. And there must be a code: a convention that determines how the physical changes in the medium are translated into something meaningful, and later retrieved and used. A phone number jotted down on a Post-it represents a type of memory; the ink absorbed by the paper is the storage mechanism, and the Arabic numerals used to write the digits are the code. To someone unfamiliar with Arabic numerals (the code), the stored memory will be as meaningless as a child’s scribbles. In the case of a DVD, information is stored as a long sequence of zeros and ones, corresponding to the presence or absence of a “hole” burned into the DVD’s reflective surface. The presence or absence of these holes, though, tells us nothing about the code: does the string encode family pictures, music, or the passwords of Swiss bank accounts? We need to know whether the files are in jpeg, mp3, or text format. Indeed, the logic behind encrypted files is that the sequence of zeros and ones is altered according to some rule, and if you do not know the algorithm to unshuffle it, the physical memory is worthless.
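To make the distinction between storage mechanism and code concrete, here is a minimal sketch in Python (my own illustration, not from the book, with invented byte values): the same stored bytes read as text under one code, as numbers under another, and as nothing but bits under none.

```python
import struct

# The physical memory: a fixed string of bytes (think of them as the burned
# and un-burned spots on a DVD). The values are invented for illustration.
stored = bytes([72, 105, 33, 0, 200, 17, 64, 3])

# Code 1: read the first three bytes as ASCII text.
as_text = stored[:3].decode("ascii")        # 'Hi!'

# Code 2: read the very same eight bytes as two 32-bit unsigned integers.
as_numbers = struct.unpack("<II", stored)   # two arbitrary-looking numbers

# No code at all: the most we can say is which bits are set.
as_bits = " ".join(f"{b:08b}" for b in stored)

print(as_text)
print(as_numbers)
print(as_bits)
```

The bytes never change; only the convention for reading them does, which is why the physical record alone tells us nothing about what is remembered.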

The importance of understanding both the storage mechanisms and the code is well illustrated in another famous information storage system: genes. When Watson and Crick elucidated the structure of DNA in 1953, they established how information, represented by sequences of four nucleotides (symbolized by the letters A, C, G, and T), was stored at the molecular level. But they did not break the genetic code; understanding the structure of DNA did not reveal what all those letters meant. This question was answered in the sixties when the genetic code that translated sequences of nucleotides into proteins was cracked.

To understand human memory we need to determine the changes that take place in the brain’s memory media when memories are stored, and work out the code used to write down information. Although we do not have a full understanding of either of these things, we do know enough to make a sketch.

ASSOCIATIVE ARCHITECTURE

The human brain stores factual knowledge about the world in a relational manner. That is, an item is stored in relation to other items, and its meaning is derived from the items to which it is associated.[4] In a way, this relational structure is mirrored in the World Wide Web. As with many complex systems, we can think of the World Wide Web as a network of many nodes (Web pages or Web sites), each of which interacts (links) in some way with a subset of others.[5] Which nodes are linked to each other is far from random. A Web site about soccer will have links to other related Web sites, teams around the world, recent scores, and other sports, and it is pretty unlikely to have links to pages about origami or hydroponics. The pattern of links among Web sites carries a lot of information. For example, two random Web sites that link to many of the same sites are much more likely to be on the same topic than two sites that do not share any links. So Web sites could be organized according to how many links they share. This same principle is also evident in social networks. For instance, on Facebook, people (the nodes) from the same city or who attended the same school are more likely to be friends (the links) with each other than people from different geographic areas or different schools. In other words, without reading a single word of Mary’s Facebook page, you can learn a lot about her by looking at her list of friends. Whether it is the World Wide Web or Facebook, an enormous amount of information about any given node is contained in the list of links to and from that node.
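The idea that the pattern of links alone carries information can be made concrete with a small sketch (my own, with made-up sites and links): two nodes are scored as similar simply by how many links they share, without reading a word of their content.

```python
# Hypothetical outgoing links for four sites (invented for illustration).
links = {
    "soccer-news":   {"fifa", "premier-league", "espn", "scores"},
    "football-blog": {"fifa", "premier-league", "scores", "tactics"},
    "origami-club":  {"paper-art", "folding-diagrams", "crafts"},
    "hydroponics":   {"nutrients", "grow-lights", "crafts"},
}

def link_similarity(a, b):
    """Fraction of links shared by two sites (Jaccard overlap of their link sets)."""
    shared = links[a] & links[b]
    combined = links[a] | links[b]
    return len(shared) / len(combined)

print(link_similarity("soccer-news", "football-blog"))  # 0.6: probably the same topic
print(link_similarity("soccer-news", "origami-club"))   # 0.0: probably unrelated
```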

We can explore, to a modest degree, the structure of our own memory web by free-associating. When I free-associate with the word zebra, my brain returns animal, black and white, stripes, Africa, and lion food. Like clicking on the links of a Web page, by free-associating I am essentially reading out the links my brain has established between zebra and other concepts. Psychologists have attempted to map out what concepts are typically associated with each other; one such endeavor gave thousands of words to thousands of subjects and developed a huge free-association database.[6]
The result can be thought of as a complex web composed of over 10,000 nodes. Figure 1.1 displays a tiny subset of this semantic network. A number captures the association strength between pairs of words, going from 0 (no link) to 100 percent; these strengths are represented by the thickness of the lines. When given the word brain, 4 percent of the people responded with mind, a weaker association strength than brain/head, which was an impressive 28 percent. In the diagram there is no direct link between brain and bug (nobody thought of bug when presented with brain). Nevertheless, two possible indirect pathways that would allow one to “travel” from brain to bug (as in an insect) are shown. While the network shown was obtained from thousands of people, each person has his or her own semantic network that reflects unique individual experiences. So although there are only indirect connections between brain and bug in the brains of virtually everyone on the planet, it is possible that these nodes may have become strongly linked in my brain because of the association I now have between them (among the words that pop into my mind when I free-associate starting from brain are complex, neuron, mind, and bug).

Figure 1.1 Semantic network: The lines fanning out from a word (the cue) connect to the words (the targets) most commonly associated with it. The thickness of a line between a cue and a target is proportional to the number of people who thought of the target in response to the given cue. The diagram started with the cue brain, and shows two pathways to the target bug. (Diagram based on the University of South Florida Free Association Norms database [Nelson et al., 1998].)
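Figure 1.1 can be thought of as a small weighted graph, and the “travel” from brain to bug as a path search through it. The sketch below is my own, not the book’s: the only association strengths taken from the text are brain/head (28 percent) and brain/mind (4 percent); every other node, link, and strength is invented purely to complete one indirect route.

```python
from collections import deque

# Association strengths (percent of people giving the target for the cue).
# brain->head (28) and brain->mind (4) come from the text; the rest are made up.
network = {
    "brain":    {"head": 28, "mind": 4, "smart": 10},
    "head":     {"ache": 15, "lice": 3},
    "lice":     {"bug": 20},
    "smart":    {"computer": 8},
    "computer": {"bug": 5},
}

def find_path(cue, target):
    """Breadth-first search for a chain of associations from cue to target.
    The toy graph above has no cycles, so no visited-set is needed."""
    queue = deque([[cue]])
    while queue:
        path = queue.popleft()
        for nxt in network.get(path[-1], {}):
            if nxt == target:
                return path + [nxt]
            queue.append(path + [nxt])
    return None

print(find_path("brain", "bug"))   # ['brain', 'head', 'lice', 'bug']
```

Priming, in this picture, amounts to activity leaking along such links: recently activated nodes (Africa, black and white) leave their neighbors (zebra) easier to reach.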

Nodes and links are convenient abstract concepts to describe the structure of human semantic memory. But the brain is made of neurons and synapses (Figure 1.2), so we need to be more explicit about what nodes and links correspond to in reality. Neurons are the computational units of the brain—the specialized cells that at any point in time can be thought of as being “on” or “off.” When a neuron is “on,” it is firing an action potential (which corresponds to a rapid increase in the voltage of a neuron that lasts a millisecond or so) and is in the process of communicating with other neurons (or muscles). When a neuron is “off,” it may be listening to what other neurons are saying, but it is mute. Neurons talk to each other through their synapses—the contacts between them. Through synapses, a single neuron can encourage others to “speak up” and generate their own action potentials. Some neurons receive synapses from more than 10,000 other neurons, and in turn send signals to thousands of other neurons. If you want to build a computational device in which information is stored in a relational fashion, you want to build it with neurons.
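As a toy picture of what a link between nodes might mean at the level of cells, here is a minimal sketch (my own simplification, not a biological model; the weights and threshold are invented): each neuron is either firing (1) or silent (0), and a neuron fires when the summed input from the neurons that synapse onto it crosses a threshold.

```python
# Synaptic weights onto one neuron from three input neurons (invented values).
weights = {"input_a": 0.6, "input_b": 0.3, "input_c": 0.2}
THRESHOLD = 0.5

def fires(active_inputs):
    """Return True if the summed drive from the currently active inputs reaches threshold."""
    drive = sum(weights[name] for name in active_inputs)
    return drive >= THRESHOLD

print(fires({"input_b"}))             # False: 0.3 alone is below threshold
print(fires({"input_b", "input_c"}))  # True: together they reach 0.5
print(fires({"input_a"}))             # True: one strong synapse suffices
```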
