Author: Dean Buonomano
As powerful and elegant as it is, the associative architecture of the brain, together with priming, is responsible for many of our brain bugs, ranging from our propensity to confuse names and mix up related concepts to the framing and anchoring effects, as well as our susceptibility to marketing and the fact that irrelevant events can influence our behavior. Our neural circuits link not only related concepts but also the emotions and body states associated with each concept. Consequently, mere exposure to words can contaminate our behavior and emotions. The phenomenon of semantic priming reveals that if a pair of words is flashed in quick succession on a computer screen, we react more quickly if the words are related to each other. So we recognize the word "calm" faster if it is preceded by "patience" than if it is preceded by "python." But the brain is not a computationally compartmentalized device. Neural activity produced by the word "patience" can leak into other areas of the brain and actually influence how long people wait before interrupting an ongoing conversation.
It seems that at some level everything in the brain is connected to everything else: every thought, emotion, and action seems capable of influencing every other. Studies have reported many subtle examples of this crosstalk. One study, for instance, showed that people's estimate of the value of a foreign currency is higher when they are holding a heavy as opposed to a light clipboard, as if the weight of the clipboard carried over to the weight of the currency.[10] Another study reported that when asked to think about the future, people unconsciously leaned slightly forward.[11]
And who has not noticed that they tend to buy less food during their weekly trip to the supermarket when shopping on a full stomach? When your "stomach" (more accurately, your hypothalamus) is in a satiated state, estimates of how much food is needed for the week are downgraded. This interplay between body and cognition has been termed "embodied cognition." Some take it to reflect a special bond between body and mind, but it may simply be the inevitable consequence of the fact that neurons that are activated together come to be directly or indirectly connected. For instance, in most cultures time is envisioned as moving forward, thus the future lies ahead.[12] The concept of "forward," in turn, must be linked to motor circuits capable of triggering frontward movements; if this were not the case, how would we automatically understand the command to move forward? So it comes to be that activity of neurons representing the future spreads to neurons representing the concept of "forward," which in turn nudge the motor circuits responsible for forward movement.
The decisions we make generally come down to one group of neurons being activated over others. If the waiter asks you if you want a piece of cheesecake, your decision engages the motor neurons responsible for uttering “no, thanks” or “yes, please.” A single neuron, in turn, “decides” whether it will fire by integrating the whispers and screams of its presynaptic associates. Some of its partners are encouraging it to fire, while others attempt to dissuade it from doing so. But either way a neuron can’t help but listen to, and be influenced by, even if only slightly, what every partner is saying. The upside is that our neural circuits automatically take context into account: we immediately grasp the different implications of the sentence “you have a bad valve” depending on whether the person saying it is wearing a white coat or a greasy blue uniform. The downside is that our neural circuits automatically take context into account: it seems inevitable that a medical procedure that has a 95 percent survival rate sounds like a better alternative than one that has a 5 percent mortality rate.
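The tallying described above—each presynaptic partner pushing a neuron toward or away from firing—can be sketched with the standard textbook abstraction of a threshold neuron summing weighted inputs. This is an illustrative toy model, not the author's own formulation; the weights and threshold values below are made up for the example:

```python
def neuron_decides(inputs, weights, threshold=1.0):
    """Return True (fire) if the weighted sum of presynaptic input
    crosses the firing threshold.

    Positive weights are excitatory partners ("encouraging it to fire"),
    negative weights are inhibitory partners ("attempting to dissuade it").
    Every active partner contributes, even if only slightly.
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold


# Four presynaptic partners, all active; three excite, one inhibits.
active = [1, 1, 1, 1]

# Weak inhibition: excitation wins and the neuron fires.
print(neuron_decides(active, [0.5, 0.5, 0.4, -0.3]))  # 1.1 >= 1.0 -> True

# A stronger inhibitory "scream" tips the balance the other way.
print(neuron_decides(active, [0.5, 0.5, 0.4, -0.8]))  # 0.6 <  1.0 -> False
```

Note that the decision is never made by any single input: nudging any one weight up or down shifts the total, which is why even a small, irrelevant input can, at the margin, change the outcome.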
DEBUGGING
It has been my goal to describe some of what is known about how the brain works in the context of its flaws, much as one might study the human body from the perspective of its diseases. But what of the cure? Can we debug the brain’s bugs?
The brain's shortcomings as a computational device have never been a secret; we have long learned to work around its many idiosyncrasies and limitations. From tying a string around a finger as a reminder to cars that beep at us if we leave the lights on, we have developed a multitude of strategies to cope with the limitations of our memory. Students use mnemonics to create meaningful sentences that better tap into the brain's natural strengths to remember lists ("my very easy method just seems useless now" to remember the planets of the solar system, for example). Phones and computers store the myriad telephone and account numbers for us; electronic calendars remind us when we should be where; and programs remind us if we forget to attach the attachments in our email correspondence.
We go to therapy to debug our phobia-generating fear circuits. And people engage in many behaviors, ranging from avoiding specific situations to seeking medical help, to curtail their smoking, drinking, or spending habits.
While useful, these relatively easy-to-implement workarounds do not address the greater societal problems that arise from the brain's inherent biases. To tackle the large-scale consequences of our brain bugs, it has been suggested that laws and regulations be designed to take our propensity to make irrational decisions into account. This approach has been termed "asymmetric paternalism" by the economist George Loewenstein and his colleagues, and "libertarian paternalism," or, more intuitively, "choice architecture," by Richard Thaler and Cass Sunstein.[13] The tenet of this philosophy is that regulations should nudge us toward the decisions that are in our own and society's best interests, yet should not restrict the choices available to us or our freedom. This philosophy is exemplified by the law that took effect in 1966 requiring cigarette packages to carry health warnings. The government did not make cigarettes illegal, which would have curtailed our freedom, but it took steps to ensure that the health dangers of smoking were universally known.
We are faced with a multitude of decisions as we navigate modern life. How much should you insure your house for? How much money should you contribute to your retirement plan? Given the complexity and inevitable subjectivity of such assessments, many people choose whatever happens to be the default option, which is often assumed to be the best choice and has the benefit of not requiring any additional effort or thought. The pull of the default option, or what is called the "default bias," is observed when new employees choose whether to contribute to a defined-contribution retirement plan. Because of our present bias, people often do not save enough for retirement, a problem that is compounded if the default option is nonenrollment. Choice architecture asserts that the default option should be automatic enrollment, to counterbalance the omnipresent short-term gratification bias. Studies confirm that automatic enrollment significantly enhances retirement savings.[14]
Other examples of choice architecture can be as simple as placing healthy foods in the more visible and accessible locations of a cafeteria. To address the need for organ donors, state laws could change the default option to "presumed consent," while allowing anyone who does not wish to donate to opt out.[15]
These examples remind us that our brain bugs are a two-way street: if arbitrary factors such as the way questions or information are framed can lead us astray, framing can also be used to put us back on track. Similarly, we have seen that one brain bug contributing to our susceptibility to advertising is that, as social animals, we rely heavily on imitation and cultural transmission. Imitation is hardwired into our brains, and we are keenly attuned to what the people around us think and do. So it is perhaps not surprising that studies suggest that one of the most important determinants of people's behavior is what they believe others are doing, and that this fact can be used to nudge people toward prosocial behaviors.
The psychologist Robert Cialdini performed a simple experiment demonstrating the effectiveness of peer influence on whether people resisted temptation and obeyed the regulations posted at Petrified Forest National Park in Arizona. The park loses a significant amount of petrified wood every year as a result of visitors' taking petrified "mementos" during their visits. To curtail the problem, signs have long been posted to discourage the practice. But Cialdini and his colleagues wondered whether how the information was framed altered the effectiveness of the signs. So they placed one of two signs in different high-theft areas. One sign stated, "Many past visitors have removed petrified wood from the Park, changing the natural state of the Petrified Forest," and showed an image of three people picking up fossils. The other sign stated, "Please don't remove the petrified wood from the Park, in order to preserve the natural state of the Petrified Forest," along with a picture of a single person picking up a fossil. They also baited the paths with marked pieces of petrified wood to quantify the effectiveness of the different signs. The results showed that on the paths with the first sign, 8 percent of the marked fossils disappeared; in the locations with the second sign, this number was only 2 percent.[16]
It appears that in attempting to discourage people from taking the fossils, the first sign actually encouraged theft by giving people the impression that taking them was the norm. It's one thing to resist the temptation to take a little "memento" from a national park, but it's another thing altogether to feel like you're the only one not taking one. This herd mentality probably also applies to looting: if everyone else is grabbing free stuff, people may shift from a frame of mind in which they know it is wrong to take valuables that do not belong to them to one in which they feel like suckers for not taking advantage of the same opportunity others are exploiting.
Cialdini's study provides evidence that the gist of the information and the way it is presented can be used to encourage socially oriented behaviors. Other studies have similarly shown that emphasizing the positive actions of other people can increase environmentally friendly attitudes or tax compliance.
One wonders if simple nudges could also be used to counteract our brain bugs in the voting booth. We see the president, senators, and representatives we elect mostly in the distance, campaigning, giving interviews, and debating. But we lack an intuitive feel for what they actually do and the repercussions of their decisions. It is easy to visualize the importance of having one's cardiac surgeon be skilled and intelligent; the consequences of a surgeon's error are plain to see, and few people need to be taught that if your cardiac surgeon screws up, you could die. While the jobs of our political representatives in many ways carry more weight than that of the surgeon, the relationship between their decisions and our lives is much more convoluted and difficult to visualize. It is difficult to imagine that many people would settle for a Dan Quayle or Sarah Palin as their cardiac surgeon, but many people seem comfortable envisioning them running the most powerful nation on the planet. What if, upon voting for the president, people were reminded of what is potentially at stake? A voter might be asked to consider which candidate they would rather have decide whether their eighteen-year-old child will be sent off to war, or whom they would rather entrust to ensure the nation's economy will be robust and solvent when they are living off Social Security. When the power of our elected officials is spelled out in personal terms, presumably at least some voters would reconsider their allegiance to candidates who clearly lack the experience, skill, and intellect proportional to the task at hand.
There is no doubt that designing regulations and presenting public information with our brain bugs in mind is fundamental; yet it is also ultimately limited in reach. Choice architecture relies on someone, or some organization, deciding what is in our own best interest, which may be clear-cut when it comes to enrolling or not in a retirement plan; but what is the desirable default for homeowner's insurance coverage, or the optimal option among the universe of different health care plans? Furthermore, what is in our own best interest is often at odds with the best interests of others, most commonly with those of companies selling us things, such as health insurance or rental car insurance. Private companies exist to make profits, and a reasonable rule of thumb is that the more money customers pay, the more a company profits; companies certainly cannot be counted on to determine the best default options. Even employers may not be interested in maximizing the investment plans of their workforce, since if they "match" employee contributions the company is effectively paying them more. Well-conceived policies and regulations that counterbalance our default bias, present bias, tendency to procrastinate, and unhealthy habits are to be strived for. They cannot, however, be universally implemented. Furthermore, choice architecture is at best a nudge, not an actual solution or fix. For the most part, studies indicate that information content and the context in which it is framed matter, but the effects are often relatively small, helping some people, but far from all, improve their decisions.
Someday, in the distant future, maybe we will reprogram the genetic code that controls our fear circuits and eliminate our susceptibility to fearmongering. In that distant day, perhaps we will use the raw memory capacity of smartphones in a more intimate manner: by directly linking their silicon circuits with our own through neural prostheses. Until that day, however, debugging will have to rely on the same strategy that brought Homo sapiens from our hunter-gatherer origins to the point where we can transplant both genes and organs from one individual to another: education, culture, and effortful deliberation.