
We know this because we've all stood in front of windows. No computer has ever stood in front of a window.

To understand why computers struggle to recognize good jokes, think back to the EEG findings from Chapter 2. As we learned, jokes elicit two kinds of reactions in our brains—the P300 and the N400. The P300 reflects an orienting reflex, a shift in attention telling us that we've just seen something new or unexpected. The N400 is more semantic in nature. It measures how satisfying the new punch line is, and how well it activates a new perspective or script.

In that earlier chapter we also discovered that whereas all jokes elicit a P300, only funny ones elicit an N400, because these bring about a satisfying resolution. A related finding is that a word's cloze probability is inversely related to the size of the N400 it produces—the higher the cloze probability (i.e., the more we expect to see that word), the smaller the N400. This size difference reflects how easily new words are integrated into already constructed meanings, with easier integration meaning smaller N400s. At first you might think that cloze probability should influence the “surprise” response of the P300, but this isn't the case. Low-probability words aren't shocking, only incongruent. It's a matter of context—larger N400 responses mean that contexts are being shifted, while P300 responses mean that we're simply shocked, context having nothing to do with it.
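
To make the direction of that relationship concrete, here's a toy sketch in Python. The linear form, the amplitude scale, and the example probabilities are illustrative assumptions of mine, not values from the EEG studies described here.

```python
# Toy illustration (not a model from the book): treat N400 size as a simple
# decreasing function of cloze probability. The linear form and the numbers
# are arbitrary assumptions chosen only to show the direction of the effect.

def toy_n400_amplitude(cloze_probability: float, max_amplitude: float = 10.0) -> float:
    """Higher cloze probability (a more expected word) -> smaller N400."""
    if not 0.0 <= cloze_probability <= 1.0:
        raise ValueError("cloze probability must be between 0 and 1")
    return max_amplitude * (1.0 - cloze_probability)

# An expected ending ("give up the habit") vs. a punch line ("give up reading"):
print(toy_n400_amplitude(0.9))   # expected word, small N400
print(toy_n400_amplitude(0.05))  # unexpected word, large N400
```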

It's a subtle difference, one that computers struggle with. To computers, there's no such thing as context, only a constant stream of probabilities. That's where we humans distinguish ourselves, bringing us back to the constructing, reckoning, and resolving stages from Chapter 2. The human brain doesn't just recognize cloze probabilities, it builds hypotheses and revises those hypotheses based on new evidence. It's always looking for patterns and constructing contexts, and by relying on both probabilities and expectations, it becomes an active manipulator of its environment rather than a passive receiver.

To see how this relates to humor, let's review a study conducted by the cognitive scientist Seana Coulson of the University of California at San Diego. Coulson's aim was to understand the human brain's sensitivity to both context and cloze probability. First, she showed subjects sixty sentences, some of which ended in a funny punch line and some of which didn't (e.g., “She read so much about the bad effects of smoking she decided she'd have to give up the habit/reading”). Only the joke endings were expected to bring about shifts in perspective. Next, she varied the cloze probability of the sentence endings, dividing them into two categories. Sentences for which the joke setup activated a salient, high cloze-probability ending—as in the above example—were labeled “high constraint.” Those with a lower cloze-probability ending were called “low constraint.” For example, “Statistics indicate that Americans spend eighty million a year on games of chance, mostly dice/weddings” is a low-constraint sentence because there are many possible endings—dice being only one of several low cloze-probability alternatives.

Not surprisingly, the N400s were bigger for sentences with funny punch lines than for those with unfunny ones. But this difference appeared only among the high-constraint sentences. That's because these were instances in which the subjects' world knowledge had set up some expectation and context, and the punch line brought a new way of thinking. Cloze probability is important to humor, but so is violation of our expectations. We're pattern detectors, but we're constructors, reckoners, and resolvers too. Computers' inability to incorporate all three processes is what causes them to struggle.
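
For readers who like to see the design laid out, the sketch below encodes Coulson's two factors and the qualitative pattern she reported. The amplitude numbers are placeholders chosen only to preserve the ordering; they are not her measurements.

```python
# Rough summary of Coulson's 2x2 design (constraint x ending type).
# The values are placeholders: they only preserve the reported ordering,
# i.e. joke endings produce a larger N400 than straight endings, and only
# for high-constraint sentences.

observed_n400 = {
    ("high constraint", "straight ending"): 1.0,
    ("high constraint", "joke ending"):     3.0,  # reliable difference
    ("low constraint",  "straight ending"): 2.0,
    ("low constraint",  "joke ending"):     2.0,  # no reliable difference
}

for constraint in ("high constraint", "low constraint"):
    diff = (observed_n400[(constraint, "joke ending")]
            - observed_n400[(constraint, "straight ending")])
    print(f"{constraint}: joke-minus-straight N400 difference = {diff}")
```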

Before moving on to the next section, let's take one more look at how our thinking differs from a computer's. A little later we'll be addressing creativity, and how humor is just one example of this unique skill, a skill we still hold over our computer overlords. But for now, I want to drive home the point that the human brain is much more than just a parallel processor, or dozens of parallel processors linked together, as with IBM's Deep Blue or Watson. Indeed, it's like a child who can't sit still, always looking around the corner for what's coming next.

One benefit of computers is that they always follow directions: at any given time, we can tell a computer to stop working and tell us what it knows. It won't ignore our command, and it won't keep working and hope we don't notice. Humans are a different story. Our brains work so fast, and in such hidden ways, that it's nearly impossible to see what calculations they're really making. Analyzing jokes is especially difficult, because comprehension occurs in seconds. There's no way to stop people halfway through a joke and identify what they're thinking. Or is there?

“Semantic priming” studies are among the oldest in the field of psychology. The process is relatively simple: subjects are given a task—say, reading a joke—and then interrupted with an entirely different task that indirectly measures their hidden thoughts. For example, after reading the setup to a joke, they may be shown a string of letters and asked if those letters constitute a real word or not (called a “lexical decision” task). Imagine that you're a voluntary participant in a study and are instructed to read the following: “A woman walks into a bar with a duck on a leash. . . .” Then, the letters S-O-W appear on the screen and you're asked whether they form a real word or not. How long would it take you to recognize that S-O-W refers to a female pig?

Now, imagine that you're given the same task after reading the full joke:
A woman walks into a bar with a duck on a leash. The bartender says, “Where did you get the pig?” The woman says, “That's not a pig. It's a duck!” The bartender replies, “I was talking to the duck.”

Would you immediately recognize the meaning of S-O-W this time? Of course you would, because the word pig would have been activated in your mind. Without priming, it usually takes subjects between a third of a second and three times that long to recognize a given word. With priming (e.g., reading the above joke), that reaction time is decreased by a quarter of a second. This may not seem like much, but in the world of psychology it's a huge effect.
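
If you want to see the arithmetic behind that “huge effect,” the short sketch below simply works through the numbers in this paragraph; the 600-millisecond baseline is an illustrative assumption within the unprimed range.

```python
# Worked numbers from the paragraph above: unprimed lexical decisions take
# roughly 333-1000 ms, and priming shaves off about 250 ms. The 600 ms
# baseline is an illustrative assumption within that range.

UNPRIMED_RANGE_MS = (333, 1000)   # "a third of a second and three times that long"
PRIMING_BENEFIT_MS = 250          # "decreased by a quarter of a second"

baseline_ms = 600                 # hypothetical unprimed reaction time
primed_ms = baseline_ms - PRIMING_BENEFIT_MS

print(f"Unprimed: {baseline_ms} ms, primed: {primed_ms} ms "
      f"({PRIMING_BENEFIT_MS / baseline_ms:.0%} faster)")
```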

I mention semantic priming because Jyotsna Vaid, a psychologist at Texas A&M University, used this very task to find out the precise point at which subjects revised their interpretations and “got” a joke. For our example joke there are at least two possible interpretations. One of these is that the woman owns a pet duck and that the bartender doesn't know his birds from his boars. A good way to check for this interpretation is to use P-E-T in the lexical decision task, because if it's what subjects are thinking, then the word pet should be at the top of their minds. The second possible interpretation is that ducks can understand questions from surly bartenders, and that the woman is as ugly as a pig. For that one, S-O-W should be highly activated.

Earlier I noted that jokes become funny when scripts suddenly change due to an incongruous punch line—for example, a doctor's wife inviting a raspy-voiced man inside for an afternoon tryst rather than a chest exam. Now we're seeing the exact point at which these shifts occur. Not surprisingly, Vaid saw that the initial, literal interpretations of the jokes were dominant when subjects started reading. In other words, they had no choice but to assume the woman owned a pet duck. However, as soon as the punch line came and an incongruity was detected, the second interpretation became active too. The first one didn't disappear, though. Instead, it stayed active until the end of the joke, after the subjects had been given a chance to laugh. Only then did they make up their minds and move on—and the word pet stopped receiving facilitation in the lexical decision task. From these results we see that our brains build hypotheses, sometimes more than one at a time, and only as more evidence becomes available are old ones jettisoned like rotten fruit.
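
One toy way to picture Vaid's result is to track the two interpretations probe by probe, as in the sketch below. The stages and activation values are invented for illustration and only mirror the qualitative pattern she reported.

```python
# Toy timeline of the two interpretations in the duck joke, keyed to where a
# lexical-decision probe might appear. The activation values are invented;
# they only mirror the qualitative pattern Vaid reported: the literal
# "pet duck" reading stays active through the punch line and is dropped
# only after the joke is resolved.

timeline = [
    # (stage,                            PET activation, SOW activation)
    ("after the setup",                  0.9,            0.1),
    ("after the punch line",             0.8,            0.7),  # both readings live
    ("after resolution (got the joke)",  0.1,            0.9),  # literal reading dropped
]

for stage, pet, sow in timeline:
    dominant = "pet-duck reading" if pet > sow else "woman-as-sow reading"
    print(f"{stage}: PET={pet:.1f}, SOW={sow:.1f} -> dominant: {dominant}")
```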

In a sense, then, we're built to be pattern detectors, always taking in new information and building stories. Much of the time those interpretations are correct. Sometimes they aren't.

And when they aren't, occasionally we laugh.

TRANSFORMATIONAL CREATIVITY

“Computers are creative all the time,” says Margaret Boden. But will they ever generate ideas—or jokes—that convince us they're truly creative without seeming artificial or mechanical? “Many respectable ideas have been generated by computers which have amazed us and that we value. But what we haven't seen is a computer that creates something amazing and then says, ‘Don't you think this is interesting? This is valuable.' There are many systems which come up with amazingly novel ideas, but if there's any value in it, humans still need to persuade us why.”

Boden is referring to a major problem with creativity—and a big challenge for humor researchers too. Creativity is subjective. Knowing when a punch line works or not, as with a painting or a sonata, requires being able to assess its value and novelty. But this capability is something many people lack, so imagine how difficult it must be for computers. How do we justify any work of art? How do we know that the punch line “An airplane hangar” isn't funny but a telegram-sending dog proclaiming “But that would make no sense at all” is?

According to Boden, there exists more than one type of creativity. In fact, there are several. The first and simplest form is “combinatorial creativity,” which is the type displayed by simple programs like The Joking Computer. Combinatorial creativity involves combining familiar ideas in an unfamiliar way, as when words are put together to form a pun or rhyme. A good example, though it's not particularly funny, is the earlier punch line “A funny bunny.” Odds are that you'd never heard that joke before. It's possible that nobody has. But it didn't change the way you looked at jokes because it only manipulated a simple rhyme.

A second type is “exploratory creativity,” which involves making new connections within existing knowledge. It's similar to combinatorial creativity, except that now we're dealing with a greater degree of novelty. Though the example lies outside the realm of humor, consider Paul McCartney's song “Yesterday.” It wasn't the first Beatles ballad. It also wasn't the first recorded use of a cello, as classical musicians had been using the instrument for centuries. It was, however, the first modern rock song to give the cello such a prominent role. Now hip-hop artists such as Rihanna and Ne-Yo use it all the time.

Exploratory creativity allows us to make connections we've not seen before. Consider, for example, the Steven Wright joke “There was a power outage at a department store yesterday and twenty people were trapped on the escalator.” It's essentially an analogy, since elevators are different from escalators in their ability to trap people, thus triggering the script that Americans are overweight, lazy mall-dwellers. Probably no other comic has made the connection between escalator failures and sedentary shoppers, but Wright did and he got a pretty good joke out of it.

The third type of creativity, “transformational creativity,” is something entirely different. It occurs when we're forced to restructure our thinking, and Boden cites post-Renaissance Western music as a salient example. Prior to the work of Austrian composer Arnold Schoenberg, orchestral music always had a tonal key. Composers sometimes introduced modulations in the middle of a piece, but they always returned to the original key by the end, signaling the work's theme. These modulations were often surprising, but they weren't transformational in the sense I mean here. A transformational change came only when Schoenberg created a new kind of music never heard before—“atonality.” Though disturbing to many at first, Schoenberg's dropping of the tonal key was quickly adopted by others and then subjected to several exploratory alterations itself.

We see such variation in humor, too. Stand-up comedians approach their art in different ways, and this variety is what makes comedy clubs so fun. But not all comedians rewrite their genre. Jerry Seinfeld, although funny and fantastically successful at pointing out the obvious, didn't force us to look at comedy differently. Neither did Steve Martin, even though he's one of the smartest comedians ever to have graced the stage. Andy Kaufman, on the other hand, was a transformational creative genius. He created alter egos so believable that his audience didn't know if they were a joke or real. He pretended to get into fights with fellow actors and comedians during live performances—sometimes even storming off the stage. Once, he ended a performance by taking the entire audience out for milk and cookies.

Nobody had ever created comedy like Kaufman, just as nobody had told dirty jokes and offended audiences like Lenny Bruce. For every hundred Seinfelds or Martins, there's only a handful of Kaufmans or Bruces.

Returning to the brain for a moment, it's worth noting that no single brain region is responsible for this type of creativity. One scientific review of seventy-two recent experiments found no region that is consistently active during creative behavior. There is, however, something special about people who make novel connections or imagine the unimaginable. What sets them apart is the connectivity within their resting brains. This finding comes from a team of researchers at Tohoku University in Japan, who observed that people with highly connected brains—as measured by shared brain activity over multiple regions—are more flexible and adaptive thinkers. Connected brains are creative brains.
