Unfair

Author: Adam Benforado

Similarly, in evaluating the believability of in-court testimony, judges often direct jurors to focus on the witness's demeanor—how she acts, as opposed to what she says.
In the Third Circuit, for example, jurors are explicitly encouraged to consider “the witness' appearance, behavior, and manner while testifying.”
Indeed, we have such faith in these judgments that we allow them to decide outcomes.
If a jury watches a defendant and decides—based solely on her body language—that she has perjured herself, the jury can use that as affirmative evidence of guilt.

The problem is that when researchers have looked at these commonsense cues, they have found that most have nothing to do with whether or not someone is lying.
The handful that are somewhat predictive tend to involve verbal elements (like the tension in one's voice or a lack of detail in one's responses) rather than visual ones.
Numerous studies have shown, for example, that people who are lying are actually less likely to move their arms and legs (not more likely, as observers tend to assume), and averting one's gaze is not related to deceit.
Even worse, when people are under a lot of pressure to appear trustworthy, that often exacerbates the behaviors that observers wrongly associate with lying. A truthful person who will calmly look you in the eye when discussing matters in her own home will stare down at her restless legs when called upon to defend herself in court.

The believability of a witness can also be boosted by completely irrelevant factors.
Put the affable and handsome George Clooney on the stand, and people are much more inclined to believe what he says than if you call up someone viewed as unattractive or unpleasant.
Likewise, tell observers that a witness is a prize-winning poet or cancer survivor (an immaterial positive detail), and they are significantly more likely to trust his statements than if you tell them that he flunked out of dental school or hates dogs (an extraneous negative detail). Some of these cues are downright bizarre.
Initial evidence suggests, for instance, that people are inclined to be more trusting of men with brown eyes than of men with blue eyes, except when the blue eyes appear on someone with a broad face.
According to the researchers, people with brown eyes tend to have rounder faces, so it is actually the shape of the men's faces that is doing the work: round faces are perceived as happier, and happy faces are linked to trust (untrustworthy faces, by contrast, are perceived to be angrier).

Overall, it turns out that we are quite bad at ferreting out deception.
In a recent analysis of more than two hundred studies, participants were able to identify lies and truths correctly just 54 percent of the time, only marginally better than chance.
And the elements that we would imagine have a big impact on our ability to detect lies—whether we are judging a long-term acquaintance or a stranger, whether the lies involve high or low stakes, whether we are interacting with the person or observing from a third-party perspective—don't move detection rates more than a few points in any direction.
Moreover, the people one might expect to have special talents at judging deception—police officers, judges, psychiatrists, auditors—aren't any better than the rest of us at sorting out lies from truths.

Yet just like my students, experimental subjects tend to be very confident in their lie-detection performance.
And, unfortunately, those who are most confident in their judgments aren't generally more accurate than those voicing greater doubt.
Although it would be nice if moving from the laboratory to a real-world setting brought an improvement in lie-detection performance, there is reason to think that it just gets worse in the courtroom.
First of all, while study participants are generally permitted to focus on whatever they think is most helpful in judging deceit, jurors are explicitly told to zero in on elements likely to lead them astray, in particular the witness's demeanor.
And being unable to attend to visual cues—on account of blindness—is considered a valid reason for exclusion from a jury during voir dire.
Ironically, contrary to the assumptions of many courts, the blind may often be in a better position to assess truthfulness than jurors focused intently on body language, because they are less likely to be misled by downward stares and shaky limbs.

Furthermore, during an actual trial, witnesses are likely to be considerably more nervous—even if they are innocent or telling the truth—than in a laboratory experiment, and signs of nervousness are frequently misconstrued as evidence of deceit.
Meanwhile, some dishonest witnesses do not show telltale signs of lying, which may be a simple reflection of the fact that they have undergone extensive practice with attorneys: when it comes to lying, research suggests that practice really does make perfect.
Finally, it is hard to see how a juror might be able to make use of the few cues that appear to have some diagnostic value.
How is she supposed to see from across the room whether a defendant's pupils are dilated, or decide which of hundreds of behaviors to focus on, all while trying to pay attention to what the witness is saying?

—

So, you might be saying to yourself, there is a gap between our intuitions about lying and the actual scientific evidence, but at least there are proven techniques out there that can make up for our shortcomings. There is a science of lie detection.

And herein lies the paradox. Logic would suggest that the great confidence we have in our own ability to detect deception would make us particularly unreceptive to outside experts.
Besides, everyone knows that, for the right price, an expert can be found to defend any side of an issue. But as we saw with Sergeant Duke's analysis of the video footage, we can lose this perspective in the heart of a trial when there are experts in front of us, using the trappings of science. Their methods and tools erase our concerns, in part, because they seem to eliminate subjectivity and bias. The result is that we often put too much faith in the science and technology that experts bring into the courtroom.

Nowhere is that more clear than in the case of assessing deceit.
The fact that we expect lying to produce bodily manifestations has left us ready to accept an array of dubious lie detectors that seem to offer objective analysis. A parade of “experts” has convinced generation after generation that their vats and techniques and machines can expose what was previously hidden: the secret mendacity revealed in a floating torso, trembling hands, quivering eyelids, beads of sweat, or blotchy skin.

The heart has received special attention over the centuries.
Daniel Defoe, the author of the novel Robinson Crusoe, suggested in a 1730 essay on combating crime that guilt and innocence might be assessed by checking a potential thief's blood circulation: “a fluttering heart, an unequal pulse, a sudden palpitation shall evidently confess he is the man, in spite of a bold countenance or a false tongue.”

Some two hundred years later, in 1923, William Marston would attempt to bring results from his lie detector—which measured changes in systolic blood pressure—into an actual criminal proceeding.
Like Defoe, Marston pursued endeavors in both fact and fiction—a coincidence, no doubt, but a fitting one, for Marston's “systolic blood pressure deception test” was almost as fanciful as the magic lasso that his comic-book character, Wonder Woman, wrapped around villains to force them to speak the truth.

Yet there was nothing fictive about the events of 1923: the deception test was being introduced in the trial of a real man, James Frye, who stood accused of robbery and murder.
The court ultimately refused to allow Marston to testify as an expert, on the grounds that his lie-detector method had not been generally accepted by the broader scientific community.
But his efforts laid critical groundwork for the first modern polygraph, which measured a subject's pulse, blood pressure, and breathing.
And it was Frye's case that established the standard for the admissibility of expert testimony in the United States for the next several decades and, in some states, to this day.

Despite widespread use by intelligence agencies, businesses, police departments, and even private individuals—with several hundred thousand tests administered each year—the polygraph has never won over scientists, so polygraph results are generally not admissible as evidence against criminal defendants.
As the Supreme Court put it in 1998, there is “simply no consensus that polygraph evidence is reliable.”
Part of the problem is that even the most sophisticated polygraph is only measuring physiological responses—such as cardiovascular, respiratory, and electrodermal activity—and these responses may or may not be closely correlated with deception.
People regularly lie without sweating and sweat while telling the truth.

More recent developments, like Neuro-Linguistic Programming-based lie detection—which is grounded in the notion that those who are right-handed tend to look up to their left when visualizing a real event that they experienced and up to their right when visualizing an imagined event—have run into the same problem.
It's appealing to think that Koon's rightward glances on the stand really meant something, but there's no evidence to support the notion that eye movements can be used to detect deception.
More generally, methods and technologies designed to use a person's nonverbal behavior to uncover deception have turned out to be about as effective as measuring a person's skull to assess his criminal propensities.

These failures have motivated efforts to capture lying at its source—the brain.
Until very recently this approach was impossible, but breakthroughs in neuroscience, including the EEG and fMRI, have finally allowed us a glimpse into the black box.
We now have the potential to ask a person whether he murdered his wife or show him an image of a bloody bathroom sink while we observe what is happening inside his head.

The implications of these neuroscientific advances have not been lost on the world of lie detection.
And there has been a mad dash by private companies, law enforcement and intelligence agencies, and others to develop specific applications. In 2011, the Defense Advanced Research Projects Agency (DARPA) oversaw a budget of almost $250 million focused on cognitive neuroscience research, and national security entities have invested millions more in the last decade to develop effective lie-detection technology. Today, even private citizens can gain access to some of the latest tools. Want to “prove” that you've never had an affair or stolen from your workplace?
For a few thousand dollars a pop, there are at least two companies that will assess your brain activity using fMRI or EEG equipment.

—

It was only a matter of time before lawyers tried to bring EEG and fMRI into the courtroom. From the outset, however, scholars and judges have questioned whether these new technologies are up to the challenge, particularly for criminal matters. It is one thing to permit someone to undergo a brain scan in an attempt to verify her marital faithfulness and quite another to allow a jury to use lie-detection evidence presented by an expert as the basis for acquitting an accused murderer.

In 2012, Judge Eric M. Johnson, of the Montgomery County, Maryland, Circuit Court, had to make just such a decision about the expert testimony of Frank Haist, an assistant professor of psychiatry at the University of California, San Diego. Haist had been hired by No Lie MRI to study the brain of a man accused of a brutal murder. What he and the company promised was remarkable and tantalizing: an opportunity to reveal the secret truths inside the defendant's head.

The prosecution alleged that Gary Smith, a former Army Ranger, had spent a routine night drinking beer and playing pool at the VFW before fatally shooting his roommate and fellow veteran, Michael McQueen.

When the police had arrived at the men's Gaithersburg apartment, they'd found Gary covered in blood, vomiting outside.
He was the one who had called the cops, crying to the dispatcher, “Oh my God, help me….I dropped him off at the house, and I came back, and he had a big hole in his head.”
Sure enough, when the officers had entered the house, they'd found Michael's body.
But, curiously, there was no gun.

Gary had proceeded to try several stories on the police.
He'd come home and found Michael dead, but the weapon and shooter were gone.
No, the gun was there—Michael had killed himself.
No, wait, Gary had been in the apartment.
He had panicked because it was his gun that Michael used.
He had thrown it into a lake.

It was this strange series of lies that was at the crux of the case. What had really happened that night?

Gary hoped he could clear things up with the help of experts from No Lie MRI.
Technicians asked Gary a series of questions as he lay inside the MRI machine.
He was told to lie on certain questions that were demonstrably true—like whether he had ever served in Iraq—to establish a baseline for when they got to questions about the murder, where he was told to be truthful.


“Did you shoot Mike McQueen?”

Gary responded that he had not.

Professor Haist then reviewed the different fMRI images.
Knowing which areas of the brain tend to be highly active when people are lying, and having a sense of Gary's brain state when he lied, Haist could look critically at the images of Gary's brain when he answered questions about Michael's death.

In Haist's opinion, the images taken when Gary was known to be lying could be clearly distinguished from those taken when he answered questions about the murder.
Even the uninitiated would see the difference: the scans show far more red and yellow patches at moments when Gary is known to be lying.
The clear implication, then, was that Gary must be telling the truth about his roommate's death.

If only it were that simple.

First of all, we do not yet have the data to tell whether lie-detection approaches using fMRI, EEG, and other brain-scan technologies actually work well enough for us to rely on them in a legal setting.
Some technologies have virtually no independent, peer-reviewed research to stand upon; they rest instead on the largely unsubstantiated claims of their promoters.
One company, Brainwave Science, offers an EEG-based tool, Brain Fingerprinting, which is purportedly “over 99% accurate” in “scientifically detect[ing] whether specific information is stored in the brain—or not—by measuring brainwaves.”
According to the Brainwave Science website, “Many seasoned criminals have discovered ways to beat a polygraph. Deception is much more difficult with Brain Fingerprinting. When the subject recognizes the stimuli, an involuntary MERMER (brain wave response) occurs. It's about as foolproof as it gets.”
The benefits are apparent: “Investigators can identify or exonerate suspects based upon measuring brain-wave responses to crime-related photos or words displayed on a computer screen.” Convinced?
Neither are most scientists.
