The Sober Truth

Author: Lance Dodes

CHAPTER NINE
THE FAILURE OF ADDICTION RESEARCH AND DESIGNING THE PERFECT STUDY

ONE OF THE MOST galling aspects of the current approach to addiction treatment is how little research is being done to seek better solutions. Despite all the failings of research on 12-step effectiveness, the field has come to a consensus that 12-step programs are useful and have been sufficiently studied. A review of the most recent three years (2010–2012) of the American Journal on Addictions shows no articles on or about 12-step programs. Two articles about 12-step treatment were published over the same period in the Journal of Substance Abuse Treatment, but neither examined the effectiveness or the mechanism of action of AA. There are, however, a number of papers whose purpose is simply to support 12-step treatment; Christine Timko, for example, published an article in the Journal of Drug and Alcohol Dependence with the stated goal of implementing and evaluating procedures “to help clinicians make effective referrals to 12-step self-help groups.”[1]

Addiction research has for years followed the prevailing trade winds in popular science, folding itself into ever smaller cul-de-sacs of genetics and biochemistry. Studies of this kind are easy to fund (because they suggest pharmacological solutions supplied by drug manufacturers) and widely cited, helping to perpetuate the illusion that someday advances in molecular science might reduce complex human behavior to the ingredients in a few synaptic cocktails.

Alas, these approaches are doomed by their reductive scope and hampered by an issue of expertise. The research is inevitably conducted by biologists who have little or no training in the psychology of human addiction. This is not to say they aren’t competent scientists. We become experts in the things we do; these researchers are without question the world’s foremost experts on the study of rat brains. But to apply a rat-based dopamine study to a deeply intricate problem like human addiction is to neglect the wealth of knowledge and experience we have gained from treating humans.

The big problem that underlies such half-a-loaf takes on addiction is the absence of psychological awareness in the major addiction journals. This neglect is no accident: views that do not fit the present biochemical paradigm are simply not accepted for publication. (I have witnessed this firsthand as both a reviewer and an author.) What passes for psychological insight in the professional addiction literature is almost invariably a simple questionnaire that ranks people according to superficial traits such as “interest in risky activities.” This absence of sophistication makes it impossible for these journals to recognize or meaningfully engage the psychology behind addictive behavior; there is simply no room for that conversation.

The other seismic shift in scientific literature that has strangled attempts to treat addiction from a psychological perspective is the injection of numbers into anything and everything that will harbor them. Most people who do good work in education or the humanities know that deeply significant truths cannot be measured. Great teaching, for example, is hard to quantify. Most good, worthy, and verifiable ideas don’t belong in a spreadsheet. Yet as a result of insecurity or ignorance, the majority of scientific publications today won’t even consider a paper that isn’t larded with numbers from top to bottom.

The consequence of this institutional blindness to qualitative and nuanced thought is that research is typically limited to broad statistical studies that do not investigate causes or meanings. In addiction research, these large population survey studies never once ask any questions about the feelings inside the people they are examining. As a consequence, they are often astonishingly obvious or trivial. Here are just a few recent examples from the major addiction journals:

“How Do Prescription Opioid Users Differ From Users of Heroin or Other Drugs in Psychopathology?” (Journal of Addiction Medicine)

This article statistically analyzed over nine thousand survey records (no people were interviewed), reaching the painfully obvious conclusion that using drugs such as heroin and morphine is correlated with the likelihood of using other drugs, being depressed and anxious, and having a lower “quality of life.”[2]

“Health/Functioning Characteristics, Gambling Behaviors, and Gambling-Related Motivations in Adolescents Stratified by Gambling Problem Severity: Findings from a High School Survey” (American Journal on Addictions)

This article statistically analyzed data from a survey of over twenty-four hundred high school students. Its conclusion was that pathological gambling was associated with poor academic performance, depression, and aggression. The authors said their findings suggested a need for better interventions with adolescents who gamble.[3]

“What Is Recovery? A Working Definition from the Betty Ford Institute” (Journal of Substance Abuse Treatment)

This article states that it fills the (presumed) need for a standard definition of the word recovery. It solves this problem as follows: “Recovery is a voluntarily maintained lifestyle characterized by sobriety, personal health, and citizenship.” Incredibly, this paper is listed as among the five “most cited” references for the entire Journal of Substance Abuse Treatment.[4]

“Effect of Alcohol References in Music on Alcohol Consumption in Public Drinking Places” (American Journal on Addictions)

This paper describes a study designed to test “whether textual references to alcohol in music played in bars lead to higher revenues of alcoholic beverages.” The results were that “customers who were exposed to music with textual references to alcohol spent significantly more on alcoholic drinks.” Mind you, this article didn’t appear in a hospitality trade publication, presumably because marketing people could have told you this already.[5]

“Psychosocial Stress and Its Relationship to Gambling Urges in Individuals with Pathological Gambling” (American Journal on Addictions)

The title of this paper gives promise that the study will employ some psychological sophistication. Alas, its conclusion puts such hopes to rest: “Patients with PG [pathological, or compulsive, gambling] displayed significantly higher scores on the daily stress inventory . . . than did healthy subjects. These findings support the role of psychosocial stress in the course of PG.”[6] There is no mention of how this stress functions, why it drives addiction, or any aspect of human psychology that might help to explain and deepen the paper’s obvious conclusion.

What’s missing from this literature is any study that revisits the fundamental questions once and for all: What is addiction? How should we treat it? Why does it occur in some individuals and not others?

I mentioned earlier that in the 1990s, one attempt at such a study was conducted by the National Institute on Alcohol Abuse and Alcoholism. But the study, called “Project MATCH,” was severely limited in many ways. Most significantly, it looked at only three approaches: cognitive behavior therapy, “motivational enhancement” therapy, and 12-step treatment. It concluded that no difference in outcomes could be found among these. There was no control group and no psychodynamic group. Given the study’s design, it is not surprising that the results were so disappointing, and that serious questions have been raised about whether any of these treatments were effective at all.[7]

What would it take to answer the question of how we should treat addiction? A definitive addiction study could potentially be designed, funded, and executed. A study of this kind would provide a blueprint for research panels at the NIH and universities and give lay readers a far better way to interpret the headlines that constantly trumpet yet another breakthrough about addiction. Most importantly, a truly meaningful study would be long enough to measure true growth and change, versus the prevailing short-term glances at transient benefits. Before discussing how such a study could be created, I must first address some key issues that have interfered with proper acceptance of serious psychological research in the addiction literature.

THE MIRAGE OF “EVIDENCE-BASED” SCIENCE

One of the impediments to including psychological understanding in addiction research is the wildly popular idea that only “evidence-based” treatment is worthwhile. It is useful to examine whether this idea has merit.

Most people with a scientific bent would agree that science is based on evidence. Without strong supporting corroboration, we would have no way to distinguish between a gut feeling and a solid result, and no way to separate personal bias from objective fact. But the value of evidence depends entirely on whether the data is meaningful—whether it is valid (bears on the topic) and important. No field, from the hardest statistical science to the “softest” sociology, is immune to abuses of the word “evidence”; some just do a better job of hiding their foundational biases than others. As we have seen, the use of “evidence” in addiction studies is no guarantee that the numbers will be treated without bias or even that they represent anything useful. As we have also seen, the majority of addiction studies covering 12-step treatment fail to pass basic threshold standards of experimental control and causal inference. Yet these flawed methodologies are not always apparent to the lay reviewer, and the press hardly helps matters with its ongoing confusion between controlled science and meaningless correlations. As a consequence, much of what we are sold under the billing of evidence is simply data. And data without context is noise.

Consider just a few of the problems in widely cited articles on addiction (noted in chapter 5): compliance bias, lack of controls, inadequate length of study, ignoring data that would interfere with the study’s conclusions (dropout data, for instance), statistically dubious extrapolations, logically unfounded leaps from rats to people, and a number of advanced statistical regression methods designed to retroactively account for all of these (though these methods have had only mixed success—biostatisticians would be the first to admit that even the most sophisticated tricks of the field cannot “fix” a study that isn’t designed thoughtfully from the beginning).
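Compliance bias in particular is easy to demonstrate with a toy simulation. The following sketch is my own illustration, not a study from the literature: the treatment is given zero true effect, while a hidden “motivation” trait drives both attendance and recovery. The attenders still look dramatically more successful.

```python
import random

random.seed(0)

N = 100_000  # simulated subjects
# In this model the treatment itself does nothing; only motivation matters.

compliers = noncompliers = 0
complier_recoveries = noncomplier_recoveries = 0

for _ in range(N):
    motivation = random.random()                 # hidden trait, 0..1
    attends = random.random() < motivation       # motivated people attend meetings
    recovers = random.random() < 0.1 + 0.4 * motivation  # motivated people recover more

    if attends:
        compliers += 1
        complier_recoveries += recovers
    else:
        noncompliers += 1
        noncomplier_recoveries += recovers

print(f"recovery among attenders:     {complier_recoveries / compliers:.1%}")
print(f"recovery among non-attenders: {noncomplier_recoveries / noncompliers:.1%}")
```

The program does nothing in this model, yet attenders recover at roughly 37 percent versus roughly 23 percent for non-attenders: comparing compliers to non-compliers manufactures an “effect” entirely out of the hidden trait.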

And there is an even greater problem with the worship of evidence, regardless of its validity: it is very easy to find meaningless evidence. Setting up an experiment to study an irrelevant question is a bit like pointing a telescope at the wrong place—you may confirm that the sun is indeed hot, but if you’re looking for life on Mars, you haven’t exactly advanced the dialogue. Experiments designed to answer facile or specious questions are doomed to irrelevance before they begin. Thus we have a parade of statisticians determined to figure out how many heroin addicts are likely to use cocaine, without bothering to ask whether this data is actionable or illuminating. It can easily be “proven” that environmental cues remind us to drink or that compulsive gamblers tend to do poorly in school. You could send out a survey tomorrow and collect solid evidence that drinkers like to smoke or that there is more alcohol consumption when people are “stressed.” You might even publish and advance in academia for having done so, while just out of sight, the state of addiction research remains in stasis.

Nearly every addiction study is guilty of looking at the wrong things, and the reason is that most of these researchers have no training or interest in psychology. The false dogma that addiction is a biochemical disorder, or that it can be understood through superficial measures of behavior, has become self-perpetuating in the addiction literature. The gatekeepers who stand at the threshold of our science journals continue to reward trivial inquiries that shore up this woefully inadequate model of human behavior. If more researchers considered psychological explanations of addiction—and they should, given the preponderance of countervailing evidence that has left the “brain disease” concept in tatters (remember the veterans’ study discussed in chapter 5)—they might take an interest in more humanistic ideas about the people they study.

If someone wanted to study the psychology of addiction statistically (more on why this is not a great idea later), researchers could step away from the rats and examine what precipitates addictive actions in humans. In my second book, I raised this notion as a way to help people predict the next episode of addictive behavior.[8] The same question could be studied in a large-scale way by asking people to keep a record of the events, feelings, and situations that precede addictive acts. Subjects could then be interviewed to see if a common emotional thread can be found behind each of these precipitants. We might gather a good amount of evidence and find statistically significant commonalities in that data, suggesting that addiction is a comprehensible psychological symptom. No one has yet tried.
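The analysis step in such a design could be quite simple. As a hypothetical sketch (the diary entries, labels, and coding function below are my own invention, purely for illustration), one might code each free-text precipitant into a coarse emotional theme and tally the themes across subjects to see whether a common thread dominates:

```python
from collections import Counter

# Hypothetical diary records: the feeling each subject logged just before
# an addictive episode (labels are illustrative, not real study data).
diary_entries = [
    {"subject": 1, "precipitant": "felt helpless at work"},
    {"subject": 1, "precipitant": "felt helpless after an argument"},
    {"subject": 2, "precipitant": "felt trapped by a deadline"},
    {"subject": 2, "precipitant": "boredom"},
    {"subject": 3, "precipitant": "felt helpless about finances"},
]

def code_theme(text: str) -> str:
    """Code a free-text precipitant into a coarse emotional theme."""
    if "helpless" in text or "trapped" in text:
        return "helplessness"
    return "other"

theme_counts = Counter(code_theme(e["precipitant"]) for e in diary_entries)
print(theme_counts.most_common())  # a shared emotional thread would dominate this tally
```

In a real study the coding would be done blind by trained interviewers rather than by keyword matching, and the resulting counts could be tested for statistical significance across a large sample.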

There is one other serious problem with the term “evidence-based science,” and it was highlighted eloquently in a now-famous paper by John Ioannidis, a professor of medicine and director of the Stanford Prevention Research Center at Stanford University School of Medicine.[9] Ioannidis showed that a research finding is less likely to be true when the studies conducted in a field are smaller, when effect sizes are smaller (the difference between a positive and a negative finding is small), when researchers are prejudiced for or against a certain result, and, perhaps most importantly, when studies make fundamentally inaccurate assumptions about how likely their hypotheses are to be true before they run the study (more on this below). I have seen all of these errors in addiction research: an inadequate number of people in studies, attempts to find statistical meaning in a small effect (an overall success rate of 12-step treatment of only 5 to 10 percent), and bias in presenting data (selection bias, compliance bias, omitting data that doesn’t fit the conclusion).
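Ioannidis's last point can be made concrete with his formula for the chance that a "significant" finding is actually true: PPV = (1−β)R / ((1−β)R + α), where R is the pre-study odds that the probed relationship is real, 1−β is the study's power, and α is the significance threshold. A minimal sketch (simplified: it omits the additional bias term from his paper):

```python
def ppv(prior_odds: float, power: float, alpha: float = 0.05) -> float:
    """Positive predictive value: probability that a statistically
    'significant' result reflects a true relationship."""
    return (power * prior_odds) / (power * prior_odds + alpha)

# A well-powered study probing a plausible hypothesis:
print(f"{ppv(prior_odds=1.0, power=0.80):.0%}")   # 94%

# A small, underpowered study fishing among unlikely hypotheses:
print(f"{ppv(prior_odds=0.1, power=0.20):.0%}")   # 29%
```

Low pre-study odds and low power multiply together, which is why a "significant" result from a small exploratory study is far weaker evidence than the word suggests.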
