
“I have been on cholesterol-lowering medication for some time. I had been telling my doctor that my medication was doing something to my muscles and he would not believe me. I changed doctors and the new one discovered that my muscles’ enzymes were 800 (normal is 200). He took me off the medication and my enzymes came down. When I went on a different statin, they climbed back up again.”22

“My doctor insists I must take statins to lower my cholesterol even though I experience pain with all of them. Sometimes the pain gets so bad that I struggle not to cry when I walk down the hall of my child’s school. My doctor says I should accept ‘a little discomfort.’ He says this pain is rare but I know a lot of people who have had the same muscle pain.”23

“I have taken Lipitor for several years. I now notice numbness in my feet and sporadic memory loss, difficulty balancing my checkbook and using the computer. I have a Ph.D., so this is alarming. My doctor says Lipitor is not to blame. My cholesterol is great and not to stop. Is there any evidence that Lipitor could be connected to these symptoms?”24

Okay, so it’s pretty clear that statin drug side effects are hardly uncommon. But if so many people have so many symptoms as a result of taking statin drugs, why, you might well ask, have you not heard about them? Don’t doctors know about this stuff?

Interesting question. And one that was exhaustively investigated in a groundbreaking study by Beatrice Golomb, M.D., Ph.D., who wanted to find out exactly how doctors routinely handled patient reports of statin side effects.25
What she found was disturbing: A comfortable majority of doctors dismissed the complaints. Patients in the study described symptoms of muscle pain, tightness, cramping, or weakness to a total of 138 doctors, 62 percent of whom dismissed the possibility that the symptoms were related to statins. Patients presented symptoms of nerve injuries, known as neuropathies, to 49 physicians, 65 percent of whom dismissed the possibility that the symptoms were statin-related. And they presented symptoms of impaired thinking or memory to 56 doctors, a whopping 71 percent of whom dismissed any possibility of a relationship to the meds!26

This research is important for many reasons, but there’s one in particular that’s worth mentioning: If docs aren’t acknowledging these symptoms—known as adverse effects—that means they’re also not reporting them to MedWatch, the Food and Drug Administration’s reporting system for adverse events. Virtually every doctor we know who is knowledgeable about this believes that the side effects of statin drugs are deeply underreported, a fact that should concern all of us (though it certainly doesn’t cause the drug companies to lose any sleep).

Okay, we’ve answered the first question—“What are the risks?”—in our two-question inquiry. Now it’s time to take a look at the second question: “What are the benefits?” Only then can we make an intelligent decision about the risk-benefit ratio and decide whether it really makes sense to take (or stay on) a statin drug.

Let’s go to the proverbial videotape.

THE “BENEFITS” OF STATIN DRUGS: NOT EXACTLY WHAT WE’VE BEEN LED TO BELIEVE

To understand how you may have been misled about the benefits of statin drugs, it’ll be useful to first understand something about how it’s possible to mislead with numbers.

Imagine, if you will, that you are on a game show and the host asks you, “Would you rather have 90 percent of the money behind door number one, or 10 percent of the money behind door number two?” All things being equal—that is, if there were the same amount of money behind both doors—you’d pick the 90 percent option. But that wouldn’t be much of a game show, would it? The point is that unless you know how much money is behind the doors, it’s impossible to know the real significance of the 90 percent and the 10 percent. Obviously, you’d choose 10 percent of $1 million over 90 percent of $100.

So we must know the real, absolute amount of anything if we’re to evaluate its significance. The percent alone is a kind of meaningless number unless you know what it’s a percentage of.

Suppose we choose 90 percent of the money behind door number one and find $100 there. You can refer to your take-home haul as “90 percent of the total,” or you can refer to it as $90. Both are accurate, but the first (90 percent) is misleading. (It reminds us of what Jack, Dr. Jonny’s wisecracking tennis partner, says when the score is 2 to 1: “I’ve got a 100 percent lead over you!”)

When you refer to your take-home money as “90 percent” you are expressing the amount in relative terms. Relative to the whole, your $90 is, in fact, 90 percent. Sure sounds like a lot, doesn’t it? But when you refer to your take-home money as $90, you are expressing the real amount in absolute terms. Ninety dollars is the actual, real amount of money we’re talking about here. Who cares what percentage it was?
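For anyone who wants the door example in plain arithmetic, here it is, using the dollar figures from the text:

\[
0.90 \times \$100 = \$90 \qquad \text{versus} \qquad 0.10 \times \$1{,}000{,}000 = \$100{,}000
\]

The smaller percentage wins easily, because the amount it is a percentage of is ten thousand times larger.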

Absolute and relative. Hold that thought.

Now there’s a parallel concept to absolute and relative amounts that’s used in clinical studies all the time. It’s called absolute versus relative risk. One—the absolute risk—is the real, true reduction in risk that you get when you take, for example, a drug that is reputed to help prevent heart disease. That’s the number you really want to know. The other—the relative risk—is a big smokescreen that obscures what you really want to know, just like “90 percent of the money behind door number one” sounds like a lot but really isn’t.

Here’s an illustration of what we’re talking about. Let’s say you’re a gambler, and you are offered the chance to buy a special magic wand that guarantees you a 100 percent increase in your chance of winning the lottery. This sounds like a really good deal, right? But remember, it’s a relative number. To evaluate your real chances of winning the lottery, we have to look at the absolute numbers. Your normal chance of winning the lottery without that magic wand is 1 in 87,000,000, so the magic wand just upped your chances to 2 in 87,000,000. Whoop-de-doo. Sure, it’s a 100 percent improvement, which sounds impressive, but so what? You still have virtually no chance of winning the lottery, and you’re out of pocket for the cost of the wand. It’s like having 90 percent of a “fortune” that’s only worth a dollar.

The above example may seem silly, but it illustrates exactly what researchers do to make their results seem more dramatic, particularly when those research results are being used to tout the benefit of a drug. (Remember, most drug companies fund their own studies. Many if not most of these studies wind up being little more than marketing materials for the drugs being studied, wrapped up in the guise of science.) The researchers use percentages, specifically percentages that make the results sound far more impressive than they actually are. Yes, what they say is technically true—just as it’s true that the magic wand offers you a 100 percent increase in your lottery chances—but it’s wholly misleading. A more accurate way to express what you’ve bought with the magic wand is to say your chances went from 1 in 87,000,000 to 2 in 87,000,000. Forget the “100 percent increase”—what really happened is you went from one chance in a zillion to two chances in a zillion. Not something you’d probably pay a lot of money for.
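In case it helps to see the same comparison written out as arithmetic, here are the two ways of describing what the hypothetical wand buys you, using the lottery odds from the example:

\[
\text{relative increase} = \frac{2/87{,}000{,}000 \; - \; 1/87{,}000{,}000}{1/87{,}000{,}000} = 100\%
\]
\[
\text{absolute increase} = \frac{2}{87{,}000{,}000} - \frac{1}{87{,}000{,}000} = \frac{1}{87{,}000{,}000} \approx 0.0000011\%
\]

Same wand, same numbers: the relative figure sounds dramatic, while the absolute figure shows that almost nothing changed.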

Fuzzy Math, Anyone?

Now let’s see how the drug companies use the same kind of “relative” numbers to mislead you about the effects of their drugs.

The makers of Lipitor, for example, famously advertised a 33 percent reduction in heart attack risk in their magazine ads. But read the fine print. It’s a relative number. Here’s how it’s computed. Let’s say you have a hundred randomly chosen men who are not taking medication, and let’s say that, statistically, three of them would be expected to experience a heart attack at some point over the course of five years—in other words, 3 percent of the total number of men (one hundred) would be expected to have a heart attack.

Now, if you had put those same men on Lipitor over the course of the same five years, only two would be expected to have a heart attack (2 percent of the total number of men). A reduction from three heart attacks to two heart attacks is in fact a 33 1/3 percent reduction in relative risk, but the real, absolute number of heart attacks prevented is only one. One heart attack among a hundred men over the course of five years. The real absolute reduction in risk is 1 percent (the difference between the 3 percent in the no-drug group who would have had a heart attack and the 2 percent in the Lipitor group). The “33 percent reduction” figure is, again, a relative number, and because it’s way more impressive than the much more truthful “1 percent” (the absolute number), researchers frequently choose to use relative risk instead of absolute risk when they report results! (Doesn’t it sound much better to say Lipitor reduces risk by 33 percent than to say Lipitor reduces heart attack risk from 3 percent to 2 percent?)
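For readers who like to check the arithmetic themselves, here is a minimal sketch of the calculation above in Python. The rates (3 in 100 without the drug, 2 in 100 with it) are the illustrative round numbers from the example, not data from any particular trial:

```python
# Illustrative rates from the example above (not actual trial data)
untreated_risk = 3 / 100   # 3 of 100 men expected to have a heart attack without the drug
treated_risk = 2 / 100     # 2 of 100 men expected to have a heart attack on the drug

# Absolute risk reduction: the real difference in risk
absolute_reduction = untreated_risk - treated_risk        # 0.01, i.e., 1 percent

# Relative risk reduction: that difference expressed as a share of the original risk
relative_reduction = absolute_reduction / untreated_risk  # 0.333..., i.e., about 33 percent

print(f"Absolute risk reduction: {absolute_reduction:.1%}")   # Absolute risk reduction: 1.0%
print(f"Relative risk reduction: {relative_reduction:.1%}")   # Relative risk reduction: 33.3%
```

Same data, two very different-sounding numbers; the ads quote the second one.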

Keep this in mind when you read our review of some of the studies used to promote the idea that statins save lives.

There’s a second concept that would be helpful to understand before we venture into the studies themselves, and that’s the distinction between primary prevention and secondary prevention. Primary prevention refers to treating people who have not had a heart attack for the purpose of preventing one. Secondary prevention refers to treating people who’ve already had a heart attack for the purpose of preventing another. As you’ll soon see, the effect of statins on these two populations is quite different.

Before we get to that, there’s something else you should know about study interpretation in general that may help you make more sense out of some of the statin propaganda. Studies usually produce a mass of data that can be spun in a number of ways. Let’s take one common substance we’re all familiar with: alcohol. There is no shortage of studies demonstrating that moderate alcohol consumption lowers the risk of heart disease. So far, so good. But those same studies have also teased out a troubling connection—alcohol consumption increases the risk for breast cancer! Both facts—that alcohol helps your heart and that alcohol increases the risk for breast cancer—are absolutely true, but if you’re a manufacturer of alcoholic beverages, you’re going to be talking up the reduction in heart disease risk and not calling attention to the association with breast cancer.

In much the same way, a drug company–sponsored study might indeed find a beneficial effect on heart disease associated with a particular drug, a beneficial effect similar to that of alcohol. But if in addition to lowering the risk for heart disease the drug increased the risk for diabetes—a finding that’s shown up in a couple of statin drug studies—that finding might easily be buried in the text where only the most determined investigators would be likely to uncover it.

Now that you understand these concepts—relative versus absolute percentage, primary versus secondary prevention, and burying inconvenient associations where they are less likely to be noticed—let’s look at some representative studies on statin drugs and see what they really say, as opposed to what their manufacturers would like you to think they say.

The ALLHAT Study: Not a Single Life Was Saved

The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), conducted between 1994 and 2002, was the largest North American cholesterol study ever undertaken, and as of 2002, it was the largest study ever done using the statin drug pravastatin (brand name Pravachol). Ten thousand participants with high LDL cholesterol levels were divided into two groups. One group was treated with pravastatin, and the other group was simply given the standard advice on “lifestyle changes.”

Twenty-eight percent of the pravastatin takers did lower their cholesterol by a small but statistically significant amount (compared to 11 percent who did so in the “lifestyle change” group). This allowed the pravastatin folks to trumpet a significant reduction in cholesterol and declare the trial a success.

Not so fast.

When the death rates from heart attack were examined, there was no difference between the two groups. The statin drug lowered cholesterol in 28 percent of the people taking it, but not a single life was saved. Pravastatin significantly reduced neither “all-cause” mortality (death from any reason whatsoever) nor fatal or nonfatal coronary heart disease in the patients who took it.27

The ASCOT-LLA Trial: Not Exactly a Slam Dunk for Lipitor

The Anglo-Scandinavian Cardiac Outcomes Trial–Lipid Lowering Arm (ASCOT-LLA) was a multicenter randomized controlled trial in which more than ten thousand patients with high blood pressure and at least three other cardiovascular risk factors were assigned to one of two groups. Half were given Lipitor and half were given a placebo (an inactive substance in pill form). Remember, too, that all patients in this study were hypertensive. Most were overweight (average BMI 28.6), 81 percent were male, and about a third were smokers.

In this study, even after just a year, those taking Lipitor saw clear benefits, though as we’ve pointed out, this may be because of the many other things statin drugs do besides lower cholesterol. And the folks in this study certainly had risk factors (e.g., being overweight, having high blood pressure), so any one of the positive effects of statin drugs (e.g., their antioxidant, blood-thinning, or anti-inflammatory qualities) could easily have made a difference. Sure enough, fatal and nonfatal strokes, total cardiovascular events, and total coronary events were all significantly lowered.

Sounds like a slam dunk for Lipitor, doesn’t it?

Well, maybe.

After three years, there was no statistical difference in the number of deaths between the two groups. (In fact, there were a few more deaths among the women taking Lipitor than among the women taking the placebo.) So approximately $100 million was spent, and not a single life was saved.
