
Over the next thirty years, recorded coronary-heart-disease fatalities increased dramatically, but this rise—the alleged epidemic—had little to do with increasing incidence of disease. By the 1950s, premature deaths from infectious diseases and nutritional deficiencies had been all but eliminated in the United States, which left more Americans living long enough to die of chronic diseases—in particular, cancer and heart disease.

According to the Bureau of the Census, in 1910, out of every thousand men born in America 250 would die of cardiovascular disease, compared with 110 from degenerative diseases, including diabetes and nephritis; 102 from influenza, pneumonia, and bronchitis; 75 from tuberculosis; and 73 from infections and parasites. Cancer was eighth on the list. By 1950, infectious diseases had been subdued, largely thanks to the discovery of antibiotics: male deaths from pneumonia, influenza, and bronchitis had dropped to 33 per thousand; tuberculosis deaths accounted for only 21; infections and parasites for 12. Now cancer was second on the list, accounting for 133 deaths per thousand. Cardiovascular disease accounted for 560 per thousand.

Fortune magazine drew the proper conclusion in a 1950 article: “The conquering of infectious diseases has so spectacularly lengthened the life of Western man—from an average life expectancy of only forty-eight years in 1900 to sixty-seven years today—that more people are living longer to succumb to the deeper-seated degenerative or malignant diseases, such as heart disease and cancer….” Sir Maurice Cassidy made a similar point in 1946 about the rising tide of heart-disease deaths in Britain: the number of persons over sixty-five, the group most likely to have a heart attack, had more than doubled between 1900 and 1937, he explained, so it was only to be expected that heart-attack deaths would more than double with them.

Another factor militating against the reality of an “epidemic” was an increased likelihood that a death would be classified on a death certificate as coronary heart disease. Here the difficulty of correctly diagnosing cause of death is the crucial point. Most of us probably have some atherosclerotic lesions at this moment, although we may never feel symptoms. Confronted with the remains of someone who expired unexpectedly, medical examiners would likely write “(unexplained) sudden death” on the death certificate. Such a death could well have been caused by atherosclerosis, but, as Levy suggested, physicians often go with the prevailing fashions when deciding on their ultimate diagnosis.

The proper identification of cause on death certificates is determined by the International Classification of Diseases, which has gone through numerous revisions since its introduction in 1893. In 1949, the ICD added a new category for arteriosclerotic heart disease.*4 That made a “great difference,” as was pointed out in a 1957 report by the American Heart Association:

The clinical diagnosis of coronary arterial heart disease dates substantially from the first decade of this century. No one questions the remarkable increase in the reported number of cases of this condition. Undoubtedly the wide use of the electrocardiogram in confirming clinical diagnosis and the inclusion in 1949 of Arteriosclerotic Heart Disease in the International List of Causes of Death play a role in what is often believed to be an actual increased “prevalence” of this disease. Further, in one year, 1948 to 1949, the effect of this revision was to raise coronary disease death rates by about 20 percent for white males and about 35 percent for white females.

In 1965, the ICD added another category for coronary heart disease—ischemic heart disease (IHD). Between 1949 and 1968, the proportion of heart-disease deaths attributed to either of these two new categories rose from 22 percent to 90 percent, while the percentage of deaths attributed to the other types of heart disease dropped from 78 percent to 10 percent. The proportion of deaths classified under all “diseases of the heart” has been steadily dropping since the late 1940s, contrary to the public perception. As a World Health Organization committee said in 2001 about reports of a worldwide “epidemic” of heart disease that followed on the heels of the apparent American epidemic, “much of the apparent increase in [coronary heart disease] mortality may simply be due to improvements in the quality of certification and more accurate diagnosis….”

The second event that almost assuredly contributed to the appearance of an epidemic, specifically the jump in coronary-heart-disease mortality after 1948, is a particularly poignant one. Cardiologists decided it was time they raised public awareness of the disease. In June 1948, the U.S. Congress passed the National Heart Act, which created the National Heart Institute and the National Heart Council. Until then, government funding for heart-disease research had been virtually nonexistent. The administrators of the new heart institute had to lobby Congress for funds, which required educating congressmen on the nature of heart disease. That, in turn, required communicating the message publicly that heart disease was the number-one killer of Americans. By 1949, the National Heart Institute was allocating $9 million to heart-disease research. By 1960, the institute’s annual research budget had increased sixfold.

The message that heart disease is a killer was brought to the public forcefully by the American Heart Association. The association had been founded in 1924 as “a private organization of doctors,” and it remained that way for two decades. In 1945, charitable contributions to the AHA totaled $100,000. That same year, the other fourteen principal health agencies raised $58 million. The National Foundation for Infantile Paralysis alone raised $16.5 million.

Under the guidance of Rome Betts, a former fund-raiser for the American Bible Society, AHA administrators set out to compete in raising research funds. In 1948, the AHA re-established itself as a national volunteer health agency, hired a public-relations agency, and held its first nationwide fund-raising campaign, aided by thousands of volunteers, including Ed Sullivan, Milton Berle, and Maurice Chevalier. The AHA hosted Heart Night at the Copacabana. It organized variety and fashion shows, quiz programs, auctions, and collections at movie theaters and drugstores. The second week in February was proclaimed National Heart Week. AHA volunteers lobbied the press to alert the public to the heart-disease scourge, and mailed off publicity brochures that included news releases, editorials, and entire radio scripts. Newspaper and magazine articles proclaiming heart disease the number-one killer suddenly appeared everywhere. In 1949, the campaign raised nearly $3 million for research. By January 1961, when Ancel Keys appeared on the cover of Time and the AHA officially alerted the nation to the dangers of dietary fat, the association had invested over $35 million in research alone, and coronary heart disease was now widely recognized as the “great epidemic of the twentieth century.”

Over the years, compelling arguments dismissing a heart-disease epidemic, like the 1957 AHA report, have been published repeatedly in medical journals. They were ignored, however, not refuted. David Kritchevsky, who wrote the first textbook on cholesterol, published in 1958, called such articles “unobserved publications”: “They don’t fit the dogma and so they get ignored and are never cited.” Thus, the rise and fall of the coronary-heart-disease epidemic is still considered a matter of unimpeachable fact by those who insist dietary fat is the culprit. The likelihood that the epidemic was a mirage is not a subject for discussion.

“The present high level of fat in the American diet did not always prevail,” wrote Ancel Keys in 1953, “and this fact may not be unrelated to the indication that coronary disease is increasing in this country.” This is the second myth essential to the dietary-fat hypothesis—the changing-American-diet story. In 1977, when Senator George McGovern announced publication of the first Dietary Goals for the United States, this is the reasoning he invoked: “The simple fact is that our diets have changed radically within the last fifty years, with great and often very harmful effects on our health.” Michael Jacobson, director of the influential Center for Science in the Public Interest, enshrined this logic in a 1978 pamphlet entitled The Changing American Diet, and Jane Brody of the New York Times employed it in her best-selling 1985 Good Food Book. “Within this century,” Brody wrote, “the diet of the average American has undergone a radical shift away from plant-based foods such as grains, beans and peas, nuts, potatoes, and other vegetables and fruits and toward foods derived from animals—meat, fish, poultry, eggs, and dairy products.” That this changing American diet went along with the appearance of a great American heart-disease epidemic underpinned the argument that meat, dairy products, and other sources of animal fats had to be minimized in a healthy diet.

The changing-American-diet story envisions the turn of the century as an idyllic era free of chronic disease, and then portrays Americans as brought low by the inexorable spread of fat and meat into the American diet. It has been repeated so often that it has taken on the semblance of indisputable truth—but this conclusion is based on remarkably insubstantial and contradictory evidence.

Keys formulated the argument initially based on Department of Agriculture statistics suggesting that Americans at the turn of the century were eating 25 percent more starches and cereals, 25 percent less fats, and 20 percent less meat than they would be in the 1950s and later. Thus, the heart-disease “epidemic” was blamed on the apparently concurrent increase in meat and fat in the American diet and the relative decrease in starches and cereals. In 1977, McGovern’s Dietary Goals for the United States would set out to return starches and cereal grains to their rightful primacy in the American diet.

The USDA statistics, however, were based on guesses, not reliable evidence. These statistics, known as “food disappearance data” and published yearly, estimate how much we consume each year of any particular food, by calculating how much is produced nationwide, adding imports, deducting exports, and adjusting or estimating for waste. The resulting numbers for per-capita consumption are acknowledged to be, at best, rough estimates.
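To make the arithmetic behind these estimates concrete, here is a minimal sketch of how a per-capita food-disappearance figure is derived. The function and every number in it are hypothetical placeholders for illustration only; they are not USDA code or USDA data.

# Illustrative sketch only: the arithmetic behind a "food disappearance"
# per-capita estimate. All figures are hypothetical placeholders.

def per_capita_disappearance(production, imports, exports, waste, population):
    """Apparent annual consumption per person, in the same units as supply."""
    apparent_consumption = production + imports - exports - waste
    return apparent_consumption / population

# Hypothetical example: national supply of a single food, in pounds.
estimate = per_capita_disappearance(
    production=20_000_000_000,  # produced nationwide
    imports=500_000_000,        # add imports
    exports=1_200_000_000,      # deduct exports
    waste=2_000_000_000,        # adjust (i.e., guess) for waste
    population=150_000_000,     # rough mid-century U.S. population
)
print(f"Estimated per-capita consumption: {estimate:.0f} pounds per year")

Every term on the supply side is itself an estimate, which is why the resulting per-capita numbers are, as the text notes, rough at best.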

The changing-American-diet story relies on food disappearance statistics dating back to 1909, but the USDA began compiling these data only in the early 1920s. The reports remained sporadic and limited to specific food groups until 1940. Only with World War II looming did USDA researchers estimate what Americans had been eating back to 1909, on the basis of the limited data available. These are the numbers on which the changing-American-diet argument is constructed. In 1942, the USDA actually began publishing regular quarterly and annual estimates of food disappearance. Until then, the data were particularly sketchy for any foods that could be grown in a garden or eaten straight off the farm, such as animals slaughtered for local consumption rather than shipped to regional slaughterhouses. The same is true for eggs, milk, poultry, and fish. “Until World War II, the data are lousy, and you can prove anything you want to prove,” says David Call, a former dean of the Cornell University College of Agriculture and Life Sciences, who made a career studying American food and nutrition programs.

Historians of American dietary habits have invariably observed that Americans, like the British, were traditionally a nation of meat-eaters, suspicious of vegetables and expecting meat three to four times a day. One French account from 1793, according to the historian Harvey Levenstein, estimated that Americans ate eight times as much meat as bread. By one USDA estimate, the typical American was eating 178 pounds of meat annually in the 1830s, forty to sixty pounds more than was reportedly being eaten a century later. This observation had been documented at the time in Domestic Manners of the Americans, by Fanny Trollope (mother of the novelist Anthony), whose impoverished neighbor during the two summers she passed in Cincinnati, she wrote, lived with his wife and four children “with plenty of beef-steaks and onions for breakfast, dinner and supper, but with very few other comforts.”

According to the USDA food-disappearance estimates, by the early twentieth century we were living mostly on grains, flour, and potatoes, in an era when corn was still considered primarily food for livestock, pasta was known popularly as macaroni and “considered by the general public as a typical and peculiarly Italian food,” as The Grocer’s Encyclopedia noted in 1911, and rice was still an exotic item mostly imported from the Far East.

It may be true that meat consumption was relatively low in the first decade of the twentieth century, but this may have been a brief departure from the meat-eating that dominated the century before. The population of the United States nearly doubled between 1880 and 1910, but livestock production could not keep pace, according to a Federal Trade Commission report of 1919. The number of cattle increased by only 22 percent, pigs by 17 percent, and sheep by 6 percent. From 1910 to 1919, the population increased another 12 percent, and livestock production lagged further behind. “As a result of this lower rate of increase among meat animals,” wrote the Federal Trade Commission investigators, “the amount of meat consumed per capita in the United States has been declining.” The USDA noted further decreases in meat consumption between 1915 and 1924—the years immediately preceding the agency’s first attempts to record food disappearance data—because of food rationing and the “nationwide propaganda” during World War I to conserve meat for “military purposes.”

Another possible explanation for the appearance of a low-meat diet early in the twentieth century was the publication in 1906 of Upton Sinclair’s book The Jungle, his fictional exposé on the meatpacking industry. Sinclair graphically portrayed the Chicago abattoirs as places where rotted meat was chemically treated and repackaged as sausage, where tubercular employees occasionally slipped on the bloody floors, fell into the vats, and were “overlooked for days, till all but the bones of them had gone out to the world as Anderson’s Pure Leaf Lard!” The Jungle caused meat sales in the United States to drop by half. “The effect was long-lasting,” wrote Waverly Root and Richard de Rochemont in their 1976 history Eating in America. “Packers were still trying to woo their customers back as late as 1928, when they launched an ‘eat-more-meat’ campaign and did not do very well at it.” All of this suggests that the grain-dominated American diet of 1909, if real, may have been a temporary deviation from the norm.
