The Accidental Theorist

Author: Paul Krugman

Rat Democracy

Like most people who once hoped for better, I have become resigned to the accumulation of tawdry detail about how President Clinton financed his reelection campaign. But condemning Clinton’s brazen opportunism raises a prior question: Where did the opportunities to be so brazen come from?

This may seem to be a question for a political analyst, not an economist. But there is an approach to political analysis known as “rat choice” (rat as in “rational”—it’s not a comment on the candidates) that flourishes along the border of the two fields. The working hypothesis of rat choice is that voting behavior reflects the more or less rational pursuit of individual interests. This may sound obvious, innocuous, and even excessively optimistic. But if you really think its implications through, they turn out to be quite subversive. Indeed, if you take rat choice seriously, you stop asking why democracy works so badly and start asking why it works at all.

What is the problem? Won’t rational voters simply choose politicians who promise to serve their interests? Well, in a rough sense they do. The logic of democratic politics normally pushes both parties toward the center—more precisely, toward policies that serve the interests of the median voter. Consider, for example, the question of how big the government should be. In general, people with low incomes prefer a government that imposes high taxes in order to provide generous benefits. Those with high incomes prefer a government that does no such thing. The Democrats are, by inclination, the party of outstretched palms, the Republicans the party of tight fists. But both are forced to move away from those inclinations toward actual policies that more or less satisfy the voters in the middle, who don’t like paying taxes but do like knowing that they won’t be stuck with Grandma’s medical bills.
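
To make the median-voter logic concrete, here is a minimal simulation sketch in Python (not from the essay; the 0-100 scale, the normal distribution of voters, and all the specific numbers are invented assumptions):

```python
# Illustrative only: voters have ideal points on an invented 0-100
# "size of government" scale and vote for whichever party's platform
# is closer to their own ideal.
import random

random.seed(0)
voters = [random.gauss(50, 20) for _ in range(100_001)]
median = sorted(voters)[len(voters) // 2]

def vote_share(a, b):
    """Fraction of voters strictly closer to platform a than to b."""
    return sum(abs(v - a) < abs(v - b) for v in voters) / len(voters)

# A party of outstretched palms at 80 loses badly to a rival at 55...
print(vote_share(80, 55))      # roughly 0.19
# ...while a platform at the median wins at least half the vote
# against ANY rival position, which is what drags both parties there.
print(vote_share(median, 55))  # roughly 0.55
```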

But there are lots of issues that are not so big—issues that involve only, say, $10 billion or $20 billion a year—like who profits from electricity deregulation, or how much the government spends subsidizing irrigation water for Western farmers. Although these issues, cumulatively, are important to the electorate, the electorate doesn’t vote—individual voters do. And it is rarely in the interest of the individual voter to take the trouble to track the details of public policy. After all, how much difference will one vote make?

Bells have just started going off in the head of any reader who remembers Econ 1. What I have just said is that the duties of a good citizen—such as becoming well informed before voting (and for that matter bothering to vote at all)—are subject to the dreaded free-rider problem.

The free-rider problem arises whenever some valuable good or service is not “excludable”—that is, whenever the benefit cannot be restricted only to those who pay for it. It is clearly in the interest of all boaters to have a rescue service. But no individual boater has any incentive to pay for the service if others are willing to do so. If we leave provision of a lifesaving service up to individual decisions, each individual will try to free-ride on everyone else, and the service will be inadequate or worse.
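
The boaters’ arithmetic can be made concrete with a back-of-the-envelope sketch in Python; the dollar figures and the number of boaters are invented assumptions, chosen only to show the gap between the collective and the individual calculation.

```python
# Invented numbers: a rescue service worth $500 a season to each of
# 1,000 boaters costs $100,000 a season to run.
n_boaters = 1_000
value_each = 500        # benefit per boater, dollars per season
total_cost = 100_000    # cost of running the service

# Collectively the service is a clear bargain:
print(n_boaters * value_each - total_cost)   # net gain of $400,000

# Individually, the calculation flips. The service is nonexcludable,
# and one $100 share neither funds it nor sinks it, so whatever the
# other boaters do, paying simply makes me $100 poorer:
my_share = total_cost / n_boaters
print(value_each - my_share)   # $400 if I pay (and the service runs)
print(value_each)              # $500 if I dodge (and it runs anyway)
```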

The solution is government. It is in the collective interest of boaters that each boat owner be required to pay a fee, to support a Coast Guard that provides those nonexcludable benefits. And the same is true of police protection, public sanitation, national defense, the Centers for Disease Control, and so on. The free-rider problem is the most important reason all sane people concede that we need a government with some coercive power—the power, if nothing else, to force people to pay taxes whether or not they feel like it.

But there is a catch: The democratic process, the only decent way we know for deciding how that coercive power should be used, is itself subject to extremely severe free-rider problems. Rat-choice theorist Samuel Popkin writes (in his 1991 book, The Reasoning Voter): “Everybody’s business is nobody’s business. If everyone spends an additional hour evaluating the candidates, we all benefit from a better-informed electorate. If everyone but me spends the hour evaluating the candidates and I spend it choosing where to invest my savings, I will get a better return on my investments as well as a better government.” As a result, the public at large is, entirely rationally, remarkably ill-informed about politics and policy. And that leaves the field open for special interests—which means people with a large stake in small issues—to buy policies that suit them.
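
Popkin’s hour has a simple expected-value version. The figures in this sketch are invented, and deliberately generous to civic virtue; the conclusion survives almost any plausible choice of numbers.

```python
# Invented, generous numbers: suppose a better-informed vote would be
# worth a full $1,000 to me personally over the winner's term, and my
# vote has a one-in-ten-million chance of deciding a national race.
benefit_to_me = 1_000
p_decisive = 1 / 10_000_000

# Expected private return on an hour of candidate research:
print(p_decisive * benefit_to_me)   # $0.0001, far below any wage
```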

For example, not many voters know or care whether the United States uses a substantial amount of its diplomatic capital to open European markets to Central American bananas. Why should they? (I only keep track of the dispute because I have to update my textbook, which includes the sentence: “Efforts to resolve Europe’s banana split have proved fruitless.”) But Carl Lindner, the corporate raider who now owns Chiquita Brands, has strong feelings about the issue; and thanks to his $500,000 in contributions, so does President Clinton. It’s not that Clinton believed that money alone could buy him the election. But money does help, and any practical politician comes to realize that betraying the public interest on small issues involves little political cost, because voters lack the individual incentive to notice.

So what is the solution? One answer is to try to change the incentives of politicians, by making it more difficult for special interests to buy influence. It is easy to be cynical about this, but the truth is that legal limits on how money can be given do have considerable effect. To take only the most extreme example: Outright bribes do not, as far as we can tell, play a big role in determining federal policies—and who doubts that they would if they were legal? So by all means let us have campaign-finance reform; but let us not expect too much from it.

Another answer is to promote civic virtue. There are those who believe that if only the media would treat the public with proper respect, people would respond by acting responsibly—that they would turn away from salacious stories about celebrities and read earnest articles about the flat-panel-display initiative instead. Well, never mind. But it is probably true that the quality of politics in America has suffered from the erosion of public trust in institutions that used to act, to at least some degree, as watchdogs. Once upon a time a politician had to worry about the reactions of unions, churches, newspaper editors, even local political bosses, all of whom had the time and inclination to pay attention to politics beyond the sound bites. Now we have become an atomized society of individuals who get their news—if they get it at all—from TV. If anyone has a good idea about how to bring back the opinion leaders of yore, I am all for it.

Finally, we can try to remove temptation, by avoiding policy initiatives that make it easy for politicians to play favorites. This is one reason why some of us cringed when Ron Brown began taking planeloads of businessmen off on sales trips to China and so on. Whether or not those trips did any good, or gave the right impression about how foreigners might influence American foreign policy, they obviously raised the question of who got to be on the plane—and how.

But there is ultimately no way to make government by the people truly be government for the people. That is what rat choice teaches, and nobody has yet proved it wrong—even in theory.

A Medical Dilemma

Back in the early 1980s, before the Internet had even been born, science-fiction writers like Bruce Sterling invented a genre that came to be known as “cyberpunk.” Cyberpunk’s protagonists were usually outlaw computer hackers, battling sinister multinational corporations for control of cyberspace (a term coined by another sci-fi novelist, William Gibson). But in his 1996 novel Holy Fire, Sterling imagines a rather different future: a world ruled by an all-powerful gerontocracy, which appropriates most of the world’s wealth to pay for ever more costly life-extension techniques. And his heroine is, believe it or not, a ninety-four-year-old medical economist.

When the novel first came out, it seemed that Sterling was behind the curve. Public concern over medical costs peaked in 1993, then dropped off sharply. Not only did the Clinton health care plan crash and burn, but the long-term upward trend in private medical costs also flattened, as corporations shifted many of their employees into cost-conscious HMOs. Even as debates over how to save Social Security make headlines, few question budget plans by both Congress and the Clinton administration that assume, while being systematically vague about the details, that the growth of Medicare can be sharply slowed with few ill effects. With remarkable speed, in other words, we have gone from a sense of crisis to a general belief that the problem of health costs will more or less take care of itself.

But recently there has been a flurry of stories with the ominous news that medical costs are on the rise again. Suddenly, our recent complacency about health costs looks as unjustified as our previous panic. In fact, both the panic and the complacency seem to stem from—what else?—a misdiagnosis of the nature of the problem.

Over the last generation the U.S. economy has been digitized; it has been globalized; but just as importantly, it has become medicalized. In 1970 we spent 7 percent of our GDP on medical care; today the number is twice that. Almost one worker in ten is employed in the health care service industry; if this trend continues, in a few years there will be more people working in doctors’ offices and hospitals than in factories.

So what? As Harvard health economist Joseph Newhouse put it, “Neither citizens nor economists…are especially concerned about rapid growth in most sectors of the economy, like the personal computer industry, the fax industry, or the cellular phone industry.” Yet where the growth of other industries is usually regarded as a cause for celebration, the growth of the medical sector is generally regarded as a bad thing. (Not long ago an article in the Atlantic Monthly even proposed a measure of economic growth that deducts health care from the GDP, on the grounds that medical expenditures are a cost, not a benefit.) Indeed, the very phrase “medical costs” seems to have the word “bloated” attached to it as a permanent modifier: We are not, everyone agrees, getting much for all that money.

Or are we? There is, of course, some truth to what Newhouse calls the “cocktail party story of excessive medical spending.” Traditional medical insurance gives neither physicians nor their patients an incentive to think about costs; the result can be what health care reform advocate Alain Enthoven calls “flat of the curve” medicine, in which doctors order any procedure that might possibly be of medical value, no matter how expensive. Reintroducing some incentives can produce important savings. In 1983, for example, Medicare replaced its previous policy of paying all hospital costs with a new policy of paying hospitals a lump sum for any given procedure. The result was an immediate sharp drop in the average number of days in the hospital, with no apparent adverse medical effects.

But after that one-time saving, the cost of hospitalization began rising again. There is, in fact, a clear rhythm in the health care industry. Every once in a while there is a wave of cost-cutting moves—fixed fees for Medicare, replacing traditional insurance with HMOs—that slows the growth of medical expenses for a few years. But then the growth resumes.

Why can’t we seem to keep the lid on medical costs? The answer—the clean little secret of health care—is simple: We actually do get something for our money. In fact, there is a consensus among health care experts that the main driving force behind rising costs is neither greed, nor inefficiency, nor even the aging of our population, but technological progress. Medical expenditures used to be small, not because doctors were cheap or hospitals were well managed, but because there was only so much medicine had to offer, no matter how much you were willing to spend. Since the 1940s, however, every year has brought new medical advances: new diagnostic techniques that can (at great expense) identify problems that could previously only be guessed at; new surgical procedures that can (at great expense) correct problems that could previously only be allowed to take their course; new therapies that can (at great expense) cure or at least alleviate conditions that could previously only be endured. We spend ever more on medicine mainly because we keep on finding good new things that (a lot of) money can buy.

It is often argued that the share of our national income that we devote to health care cannot continue to rise in the future as it has in the past. But why not? An old advertising slogan asserted that “When you’ve got your health, you’ve got just about everything.” Sterling’s protagonist goes through an implausible procedure (albeit one based on an extrapolation of some real medical research) that restores her youth; who would not give most of their worldly goods for that? Even barring such medical miracles, it is not hard to imagine that some day we might be willing to spend, say, 30 percent of our income on treatments that prolong our lives and improve their quality.
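
For what it is worth, 30 percent is not a distant extrapolation. Here is a crude sketch that uses the essay’s own figures (7 percent of GDP in 1970, twice that by the late 1990s) plus one invented assumption, namely that the same relative growth rate simply continues:

```python
# The health share of GDP doubled from 7% (1970) to 14% (~1997),
# which works out to about 2.6 percent a year relative to GDP.
growth = (14 / 7) ** (1 / 27) - 1   # ~0.026 per year
share, year = 14.0, 1997
while share < 30:
    share *= 1 + growth
    year += 1
print(year, round(share, 1))   # about 2027, at roughly 30 percent
```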

Some economists therefore argue that we should stop worrying about the rise in medical costs. By all means, they say, let us encourage some economic rationality in the system—for example by eliminating the bias created by the fact that wages are taxed but medical benefits are not—but if people still want to spend an ever-growing fraction of their income on health, so be it.

But matters are not quite that simple, for medicine is not just like other goods.

The most direct difference between medicine and other things is that so much of it is paid for by the government. In most advanced countries the government pays for most medical care; even in free-market, antigovernment America, the public sector pays for more than 40 percent of medical expenditures. This in itself creates a special problem. It is not at all hard to see how the American economy could support a much larger medical sector; it is, however, very hard to see how the U.S. government will manage to pay for its share of that sector’s costs. When Cassandras like Pete Peterson of the Concord Coalition present alarming numbers about the future burden of baby boomers on the budget, it turns out that only part of that prospective burden represents the sheer demographic effects of an aging population: Forecasts of rising medical costs account for the rest. Despite the aging of our population, the Congressional Budget Office projects that in 2030 Social Security payments will rise only from their current 5 percent of GDP to about 7 percent—but it projects that Medicare and Medicaid will rise from 4 to more than 10 percent of GDP. (Some people dismiss such forecasts: They point out that if medical costs were to rise to that extent, by the time baby boomers become a problem health care would be a much larger share of GDP than it is today—and that, they insist, is just not going to happen. But why not?)

Some might then say that the answer is obvious: We must abandon the idea that everyone is entitled to state-of-the-art medical care. (That is the hidden subtext of politicians who insist that Medicare is not being cut—that all they are doing is slowing its growth.) But are we really prepared to face up to the implications of such an abandonment?

We have come to take it for granted that in advanced nations almost everyone can at least afford the essentials of life. Ordinary people may not dine in three-star restaurants, but they have enough to eat; they may not wear Bruno Maglis, but they do not go barefoot; they may not live in Malibu, but they have a roof over their heads. Yet it was not always thus. In the past, the elite were physically superior to the masses, because only they had adequate nutrition: In the England of Charles Dickens, the adolescent sons of the upper class towered an average of four inches above their working-class contemporaries. What has happened since represents a literal leveling of the human condition, in a way that mere comparisons of the distribution of money income cannot capture.

There is really only one essential that is not within easy reach of the ordinary working American family, and that is medical care. But the rising cost of that essential—that is, the rising cost of buying the ever-growing list of useful things that doctors can now do for us—threatens to restore that ancient inequality with a vengeance.

Suppose that Lyndon Johnson had not passed Medicare in 1965. Then even now there would be a radical inequality in the prospects of the elderly rich and the ordinary citizen; the affluent would receive artificial hip replacements and coronary bypasses, while the rest would (like the elderly poor in less fortunate nations) limp along painfully—or die.

The current conventional wisdom is that the budget burden of health care will be cured with rationing—the Federal government will simply decline to pay for many of the expensive procedures that medical science makes available. But what if, as seems likely, those procedures really work—if there comes a time when those who can afford it can expect to be vigorous centenarians, and perhaps even buy themselves smarter children, while those who cannot can look forward only to the Biblical threescore and ten? Is this really a tolerable prospect?

There is, some might say, no alternative. But of course there is. It is possible to imagine a society that taxes itself heavily in order to provide advanced medical care to everyone, and that rations that care not by wealth but by other criteria. (Bruce Sterling’s imaginary future is ruled by “the polity,” a nanny state that rewards not wealth but personal hygiene: Society takes care of those who take care of themselves.)

Such an outcome sounds unthinkable in the current political climate, which is dominated by a low-tax, antigovernment ideology. But history is not over; ideologies may change. For all we know, the future may belong to the medical welfare state, a state whose slogan might be, “From each according to his ability, to each according to his needs.”
