
Their point, of course, is not that policy makers never invest in costly policy innovation. They sometimes do, as exemplified by the inspiring, successful policies discussed in chapter 11. Yet the fact that the same examples are always trotted out suggests how rare such breakthroughs are. (Another successful change, the 1983 Social Security reform, was politically easier because it would "bite" only decades later.) The authors' structural point is that zero-credit policy making makes such investments risky and thus less probable.

The far-reaching effect of zero-credit policy making is perhaps best illustrated by the stimulus legislation of 2009, the American Recovery and Reinvestment Act. Since President Barack Obama desperately needed some Republican votes in the Senate to avert a filibuster, the Republican leadership had a great deal of bargaining power. It made a strategic decision not to cooperate with the administration on the bill, even though the bill contained some provisions that Republicans strongly favored. The leadership's theory was that if Republicans helped the bill to pass and the stimulus then succeeded, Obama would get all the credit, whereas if the stimulus failed, Republicans could tell the voters "we told you so."76 The bill ultimately passed; the assessment of the program's effects is controversial and ongoing. Although the stakes in this case were perhaps uniquely high, one could cite numerous other examples of the immobilizing politics of zero-credit policy making.

Another incentives-based source of policy failure is a feature of many administrative agencies or subagencies—what sociologist Philip Selznick in his classic study of the Tennessee Valley Authority described as "tunnel vision," or the propensity of administrative units, particularly those with a single mission, to narrow their cognitive focus so as to promote that mission and that mission only.77 (Chapter 3 discussed multi-mission agencies, noting there that OMB now assigns even single-mission agencies other, sometimes conflicting policy goals or constraints.) This narrow focus perversely blinds them to other factors that should be relevant to their missions. (Congressional committees and subcommittees, which by political design are also highly specialized to serve their members' electoral needs, exhibit—indeed, nurture—this same parochial tendency.) A recent Brookings Institution analysis sees this tunnel vision operating in certain environmental policies where "fuel efficiency and energy efficiency matter, but nothing else does. In effect, government officials are acting as if they are guided by a single mission myopia that leads to the exclusion of all concerns other than their agencies' mandates."78
A related phenomenon is for agencies to pursue their dominant mission at the expense of, or to the exclusion of, all the others.79

IRRATIONALITY

Rationality is one of the most contested ideas in social science and philosophy. But almost regardless of how one defines it, two things are clear. First, individual rationality can produce collective irrationality. Voters, for example, have a poor grasp of numbers and consistently overestimate the amount of government spending on key programs, the foreign-born share of the population, the number of illegal aliens in the country, the relative shares of the budget going to education and prisons, and the share of the population on welfare.80
Second, individuals often make irrational choices. This is no paradox, nor is it confined to the Prisoner's Dilemma or other game-theoretic situations in which individuals' inability to communicate with each other in order to further mutually advantageous cooperation may induce decisions that leave each of them worse off.* The more general transformation of individual rationality into collective irrationality is endemic to politics and thus to policy making. The brief shutdown of the federal government in 1995 and 1996 is a dramatic example. The product of tactical choices and familiar bargaining ploys that President Bill Clinton, House Speaker Newt Gingrich, and other congressional leaders considered eminently rational, the shutdown was egregiously irresponsible and accomplished nothing for the public (unless a brief boost in Clinton's power qualifies as useful). This judgment seems likely to apply to the October 2013 shutdown as well.

There are strong reasons to believe that collective choices are less rational than individual choices in competitive markets, although there are exceptions.

Markets are driven and disciplined by self-interested (broadly defined, as discussed earlier) decisions by individuals transacting voluntarily with one another and weighing costs and benefits at the margin. Absent coercion, fraud, or other market failures, these transactions cannot occur unless they make at least one of the parties better off and none worse off (known as a Pareto-superior move). And intense competition from other markets and actors means that market participants fail unless they become more efficient.

This account of market rationality does not deny that markets sometimes experience panics, bubbles, and other forms of herd mentality. But whereas markets severely punish irrationality, politics does not; indeed, it magnifies it. As Nobel Prize laureate Kenneth Arrow famously demonstrated, there is no agreed-upon social welfare function that can transform individual voters’ preferences into a collective decision without violating certain elementary rules of logic. (The actual policy significance of this “cycling” phenomenon is disputed.) The Pareto-superior condition is never possible in politics; public policies invariably make some citizens better off at the expense of other citizens who are made worse off. Government seldom competes against other providers; this monopolization (as Wolf also emphasized) encourages inefficient provision whatever logic might dictate.
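The cycling behind Arrow's result can be made concrete with a minimal, hypothetical example: three voters with perfectly transitive individual rankings whose pairwise majority votes are nonetheless intransitive. The sketch below is illustrative only; the voters and alternatives are assumptions, not drawn from the text.

```python
from itertools import combinations

# Three hypothetical voters, each with a fully transitive ranking of A, B, C.
rankings = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"{winner} beats {loser} 2-1")

# Prints: A beats B, C beats A, B beats C -- a cycle (A > B > C > A),
# so majority rule yields no stable collective ranking here, even though
# every individual voter is perfectly rational.
```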

Political actors—voters, politicians, officials, interest groups—are often influenced by strong emotions, self-deception, and other factors that may suppress, or cause them to deviate from, their rational interests. (Again, markets harshly penalize such deviations.) Perhaps the most pervasive of these factors is ignorance, particularly among voters. (I discuss policy makers' ignorance in the next chapter.) For more than fifty years, studies of how much voters know about the political system, their own representatives, and the important issues of the day have invariably found appalling levels of ignorance about the most basic facts. (The theological ignorance in America, perhaps the most religious of the advanced Western democracies, is equally remarkable.81) Reviewing these studies, Vanderbilt University political scientist Larry Bartels explains that a number of analysts, while conceding this inattention and ignorance, have dismissed these findings on various grounds: that voters' ignorance is actually rational in that they use "information shortcuts" such as party identification, endorsements by opinion leaders, and other cues to decide whom to vote for; that the most ignorant citizens don't bother to vote at all; that aggregating large numbers of voters substitutes the "wisdom of crowds" for the ignorance of many (Condorcet's jury theorem); that voters vote the same way they would if they were better informed; that they are well enough informed about how well they are doing to vote rationally; and that even if none of these explanations produces rational voting, their irrationality does not affect electoral outcomes. Bartels refutes each of these "rationalizations" of irrational voting behavior and shows that such behavior affects many important election outcomes.82
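The "wisdom of crowds" defense that Bartels confronts rests on Condorcet's jury theorem: if each voter independently picks the better of two options with probability greater than one-half, the probability that a majority does so approaches certainty as the electorate grows. A minimal sketch of that arithmetic follows; the competence and independence assumptions are the theorem's own, and they are precisely what the ignorance findings call into question.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, picks the better of two options."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 3))

# Roughly 0.55, 0.63, 0.84, 0.999: aggregation works only if individuals
# beat a coin flip. With p below 0.5, the same arithmetic drives the
# majority toward near-certain error as the electorate grows.
```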

Numerous other empirical studies by leading social scientists have established additional sources of predictable irrationality on the part of individuals, both as voters and in other settings. Four are of special interest. First, in widely cited experimental work that earned a Nobel Prize, Daniel Kahneman and Amos Tversky showed that individual decision making is commonly distorted by recurrent, recalcitrant cognitive patterns and logical errors—some forty-five of them! Some of these "heuristics and biases" include the "availability effect" (we tend to exaggerate phenomena that are easy to remember); "anchoring" (we tend to rely too heavily on one trait or piece of information); "loss aversion" (in assessing an objectively identical risk, we tend to strongly prefer avoiding losses over making gains); the "planning fallacy" (we tend to underestimate how long things will take even when we have prior experience with them); the "representativeness heuristic" (we assess the probability of an event according to how representative we think it is of some other event rather than by how likely it is); the "optimism bias" (we assume that we are less at risk of some negative outcome than others are); and "status quo bias" (we prefer the current state of affairs even when it is irrational to do so).83
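Loss aversion, in particular, has a standard formalization in Kahneman and Tversky's prospect theory. The sketch below is illustrative only: the parameter values (alpha of about 0.88 and a loss-aversion coefficient of about 2.25) are the commonly cited median estimates from their later experimental work, not figures given in this text, and the dollar amounts are assumptions.

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains, steeper for
    losses. Parameters are commonly cited median estimates; treat them
    as illustrative assumptions, not facts from this book."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

gain, loss = prospect_value(100), prospect_value(-100)
print(f"felt value of +$100: {gain:.1f}")        # ~57.5
print(f"felt value of -$100: {loss:.1f}")        # ~-129.5
print(f"loss/gain ratio: {abs(loss) / gain:.2f}")  # ~2.25: losses loom larger
```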
Other researchers have identified a "projection bias" (we exaggerate how much our future preferences will resemble our current ones).84
In a book aptly entitled How We Know What Isn't So, social psychologist Thomas Gilovich distinguishes cognitive distortions in everyday reasoning ("something out of nothing"; "too much from too little"; "seeing what we expect to see") from motivational and social ones ("seeing what we want to see"; "believing what we are told"; "the imagined agreement of others").85 Future research will surely uncover more such irrationalities, but how they actually interact and affect human decisions in the real world will likely remain controversial.86

In a second body of irrationality research, legal scholar Cass Sunstein and his research colleagues (including Kahneman) have shown that people often make what they call "predictably incoherent judgments" in a wide variety of situations, mainly because of two cognitive difficulties: they use category-bound thinking even when it leads them astray, and they cannot readily translate moral judgments into the metrics of numbers and years, as jury decisions and CBA often require.87 In a more counterintuitive finding, they also noted empirical support for a "group polarization" theory: when groups (such as juries, interest groups, and legislative committees) deliberate over a decision, their decisions tend to be more extreme versions of their predeliberation views.88 Sunstein and colleagues have also proposed that policy makers deploy "soft paternalism" in the form of disclosures, warnings, and defaults (which individuals can reject, albeit at some cost) in order to "nudge" the public toward more rational decisions on pensions, fuel economy, and the like.89 This approach assumes, of course, that officials will make systematically better choices in establishing these defaults despite two factors discussed above: their own biases and their poor information. It also assumes that the architecture of the nudge approach will better protect against the psychological sources of irrational choices than more intrusive regulatory forms would.90

A third area of irrationality research has established the phenomenon of "cultural cognition"—people's propensity to assess objective evidence in ways that maintain consistency with their preexisting cultural or ideological commitments, such as individualism/hierarchy or communitarianism/egalitarianism. Led by my Yale Law School colleague Dan Kahan, these interdisciplinary studies find that this bias affects the public's views of a large number of policy issues that pivot on assessments of risk.91
Through cleverly designed statistical studies, they also show that people on both ends of the ideological spectrum (not just those who possess what one writer dismisses as "the Republican brain"92) resist scientific findings that they think contradict their strongly held political or social values. They find no significant differences between liberals and conservatives in the amount of ideological bias they bring to controversial scientific questions—even among those on both sides who most engage in "cognitive reflection" rather than heuristic or intuitive forms of reasoning—but they find enough bias on both sides, as Kahan puts it, "for everyone to be troubled and worried."93
For every conservative skeptical of climate change, it seems, there is a liberal who is convinced of the repeatedly disproved link between vaccines and autism,94 who denies the health and environmental benefits of genetically modified foods,95 and who insists that public school teachers are underpaid.96
The fourth body of research, by social psychologist Jonathan Haidt, extends Kahan's findings by identifying six "moral modules"—care/harm; fairness/cheating; loyalty/betrayal; authority/subversion; sanctity/degradation; and liberty/oppression—that evolutionary struggles have bequeathed to us, that shape our political values (among other things), and that cause us to reject evidence and arguments that contradict our existing moral commitments, usually by interpreting them to accord with those commitments.97

Does this voter ignorance and irrationality produce failed public policies? Bryan Caplan, a George Mason University economist, argues strongly that it does.98 One can imagine several ways that voter ignorance and irrationality might be palliated. Legislators animated by a Burkean ethos of independence from, and fiduciary responsibility to, their constituents could vote in ways (hopefully more rational) that contradict constituents' desires. Irrational laws could still leave better-informed administrators enough discretion to improve on the legislators' irrational choices. Caplan shows, however, that even if these were realistic possibilities, they would not fully solve the problem of ignorance-driven policy.
