Traffic

by Tom Vanderbilt

There is one solid bit of advice that could be dispensed regarding whether you should take a trip with the fictional Fred: Ride in the backseat (if he had one, that is). The fatality risk in the backseat is 26 percent lower than in the front. The backseat is safer than air bags. But you run the risk of offending Fred.

The Risks of Safety

Be wary then; best safety lies in fear.

—William Shakespeare, Hamlet

In the 1950s, when car fatalities in the United States were approaching their zenith, an article in the Journal of the American Medical Association argued that the “elimination of the mechanically hazardous features of the interior construction”—for example, metal dashboards and rigid steering columns—would prevent nearly 75 percent of the annual road fatalities, saving some 28,500 lives.

Car companies were once rightly castigated for trying to shift the blame for traffic fatalities to the “nut behind the wheel.” And in the decades since, in response to public outcry and the ensuing regulations, the insides of cars have been made radically safer. In the United States (and most other places), fewer people in cars die or are injured now than in the 1960s, even though more people drive more miles. But in an oft-repeated pattern with safety devices from seat belts to air bags, the actual drop in fatalities did not live up to the early hopes. Consider the so-called chimsil. The term is slang for “center high-mounted stop lamp” (CHMSL), meaning the third rear brake light that became mandatory on cars in the 1980s, after decades of study.

On paper at least, the chimsil sounded like a great idea. It would give drivers more information that the car ahead was braking. Unlike brake lights, which go from one shade of red to a brighter shade of red (some engineers have argued that an outright change in colors would make more sense), the chimsil would illuminate only during braking. Drivers scanning through the windshield of the car ahead of them to gauge traffic would have more information. Tests had shown that high-mounted lamps improved reaction times. Experts predicted that the lamps would help reduce certain types of crashes, particularly rear-end collisions. Early studies, based on a trial that equipped some cars in taxi fleets with the lights, indicated that these incidents could be cut by 50 percent. Later estimates, however, dropped the benefit to around 15 percent. Studies now estimate that the chimsil has “reached a plateau” of reducing rear-end crashes by 4.3 percent. This arguably justifies the effort and cost of having them installed, but the chimsil clearly has not had the effect for which its inventors had hoped.

Similar hopes greeted the arrival of the antilock braking system, or ABS, which helps avoid “locked brakes” and allows for greater steering control during braking, particularly in wet conditions. But problems arose. A famous, well-controlled study of taxi drivers in Munich, Germany, found that cars equipped with ABS drove faster, and closer to other vehicles, than those without. They also got into more crashes than cars without ABS. Other studies suggested that drivers with ABS were less likely to rear-end someone but more likely to be rear-ended by someone else.

Were drivers trading a feeling of greater safety for more risk? Perhaps they were simply swapping collisions with other vehicles for potentially more dangerous “single-vehicle road-departure” crashes—studies on test tracks have shown that drivers in ABS-equipped cars more often veered off the road when trying to avoid a crash than non-ABS drivers did. Other studies revealed that many drivers didn’t know how to use ABS brakes correctly. Rather than exploiting ABS to drive more aggressively, they may have been braking the wrong way. Finally, drivers with ABS may simply have been racking up more miles. Whatever the case, a 1994 report by the National Highway Traffic Safety Administration concluded that the “overall, net effect of ABS” on crashes—fatal and otherwise—was “close to zero.” (The reason why is still rather a mystery, as the Insurance Institute for Highway Safety concluded in 2000: “The poor early experience of cars with antilocks has never been explained.”)

There always seems to be something else to protect us on the horizon. The latest supposed silver bullet for traffic safety is electronic stability control, the rollover-busting technology that, it is said, can help save nearly ten thousand lives per year. It would be a good thing if it did, but if history is a guide, it will not.

Why do these changes in safety never seem to have the predicted impact? Is it just overambitious forecasting? The most troublesome possible answer, one that has been haunting traffic safety for decades, suggests that, as with the roads in Chapter 7, the safer cars get, the more risks drivers choose to take.

While this idea has been around in one form or another since the early days of the automobile—indeed, it was used to argue against railroad safety improvements—it was most famously, and controversially, raised in a 1976 article by Sam Peltzman, an economist at the University of Chicago. Describing what has since become known as the “Peltzman effect,” he argued that despite the fact that a host of new safety technologies—most notably, the seat belt—had become legally required in new cars, the roads were no safer. “Auto safety regulation,” he concluded, “has not affected the highway death rate.” Drivers, he contended, were trading a decrease in accident risk with an increase in “driving intensity.” Even if the occupants of cars themselves were safer, he maintained, the increase in car safety had been “offset” by an increase in the fatality rate of people who did not benefit from the safety features—pedestrians, bicyclists, and motorcyclists. As drivers felt safer, everyone else had reason to feel less safe.

Because of the twisting, entwined nature of car crashes and their contributing factors, it is exceedingly difficult to come to any certain conclusions about how crashes may have been affected by changes to any one variable of driving. The median age of the driving population, the state of the economy, changes in law enforcement, insurance factors, weather conditions, vehicle and modal mix, alterations in commuting patterns, hazy crash investigations—all of these things, and others, play their subtle part. In many cases, the figures are simply estimates.

This gap between expected and achieved safety results might be explained by another theory, one that turns the risk hypothesis rather on its head. This theory, known as “selective recruitment,” says that when a seat-belt law is passed, the pattern of drivers who switch from not wearing seat belts to wearing seat belts is decidedly not random. The people who will be first in line are likely to be those who are already the safest drivers. The drivers who do not choose to wear seat belts, who have been shown in studies to be riskier drivers, will be “captured” at a smaller rate—and even when they are, they will still be riskier.

Looking at the crash statistics, one finds that in the United States in 2004, more people not wearing their seat belts were killed in passenger-car accidents than those who were wearing belts—even though, if federal figures can be believed, more than 80 percent of drivers wear seat belts. It is not simply that drivers are less likely to survive a severe crash when not wearing their belts; as Leonard Evans has noted, the most severe crashes happen to those not wearing their belts. So while one can make a prediction about the estimated reduction in risk due to wearing a seat belt, this cannot simply be applied to the total number of drivers for an “expected” reduction in fatalities.
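The arithmetic behind selective recruitment can be made concrete with a small sketch. The numbers below are purely hypothetical—invented driver groups and crash rates, not figures from the book or any study—but they show the mechanism: if belts halve fatality risk and 80 percent of drivers wear them, a naive forecast applies that 80 percent uniformly; if instead the safest drivers buckle up first, the same overall belt-use rate yields a much smaller drop in fatalities.

```python
# Hypothetical illustration of "selective recruitment": why a per-crash
# risk reduction from seat belts cannot simply be applied across all
# drivers to predict the population-wide drop in fatalities.

BELT_RISK_FACTOR = 0.5  # belts roughly halve fatality risk in a severe crash

# Two invented driver groups: risky drivers have a much higher baseline
# fatal-crash rate (per 100,000 drivers) and are least likely to belt up.
groups = {
    "safe":  {"share": 0.8, "base_fatal_rate": 5.0},
    "risky": {"share": 0.2, "base_fatal_rate": 40.0},
}

def fatality_rate(belt_use_by_group):
    """Population fatality rate per 100,000, given belt use per group."""
    total = 0.0
    for name, g in groups.items():
        use = belt_use_by_group[name]
        # Belted share of the group faces halved risk; the rest, full risk.
        rate = g["base_fatal_rate"] * (use * BELT_RISK_FACTOR + (1 - use))
        total += g["share"] * rate
    return total

no_belts = fatality_rate({"safe": 0.0, "risky": 0.0})

# Naive prediction: 80 percent belt use, spread uniformly over everyone.
uniform = fatality_rate({"safe": 0.8, "risky": 0.8})

# Selective recruitment: same 80 percent overall (0.8*0.95 + 0.2*0.20),
# but the safest drivers adopted belts first.
selective = fatality_rate({"safe": 0.95, "risky": 0.20})

print(f"no belts:   {no_belts:.1f} deaths per 100,000")
print(f"uniform:    {uniform:.1f} (naive prediction)")
print(f"selective:  {selective:.1f} (riskier drivers stay unbelted)")
```

With these made-up numbers, uniform adoption would predict a drop from 12.0 to 7.2 deaths per 100,000, but selective adoption only reaches 9.3—same belt-use rate, smaller realized benefit, because the drivers most likely to crash are the ones still unbelted.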

Economists have a clichéd joke: The most effective car-safety instrument would be a dagger mounted on the steering wheel and aimed at the driver. The incentive to drive safely would be quite high. Given that you are twice as likely to die in a severe crash if you’re not wearing a seat belt, it seems that not wearing a seat belt is essentially the same as installing a dangerous dagger in your car.

And yet what if, as the economists Russell Sobel and Todd Nesbit ask, you had a car so safe you could usually walk away unharmed after hitting a concrete wall at high speed? Why, you would “race it at 200 miles per hour around tiny oval racetracks only inches away from other automobiles and frequently get into accidents.” This was what they concluded after tracking five NASCAR drivers over more than a decade’s worth of races, as cars gradually became safer. The number of crashes went up, they found, while injuries went down.

Naturally, this does not mean that the average driver, less risk-seeking than a race-car driver, is going to do the same. For one, average drivers do not get prize money; for another, race-car drivers wear flame-retardant suits and helmets. This raises the interesting, if seemingly outlandish, question of why car drivers, virtually alone among users of wheeled transport, do not wear helmets. Yes, cars do provide a nice metal cocoon with inflatable cushions. But in Australia, for example, head injuries among car occupants, according to research by the Federal Office of Road Safety, make up half the country’s traffic-injury costs. Helmets, cheaper and more reliable than side-impact air bags, would reduce injuries and cut fatalities by some 25 percent. A crazy idea, perhaps, but so were air bags once.

Seat belts and their effects are more complicated than allowed for by the economist’s language of incentives, which sees us all as rational actors making predictable decisions. I have always considered the act of wearing my seat belt not so much an incentive to drive more riskily as a grim reminder of my own mortality (some in the car industry fought seat belts early on for this reason). This doesn’t mean I’m immune from behavioral adaptation. Even if I cannot imagine how the seat belt makes me act more riskily, I can easily imagine how my behavior would change if, for some reason, I was driving a car without seat belts. Perhaps my ensuing alertness would cancel out the added risk.

Moving past the question of how many lives have been saved by seat belts and the like, it seems beyond doubt that increased feelings of safety can push us to take more risks, while feeling less safe makes us more cautious. This behavior may not always occur, we may do it for different reasons, we may do it with different intensities, and we may not be aware that we are doing it (or by how much); but the fact that we do it is why these arguments are still taking place. This may also explain why, as Peltzman has pointed out, car fatalities per mile still decline at roughly the same rate every year now as they did in the first half of the twentieth century, well before cars had things like seat belts and air bags.

         

In the first decade of the twentieth century, forty-seven men tried to climb Alaska’s Mount McKinley, North America’s tallest peak. They had relatively crude equipment and little chance of being rescued if something went wrong. All survived. By the end of the century, when climbers carried high-tech equipment and helicopter-assisted rescues were quite frequent, each decade saw the death of dozens of people on the mountain’s slopes. Some kind of adaptation seemed to be occurring: The knowledge that one could be rescued was either driving climbers to make riskier climbs (something the British climber Joe Simpson has suggested); or it was bringing less-skilled climbers to the mountain. The National Park Service’s policy of increased safety was not only costing more money, it perversely seemed to be costing more lives—which had the ironic effect of producing calls for more “safety.”

In the world of skydiving, the greatest mortality risk was once the so-called low-pull or no-pull fatality. Typically, the main chute would fail to open, but the skydiver would forget to trigger the reserve chute (or would trigger it too late). In the 1990s, U.S. skydivers began using a German-designed device that automatically deploys, if necessary, the reserve chute. The number of low- or no-pull fatalities dropped dramatically, from 14 in 1991 to 0 in 1998. Meanwhile, the number of once-rare open-canopy fatalities, in which the chute deploys but the skydiver is killed upon landing, surged to become the leading cause of death. Skydivers, rather than simply aiming for a safe landing, were attempting hook turns and swoops, daring maneuvers done with the canopy open. As skydiving became safer, many skydivers, particularly younger skydivers, found new ways to make it riskier.

The psychologist Gerald Wilde would call what was happening “risk homeostasis.” This theory implies that people have a “target level” of risk: Like a home thermostat set to a certain temperature, it may fluctuate a bit from time to time but generally keeps the same average setting. “With that reliable rip cord,” Wilde told me at his home in Kingston, Ontario, “people would want to extend their trip in the sky as often as possible. Because a skydiver wants to be up there, not down here.”

In traffic, we routinely adjust the risks we’re willing to take as the expected benefit grows. Studies, as I mentioned earlier in the book, have shown that cars waiting to make left turns against oncoming traffic will accept smaller gaps in which to cross (i.e., more risk) the longer they have been waiting (i.e., as the desire for completing the turn increases). Thirty seconds seems to be the limit of human patience for left turns before we start to ramp up our willingness for risk.

We may also act more safely as things get more dangerous. Consider snowstorms. We’ve all seen footage of vehicles slowly spinning and sliding their way down freeways. The news talks dramatically of the numbers of traffic deaths “blamed on the snowstorm.” But something interesting is revealed in the crash statistics: During snowstorms, the number of collisions, relative to those on clear days, goes up, but the number of fatal crashes goes
down.
The snow danger seems to cut both ways: It’s dangerous enough that it causes more drivers to get into collisions, and dangerous enough that it forces them to drive at speeds that are less likely to produce a fatal crash. It may also, of course, force them not to drive in the first place, which itself is a form of risk adjustment.

In moments like turning left across traffic, the risk and the payoff seem quite clear and simple. But do we behave consistently, and do we really have a sense of the actual risk or safety we’re looking to achieve? Are we always pushing it “to the max,” and do we even know what that “max” is? Critics of risk homeostasis have said that given how little humans actually know about assessing risk and probability, and given how many misperceptions and biases we’re susceptible to while driving, it’s simply expecting too much of us to think we’re able to hold to some perfect risk “temperature.” A cyclist, for example, may feel safer riding on the sidewalk instead of the street. But several studies have found that cyclists are more likely to be involved in a crash when riding on the sidewalk. Why? Sidewalks, though separated from the road, cross not only driveways but intersections—where most car-bicycle collisions happen. The driver, having already begun her turn, is less likely to expect—and thus to see—a bicyclist emerging from the sidewalk. The cyclist, feeling safer, may also be less on the lookout for cars.
