Traffic
by Tom Vanderbilt

One often hears, on television or the radio, such slogans as “Every fifteen minutes, a driver is killed in an alcohol-related crash” or “Every thirteen minutes, someone dies in a fatal car crash.” This is meant, presumably, to suggest not just the magnitude of the problem but the idea that a fatal crash can happen to anyone, anywhere. And it can. Yet even when these slogans leave out the words “on average,” as they often do, we still do not take it to mean that someone is actually dying, like clockwork, every fifteen minutes.

These kinds of averages obscure the startling extent to which risk on the road is not average. Take the late-night hours on weekends. How dangerous are they? In an average year, more people were killed in the United States from midnight to three a.m. on Saturday and Sunday than from midnight to three a.m. on all the other days of the week combined. In other words, just two nights accounted for a majority of the week’s deaths in that time period. On Sunday mornings from twelve a.m. to three a.m., there was not one driver dying every thirteen minutes but one driver dying every seven minutes. By contrast, on Wednesday mornings from three a.m. to six a.m., a driver was killed every thirty-two minutes.

Time of day has a huge influence on what kinds of crashes occur. The average driver faces the highest risk of a crash during the morning and evening rush hours, simply because the volume of traffic is highest. But fatal crashes occur much less often during rush hours; one study found that 8 of every 1,000 crashes that happened outside the peak hours were fatal, while during the rush hour the number dropped to 3 out of every 1,000. During the weekdays, one theory goes, a kind of “commuters’ code” is in effect. The roads are filled with people going to work, driving in heavy congestion (one of the best road-safety measures, with respect to fatalities), by and large sober. The morning rush hour in the United States is twice as safe as the evening rush hour, in terms of fatal and non-fatal crashes. In the afternoon, the roads get more crowded with drivers out shopping, picking up the kids or the dry cleaning. Drivers are also more likely to have had a drink or two. The “afternoon dip,” or the circadian fatigue that typically sets in around two p.m., also raises the crash risk.

What’s so striking about the massive numbers of fatalities on weekend mornings is the fact that so few people are on the roads, and so many—estimates are as high as 25 percent—have been drinking. Or think of the Fourth of July, one of the busiest travel days in the country and also, statistically, the most dangerous day to be on the road. It isn’t simply that more people are out driving, in which case more fatalities would be expected—and thus the day would not necessarily be more dangerous in terms of crash rate. It has more to do with what people are doing on the Fourth: Studies have shown there are more alcohol-related crashes on the Fourth of July than on the same days the week before or after—and, as it happens, many more than during any other holiday.

What’s the actual risk imposed by a drunk driver, and what should the penalty be to offset that risk? The economists Steven D. Levitt and Jack Porter have argued that legally drunk drivers between the hours of eight p.m. and five a.m. are thirteen times more likely than sober drivers to cause a fatal crash, and those with legally acceptable amounts of alcohol are seven times more likely. Of the 11,000 drunk-driving fatalities in the period they studied, the majority—8,000—were the drivers and the passengers, while 3,000 were other drivers (the vast majority of whom were sober). Levitt and Porter argue that the appropriate fine for drunk driving in the United States, tallying up the externalities that it causes, should be about $8,000.

Risk is not distributed randomly on the road. In traffic, the roulette wheel is loaded. Who you are, where you are, how old you are, how you are driving, when you are driving, and what you are driving all exert their forces on the spinning wheel. Some of these are as you might expect; some may surprise you.

         

Imagine, if you will, Fred, the pickup-driving divorced Montana doctor out for a spin after the Super Bowl who is mentioned in this chapter’s title. Obviously, Fred is a fictional creation, and even if he did exist there’d be no way to judge the actual risk of driving with him. But the little things about Fred, and the ways those things interact, all play their own part in building a profile of Fred’s risk on the road.

The most important risk factor, one that is subtly implicated in all the others, is speed. In a crash, the risk of dying rises with speed. This is common sense, and has been demonstrated in any number of studies. In a crash at 50 miles per hour, you’re fifteen times more likely to die than in a crash at 25 miles per hour—not twice as likely, as you might innocently expect from the doubling of the speed. The relationships are not proportional but exponential: Risk begins to accelerate much faster than speed. A crash when you’re driving 35 miles per hour causes a third more frontal damage than one where you’re doing 30 miles per hour.
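One way to see why risk outpaces speed: a crash's destructive energy scales with the square of velocity, so doubling speed already quadruples the energy to be absorbed, and the cited death risk rises faster still. A minimal sketch of that energy arithmetic (the function name is my own):

```python
def kinetic_energy_ratio(v_low, v_high):
    """Ratio of kinetic energy between two speeds.

    Kinetic energy scales with the square of velocity, so this is
    simply (v_high / v_low) ** 2; units cancel out.
    """
    return (v_high / v_low) ** 2

# Doubling speed from 25 to 50 mph quadruples the crash energy,
# yet the death risk cited in the text rises roughly fifteenfold.
print(kinetic_energy_ratio(25, 50))  # 4.0
```

The gap between the fourfold energy increase and the fifteenfold death risk is the sense in which risk is exponential rather than proportional.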

Somewhat more controversial is the relationship between speed and the potential for a crash. It is known that drivers who have more speeding violations tend to get into more crashes. But studies have also looked at the speeds of vehicles that crashed on a given road, compared them to the speeds of vehicles that did not crash, and tried to figure out how speed affects the likelihood that one will crash. (One problem is that it’s extremely hard to tell how fast cars in crashes were actually going.) Some rough guidelines have been offered. An Australian study found that for a mean speed—not a speed limit—of 60 kilometers per hour (about 37 miles per hour), the risk of a crash doubled for every additional 5 kilometers per hour.
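The Australian finding amounts to a simple exponential model: relative to traffic moving at a 60 km/h mean, crash risk doubles for every additional 5 kilometers per hour. A sketch of that doubling rule, assuming it holds beyond the speeds the study actually measured (the function name and default parameters are my own):

```python
def relative_crash_risk(speed_kmh, mean_kmh=60.0, doubling_step_kmh=5.0):
    """Relative crash risk under the doubling rule from the Australian
    study described in the text: risk doubles for each
    `doubling_step_kmh` driven above the mean traffic speed.
    Illustrative only; the study's rule may not extrapolate far
    beyond the speeds it observed.
    """
    return 2 ** ((speed_kmh - mean_kmh) / doubling_step_kmh)

print(relative_crash_risk(60))  # 1.0 (baseline: driving at the mean)
print(relative_crash_risk(65))  # 2.0
print(relative_crash_risk(70))  # 4.0
```

Note the compounding: at just 15 km/h over the mean, the model already puts a driver at eight times the baseline risk.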

In 1964, one of the first and most famous studies of crash risk based on speed was published, giving rise to the so-called Solomon curve, after its author, David Solomon, a researcher with the U.S. Federal Highway Administration. Crash rates, Solomon found after examining crash records on various sections of rural highway, seemed to follow a U-shaped curve: They were lowest for drivers traveling at the median speed and sloped upward for those going more or less than the median speed. Most strikingly, Solomon reported that “low speed drivers are more likely to be involved in accidents than relatively high speed drivers.”

Solomon’s finding, despite being almost a half century old, has become a sort of mythic (and misunderstood) touchstone in the speed-limit debate, a hoary banner waved by those arguing in favor of higher speed limits. It’s not the actual speed itself that’s the safety problem, they insist, it’s speed variance. If those slower drivers would just get up to speed, the roads would flow in smooth harmony. It’s not speed that kills, it’s variance. (This belief, studies have indicated, is most strongly held by young males—who are, after all, experts, given that they get in the most crashes.) And what causes the most variance? Speed limits that are too low!

Dear reader, much as I—as guilty as anyone of an occasional craving for speed—would like to believe this, the arguments against it are too compelling. For one, it assumes that the drivers who are going slow want to be driving slowly, and are not simply slowing for congested traffic, or entering a road from a turn, when they are suddenly hit by one of those drivers traveling the mean speed or higher. Solomon himself acknowledged (but downplayed) that these kinds of events might account for nearly half of the rear-end crashes at low speeds. Studies have found that a majority of rear-end crashes involved a stopped vehicle, which presumably had stopped for a good reason—and not to get in the way of the would-be speed maven behind him. Further, Gary Davis, an engineering professor at the University of Minnesota, proving yet again that statistics are one of the most dangerous things about traffic, has suggested there is a disconnect—what statisticians call an “ecological fallacy”—at work in speed-variance studies. Individual risk is conflated with the “aggregate” risk, even if in reality, he suggests, what holds for the whole group might not hold for individuals.

In pure traffic-engineering theory, a world that really exists only on computer screens and in the dreams of traffic engineers and bears little resemblance to how drivers actually behave, a highway of cars all flowing at the same speed is a good thing. The fewer cars you overtake, the lower your chance of hitting someone or being hit. But this requires a world without cars slowing to change lanes to enter the highway, because they are momentarily lost, or because they’re hitting the tail end of a traffic jam. In any case, if faster cars being put at risk by slower cars were the mythical problem some have made it out to be, highway carnage would be dominated by cars trying to pass—but in fact, one study found that in 1996, a mere 5 percent of fatal crashes involved two vehicles traveling in the same direction. A much more common fatal crash is a driver moving at high speed leaving the road and hitting an object that isn’t moving at all. That is a case where speed variance really does kill.

Let us move on to perhaps the oddest risk factor: Super Bowl Sunday. In one study, researchers compared crash data with the start and end times of all prior Super Bowl broadcasts. They divided all the Super Bowl Sundays into three intervals (before, during, and after). They then compared Super Bowl Sundays to non–Super Bowl Sundays. They found that in the before-the-game period, there was no discernible change in fatalities. During the game, when presumably more people would be off the roads, the fatal crash rate was 11 percent less than on a normal Sunday. After the game, they reported a relative increase in fatalities of 41 percent. The relative risks were higher in the places whose teams had lost.

The primary reason for the increased postgame risk is one that I have already discussed: drinking. Nearly twenty times more beer is drunk in total on Super Bowl Sunday than on an average day. Fred’s risk would obviously be influenced by how many beers he had downed (beer, at least in the United States, is what most drivers pulled over for DUIs have been drinking) and the other factors that determine blood alcohol concentration (BAC). Increases in crash risk, as a number of studies have shown, begin to kick in with as little as .02 percent BAC level, start to crest significantly at .05 percent, and spike sharply at .08 to .1 percent.
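The BAC thresholds cited above can be read as rough risk bands. The function below is purely illustrative: the band labels and cutoffs paraphrase the text and are not a legal or clinical standard.

```python
def bac_risk_band(bac):
    """Map a blood alcohol concentration (as a fraction, e.g. 0.05
    for .05 percent) to the rough crash-risk bands described in the
    text. Illustrative only; not a legal or medical standard.
    """
    if bac < 0.02:
        return "near baseline"
    elif bac < 0.05:
        return "risk begins to rise"
    elif bac < 0.08:
        return "risk crests significantly"
    else:
        return "risk spikes sharply"

print(bac_risk_band(0.03))  # risk begins to rise
```

The point of the banding is that measurable impairment starts well below the .08 percent limit most U.S. states use.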

Determining crash risk based on a person’s BAC depends, of course, on the person. A famous study in Grand Rapids, Michigan, in the 1960s (one that would help establish the legal BAC limits in many countries), which pulled over drivers at random, found that drivers who had a .01 to .04 percent BAC level actually had fewer crashes than drivers with a BAC of zero. This so-called Grand Rapids dip led to the controversial speculation that drivers who had had “just a few” were more aware of the risks of driving, or of getting pulled over, and so drove more safely; others argued that regular drinkers were more capable of “handling” a small intake.

The Grand Rapids dip has shown up in other studies, but it has been downplayed as another statistical fallacy—the “zero BAC” group in Michigan, for example, had more younger and older drivers, who are statistically less safe. Even critics of the study, however, noted that people who reported drinking with greater frequency had safer driving records than their teetotaler counterparts at every level of BAC, including zero. This does not mean that drinkers are better drivers per se, or that having a beer makes you a better driver. But the question of what makes a person a safe driver is more complicated than the mere absence of alcohol. As Leonard Evans notes, the effects of alcohol on driver performance are well known, but the effects of alcohol on driver behavior are not empirically predictable. Here is where the tangled paths of the cautious driver who has had a few, carefully obeying the speed limit, and the distracted sober driver, blazing over the limit and talking on the phone, intersect. Neither may be driving as well as they think they are, and one’s poorer reflexes may be mirrored by the other’s slower time to notice a hazard. Only one is demonized, but they’re both dangerous.

         

The second key risk is Fred himself. Not because he is Fred, for there is no evidence that people named Fred get in more crashes than people named Max or Jerry. It is the fact that Fred is male. Across every age group in the United States, men are more likely than women to be involved in fatal crashes—in fact, in the average year, more than twice as many men as women are likely to be killed in a car, even though there are more women than men in the country. The global ratio is even higher. Men do drive more, but after that difference is taken into account, their fatal crash rates are still higher.

According to estimates by researchers at Carnegie Mellon University, men die at the rate of 1.3 deaths per 100 million miles; for women the rate is .73. Men die at the rate of 14.51 deaths per 100 million trips, while for women it is 6.55. And crucially, men face .70 deaths per 100 million minutes, while for women the rate is .36. It may be true that men drive more, and drive for longer periods when they do drive, but this does not change the fact that for each minute they’re on the road, each mile they drive, and each trip they take, they are more likely to be killed—and to kill others—than women.
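The male-to-female gap in the Carnegie Mellon estimates holds across all three exposure measures. A quick check of the ratios, using the figures from the text:

```python
# Death rates per 100 million units of exposure,
# from the Carnegie Mellon estimates cited in the text: (men, women).
rates = {
    "miles":   (1.30, 0.73),
    "trips":   (14.51, 6.55),
    "minutes": (0.70, 0.36),
}

for measure, (men, women) in rates.items():
    # Men's rate divided by women's rate for each exposure measure.
    print(f"per 100 million {measure}: men die at "
          f"{men / women:.1f}x the rate of women")
```

However exposure is measured, the male rate comes out roughly two times the female rate, which is the author's point: driving more does not explain the gap away.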

It is tempting to use this information to make some point about whether men or women are “better drivers,” but that’s complicated by the fact that in the United States, women get into nonfatal crashes at a higher rate than men. This might be at least partially the result of men driving more on roads that are more prone to fatal crashes (e.g., rural high-speed two-lane roads). What can be argued is that men drive more aggressively than women. Men may or may not be better drivers than women, but they seem to die more often trying to prove that they are.

As a gender, men seem particularly troubled by two potent compounds: alcohol and testosterone. Men are twice as likely as women to be involved in an alcohol-related fatal crash. They’re more likely to drink, to drink more, and to drive more after they drink. On the testosterone side, men are less likely to wear seat belts; and by just about every measure, they drive more aggressively. Men do things like ride motorcycles more often than women, an activity that is twenty-two times more likely to result in death than driving a car. Male motorcyclists, from Vietnam to Greece to the United States, are less likely than women to wear a helmet. As we all know, alcohol and testosterone mix in unpleasant ways, so motorcyclists who have been drinking are less likely to wear helmets than those who have not, just as male drivers who have been drinking are less likely to wear seat belts than those who are sober.
