President Bush told members of his administration not to brag or gloat
about the downfall of the Soviet Union, an event with myriad causes that Mikhail Gorbachev had unintentionally but peacefully overseen. General Colin Powell ignored those instructions at the ceremony in Omaha marking the end of the Strategic Air Command. “
The long bitter years of the Cold War are over,” Powell said. “America and her allies have won – totally, decisively, overwhelmingly.”
The sociologist Charles B. Perrow began his research on dangerous technologies in August 1979, after the partial meltdown of the core at the Three Mile Island nuclear power plant. In the early minutes of the accident, workers didn't realize that the valves on the emergency coolant pipes had mistakenly been shut – one of the indicator lights on the control panel was
hidden by a repair tag. Perrow soon learned that similar mistakes had occurred during the operation of other nuclear power plants. At a reactor in Virginia, a worker cleaning the floor got
his shirt caught on the handle of a circuit breaker on the wall. He pulled the shirt off it, tripped the circuit breaker, and shut down the reactor for four days.
A lightbulb slipped out of the hand of a worker at a reactor in California. The bulb hit the control panel, caused a short circuit, turned off sensors, and made the temperature of the core change so rapidly that a meltdown could have occurred. After studying a wide range of “
trivial events in nontrivial systems,” Perrow concluded that human error wasn't responsible for these accidents. The real problem lay deeply embedded within the technological systems, and it was impossible to solve: “
Our ability to organize does not match the inherent hazards of some of our organized activities.” What appeared to be the rare exception, an anomaly, a one-in-a-million accident, was actually to be expected. It was normal.
Perrow explored the workings of high-risk systems in his book Normal Accidents, focusing on the nuclear power industry, the chemical industry, shipping, air transportation, and other industrial activities that could harm a large number of people if something went wrong. Certain patterns and faults seemed common to all of them. The most dangerous systems had elements that were “
tightly coupled” and interactive. They didn't function in a simple, linear way, like an assembly line. When a problem arose on an assembly line, you could stop the line until a solution was found. But in a tightly coupled system, many things occurred simultaneously – and they could prove difficult to stop. If those things also interacted with each other, it might be hard to know exactly what was happening when a problem arose, let alone know what to do about it. The complexity of such a system was bound to bring surprises. “
No one dreamed that when X failed, Y would also be out of order,” Perrow gave as an example, “and the two failures would interact so as to both start a fire and silence the fire alarm.”
Dangerous systems usually required standardized procedures and some form of centralized control to prevent mistakes. That sort of management was likely to work well during routine operations. But during an accident, Perrow argued, “
those closest to the system, the operators, have to be able to take independent and sometimes quite creative action.” Few bureaucracies were flexible enough to allow both centralized and decentralized decision making, especially in a crisis that could threaten hundreds or thousands of lives. And the large bureaucracies necessary to run high-risk systems usually resented criticism, feeling threatened by any challenge to their authority. “
Time and time again, warnings are ignored, unnecessary risks taken, sloppy work done, deception and downright lying practiced,” Perrow found. The instinct to blame the people at the bottom not only protected those at the top, it also obscured an underlying truth. The fallibility of human beings guarantees that no technological system will ever be infallible.
•   •   •
AFTER SERVING AS A CONSULTANT to the Joint Chiefs of Staff on strategic nuclear policy, Scott D. Sagan applied “normal accident” theory to the workings of the American command-and-control system during the
Cuban Missile Crisis. According to Sagan, now a professor of political science at Stanford University, the crisis was the most severe test of that system during the Cold War, “
the highest state of readiness for nuclear war that U.S. military forces have ever attained and the longest period of time (thirty days) that they have maintained an alert.” Most historians attributed the peaceful resolution of the crisis to decisions made by John F. Kennedy and Nikita Khrushchev – to the rational behavior of leaders controlling their military forces. But that sense of control may have been illusory, Sagan argued in The Limits of Safety, and the Cuban Missile Crisis could have ended with a nuclear war, despite the wishes of Khrushchev and Kennedy.
With hundreds of bombers, missiles, and naval vessels prepared to strike, the risk of accidents and misunderstandings was ever present. At the height of the confrontation, while Kennedy and his advisers were preoccupied with the Soviet missiles in Cuba,
an Atlas long-range missile was test-launched at Vandenberg Air Force Base, without the president's knowledge or approval. Other missiles at Vandenberg had already been placed on alert with nuclear warheads – and the Soviet Union could have viewed the Atlas launch as the beginning of an attack. The Jupiter missiles in Turkey were an issue of great concern to Secretary of Defense Robert McNamara throughout the crisis. McNamara ordered American troops to sabotage the missiles if Turkey seemed ready to launch them. But he was apparently unaware that nuclear weapons had been loaded onto fighter planes in Turkey. The control of those weapons was “
so loose, it jars your imagination,” Lieutenant Colonel Robert B. Melgard, the commander of the NATO squadron, told Sagan. “
In retrospect,” Melgard said, “there were some guys you wouldn't trust with a .22 rifle, much less a thermonuclear bomb.”
During one of the most dangerous incidents, Major Charles Maultsby, the pilot of an American U-2 spy plane, got lost and inadvertently strayed into Soviet airspace. His mistake occurred on October 27, 1962 – the same day as the Atlas missile launch and the shooting down of a U-2 over Cuba. Maultsby was supposed to collect air samples above the North Pole, seeking radioactive evidence of a Soviet nuclear test. But the flight path was new, the aurora borealis interfered with his attempt at celestial navigation, and Maultsby soon found himself flying over Siberia, pursued by Soviet fighter planes. The U-2 ran out of fuel, and American fighters took off to escort Maultsby back to Alaska. Under the DEFCON 3 rules of engagement, the American fighter pilots had the authority to fire their atomic antiaircraft missiles and shoot down the Soviet planes. A dogfight between the two air forces was somehow avoided, the U-2 landed safely – and McNamara immediately halted the air sampling program. Nobody at the Pentagon had considered the possibility that these routine U-2 flights could lead to the use of nuclear weapons.
America's command-and-control system operated safely during the crisis, Sagan found, and yet “
numerous dangerous incidents . . . occurred despite all the efforts of senior authorities to prevent them.” He'd long believed that the risk of nuclear weapon accidents was remote, that nuclear weapons had been “
a stabilizing force” in international relations, reducing the risk of war between the United States and the Soviet Union. “
Nuclear weapons may well have made deliberate war less likely,” Sagan now thought, “but the complex and tightly coupled nuclear arsenal we have constructed has simultaneously made accidental war more likely.” Researching The Limits of Safety left him feeling pessimistic about our ability to control high-risk technologies. The fact that a catastrophic accident with a nuclear weapon has never occurred, Sagan wrote, can be explained less by “good design than good fortune.”
•   •   •
THE TITAN II EXPLOSION at Damascus was a normal accident, set in motion by a trivial event (the dropped socket) and caused by a tightly coupled, interactive system (the fuel leak that raised the temperature in the silo, making an oxidizer leak more likely). That system was also overly complex (the officers and technicians in the control center couldn't determine what was happening inside the silo). Warnings had been ignored, unnecessary risks taken, sloppy work done. And crucial decisions were made by a commanding officer, more than five hundred miles from the scene, who had little firsthand knowledge of the system. The missile might have exploded no matter what was done after its stage 1 fuel tank began to
leak. But to blame the socket, or the person who dropped it, for that explosion is to misunderstand how the Titan II missile system really worked. Oxidizer leaks and other close calls plagued the Titan II until the last one was removed from a silo, northwest of Judsonia, Arkansas, in June 1987. None of those leaks and accidents led to a nuclear disaster. But if one had, the disaster wouldn't have been inexplicable or hard to comprehend. It would have made perfect sense.
The nuclear weapon systems that Bob Peurifoy, Bill Stevens, and Stan Spray struggled to make safer were also tightly coupled, interactive, and complex. They were prone to “common-mode failures” – one problem could swiftly lead to many others. The steady application of high temperature to the surface of a Mark 28 bomb could disable its safety mechanisms, arm it, and then set it off. “
Fixes, including safety devices, sometimes create new accidents,” Charles Perrow warned, “and quite often merely allow those in charge to run the system faster, or in worse weather, or with bigger explosives.” Perrow was not referring to the use of sealed-pit weapons during SAC's airborne alerts. But he might as well have been. Promoted as being much safer than the weapons they replaced, the early sealed-pit bombs posed a grave risk of accidental detonation and plutonium scattering. Normal accident theory isn't a condemnation of modern technological systems. But it calls for more humility in how we design, build, and operate them.
The title of an influential essay on the role of technology in society asked the question: “
Do Artifacts Have Politics?” According to its author, Langdon Winner, the answer is yes – the things that we produce are not only shaped by social forces, they also help to mold the political life of a society. Some technologies are flexible and can thrive equally well in democratic or totalitarian countries. But Winner pointed to one invention that could never be managed with a completely open, democratic spirit: the atomic bomb. “
As long as it exists at all, its lethal properties demand that it be controlled by a centralized, rigidly hierarchical chain of command closed to all influences that might make its workings unpredictable,” Winner wrote. “The internal social system of the bomb must be authoritarian; there is no other way.”
Secrecy is essential to the command and control of nuclear weapons. Their technology is the opposite of open-source software. The latest warhead designs can't be freely shared on the Internet, improved through anonymous collaboration, and productively used without legal constraints. In the years since Congress passed the Atomic Energy Act of 1946, the design specifications of American nuclear weapons have been “
born secret.” They are not classified by government officials; they're classified as soon as they exist. And intense secrecy has long surrounded the proposed uses and deployments of nuclear weapons. It is intended to keep valuable information away from America's enemies. But an absence of public scrutiny has often made nuclear weapons more dangerous and more likely to cause a disaster.
Again and again, safety problems were hidden not only from the public but also from the officers and enlisted personnel who handled nuclear weapons every day. The strict, compartmentalized secrecy hid safety problems from the scientists and engineers responsible for weapon safety. Through the Freedom of Information Act, I obtained a document that listed the “
Accidents and Incidents Involving Nuclear Weapons” from the summer of 1957 until the spring of 1967. It was 245 pages long. It gave brief accounts of the major Broken Arrows during that period. It also described hundreds of minor accidents, technical glitches, and seemingly trivial events:
a Genie antiaircraft missile released from a fighter plane by mistake and dropped onto a weapon trailer;
a Boar missile crushed by the elevator of an aircraft carrier;
a Mark 49 warhead blown off a Jupiter missile when explosive bolts detonated due to corrosion;
smoke pouring from a W-31 warhead atop a Nike missile after a short circuit;
the retrorockets of a Thor missile suddenly firing at a launch site in Great Britain and startling the crew;
a Mark 28 bomb emitting strange sounds, for reasons that were never discovered. I shared the document with Bob Peurifoy and Bill Stevens, who'd never seen it. Both were upset after reading it. The Defense Atomic Support Agency had never told them about hundreds of accidents.
The United States was often more successful at keeping secrets from its own weapon designers than at keeping them from the Soviet Union. Beginning with the Soviet infiltration of the Manhattan Project, through
the John Walker spy ring – which from the late 1960s until 1985 provided about a million documents on the Pentagon's war plans, codes, and submarine technology to the Soviets – the leadership in the Kremlin knew a lot more about the nuclear capabilities of the United States than the American people were ever allowed to know. One of the most important secrets of the Cold War was considered
so secret that the president of the United States wasn't allowed to know it. Harry Truman was deliberately never told that Army cryptologists had broken Soviet codes and deciphered thousands of messages about espionage within the United States.
But the Soviet Union learned the secret, when one of its spies, the British double agent Kim Philby, was given a tour of the Army's Signal Intelligence Service headquarters.
The need to protect national security has long been invoked as a justification for hiding information that is merely embarrassing. “
Secrecy is a form of government regulation,” a Senate commission, headed by Daniel Patrick Moynihan, said in 1997. “What is different with secrecy is that the public cannot know the extent or the content of the regulation.” To this day, the classification decisions at the Department of Defense and the Department of Energy have an arbitrary, often Kafkaesque quality.
Cold War documents that were declassified in the 1990s were later reclassified, making it illegal to possess them, even though the federal government once released them.