Cyber War: The Next Threat to National Security and What to Do About It
Authors: Richard A. Clarke, Robert K. Knake
One bright spot in this overall picture of data going out the door unhindered is what happened at Johns Hopkins University’s Applied Physics Laboratory (APL), outside Baltimore. APL does hundreds of millions of dollars of research every year for the U.S. government, from outer-space technology to biomedicine to secret “national security” projects. In 2009, APL discovered that huge amounts of data were being secretly exfiltrated off its network, and it stopped the theft. What is very telling is the way in which it was stopped. APL is one of the places that is really expert in cyber security
and has contracts with the National Security Agency. So one might think that they were able to get their intrusion systems to block the data theft. No. The only way in which these cyber experts were able to prevent their network from being pillaged was to disconnect the organization from the Internet. APL pulled the plug and isolated its entire network, making it an island in cyberspace. For weeks, APL’s experts went throughout the network, machine by machine, attempting to discover trapdoors and other malware. So the state of the art in really ensuring that your data does not get copied right off your network appears to be to make sure that you are in no way connected to anybody. Even that turns out to be harder than it may seem. In large organizations, people innocently make connections to their home computers, to laptops with wi-fi connections, to devices like photocopiers that have their own connectivity through the Internet. If you are connected to the Internet in any way, it seems, your data is already gone.
The really good cyber hackers, including the best government teams from countries such as the U.S. and Russia, are seldom stumped when trying to penetrate a network, even if its operators think the network is not connected in any way to the public Internet. Furthermore, the varsity teams do something that causes network defenders to sound like paranoids. They never leave any marks that they were there, except when they want you to know. Think of Kevin Spacey’s character’s line in the movie The Usual Suspects: “The greatest trick the devil ever pulled was convincing the world he didn’t exist.”
2. VEGAS, BABY
Another reason given for why there has not been a groundswell sufficient to address America’s vulnerability to cyber war is that
the “thought leadership” group in the field can’t agree on what to do. To test that hypothesis, I went in search of the “thought leaders” in what you might think was one of the more unlikely places to find them, Caesars Palace, in Las Vegas, in the 104-degree heat of August 2009.
Caesars is an incongruous site on any day, filled as it is with statues and symbols of an empire that fell over fifteen centuries ago scattered among blinking slot machines and blackjack tables. At Caesars, conference rooms with names like the Colosseum and the Palatine are not crumbling ruins, but are state-of-the-art meeting facilities, with white boards, flat screens, and flashing control consoles. Every summer for the past dozen years, when the more mainstream conventions wrap up the Vegas conference season and the room prices drop, a slightly different kind of crowd descends on the Strip. They are mainly men, usually in shorts and T-shirts, often with backpacks, BlackBerrys, and Mac laptops. Few of them drop into the fashion-forward Hugo Boss, Zegna, or Hermès shops in Caesars Forum, but they have almost all been to the Star Trek show over at the Hilton. The crowd are hackers, and in 2009 over four thousand of them showed up for the Black Hat conference, enough information technology skill in one place to wage cyber war on a massive scale.
Despite the name, Black Hat is actually now a gathering of “white hat,” or “ethical,” hackers, people who are or work for chief information officers (CIOs) or chief information security officers (CISOs) at banks, pharmaceutical firms, universities, government agencies, almost every imaginable kind of large (and many medium-sized) company. The name Black Hat derives from the fact that the highlights of the show every year are announcements by hackers that they’ve figured out new ways to make popular software applications do things they were not designed to do. The software companies used to think of the conference as a meeting of bad guys. Usually
the demonstrations show that software’s writers were not sufficiently security conscious, with the result that there is a way to penetrate a computer network without authorization, maybe even gain control of a network.
Microsoft was the butt of the conference’s hacking for years, and the executives in Redmond looked forward annually to Black Hat the way most of us anticipate a tax audit. In 2009 the attention turned to Apple, because of the increasing popularity of its products. The most-discussed demonstration concerned how to hack an iPhone with a simple SMS text message. As much as Bill Gates, or now maybe even Steve Jobs, might like it to be illegal for people to find and publicize the flaws in their products, it is not a crime to do so. A crime occurs only when a hacker uses the method he’s developed (the “exploit”) to take advantage of the flaw he’s discovered in the software (the “vulnerability”) to get into a corporate or government network (the “target”) where he is not authorized to be. Of course, once a vulnerability is publicized at Black Hat, or, worse yet, once an exploit is disseminated, anyone can attack any network running the flawed software.
I got into a little bit of trouble in 2002 for suggesting in my Black Hat keynote address that it was a good thing that hackers were discovering flaws in software. I was Special Advisor for Cyber Security to President Bush at the time. Someone, presumably in Redmond, thought it wrong for a nice conservative Republican White House to be encouraging illegal acts. Of course, what I actually said was that when the ethical hackers discovered flaws they should first tell the software maker, and then, if they got no response, call the government. Only if the software maker refused to fix the problem, I said, should the hackers go public. My logic was that if the hackers at Black Hat could discover the software flaws, China, Russia, and others probably could, too. Since those engaged in espionage and crime would find out anyway, it was better if everyone else knew.
Public knowledge of a “bug” in software would probably mean two things: (1) most sensitive networks would stop using the software until it got fixed, and (2) the software manufacturer would be shamed into fixing it, or pressured to do so by its paying customers, such as banks and the Pentagon.
Comments like that did not endear me to certain corporate interests. They also didn’t like it when, again in 2002, I was the keynote speaker at the annual RSA conference. The RSA conference is a gathering of about 12,000 cyber security practitioners. It is an occasion for many late-night parties. My keynote was early in the morning. I was standing backstage, thinking about how I needed more coffee. The band Kansas had been brought in and was playing loudly in the big hall. When they were done, I was supposed to walk out onstage through a cloud of theatrical smoke. You get the picture. Thinking of my caffeine needs, I noted shortly after starting the speech that a recent survey had shown that many large companies spent more money on free coffee for their employees and guests than they did on cyber security. To which I added, “If you’re a big company and spend more on coffee than on cyber security, you will be hacked.” Pause. Then go for it. “What’s more, if those are your priorities, you deserve to be hacked.” Dozens of irate telephone calls from corporate officials followed.
RSA is very corporate. Black Hat is a lot more fun. The thrill at Black Hat is going into a dimly lit ballroom and seeing someone unaccustomed to public speaking projecting lines of code on a presentation screen. Hotel staff servicing the conference always look quizzical when a meeting room erupts in laughter or applause, which happens a lot, because to the average person nothing is being said that is obviously humorous, praiseworthy, or for that matter even understandable. Perhaps the only thing that most Americans would generally follow if they wandered off course into Black Hat while looking for the roulette tables is the conference’s Hacker
Court, mock trials with judges who seek to establish what sort of hacking should really be considered unethical. Apparently hacking the hackers is not in that category. Most conferencegoers just accept that they should have their wi-fi applications turned off on their laptops. Signs throughout the vast conference area note that the wi-fi network should be considered “a hostile environment.” The warning is about as necessary as a placard at an aquarium noting that there is no lifeguard on duty in the shark tank.
In 2009, conference organizer Jeff Moss broke with tradition by scheduling one meeting at Black Hat Vegas that was not open to all attendees. Indeed, Moss, who dressed only in black during the conference, limited the attendance at that meeting to thirty people, instead of the usual 500 to 800 who crowd each of the six simultaneous sessions that take place five or more times a day during the conference. The invitation-only session was populated by a group of “old hands,” people who knew where the virtual bodies were buried in cyberspace: former government officials, current bureaucrats, chief security officers in major corporations, academics, and senior IT company officials. Moss’s question to them: What do we want the new Obama Administration to do to secure cyberspace? In a somewhat unorthodox move, the Obama Administration had placed Moss on the Homeland Security Advisory Council, so there was some chance that his reporting of the group’s consensus views would be heard, assuming the group could strike a consensus.
To their surprise, the group reached general accord on a few things, as well as polarized disagreement on others. Where the consensus emerged was around five points. First, the group was all in favor of returning to the days when the federal government spent a lot on cyber security research and development. The agency that had done so, and which had also funded the creation of the Internet, DARPA (the Defense Advanced Research Projects Agency), had essentially abandoned the Internet security field during the Bush (43)
Administration and instead focused attention on “netcentric warfare,” apparently oblivious to the fact that such combat depends upon cyberspace being secure.
Second, there was a slight majority in favor of “smart regulation” of some aspects of cyber security, like maybe federal guidelines for the Internet backbone carriers. The smart part was the idea of government regulators specifying goals, rather than micromanaging by dictating means. Most thought, however, that the well-entrenched interest groups in Washington would successfully lobby Congress to block any regulation in this area. Third, the group thought worrying about who did cyber attacks, the so-called attribution problem, was fruitless and that people should instead focus on “resilience.” Resilience is the concept that accepts that a disruptive or even destructive attack will occur and advocates planning in advance for how to recover from such devastation.
The fourth consensus observation was that there really should be no connectivity between utility networks and the Internet. The idea of separating “critical infrastructure” from the open-to-anyone Internet seemed pretty obvious to the seasoned group of information security specialists. In a ballroom down the hall, however, the Obama Administration’s ideas about a Smart Electric Grid were being flayed by several hundred other security specialists, precisely because the plans would make the electric power grid, that sine qua non for all the other infrastructure, even more vulnerable to unauthorized penetration and disruption from the anonymous creatures who prowl the Internet.
The final point on which the “wise men” (including three women) were able to generally agree was that nothing would happen to solve the woes of cyberspace security until someone showed what is so lacking now: leadership. This observation apparently did not seem ironic to the group, who, arguably, were the leaders of the elite information technology security specialists in the country. Yet they
looked to the Obama Administration for leadership in the area. At that point the Obama White House had already called over thirty people to see if they were interested in being the administration’s leader on cyberspace security. The search went on in Washington, as did the demonstrations down the hall of how to hack systems. As the “thought leaders” wandered out of the Pompeii Room somewhat dejected, hoping for leadership, they could hear erupting, probably from the Vesuvius Room, the sound of hundreds wailing as a hacker virtually sliced apart another iPhone. We did not rush over to see what application had been hacked. Instead, we went off to the blackjack tables, where the odds of our losing seemed less than those for American companies and government agencies hoping to stay safe in cyberspace.
3. PRIVACY AND THE R WORD
When both the left and the right disagree with your solution to a problem, you know two things: (1) you are probably on the correct path, and (2) you stand almost no chance of getting your solution adopted. Many of the things that have to be done to reduce America’s vulnerability to cyber war are anathema to one or the other end of the political spectrum. That is why they have not been embraced thus far.
I will discuss the details of what might be done in the next chapter, but I can tell you now that some of the ideas will require regulation and some will have the potential, if abused, to violate privacy. In Washington, one might as well advocate random forced abortions as suggest new regulation or create any greater privacy risks.
My position on regulation is that it is neither good nor bad inherently; it depends upon what the regulation says. Complex, 1960s-style federal regulations generally serve only the Washington law
firms where they are written, and where policies to avoid them are devised at $1,000 an hour. “Smart regulation,” as discussed at Black Hat, articulates an end state and allows the regulated to figure out how best to get to it. Regulation that puts a U.S. company at an economic disadvantage to a foreign competitor is usually unwise, but a regulatory even playing field that passes on minimal costs to users does not seem to me to be one of the works of Satan. Regulations where compliance is not audited or enforced are worthless, almost as troubling as regulations requiring the hovering presence of federal officials. Third-party audits and remote compliance verification generally seem like sensible approaches. Refusal to regulate, or audit, or enforce, often results in things like the 2008 market crash and recession, or lead paint in children’s toys. Overregulation sometimes creates artificially high consumer prices and requirements that do little or nothing to solve the original problem, and suppresses creativity and innovation.