Cyber War: The Next Threat to National Security and What to Do About It


Authors: Richard A. Clarke, Robert K. Knake


It’s axiomatic that there is no single measure (or, as many in the Pentagon like to say, in a nod to the cowboy known as the Lone Ranger, no “silver bullet”) that could secure U.S. cyberspace. There may, however, be a handful of steps that would protect enough of the key assets, or at least throw doubt into the mind of a potential attacker, by making it very difficult to stage a successful large-scale cyber assault on America.

Protecting every computer in the U.S. from cyber attack is hopeless, but it may be possible to harden the important networks that a nation-state attacker would target, and to harden them enough that no attack could disable our military’s ability to respond or severely undermine our economy. Even if our defense is not perfect, these hardened networks may be able to survive sufficiently, or bounce back quickly enough, that the damage done by an attack would not be crippling. If we can’t defend every major system, what do we protect? There are three key components of U.S. cyberspace that must be defended, or, to borrow another phrase from nuclear strategy, a “triad.”

THE DEFENSIVE TRIAD

Our Defensive Triad strategy would be a departure from what Clinton, Bush, and now Obama have done. Clinton in his National Plan and Bush in his National Strategy both sought to have every critical infrastructure defend itself from cyber attack. There were eventually eighteen industries identified as critical infrastructures, ranging from electric power and banking to food and retail. As previously noted, all three Presidents “eschewed regulation” as a means of reducing cyber vulnerabilities. Little happened. Bush, in the last of his eight years in office, approved an approach to cyber war that largely ignored the privately owned and operated infrastructures. It focused on defending government systems and on creating a military Cyber Command. Obama is implementing the Bush plan, including the military command, with little or no modification to date.

The Defensive Triad Strategy would use federal regulation as a major tool to create cyber security requirements, and it would, at least initially, focus defensive efforts on only three sectors.

First is the backbone. As noted in chapter 3, there are hundreds of Internet service provider companies, but only a half dozen or so large ISPs provide what is called the backbone of the Internet. They include AT&T, Verizon, Level 3, Qwest, and Sprint. These are the “trunks,” or Tier 1 ISPs, meaning that they can connect directly to most other ISPs in the country. These are the companies that own the “big pipes,” thousands of miles of fiber-optic cable running across the country, into every corner of the nation, and hooking up with undersea fiber-optic cables to connect to the world. Over 90 percent of Internet traffic in the U.S. moves on these Tier 1’s, and it is usually impossible to get to anyplace in the U.S. without traversing one of these backbone providers. So, if you protect the Tier 1’s, you are covering most of the Internet infrastructure in the U.S., as well as other parts of cyberspace.

To attack most private-sector and government networks, you generally have to connect to them over the Internet and specifically, at some point, over the backbone. If you could catch the attack entering the backbone, you could stop it before it got to the network it was going to attack. If you did that, you would not have to worry as much about hardening tens of thousands of potential targets for cyber attack. Think about it this way: if you knew someone from New Jersey was going to drive a truck bomb into a building in Manhattan, you could defend every important building on the island (have fun getting agreement on which ones those would be), or you could inspect all trucks before they went on one of the fourteen bridges or into the four tunnels that connect to the island.

Inspecting all the Internet traffic about to enter the backbone theoretically poses two significant problems, one technical and one of policy. The technical problem is, simply, this: there is a lot of traffic and no one wants you slowing it down to look for malware or attack scripts. The policy problem is that no one wants you reading their e-mails or webpage requests.

The technical issue can be overcome with existing technology. As speeds increase, scanning could introduce delay if the scanning technology failed to keep pace. Today, however, several companies have demonstrated hardware/software combinations that can scan what moves on the Internet, the small packets of ones and zeros that combine to make an e-mail or webpage. The scanning can be done so fast that it introduces no measurable delay in the packets’ speeding down the fiber-optic line. And it is not just the “to” and “from” lines on the packets, the so-called headers, that would be examined, but the data level, where the malware would be. This capability is described as “deep-packet inspection,” the speed is called “line rate,” and the absence of delay is called “no latency.” We can now do deep-packet inspection at line rate with no latency. So the technical hurdle has been met, at least for now.

The policy problem can also be solved. We do not want the government or even an ISP reading our e-mails. The system of deep-packet inspection proposed here would be fully automated. It would not be looking for keywords, but only at the payload to see if there are predetermined patterns of ones and zeros that match up with known attack software. It’s looking for signatures. If it finds an attack, it could just “black hole” the packets, dump them into cyber oblivion, or it could quarantine them, put them aside for analysis. For Americans to be satisfied that such a deep-packet inspection system were not Big Brother spying on us, it would have to be run by the Tier 1 ISPs themselves and not by the government. Moreover, there would have to be rigorous oversight by an active Privacy and Civil Liberties Protection Board to ensure that neither the ISPs nor the government was illegally spying on us.
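The automated matching described above can be illustrated with a minimal sketch. This is not a real DPI engine (production systems do this in hardware at line rate); the signature names and byte patterns here are hypothetical stand-ins for the kind of known-attack fingerprints the scanners would carry.

```python
# Hypothetical signature database: name -> byte pattern from a known attack.
# Real signature sets come from security vendors; these are illustrative only.
SIGNATURES = {
    "worm-probe":   b"\x90\x90\x4d\x5a",
    "slammer-like": b"\x04\x01\x01\x01\x01",
}

def inspect(payload: bytes, quarantine: list) -> bool:
    """Return True if the packet may be forwarded, False if it is dropped.

    The scan looks at the payload (data level), not just the headers,
    and contains no keyword search -- only pattern matching.
    """
    for name, pattern in SIGNATURES.items():
        if pattern in payload:                   # payload-level match
            quarantine.append((name, payload))   # set aside for analysis
            return False                         # "black hole" the packet
    return True

quarantined = []
assert inspect(b"GET /index.html", quarantined) is True      # clean traffic passes
assert inspect(b"\x04\x01\x01\x01\x01junk", quarantined) is False  # attack dropped
```

The design point the chapter makes is visible even in this toy: the function never interprets the content of a clean packet; it either finds a known-bad pattern or forwards the bytes untouched.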

The idea of putting deep-packet inspection systems on the backbone does not create the risk of government spying on us. That risk already exists. As we saw with the illegal wiretapping in the Bush Administration, if the checks and balances in the system fail, the government can already improperly monitor citizens. That is a major concern and needs to be prevented by real oversight mechanisms and tough punishment for those who break the law. Our nation’s strong belief in privacy rights and civil liberties is not incompatible with what we need to do to defend our cyberspace. Giving guns to police raises the possibility that some officers may, on rare occasions, be involved in unjust shootings, but we recognize that we need armed police to defend us, and we work hard to ensure that unjust shootings are prevented. So, too, we can deploy deep-packet inspection systems on Internet backbone ISPs, recognizing that we need them there to protect us, while making sure that they do not get misused.

How would such a system get deployed? The deep-packet inspection systems would be placed where fiber-optic cables come up out of the ocean and enter the U.S., at “peering points,” where the Tier 1 ISPs connect to each other and the smaller networks, and at various other points on the Tier 1 networks. The government, perhaps Homeland Security, would probably have to pay for the systems, even though they would be run by the ISPs and maybe systems integrator companies. The signatures of the malware that the black box scanners would look for would come from Internet security companies such as Symantec and McAfee, which have elaborate global systems to look for malware. The ISPs and government agencies could also provide signatures.

The black box inspectors would have to be connected to each other on a closed network, what is called “out-of-band communications” (not on the Internet), so that they could be updated quickly and reliably even if the Internet were experiencing difficulties. Imagine that a new piece of attack software enters into cyberspace, one that no one has ever seen before. This “Zero Day” malware begins to cause a problem by attacking some sites. The deep-packet inspection system would be tied into Internet security companies, research centers, and government agencies that are looking for Zero Day attacks. Within minutes of the malware being seen, its signature would be flashed out to the scanners, which would start blocking it and would contain the attack.
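The update loop described above, a new Zero Day signature flashed out to every scanner within minutes, can be sketched as follows. All the names are illustrative; in practice the `push_signature` traffic would ride the closed, out-of-band network, not the Internet itself.

```python
class Scanner:
    """A stand-in for one deep-packet inspection box on the backbone."""
    def __init__(self):
        self.signatures = set()

    def update(self, signature: bytes):
        self.signatures.add(signature)

    def blocks(self, payload: bytes) -> bool:
        return any(sig in payload for sig in self.signatures)

def push_signature(scanners, signature: bytes):
    """Fan a newly identified signature out to every inspector.

    This is the step that would run over out-of-band communications,
    so it works even while the Internet is under attack.
    """
    for s in scanners:
        s.update(signature)

fleet = [Scanner() for _ in range(3)]
zero_day = b"\xde\xad\xbe\xef"                       # hypothetical new malware bytes
assert not any(s.blocks(zero_day) for s in fleet)    # unknown at first: it gets through
push_signature(fleet, zero_day)                      # minutes after discovery
assert all(s.blocks(zero_day) for s in fleet)        # now blocked everywhere
```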

A precursor to this kind of deep-packet inspection system is already being deployed. Verizon and AT&T can, at some locations, scan for signatures that they have identified, but they have been reluctant to “black hole” (or kill) malicious traffic because of the risk that they might be sued by customers whose service is interrupted. The carriers would probably win any such suit, because their service-level agreements (SLAs) with their customers usually state that they have the right to deny service if the customer’s activity is illegal or disruptive to the network. Nonetheless, because of their lawyers’ typical abundance of caution, the companies are doing less than they could to secure cyberspace. Legislation or regulation is probably needed to clarify the issue.

The Department of Homeland Security’s “Einstein” system, discussed in chapter 4, has been installed at some of the locations where government departments connect to the Tier 1 ISPs. Einstein only monitors government networks. The Defense Department has a similar system at the sixteen locations where the unclassified DoD intranet connects to the public Internet.

A more advanced system, with higher speed capacity, more memory and processing capabilities, and out-of-band connectivity, could help to minimize or deter a large-scale cyber attack if it were broadly deployed to protect not just the government, but the backbone on which all networks rely. By defending the backbone in this way, we should be able to stop most attacks against our key government and private-sector systems. The independent Federal Communications Commission has the authority today to issue regulations requiring the Tier 1 ISPs to establish such a protective system. The Tier 1’s could pass along the costs to their customers and to smaller ISPs that peer with them. Alternatively, Congress could appropriate funds for some or all of the system. So far, the government is only beginning to move in this direction, and then only to protect itself, not the private-sector networks on which our economy, government, and national security rely.

ISPs should also be required to do more to keep our nation’s portion of the cyber ecosystem clean. Ed Amoroso, the chief security officer at AT&T, told me that his security operations center watches as computers that have been taken over by a botnet spew out DDoS traffic and spam. They know which subscribers are infected, but they don’t dare inform the customers (much less cut off their access) for fear that customers would switch providers and try to sue them for violating their privacy. That equation needs to be stood on its head. ISPs should be required to inform customers on the network when data shows that their computers have been made part of a botnet. ISPs should be required to shut off access if customers do not respond after being notified. They should also be required to provide free antivirus software to their subscribers, as many now do because it helps them manage their bandwidth better; and subscribers should be required to use it (or whatever antivirus software they choose). We don’t let car manufacturers sell cars without seat belts, and in most states we don’t let people drive unless they are wearing them. The same logic should apply on the Internet, because poor computer security by an individual creates a national security problem for us all.

In addition to the Tier 1 carriers screening Internet traffic at the packet level for known malware, blocking those packets that match previously identified attacks, related steps could be added to strengthen the system. First, with relatively little investment of money and time, software could be developed to identify “morphed malware.” The software would look for slight variations on known attack signatures, changes that attackers might use in attempts to slip past the deep-packet inspection of previously identified hacks. Second, in addition to the Tier 1 ISPs looking for malware, government and large, regulated commercial institutions such as banks would also contract with hosting and data centers to do deep-packet inspection. At a handful of large hosting data centers scattered around the country, the fibers of the Tier 1 ISPs come together to do switching among the networks. At these locations, some large institutions also have their own servers, locked behind fencing in row upon row of blinking equipment or stashed in highly secured rooms. The operators of these centers can screen for known malware as a second level of defense. Moreover, the data center operators or IT security firms can also look at data after it has passed by. The data centers can provide managed security services, looking for anomalous activity that might be caused by previously unidentified malware. Unlike the attempts to block known malware as it comes in, the managed security services would look for patterns of suspicious behavior and anomalous activity of data packets over time. By doing that, they may be able to spot more complicated two-step attacks and new Zero Day malware. That new malware would then be added to the list of things to be blocked. Searches could be performed for locations in the data banks where the new malware had gone, perhaps allowing the system to stop large-scale exfiltration of data.
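The “morphed malware” idea, catching slight variations on a known signature that exact matching would miss, can be sketched with a sliding window and a small Hamming-distance budget. This is one simple illustrative approach, not how any particular vendor implements it; the signature bytes are made up.

```python
def hamming(a: bytes, b: bytes) -> int:
    """Count positions where two equal-length byte strings differ."""
    return sum(x != y for x, y in zip(a, b))

def fuzzy_match(payload: bytes, signature: bytes, max_diff: int = 2) -> bool:
    """True if some window of the payload is within max_diff bytes of the
    known signature -- tolerating the small edits an attacker might make
    to slip past exact deep-packet inspection."""
    n = len(signature)
    return any(hamming(payload[i:i + n], signature) <= max_diff
               for i in range(len(payload) - n + 1))

sig = b"ATTACKPATTERN"                                   # hypothetical signature
assert fuzzy_match(b"xxATTACKPATTERNxx", sig)            # exact copy still matches
assert fuzzy_match(b"xxATTACQPATTERNxx", sig)            # one morphed byte: caught
assert not fuzzy_match(b"a completely different payload", sig)
```

The trade-off is the usual one for fuzzy detection: a larger `max_diff` catches more variants but raises the false-positive rate, which matters when a match means black-holing someone’s traffic.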

By paying the ISPs and managed security service providers to do this sort of data screening, the government would remain sufficiently removed from the process to protect privacy and to encourage competition. The government’s role, in addition to paying for the defenses, would be to provide its own information about malware (locked up in a black box if necessary), incentivize firms to discover attacks, and create a mechanism to allow the public to confirm that privacy and civil liberties are well protected. Unlike a single line of defense owned and operated by the government (such as the “Einstein” system being created by the Department of Homeland Security to protect civilian federal agencies), this would be a multilayer, multiple-provider system that would encourage innovation and competition among private-sector IT companies. If the government was aware that a cyber war was about to break out, or if one already had, a series of federal network operation centers could interact with these private IT defenders and with the network operation centers (NOCs) of key privately owned institutions to coordinate a defense. For that to happen, the government would have to create in advance a dedicated communications network among the NOCs, one that was highly secure, entirely separate, and in other ways different from the Internet. (The fact that such a new network would be needed should tell you something about the Internet.)
