Cyber War: The Next Threat to National Security and What to Do About It


Authors: Richard A. Clarke, Robert K. Knake


In 2008, the hacker Dan Kaminsky showed how a sophisticated adversary could hack the system. Kaminsky released a software tool that could quietly access the Domain Name System computers and corrupt the database of name addresses and their related numbered addresses. The system would then literally give you a wrong number. Just misdirecting traffic could cause havoc with the Internet. One cyber security company found twenty-five different ways it could hack the Domain Name System to cause disruption or data theft.
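The name-to-number lookup the passage describes can be seen in a few lines of Python. This is a minimal sketch of an ordinary DNS query using the standard library, not the Kaminsky attack itself; the point is that the resolver simply trusts whatever answer comes back.

```python
import socket

def resolve(hostname):
    """Ask the Domain Name System for the numbered (IP) address behind a
    human-readable name. The resolver accepts whatever answer arrives --
    the blind trust that cache-poisoning attacks exploit."""
    return socket.gethostbyname(hostname)

# "localhost" is defined on the machine itself, so this works offline.
print(resolve("localhost"))  # → 127.0.0.1
```

If an attacker can corrupt the database that answers this query, the same call returns a "wrong number" and the caller has no way to tell.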

The second vulnerability of the Internet is routing among ISPs, a system known as the Border Gateway Protocol. Another opportunity for a cyber warrior in the one-second, 2,000-mile trip of packets from my home came when they jumped onto the AT&T network. AT&T runs the most secure and reliable Internet service in the world, but it is as vulnerable as anyone else to the way the Internet works. When the packets got on the backbone, they found that AT&T does not connect directly to my company. So who does? The packets checked a database that all of the major ISPs contribute to. There they found a posting from Level 3 that said, in effect, “If you want to connect to mycompany.com, come to us.” This routing system regulates traffic at the points where the ISPs come together, where one starts and the other stops, at their borders.

BGP is the main system used to route packets across the Internet.
The packets have labels with a “to” and “from” address, and BGP is the postal worker that decides what sorting station the packet goes to next. BGP also does the job of establishing “peer” relationships between two routers on two different networks. To go from AT&T to Level 3 requires that an AT&T router and a Level 3 router have a BGP connection. To quote from a report from the Internet Society, a nonprofit organization dedicated to developing Internet-related standards and policies, “There are no mechanisms internal to BGP that protect against attacks that modify, delete, forge, or replay data, any of which has the potential to disrupt overall network routing behavior.” What that means is that when Level 3 said, “If you want to get to mycompany.com, come to me,” nobody checked to see if that was an authentic message. The BGP system works on trust, not, to borrow Ronald Reagan’s favorite phrase, on “trust but verify.” If a rogue insider working for one of the big ISPs wanted to cause the Internet to seize up, he could do it by hacking into the BGP tables. Or someone could hack in from outside. If you spoof enough BGP instructions, Internet traffic will get lost and not reach its destination.
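The trust problem can be reduced to a toy model: routers accept whatever routes their peers announce, with nothing internal to the protocol checking authenticity. The network names below are invented for illustration.

```python
# Toy model of BGP's trust problem: announcements are accepted
# without any authentication, so a later forged announcement
# simply overwrites the legitimate route.

routing_table = {}  # destination -> network that claims to reach it

def announce(network, destination):
    """A peer says 'to reach destination, come to me.' Nothing verifies
    the claim; the table just records the latest announcement."""
    routing_table[destination] = network

announce("Level 3", "mycompany.com")    # the legitimate route
print(routing_table["mycompany.com"])   # Level 3

announce("rogue-isp", "mycompany.com")  # a forged announcement
print(routing_table["mycompany.com"])   # traffic now flows to rogue-isp
```

Real BGP route selection is more involved (longer prefixes and local policy win), but the core weakness is the same: the table believes whoever speaks.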

Everyone involved in network management for the big ISPs knows about the vulnerabilities of the Domain Name System and the BGP. People like Steve Kent of BBN Labs in Cambridge, Massachusetts, have even developed ways of eliminating those vulnerabilities, but the Federal Communications Commission has not required the ISPs to adopt them. Parts of the U.S. government are deploying a secure Domain Name System, but the practice is almost nonexistent in the commercial infrastructure. Decisions on the Domain Name System are made by a nongovernmental international organization called ICANN (pronounced “eye-can”), which is unable (“eye-cannot”) to get agreement on a secure system. The result is that the Internet itself could easily be a target for cyber warriors, but most cyber security experts think that unlikely because the Internet is so useful for attacking other things.

ICANN demonstrates another vulnerability of the Internet: governance, or the lack thereof. No one is really in charge. In the early days of the Internet, ARPA (DoD’s Advanced Research Projects Agency) filled the function of network administrator, but nobody does now. There are technical bodies, but few authorities. ICANN, the Internet Corporation for Assigned Names and Numbers, is the closest that any organization has come to being responsible for the management of even one part of the Internet system. ICANN ensures that web addresses are unique. Computers are logical devices, and they don’t deal well with ambiguity. If there were two different computers on the Internet with the same address, routers would not know what to do. ICANN solves that problem by working internationally to assign addresses. That addresses one of the problems of Internet governance, but not a host of other issues. More than a dozen intergovernmental and nongovernmental organizations play some role in Internet governance, but no authority provides overall administrative guidance or control.

The third vulnerability of the Internet is the fact that almost everything that makes it work is open and unencrypted. When you are crawling around the web, most of the information is sent “in the clear,” meaning that it is unencrypted. It’s like your local FM classic rock station broadcasting Pink Floyd and Def Leppard “in the clear” so that anyone tuned to that channel can receive the signal and rock along rolling down the highway. A radio scanner purchased at Radio Shack can listen in on the two-way communications between truckers and, in most cities, between police personnel. In some cities, however, the police will “scramble” the signal so that criminal gangs cannot monitor police communications. Only someone with a radio that can decrypt the traffic can hear what is being said. To everyone else, it just sounds like static.

The Internet generally works the same way. Most communication is openly broadcast, and only a fraction of the traffic is encrypted.
The only difference is that it is a little more difficult to tune in to someone else’s Internet traffic. ISPs have access (and can give it to the government), and mail-service providers like Google’s Gmail have access (even if they say they don’t). In both those cases, by using their services you are more or less agreeing that they may be able to see your web traffic or e-mails. For a third party to get access, they need to do what security folks call “snooping” and use a “packet sniffer” to pick up the traffic. A packet sniffer is basically a wiretap device for Internet traffic; it can be installed on any operating system and used to steal other people’s traffic on a local area network. On a shared network, such as an older hub-based Ethernet or a wireless network, any user on the system can use a sniffer to pull in all the other traffic. The standard Ethernet protocol tells your computer to ignore everything that is not addressed to it, but that doesn’t mean it has to. A sniffer running in “promiscuous mode” on such a network can look at all the traffic, so on a shared cable segment your neighbors could sniff everything you send. More advanced sniffers can trick the network in what is known as a “man-in-the-middle” attack: the sniffer appears to the router as the user’s computer, so all information is sent to the sniffer, which copies it before passing it on to the real addressee.
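The point that ignoring other people’s frames is merely a convention can be shown with a toy model of a shared network segment. The node names and frame layout here are invented for illustration.

```python
# Toy model of a shared medium: every frame physically reaches every
# node; a well-behaved interface drops frames not addressed to it,
# while a sniffer in promiscuous mode keeps everything it hears.

class Node:
    def __init__(self, address, promiscuous=False):
        self.address = address
        self.promiscuous = promiscuous  # True for a packet sniffer
        self.captured = []

    def receive(self, frame):
        # Politeness, not protection: the "ignore" rule is optional.
        if self.promiscuous or frame["to"] == self.address:
            self.captured.append(frame)

segment = [Node("alice"), Node("bob"), Node("eve", promiscuous=True)]
frame = {"to": "bob", "from": "alice", "payload": "my password"}
for node in segment:        # the shared medium delivers to everyone
    node.receive(frame)

# bob receives the frame -- but so does eve, who was never addressed.
```

Switched networks make this harder by delivering frames only to the intended port, which is why the more advanced man-in-the-middle tricks described above exist.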

Many (but not most) websites now use a secure, encrypted connection when you log on so that your password is not sent in the clear for anyone sniffing around to pick up. Due to cost and speed, most then drop the connection back into an unsecured mode after the password has been transmitted. Even when sniffing the transmission isn’t possible, or when the data being sent is unreadable, that doesn’t mean you are safe. A keystroke logger, a small piece of malicious code installed surreptitiously on your computer, can capture everything you type and then transmit it secretly. Of course, this type of keystroke logger does require you to do something stupid in order for it to be installed on your computer, such as visiting a website that has been infected or downloading a file from an e-mail that is not really from someone you trust. In October 2008 the BBC reported that “computer scientists at the Security and Cryptography Laboratory at the Swiss Ecole Polytechnique Fédérale de Lausanne have demonstrated that criminals could use a radio antenna to ‘fully or partially recover keystrokes’ by spotting the electromagnetic radiation emitted when keys were pressed.”

A fourth vulnerability of the Internet is its ability to propagate intentionally malicious traffic designed to attack computers. Viruses, worms, and phishing scams are collectively known as “malware.” They take advantage of both flaws in software and user errors like going to infected websites or opening attachments. Viruses are programs passed from user to user (over the Internet or via a portable medium like a flash drive) that carry some form of payload to disrupt a computer’s normal operation, provide a hidden access point to the system, or copy and steal private information. Worms do not require a user to pass the program on to another user; they can copy themselves by taking advantage of known vulnerabilities and “worm” their way across the Internet. Phishing scams try to trick an Internet user into providing information such as bank account numbers and access codes by creating e-mail messages and phony websites that pretend to be from legitimate businesses, such as your bank.
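The distinction the passage draws, that a worm needs no user action, can be sketched as a simple traversal: the worm copies itself to every reachable neighbor whose known vulnerability is unpatched. The network graph here is hypothetical.

```python
# Toy model of worm propagation: breadth-first spread across a graph
# of hosts, stopped only by hosts that have patched the vulnerability.
from collections import deque

def spread(network, patched, start):
    """network: host -> list of neighboring hosts
    patched: hosts whose vulnerability has been fixed
    Returns the set of hosts the worm ultimately infects."""
    infected = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbor in network[host]:
            if neighbor not in infected and neighbor not in patched:
                infected.add(neighbor)   # no user involved at all
                queue.append(neighbor)
    return infected

net = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(spread(net, patched={"c"}, start="a"))  # {'a', 'b', 'd'}
```

A virus, by contrast, would need a user at each hop to carry the payload across; that is the whole difference in a nutshell.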

All this traffic is allowed to flow across the Internet with few, if any, checks on it. For the most part, you as an Internet user are responsible for providing your own protection. Most ISPs do not take even the most basic steps to keep bad traffic from getting to your computer, in part because it is expensive and can slow down the traffic, and also because of privacy concerns.

The fifth Internet vulnerability is the fact that it is one big network with a decentralized design. The designers of the Internet did not want it to be controlled by governments, either singly or collectively, and so they designed a system that placed a higher priority on decentralization than on security. The basic idea of the Internet began to form in the early 1960s, and the Internet as we know it today is deeply imbued with the sensibilities and campus politics of that era. While many regard the Internet as an invention of the military, it is really the product of now aging hippies on the campuses of MIT, Stanford, and Berkeley. They had funding through DARPA, the Defense Department’s Advanced Research Projects Agency, but the ARPANET, the Advanced Research Projects Agency’s network, was not created just for the Defense Department to communicate. It initially connected four computers: at UCLA, Stanford, UC Santa Barbara, and, oddly, the University of Utah.

After building the ARPANET, the Internet’s pioneers quickly moved on to figuring out how to connect the ARPANET to other networks under development. To do that, they developed the basic transmission protocol still used today. Robert Kahn, one of the ten or so people generally regarded as having created the Internet, laid out four principles for how these exchanges would take place. They are worth noting here:

  • Each distinct network should have to stand on its own, and no internal changes should be required to any such network to connect it to the Internet.
  • Communications should be on a best-effort basis. If a packet didn’t make it to the final destination, it should be retransmitted shortly from the source.
  • Black boxes would be used to connect the networks; these would later be called gateways and routers. There should be no information retained by the gateways about the individual packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
  • There should be no global control at the operations level.
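Kahn’s second rule, best-effort delivery with retransmission from the source, can be sketched as a stop-and-wait loop. The function names and loss model below are invented for illustration.

```python
# Toy sketch of best-effort delivery: the network may silently drop a
# packet, so the *source* keeps retransmitting until it gets through.
import random

def send_reliably(packet, unreliable_send, max_tries=10):
    """unreliable_send(packet) returns True if the packet arrived.
    Returns the number of attempts it took; gives up after max_tries."""
    for attempt in range(1, max_tries + 1):
        if unreliable_send(packet):
            return attempt
    raise TimeoutError("gave up after %d tries" % max_tries)

# A link that loses about half its packets, seeded for repeatability.
rng = random.Random(42)
lossy_link = lambda packet: rng.random() > 0.5
print(send_reliably("hello", lossy_link))
```

Notice what the rule buys: the gateways in the middle stay simple and stateless, exactly as the third principle demands, because all the recovery logic lives at the edges.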

While the protocols that were developed based on these rules allowed for the massive growth in networking and the creation of the Internet as we know it today, they also sowed the seeds for the security problems. The writers of these ground rules did not imagine that anyone other than well-meaning academics and government scientists would use the Internet. It was for research purposes, for the exchange of ideas, not for commerce, where money would change hands, or for the purposes of controlling critical systems. Thus, it could be one network of networks, rather than separate networks for government, financial activity, etc. It was designed for thousands of researchers, not billions of users who did not know and trust each other.

Up to and through the 1990s, the Internet was almost universally seen as a force for good. Few of the Internet’s boosters were willing to admit that the Internet was a neutral medium. It could easily be used to facilitate the free flow of communication between scientists and the creation of legitimate e-commerce, but could also allow terrorists to provide training tips to new recruits and to transmit the latest beheading out of Anbar Province on a web video. The Internet, much like the tribal areas of Pakistan or the tri-border region in South America, is not under the control of anyone and is therefore a place to which the lawless will gravitate.

Larry Roberts, who wrote the code for the first versions of the transmission protocol, realized that the protocols created an insecure system, but he did not want to slow down the development of the new network by taking the time to fix the software before deploying it. He had a simple answer for the concern: it was a small network. Rather than trying to write secure software to control the dissemination of information on the network, Roberts concluded that it would be far easier to secure the transmission lines by encrypting the links between each computer on the network. After all, the early routers were all in secure locations in government agencies and academic laboratories. If the information was secure as it traveled between two points on the network, that was all that really mattered. The problem was that this solution did not envision the expansion of the technology beyond the sixty-odd computers that then made up the network. Trusted people ran all those sixty computers. A precondition for joining the network was that you were a known entity committed to promoting scientific advancement. And with so few people, if anything bad got on the network, it would not be hard to get it off and to identify who had put it there.

Then Vint Cerf left ARPA and joined MCI. Vint is a friend, a friend with whom I fundamentally disagree about how the Internet should be secured. But Vint is one of those handful of people who can legitimately be called “a father of the Internet,” so what he thinks on Internet issues usually counts for a lot more than what I say. Besides, Vint, who always wears a bow tie, is a charming guy, and he now works for Google, which urges us all not to be evil.

MCI (now part of Verizon) was the first major telecommunications company to lay down a piece of the Internet backbone and to take the technology out of the small network of government scientists and academics, offering it to corporations and even, through ISPs, to home users. Vint took the transmission protocol with him, introducing the security problem to a far larger audience and to a network that could not be secured by encrypting the links. No one really knew who was connecting to the MCI network.
