“We shape our tools and thereafter our tools shape us,” McLuhan said. And, in a sense, the fate of the grand idea of computer-to-computer communication that Baran developed in the early 1960s mirrored the technology itself. For a few years, bits and pieces of his ideas pinged around the computer science community. And then, in the midsixties, they were reassembled back at ARPA.
J. C. R. Licklider, who never stayed in a job more than a few years, was long gone, but his idea of “the Intergalactic Computer Network” remained attractive to Bob Taylor, an ex-NASA computer scientist who was now in charge of ARPA’s Information Processing Techniques Office. As more and more scientists around America came to rely on computers for their research, Taylor recognized a growing need for these computers to communicate with one another. His concerns were more prosaic than an imminent Russian nuclear attack: he believed that computer-to-computer communication would cut costs and increase efficiency within the scientific community.
At the time, computers weren’t small and they weren’t cheap. And so one day in 1966, Taylor pitched the ARPA director, Charles Herzfeld, on the idea of connecting them.
“Why not try tying them all together?” he said.
“Is it going to be hard to do?” Herzfeld asked.
“Oh no. We already know how to do it,” Taylor promised.
“Great idea,” Herzfeld said. “Get it going. You’ve got a million dollars more in your budget right now. Go.”[30]
And Taylor did indeed get it going. He assembled a team of engineers including Paul Baran and Wesley Clark, the programmer who had gotten J. C. R. Licklider hooked on the TX-2 computer back in the fifties. Relying on Baran’s distributed packet-switching technology, the team drew up a plan for a trial network of four sites—UCLA, Stanford Research Institute (SRI), the University of Utah, and the University of California, Santa Barbara. They were linked together by devices called Interface Message Processors (IMPs), the forerunners of what we today call routers—those little boxes with blinking lights that connect up the networked devices in our homes. In December 1968, Licklider’s old Boston consultancy BBN won the contract to build the network. By October 1969, the network, which became known as ARPANET and was hosted on refrigerator-sized, 900-pound Honeywell computers, was ready to go live.
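The core mechanics of Baran’s packet switching can be suggested in a few lines of Python. This is an illustrative sketch, not the ARPANET’s actual IMP protocol; the packet size and the (sequence, payload) format are invented for the example. A message is chopped into numbered packets that may arrive out of order over different routes, and the receiver restores the order before reassembly.

```python
# Illustrative sketch of packet switching (field names and packet size are
# invented, not the ARPANET's real IMP protocol). A message is split into
# numbered packets; since each may travel a different route, they can arrive
# out of order, and the receiver sorts them back before reassembly.
import random

def to_packets(message: bytes, size: int = 8):
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    return b"".join(payload for _, payload in sorted(packets))

msg = b"We shape our tools and thereafter our tools shape us."
packets = to_packets(msg)
random.shuffle(packets)        # simulate out-of-order arrival over many routes
assert reassemble(packets) == msg
```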
The first computer-to-computer message was sent from UCLA to SRI on October 29, 1969. While the UCLA programmer was trying to type “login,” the SRI computer crashed after he had managed to type “log.” For the first time, but certainly not the last, an electronic message sent from one computer to another was a miscommunication.
The launch of ARPANET didn’t have the same dramatic impact as the Sputnik launch twelve years earlier. By the late sixties, American attention had shifted to transformational issues like the Vietnam War, the sexual revolution, and Black Power. So, in late 1969, nobody—with the exception of a few unfashionable geeks in the military-industrial complex—cared much about two 900-pound computers miscommunicating with each other.
But the achievement of Bob Taylor and his engineering team cannot be overestimated. More than Sputnik and the wasteful space race, the successful building of ARPANET would change the world. It was one of the smartest million dollars ever invested. Had that money come from venture capitalists, it would have returned many billions of dollars to its original investors.
The Internet
In September 1994, Bob Taylor’s team reassembled in a Boston hotel to celebrate the twenty-fifth anniversary of ARPANET. By then, those two original nodes at UCLA and SRI had grown to over a million computers hosting Internet content and there was significant media interest in the event. At one point, an Associated Press reporter innocently asked Taylor and Robert Kahn, another of the original ARPANET team members, about the history of the Internet. What was the critical moment in its creation, this reporter wanted to know.
Kahn lectured the reporter on the difference between ARPANET and the Internet and suggested that it was something called “TCP/IP” that represented the “true beginnings of the Internet.”
“Not true,” Taylor interrupted, insisting that the “Internet’s roots” lay with the ARPANET.[31]
Both Taylor and Kahn are, in a sense, correct. The Internet would never have been built without ARPANET. Growing from its four original IMPs in 1969, it reached 29 by 1972, 57 by 1975, and 213 by 1981, before being superseded as the Internet’s backbone by the National Science Foundation Network (NSFNET) in the mid-1980s and finally decommissioned in 1990. But the problem was that ARPANET’s success led to the creation of other packet-switching networks—such as the commercial Telenet, the French CYCLADES, the radio-based PRNET, and the satellite network SATNET—which complicated internetworked communication. So Kahn was right. ARPANET wasn’t the Internet. And he was right, too, about TCP/IP, the two protocols that finally realized Licklider’s dream of an intergalactic computer network.
Bob Kahn and Vint Cerf met at UCLA in 1970 while working on the ARPANET project. In 1974 they published “A Protocol for Packet Network Intercommunication,” which laid out their vision of complementary internetworking protocols that came to be known as the Transmission Control Protocol (TCP) and the Internet Protocol (IP)—TCP guaranteeing the reliable, ordered delivery of the data stream, and IP handling the addressing and routing of the individual packets.
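That division of labor is still visible in any modern socket program. Below is a minimal sketch (the loopback address and port 9000 are arbitrary choices, not anything from Kahn and Cerf’s paper): the application asks TCP for a reliable byte stream and never sees the IP packets being addressed and routed underneath.

```python
# Minimal illustration of the TCP/IP division of labor. The application
# reads and writes a reliable, ordered byte stream via TCP; IP silently
# handles the packet addressing and routing below it.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # TCP over IPv4
srv.bind(("127.0.0.1", 9000))                             # arbitrary demo port
srv.listen(1)

def echo_once():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(1024).upper())             # echo back, uppercased

threading.Thread(target=echo_once, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 9000))                      # TCP handshake
    cli.sendall(b"login")                                 # guaranteed, in-order delivery
    print(cli.recv(1024))                                 # b'LOGIN'
srv.close()
```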
Just as Paul Baran designed his survivable network to have a distributed structure, so the same was true of Kahn and Cerf’s TCP/IP. “We wanted as little as possible at the center,” they wrote of the resolutely open architecture of these new universal standards, which treated all network traffic equally.[32]
The addition of these protocols to the ARPANET in January 1983 was, according to Internet historians Hafner and Lyon, “probably the most important event that would take place in the development of the Internet for years to come.”[33]
TCP/IP created a network of networks, enabling the users of every network—from ARPANET, SATNET, and PRNET to Telenet and CYCLADES—to communicate with one another.
Kahn and Cerf’s universal rulebook for digital communications fueled the meteoric growth of the Internet. In 1985, there were around 2,000 computers able to access the Internet. By 1987 this had risen to almost 30,000 computers and by October 1989 to 159,000.[34]
Many of these computers were attached to local area networks as well as early commercial dial-up services like CompuServe, Prodigy, and America Online. The first so-called killer app, a term popularized by Larry Downes and Chunka Mui in their bestseller about the revolutionary impact of digital technology on traditional business,[35] was electronic mail. A 1982 ARPANET report reviewing the network’s first decade noted that email had come to eclipse all other applications in the volume of its traffic and described it as a “smashing success.”[36]
Thirty years later, email had become, if anything, an even bigger hit. By 2012 there were more than 3 billion email accounts around the world sending 294 billion emails a day, of which around 78% were spam.[37]
Another popular feature was the Bulletin Board System (BBS), which enabled users with similar interests to connect and collectively share information and opinions. Among the best known of these was the Whole Earth ’Lectronic Link (the WELL), begun in 1985 by the Whole Earth Catalog founder Stewart Brand. The WELL captured much of the countercultural utopianism of early online users, who believed that the distributed structure of the technology created by Internet architects like Paul Baran, with its absence of a central dot, represented the end of traditional government power and authority. This was most memorably articulated by John Perry Barlow, an early WELL member and lyricist for the Grateful Dead, in his 1996 libertarian manifesto “A Declaration of the Independence of Cyberspace.”
“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind,” Barlow announced from, of all places, Davos, the little town in the Swiss Alps where the wealthiest and most powerful people meet at the World Economic Forum each year. “I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”[38]
The real explanation of the Internet’s early popularity was, however, more prosaic. Much of it was both the cause and the effect of a profound revolution in computer hardware. Rather than being dependent on “giant brains” that one “walked into” like the 1,800-square-foot ENIAC, the invention of the transistor by a Bell Labs team in 1947—“the very substructure of the future,”[39] in the words of the technology writer David Kaplan—resulted in computers becoming ever smaller and ever more powerful. “Few scientific achievements this century were as momentous,” Kaplan suggests of this breakthrough. Between 1967 and 1995, the capacity of computer hard drives rose an average of 35% every year, while Intel’s annual sales grew from under $3,000 in 1968 to $135 million six years later. Intel’s success in developing ever-faster microprocessors confirmed the prescient 1965 prediction of its cofounder Gordon Moore, now known as “Moore’s law,” that the processing power of chips would double roughly every eighteen months. And so, by the early 1980s, hardware manufacturers like IBM and Apple were able to build “personal computers”—relatively affordable desktop devices that, with a modem, allowed anyone access to the Internet.
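The compounding behind those two figures is easy to check. A back-of-the-envelope calculation, treating the quoted rates as steady annual averages (a simplification of both trends):

```python
# Back-of-the-envelope compounding, treating the quoted rates as steady
# annual averages (a simplification of both trends).
hard_drive_growth = 1.35 ** (1995 - 1967)   # 35% per year, 1967-1995
print(f"{hard_drive_growth:,.0f}x")         # roughly a 4,500-fold increase

moores_law = 2 ** ((1995 - 1965) / 1.5)     # doubling every 18 months
print(f"{moores_law:,.0f}x")                # roughly a million-fold increase
```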
By the end of the 1980s, the Internet had connected 800 networks, 150,000 registered addresses, and several million computers. But this project to network the world wasn’t quite complete. There was one thing still missing—Vannevar Bush’s Memex. There were no trails yet on the Internet, no network of intelligent links, no process of tying two items together on the network.
The World Wide Web
In 1960, a “discombobulated genius” named Ted Nelson came up with the idea of “nonsequential writing,” for which he later coined the term “hypertext.”[40]
Riffing off Vannevar Bush’s notion of “information trails,” Nelson replaced Bush’s reliance on analog devices like levers and microfilm with his own faith in the power of digital technology to make these nonlinear connections. Like Bush, who believed that the trails on his Memex “do not fade,”[41] the highly eccentric Nelson saw himself as a “rebel against forgetting.”[42] His lifelong quest to create hypertext, a project he code-named Xanadu, was indeed a kind of rebellion against forgetfulness. In Nelson’s Xanadu system, there was no “concept of deletion.” Everything would be remembered.
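What “no concept of deletion” could mean in practice can be suggested with a toy append-only store. This is entirely hypothetical—Xanadu’s actual design was far more elaborate—but it captures the idea: editing never destroys anything; each change appends a new version that links back to the one it supersedes.

```python
# Toy append-only hypertext store (hypothetical; not Xanadu's real design).
# Editing never destroys anything: each change appends a new version that
# points back to the version it supersedes, so nothing is ever forgotten.
versions = []   # the permanent record: (version_id, text, prev_version_id)

def write(text, prev=None):
    versions.append((len(versions), text, prev))
    return len(versions) - 1

v0 = write("Computers are dream machines.")
v1 = write("Computers are literary machines.", prev=v0)

# Every version, including the superseded one, remains reachable.
for vid, text, prev in versions:
    print(vid, repr(text), "<- supersedes", prev)
```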
In 1980, twenty years after Nelson’s invention of the hypertext idea, a much less eccentric genius, Tim Berners-Lee, arrived as a consultant at the European Particle Physics Laboratory (CERN) in Geneva. Like Nelson, Berners-Lee, who had earned a degree in physics from Oxford University’s Queen’s College in 1976, was concerned with protecting against his own forgetfulness. The problem, Berners-Lee wrote in his autobiography, Weaving the Web, was remembering “the connections among the various people, computers, and projects at the lab.”[43]
This interest in memory inspired Berners-Lee to build what he called his first web-like program, Enquire. But it also planted what he called a “larger vision” in his “consciousness”:
Suppose all the information stored on computers everywhere were linked, I thought. Suppose I could program my computer to create a space in which anything could be linked to anything. All the bits of information in every computer at CERN, and on the planet, would be available to me and to anyone else. There would be a single global information space.[44]
In 1984, when Berners-Lee returned to CERN and discovered the Internet, he also returned to his larger vision of a single global information space. By this time, he’d discovered the work of Vannevar Bush and Ted Nelson and become familiar with what he called “the advances” of technology giants like Donald Davies, Paul Baran, Bob Kahn, and Vint Cerf.
“I happened to come along with time, and the right interest and inclination, after hypertext and the Internet had come of age,” Berners-Lee modestly acknowledged. “The task left to me was to marry them together.”[45]
The fruit of that marriage was the World Wide Web, the information management system so integral to the Internet that many people think the Web actually is the Internet. “If I have seen further it is by standing on the shoulders of giants,” Isaac Newton once said. And Berners-Lee not only built upon the achievements of the Internet’s founding fathers, but designed the Web to ride on top of the Internet, creating what the Sussex University economist Mariana Mazzucato calls a “foundational technology.”[46]
His program leveraged the Internet’s preexisting packet-switching technology, its TCP/IP protocols, and, above all, its completely decentralized structure and its commitment to treating all data equally. The Web’s architecture was made up of three elements: first, a computer language for marking up hypertext files, which he called Hypertext Markup Language (HTML); second, a protocol for traveling between these hypertext files, which he called Hypertext Transfer Protocol (HTTP); and third, an address code attached to each hypertext file that allowed any file on the Web to be instantly called up, which he called a Universal Resource Locator (URL).[47]
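All three elements can be seen working together in a few lines. This sketch uses example.com, a domain reserved for illustrations, purely as a stand-in: a URL names the document, HTTP fetches it, and what comes back is HTML whose anchor tags are the hypertext links between files.

```python
# The Web's three elements in miniature (example.com is just a stand-in):
# the URL names the document, HTTP fetches it, and the reply is HTML whose
# <a href="..."> tags are the hypertext links between files.
from urllib.request import urlopen

with urlopen("http://example.com/") as response:    # URL -> HTTP GET
    print(response.status)                          # 200: HTTP's "OK"
    html = response.read().decode("utf-8")          # the document itself is HTML

print("<a href=" in html)                           # True: a hypertext link
```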
By labeling files and by using hypertext as a link between these files, Berners-Lee radically simplified Internet usage. His great achievement was to begin the process of taking the Internet out of the university and into the world.