Even the modern World Wide Web, which does echo certain aspects of his vision, fails to fulfill Bush’s larger aspiration for a participatory scholarly research environment. Web browsers are ultimately unidirectional programs, designed primarily to let users consume information from a remote source. To the extent that users can create information through a Web browser, they must do so through the mediation of that remote source. As an authoring tool, the Web browser is all but impotent. The Memex was always envisioned as a two-way authoring environment, allowing users to create new information as readily as they could consume existing information. This is not to say that the Memex is an entirely superior concept. Bush envisioned the Memex as a powerful stand-alone machine, while the Web works by virtue of a distributed network, lending it the advantages of robustness and scale. And while the Memex may have anticipated the idea of hypertext, it was by no means a true hypertext system. Bush wanted the machine to link individual frames of microfilm; given its mechanical limitations, it could not penetrate any deeper into the body of the text to, for example, insert links between individual words or sentences. But Bush’s model did enjoy one enormous advantage over the kinds of hyperlinks that prevail across today’s Web: his links were permanent. By contrast, Web hyperlinks are notoriously evanescent.
While the 1945 model Memex continues to influence developers today, few of Bush’s present-day apostles know anything of his later work. In 1945 Bush had spent only a few years thinking about the Memex. “In the quarter-century since then,” he later wrote, “the idea has been with me almost constantly, and I have watched new developments in electronics, physics, chemistry, and logic to see how they might help to bring it to reality. That day is not yet here, but has come far closer.”[22]
Bush’s subsequent writing reveals a far more provocative vision, influenced not just by changing technological realities but by the evolution of his own ideas about how computers should work. In particular, Bush developed a fascination with biological models of computing as a potential alternative to the mathematical and logical models that have since predominated. “A great deal of our brain cell activity is closely parallel to the operation of relay circuits,” he wrote. “One can explore this parallelism and the possibilities of ultimately making something out of it almost indefinitely.”[23]
Unlike digital computers, which rely on linear sequencing and the indexing of documents in hierarchical file structures, “the brain does not operate by reducing everything to indices and computation,” he wrote.
It follows trails of association, flying almost instantly from item to item, bringing into consciousness only the significant. The trails bifurcate and cross, they become erased in disuse, and emphasized by success. Ultimately, the machine can do this better. Its memory will be far greater and the items will not fade. Progress along trails will be at lightning speed. The machine will learn from its own experience, refine its own trails, explore in unknown territory.[24]
The notion of a Memex that learns, adapting to user behavior along the way, suggests fascinating possibilities of machine-human symbiosis and new ways of thinking about information environments. Today, most of us experience personal computers as fixed entities, with hierarchical folders and a familiar set of programs. Our computers are not so far removed from the dumb terminals of the mainframe era. They know very little about us. Bush’s vision suggests the possibility of smarter machines that could anticipate our needs and adapt themselves to our behaviors, like good servants.
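Bush never specified a mechanism for such adaptive trails. The Python sketch below (all names hypothetical) is one toy reading of his description: links between items gain strength each time a trail is followed (“emphasized by success”) and fade when neglected (“erased in disuse”).

```python
class TrailMemory:
    """Toy associative-trail store, loosely after Bush's description.

    A hypothetical sketch, not any historical design: trails strengthen
    with use and decay in disuse.
    """

    def __init__(self, reinforcement=1.0, decay=0.9, floor=0.01):
        self.strength = {}  # (source_item, target_item) -> trail strength
        self.reinforcement = reinforcement
        self.decay = decay
        self.floor = floor  # below this, a trail counts as erased

    def follow(self, source, target):
        """Traversing a link reinforces it ('emphasized by success')."""
        key = (source, target)
        self.strength[key] = self.strength.get(key, 0.0) + self.reinforcement

    def age(self):
        """Each idle cycle, every trail fades ('erased in disuse')."""
        self.strength = {
            key: value * self.decay
            for key, value in self.strength.items()
            if value * self.decay >= self.floor
        }

    def suggest(self, source):
        """Rank the items most strongly associated with a given item."""
        links = [(t, w) for (s, t), w in self.strength.items() if s == source]
        return sorted(links, key=lambda link: -link[1])


memex = TrailMemory()
# Items echoing the bow-and-arrow research trail from Bush's 1945 essay.
memex.follow("Turkish short bow", "elasticity of materials")
```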
Bush also tried to adapt his vision to the changing technological landscape. After the publication of his original essay, Bush recognized that the advent of digital computers made his analog mechanical device seem increasingly anachronistic. Although he never mastered the intricacies of digital computing, he did begin to revise his theories to account for the ongoing development of computer technology. In 1958 Bush wrote a sequel to his original essay, entitled “Memex II,” which was never published during his lifetime.[25]
Recognizing that microfilm was a soon-to-be-defunct technology, he envisioned a new kind of memory apparatus using biological crystals. Bush also envisioned the possibility of networking large remote collections of data, of letting users dial in to a Memex from afar. In his final years, he even speculated on the possibility of ESP arising as human brains and external storage devices began to coalesce. He believed that a long-term symbiosis between human brains and computing equipment would lead to the coevolution of new neural pathways in the human brain.
These provocative musings seem all the more prescient today, but Bush’s reputation rests largely on his comparatively primitive 1945 vision. While his subsequent thinking remains largely forgotten, “As We May Think” touched off a wave of innovation that still reverberates more than 60 years later.
In the 1950s, a young library science student named Eugene Garfield took Bush’s 1945 essay as his inspiration to explore a new way of facilitating access to scientific journals. His work ultimately led him to create a methodology known as citation ranking: a tool for assessing the impact of scholarly articles by tracking the frequency of bibliographic citations. The more often an article was cited, the higher its influence ranking, which in turn lent more weight to whatever documents it cited. The technique, used most powerfully in the influential Science Citation Index, allowed scholars to assign a collective weight of importance to any given article, effectively circumventing the old manual indexing systems that once predominated in the sciences. The Science Citation Index would go on to exert a sweeping influence on the trajectory of scientific research, becoming the de facto authority in determining the importance of scientific articles and an essential fixture in every science library. At many academic science libraries, a subscription to “SCI” constitutes the single biggest line item in the acquisitions budget.
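As a rough illustration (a minimal Python sketch with invented data, not Garfield’s actual methodology), the core of a citation index is little more than tallying references across bibliographies; weighting each citation by the citing article’s own influence is the refinement that the PageRank sketch below generalizes.

```python
from collections import Counter

def citation_counts(bibliographies):
    """Tally how often each article is cited across a set of bibliographies.

    `bibliographies` maps each article to the list of articles it cites,
    mirroring the reference lists a citation index is compiled from.
    """
    counts = Counter()
    for cited_articles in bibliographies.values():
        counts.update(cited_articles)
    return counts

# Hypothetical toy data: paper -> the papers its reference list cites.
refs = {
    "paper_a": ["paper_c", "paper_d"],
    "paper_b": ["paper_c"],
    "paper_e": ["paper_c", "paper_d"],
}
print(citation_counts(refs).most_common())  # paper_c cited 3x, paper_d 2x
```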
Inspired by Bush’s vision of associative trails, Garfield’s notion of citation ranking would in turn inspire two young graduate students at Stanford, Larry Page and Sergey Brin, who extrapolated Garfield’s work to the problem of ranking Web search results. In their paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine,”[26] they outline a new algorithm based on Garfield’s idea: PageRank. That algorithm lives at the heart of their search engine: Google.
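The recursive idea is compact enough to sketch. The toy Python below (illustrative names and data, not Google’s production implementation) captures it: each page divides its current score among the pages it links to, and a damping factor models a surfer who occasionally jumps to a random page.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iteratively estimate PageRank for a small link graph.

    `links` maps each page to the list of pages it links to. This
    simplified sketch ignores dangling-node redistribution.
    """
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {page: 1.0 / len(pages) for page in pages}

    for _ in range(iterations):
        # Every page receives a baseline share from random jumps...
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        # ...plus a share of the score of every page that links to it.
        for page, targets in links.items():
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

# Hypothetical toy graph: each key links to the pages in its list.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")
```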
Google, for all its success, does not necessarily embody the greatest triumph of Garfield’s theory. In fact, Google’s approach proves both the power and the limitations of citation analysis. Because it tries to be all things to all people, Google inevitably turns the Web into a kind of worldwide popularity contest, often at the expense of smaller, more focused communities of interest. As Paul Kahn puts it, “Link-node models work best on small units.”[27]
Citation analysis is thus particularly well suited for weighing the impact of literature within smaller, focused communities like scientific specialties, where the literature consists mainly of research papers containing references to other papers within a well-defined domain of knowledge. When the same mode of analysis is applied to the great public Web, however, problems of context inevitably surface. Without the structured conventions of a scientific paper, the Web presents more difficult problems of determining context and the “aboutness” of a particular domain. Google’s brute-force algorithm in some ways represents a step backward from the more focused, special-purpose research environment that Garfield (and Bush) envisioned.
Bush’s essay would find another enthusiastic disciple in Doug Engelbart, who read it as a young Navy radar technician in the Philippines. “As We May Think,” Engelbart later recalled, would become the philosophical guidepost for his entire career. On returning from the Navy, Engelbart set his professional sights on exploring the kinds of networked information retrieval tools that Bush had envisioned. He would go on to develop a series of breakthrough technologies that would shape the course of personal computing: the mouse, the graphical user interface, and a dramatically new kind of computer system that owed a deep debt to Bush’s vision.
In 1962 Engelbart published a report entitled “Augmenting Human Intellect: A Conceptual Framework,” in which he described his proposed framework for machine-assisted intelligence. Written during his tenure at the Stanford Research Institute (SRI) for his project sponsors in the Air Force, the report lays out Engelbart’s vision for how an “augmented” human intellect might work. His report was at once a practical blueprint and a philosophical treatise. “By ‘augmenting human intellect,’” he wrote, “we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems.” But Engelbart was not referring to the mathematical problems that had preoccupied computer scientists up to that point. He explicitly geared his solution to audiences that had traditionally had no truck with computers: diplomats, executives, social scientists, lawyers, and designers, as well as scientists, like biologists and physicists, who were then not typically using computers. His vision was expansive. “We do not speak of isolated clever tricks that help in particular situations. We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles, and the human ‘feel for a situation’ usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids.”[28]
It was, at root, a deeply human-centered vision of computing.
The whole framework rested on a carefully considered model of how humans process information, recognizing that a useful information system must break information down into atomized nuggets that map as closely as possible to the processes of human cognition. “The human mind neither learns nor acts by large leaps,” he wrote, “but by steps organized or structured so that each one depends upon previous steps.”[29]
Engelbart identified the linchpin of the entire system as a set of “process hierarchies” that would atomize information into discrete units of semantic meaning, which could then be recombined in endless reconfigurations. “Every process of thought or action is made up of sub-processes,” he wrote. “Let us consider such examples as making a pencil stroke, writing a letter of the alphabet, or making a plan … although every sub-process is a process in its own right, in that it consists of further sub-processes.” In turn, each of these processes becomes a building block in what Engelbart dubbed a “repertoire hierarchy” of higher-level processes. For example, “the process of writing is utilized as a sub-process within many different processes of a still higher order, such as organizing a committee, changing a policy, and so on.”[30]
Each of these routines formed a discrete hierarchical process that could be stitched together with others in any number of combinations: networks of hierarchies. The entire framework rests on this conceit.
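A minimal Python sketch (hypothetical names, not Engelbart’s own notation) makes the conceit concrete: a process is simply a named node over sub-processes, and the same process object can be reused inside many higher-order processes.

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """One node in a repertoire hierarchy: a process made of sub-processes."""
    name: str
    subprocesses: list["Process"] = field(default_factory=list)

    def walk(self, depth=0):
        """Yield every (sub-)process in the hierarchy with its depth."""
        yield depth, self.name
        for sub in self.subprocesses:
            yield from sub.walk(depth + 1)

# Engelbart's own examples, roughly: a pencil stroke builds up to writing,
# and "writing" is reused inside many processes of a still higher order.
stroke = Process("make a pencil stroke")
letter = Process("write a letter of the alphabet", [stroke])
writing = Process("writing", [letter])
committee = Process("organize a committee", [writing])
policy = Process("change a policy", [writing])  # same building block, recombined

for depth, name in committee.walk():
    print("  " * depth + name)
```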
Working with a team of developers at SRI, Engelbart brought his vision to working reality in the form of a prototype system dubbed the oN-Line System (NLS). Given SRI’s institutional priorities and military funding, it is perhaps not surprising that the system placed a special emphasis on facilitating organizational workflow, or what Engelbart called “asynchronous collaboration among teams distributed geographically” (what software developers today would call “groupware”). NLS equipped its users with a set of tools that today we would consider rudimentary: a simple word processor, a tool for sending messages to other users, and a basic system for building associative links between documents. The user manipulated a cursor on the screen using a wooden box with wheels on the bottom, connected by a wire to the computer. Engelbart’s team originally called it a “bug” but later christened it with a name that stuck: mouse.
In 1968 Engelbart showed a working prototype of his new tool to a public audience at the San Francisco Civic Auditorium. In later years the event would go down in Silicon Valley lore as “the mother of all demos.” For many of the 1,000 audience members that night, his presentation proved nothing short of an epiphany. Some of them may have felt something like the curious pagans who first encountered Saint Augustine on the beach wielding his illuminated Gospel. These people had witnessed something profoundly new that they did not entirely understand, a strange vision so powerful they sensed it might soon change everyone’s life for good.
Today, many of us may find it difficult to understand just what a radically different vision Engelbart presented that night. In our age of networked personal computers, cell phones, and digital cameras, it requires a backward cognitive leap to conjure the world of 1968, when computers were still churning out punch cards and tape reels, relegated to back-office tasks managing payrolls, counting election results, and solving complex equations for NASA. Most people had never seen a computer.
More than a few members of that audience became enthusiastic converts to the digital revolution. Historians of computer science often invoke Engelbart’s demo as the seminal event in the personal computing revolution. NLS found its most immediate successor in a series of projects at the new Xerox PARC (Palo Alto Research Center), a think tank founded with the mission of researching “the architecture of information.” Indeed, several of Engelbart’s former associates left SRI to join the PARC team. The influence of PARC on the subsequent development of the personal computer is difficult to overstate. In the 1970s, PARC researchers for all intents and purposes invented today’s windows-style graphical desktop, bit-mapped displays, and even the Ethernet protocol. When a young Steve Jobs visited PARC to see the work in progress, he came away with a vision that would later take shape as the Macintosh (which in turn provided the conceptual foundation for Microsoft Windows).