Darwin Among the Machines
George B. Dyson

A constant stream of brilliant individuals—from distinguished scientists to otherwise unknowns—appeared in Princeton to run their problems on the IAS machine. For this the Institute was ideal. The administration was flexible, intimate, and spontaneous. The computer project operated on a shoestring compared to other laboratories but was never short of funds. The facilities were designed to accommodate visitors for a day, a month, or a year, and the resources of Princeton University were close at hand. There were no indigenous computer scientists monopolizing time on the machine, although a permanent IAS meteorological group under Jule Charney ran their simulations regularly and precedence was still granted to the occasional calculation for a bomb. “My experience is that outsiders are more likely to use the machine on important problems than is the intimate, closed circle of friends,” recalled Richard Hamming, looking back on the early years of computing in the United States.42

The machine was duplicated, but von Neumann remained unique. His insights permeated everything that ran on the computer, from the coding of Navier-Stokes equations for compressible fluids to S. Y. Wong's simulation of traffic flow (and traffic jams) to the compilation of a historical ephemeris of astronomical positions covering the six hundred years leading up to the birth of Christ. “Quite often the likelihood of getting actual numerical results was very much larger if he was not in the computer room, because everybody got so nervous when he was there,” reported Martin Schwarzschild. “But when you were in real thinking trouble, you would go to von Neumann and nobody else.”43

Von Neumann's reputation, after fifty years, has been injured less by his critics than by his own success. The astounding proliferation of the von Neumann architecture has obscured von Neumann's contributions to massively parallel computing, distributed information processing, evolutionary computation, and neural nets. Because his deathbed notes for his canceled Silliman lectures at Yale were published posthumously (and for a popular audience) as The Computer and the Brain (1958), von Neumann's work has been associated with the claims of those who were exaggerating the analogies between the digital computer and the brain. Von Neumann, on the contrary, was preoccupied with explaining the differences. How could a mechanism composed of some ten billion unreliable components function reliably while computers with ten thousand components regularly failed?

Von Neumann believed that entirely different logical foundations would be required to arrive at an understanding of even the simplest nervous system, let alone the human brain. His Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components (1956) explored the possibilities of parallel architecture and fault-tolerant neural nets. This approach would soon be superseded by a development that neither nature nor von Neumann had counted on: the integrated circuit, composed of logically intricate yet structurally monolithic microscopic parts. Serial architecture swept the stage. Probabilistic logics, along with vacuum tubes and acoustic delay-line memory, would scarcely be heard from again. If the development of solid-state electronics had been delayed a decade or two we might have advanced sooner rather than later into neural networks, parallel architectures, asynchronous processing, and other mechanisms by which nature, with sloppy hardware, achieves reliable results.
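
Von Neumann's answer, in outline, was redundancy: perform each logical operation on a bundle of unreliable components in parallel and let a majority decide, so that the bundle as a whole errs far less often than any single part. A minimal sketch of that principle, assuming an arbitrary error rate and bundle size (the function names and numbers below are illustrative, not von Neumann's actual constructions):

```python
import random

def unreliable_nand(a, b, error_rate=0.05):
    """A NAND gate that returns the wrong answer with probability error_rate."""
    correct = not (a and b)
    return (not correct) if random.random() < error_rate else correct

def majority_bundle(a, b, n=101, error_rate=0.05):
    """Run n unreliable gates in parallel and take a majority vote.
    The bundle is far more reliable than any single component, provided
    the individual error rate stays below a threshold."""
    votes = sum(unreliable_nand(a, b, error_rate) for _ in range(n))
    return votes > n // 2

if __name__ == "__main__":
    trials = 10_000
    # NAND(True, True) should be False; count how often each version errs.
    single_errors = sum(unreliable_nand(True, True) is True for _ in range(trials))
    bundle_errors = sum(majority_bundle(True, True) is True for _ in range(trials))
    print(f"single-gate error rate:     {single_errors / trials:.4f}")  # about 0.05
    print(f"majority-bundle error rate: {bundle_errors / trials:.4f}")  # effectively 0
```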

Von Neumann was as reticent as Turing was outspoken on the question of whether machines could think. Edmund C. Berkeley, in his otherwise factual and informative 1949 survey, Giant Brains, captured the mood of the time with his declaration that “a machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think.”44 Von Neumann never subscribed to this mistake. He saw digital computers as mathematical tools. That they were members of a more general class of automata that included nervous systems and brains did not imply that they could think. He rarely discussed artificial intelligence. Having built one computer, he became less interested in the question of whether such machines could learn to think and more interested in the question of whether such machines could learn to reproduce.

“‘Complication' on its lower levels is probably degenerative, that is, that every automaton that can produce other automata will only be able to produce less complicated ones,” he noted in 1948. “There is, however, a certain minimal level where this degenerative characteristic ceases to be universal. At this point automata which can reproduce themselves, or even construct higher entities, become possible.”45 Millions of very large scale integrated circuits, following in the footsteps of the IAS design but traced in silicon at micron scale, are now replicated daily from computer-generated patterns by computer-operated tools. The newborn circuits, hidden in clean rooms and twenty-four-hour-a-day “fabs,” where the few humans present wear protective suits for the protection of the machines, are the offspring of von Neumann's Theory of Self-Reproducing Automata. Just as predicted, these machines are growing more complicated from one generation to the next. None of these devices, although executing increasingly intelligent code, will ever become a brain. But collectively they might.
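
The minimal software case of an automaton reproducing itself is a quine, a program whose output is its own text. It sits exactly at the boundary von Neumann describes: it copies itself with neither loss nor gain in complication. A two-line illustration (not von Neumann's construction, which used a universal constructor operating in a cellular automaton):

```python
# The two lines below are a quine: run on their own, they print themselves
# exactly -- self-reproduction with neither loss nor gain in complication.
s = 's = %r\nprint(s %% s)'
print(s % s)
```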

Von Neumann's Silliman lecture notes gave “merely the barest sketches of what he planned to think about,” noted Stan Ulam in 1976. “He died so prematurely, seeing the promised land but hardly entering it.”46 Von Neumann may have envisaged a more direct path toward artificial intelligence than the restrictions of the historic von Neumann architecture suggest. High-speed electronic switching allows computers to explore alternatives thousands or even millions of times faster than biological neurons, but this power pales in comparison with the combinatorial abilities of the billions of neurons and uncounted synapses that constitute a brain. Von Neumann knew that a structure vastly more complicated, flexible, and unpredictable than a computer was required before any electrons might leap the wide and fuzzy distinction between arithmetic and mind. Fifty years later, digital computers remain rats running two-dimensional mazes at basement level below the foundations of mind.

As a practicing mathematician and an armchair engineer, von Neumann knew that something as complicated as a brain could never be designed; it would have to be evolved. To build an artificial brain, you have to grow a matrix of artificial neurons first. In 1948, at the Hixon Symposium on Cerebral Mechanisms in Behavior, von Neumann pointed out in response to Warren S. McCulloch that “parts of the organism can act antagonistically to each other, and in evolution it sometimes has more the character of a hostile invasion than of evolution proper. I believe that these things have something to do with each other.” He then described how a primary machine could be used to exploit certain tendencies toward self-organization among a large number of intercommunicating secondary machines. He believed that selective evolution (via mechanisms similar to economic competition) of incomprehensibly complex processes among the secondary machines could lead to the appearance of comprehensible behavior at the level of the primary machine.

“If you come to such a principle of construction,” continued von Neumann, “all that you need to plan and understand in detail is the primary automaton, and what you must furnish to it is a rather vaguely defined matrix of units; for instance, 10^10 neurons which swim around in the cortex. . . . If you do not separate them . . . then, I think that it is achievable that the thing can be watched by the primary automaton and be continuously reorganized when the need arises. I think that if the primary automaton functions in parallel, if it has various parts which may have to act simultaneously and independently on separate features, you may even get symptoms of conflict . . . and, if you concentrate on marginal effects, you may observe the ensuing ambiguities. . . . Especially when you go to much higher levels of complexity, it is not unreasonable to expect symptoms of this kind.”47 The “symptoms of this kind” with which von Neumann and his audience of neurologists were concerned were the higher-order “ensuing ambiguities” that somehow bind the ingredients of logic and arithmetic into the cathedral perceived as mind.
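
A toy rendering of that scheme, under the assumption of a trivial external scoring task: the "primary automaton" below only measures and reorganizes, while the "secondary machines" are numeric genomes that vary blindly, and nothing in their internals needs to be planned or understood in detail. The task, the selection rule, and all names here are illustrative assumptions, not von Neumann's proposal.

```python
import random

TARGET = 42  # an arbitrary external criterion the primary automaton can measure

def score(secondary):
    """The primary automaton's only view of a secondary machine: how well it does."""
    return -abs(sum(secondary) - TARGET)

def vary(secondary):
    """Blind variation inside a secondary machine."""
    s = list(secondary)
    s[random.randrange(len(s))] += random.choice((-1, 1))
    return s

def primary_automaton(generations=200, population=50, genome_length=8):
    """Watch the population, keep the better half, refill with varied copies."""
    pool = [[random.randint(0, 10) for _ in range(genome_length)]
            for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=score, reverse=True)
        survivors = pool[: population // 2]
        pool = survivors + [vary(random.choice(survivors))
                            for _ in range(population - len(survivors))]
    return pool[0]

if __name__ == "__main__":
    best = primary_automaton()
    print("best secondary machine:", best, "sum =", sum(best))
```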

Von Neumann observed, in 1948, that information theory and thermodynamics exhibited parallels that would grow deeper as the two subjects were mathematically explored. In the last years of his foreshortened life, von Neumann began to theorize about the behavior of populations of communicating automata, a region in which the parallels with thermodynamics—and hydrodynamics—begin to flow both ways. “Many problems which do not prima facie appear to be hydrodynamical necessitate the solution of hydrodynamical questions or lead to calculations of the hydrodynamical type,” von Neumann had written in 1945. “It is only natural that this should be so.”48

Lewis Richardson's sphere of 64,000 mathematicians would not only model the large-scale turbulence of the atmosphere, they might, if they calculated and communicated fast enough, acquire an atmosphere of turbulence of their own. As self-sustaining vortices arise spontaneously in a moving fluid when velocity outweighs viscosity by a ratio to which Osborne Reynolds gave his name, so self-sustaining currents may arise in a computational medium when the flow of information among its individual components exceeds the computational viscosity by a ratio that John von Neumann, unfortunately, did not live long enough to define.
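
The fluid-dynamical ratio in question is the Reynolds number; the "computational Reynolds number" of the closing sentence is an analogy only, and the definition below covers just the standard hydrodynamic case:

```latex
% Reynolds number: the ratio of inertial to viscous effects in a moving fluid.
% Self-sustaining turbulence appears when Re exceeds a critical value
% (on the order of a few thousand for flow in a pipe).
Re = \frac{\rho\, u\, L}{\mu} = \frac{u\, L}{\nu}
% \rho: fluid density, u: characteristic velocity, L: characteristic length,
% \mu: dynamic viscosity, \nu = \mu / \rho: kinematic viscosity.
```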

7
SYMBIOGENESIS

Instead of sending a picture of a cat, there is one area in which we can send the cat itself.

—MARVIN MINSKY1

“During the summer of 1951,” according to Julian Bigelow, “a team of scientists from Los Alamos came and put a large thermonuclear calculation on the IAS machine; it ran for 24 hours without interruption for a period of about 60 days, many of the intermediate results being checked by duplicate runs, and throughout this period only about half a dozen errors were disclosed. The engineering group split up into teams and was in full-time attendance and ran diagnostic and test routines a few times per day, but had little else to do. So it had come alive.”2 The age of digital computers dawned over the New Jersey countryside while a series of thermonuclear explosions, led by the MIKE test at Eniwetok Atoll on 1 November 1952, corroborated the numerical results.

The new computer was used to explore ways of spawning as well as destroying life. Nils Aall Barricelli (1912–1993)—a mathematical biologist who believed that “genes were originally independent, virus-like organisms which by symbiotic association formed more complex units”—arrived at the institute in 1953 to investigate the role of symbiogenesis in the origin of life. “A series of numerical experiments are being made with the aim of verifying the possibility of an evolution similar to that of living organisms taking place in an artificially created universe,” he announced in the Electronic Computer Project's Monthly Progress Report for March.
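
As a loose illustration of the kind of experiment being announced, one can populate a cyclic array of cells with integers, let each number attempt to copy itself into a cell offset by its own value, and empty any cell that two different numbers contest. Every rule and parameter in the sketch below is an assumption for illustration only, not Barricelli's actual norms:

```python
import random

# A loosely Barricelli-inspired numerical universe (illustrative rules only):
# integers occupy a cyclic array of cells; each generation a number persists
# in place and tries to copy itself into the cell offset by its own value;
# a cell claimed by two different numbers is left empty ("collision").

SIZE = 64

def step(universe):
    new = [0] * SIZE
    claims = {}  # target cell -> set of distinct numbers claiming it
    for i, n in enumerate(universe):
        if n == 0:
            continue
        for target in (i, (i + n) % SIZE):
            claims.setdefault(target, set()).add(n)
    for cell, numbers in claims.items():
        if len(numbers) == 1:
            new[cell] = numbers.pop()  # uncontested claims survive
    return new

if __name__ == "__main__":
    universe = [random.choice([0, 0, 1, 2, 3, 5]) for _ in range(SIZE)]
    for _ in range(20):
        universe = step(universe)
    print(universe)  # whatever patterns persist after 20 generations
```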

The theory of symbiogenesis was introduced in 1909 by Russian botanist Konstantin S. Merezhkovsky (1855–1921) and expanded by Boris M. Kozo-Polyansky (1890–1957) in 1924.3 “So many new facts arose from cytology, biochemistry, and physiology, especially of lower organisms,” wrote Merezhkovsky in 1909, “that [in] an attempt once again to raise the curtain on the mysterious origin of organisms . . . I have decided to undertake . . . a new theory on the origin of organisms, which, in view of the fact that the phenomenon of symbiosis plays a leading role in it, I propose to name the theory of symbiogenesis.”4 Symbiogenesis offered a controversial adjunct to Darwinism, ascribing the complexity of living organisms to a succession of symbiotic associations between simpler living forms. Lichens, a symbiosis between algae and fungi, sustained life in the otherwise barren Russian north; it was only natural that Russian botanists and cytologists took the lead in symbiosis research. Taking root in Russian scientific literature, Merezhkovsky's ideas were elsewhere either ignored or declared unsound, most prominently in Edmund B. Wilson's dismissal of symbiogenesis as “an entertaining fantasy . . . that the dualism of the cell in respect to nuclear and cytoplasmic substance resulted from the symbiotic association of two types of primordial microorganisms, that were originally distinct.”5

Merezhkovsky viewed both plant and animal life as the result of a combination of two plasms: mycoplasm, represented by bacteria, fungi, blue-green algae, and cellular organelles; and amoeboplasm, represented by certain “monera without nuclea” that formed the nonnucleated material at the basis of what we now term eukaryotic cells. Merezhkovsky believed that mycoids came first. When they were eaten by later-developing amoeboids they learned to become nuclei rather than lunch. It is equally plausible that amoeboids came first, with mycoids developing as parasites later incorporated symbiotically into their hosts. The theory of two plasms undoubtedly contains a germ of truth, whether the details are correct or not. Merezhkovsky's two plasms of biology were mirrored in the IAS experiments by embryonic traces of the two plasms of computer technology—hardware and software—that were just beginning to coalesce.

The theory of symbiogenesis assumes that the most probable explanation for improbably complex structures (living or otherwise) lies in the association of less complicated parts. Sentences are easier to construct by combining words than by combining letters. Sentences then combine into paragraphs, paragraphs combine into chapters, and, eventually, chapters combine to form a book—highly improbable, but vastly more probable than the chance of arriving at a book by searching the space of possible combinations at the level of letters or words. It was apparent to Merezhkovsky and Kozo-Polyansky that life represents the culmination of a succession of coalitions between simpler organisms, ultimately descended from not-quite-living component parts. Eukaryotic cells are riddled with evidence of symbiotic origins, a view that has been restored to respectability by Lynn Margulis in recent years. But microbiologists arrived too late to witness the symbiotic formation of living cells.
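
The probabilistic point can be put in numbers. Assuming, purely for illustration, a stock of ready-made words and a short sentence counted both in letters and in words, the search space at the level of letters dwarfs the search space at the level of words:

```python
# Back-of-envelope comparison of search spaces; the vocabulary size and
# sentence lengths are arbitrary illustrative assumptions.
letters = 26
vocabulary = 20_000        # assumed stock of ready-made words
sentence_chars = 60        # a short sentence, counted in letters
sentence_words = 12        # the same sentence, counted in words

space_by_letters = letters ** sentence_chars
space_by_words = vocabulary ** sentence_words

print(f"combinations at the letter level: {space_by_letters:.2e}")
print(f"combinations at the word level:   {space_by_words:.2e}")
print(f"ratio: {space_by_letters / space_by_words:.2e}")
```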
