
Pauli had visited Princeton once before, during a tour of the United States in 1935–36 (and had met Gödel briefly on the Atlantic crossing). Now, he was to join the Institute for Advanced Study for an indefinite period. In small-town Princeton, during the five lonely war years, he became very close to Einstein. They were wonderfully matched, personally as well as intellectually. The irrepressible Pauli was not intimidated by Einstein's stature. Pauli was sardonic, earthy, tactless, rough-edged—which is to say, a sort of rude version of Einstein. The elder man felt completely at home with his caustic younger colleague, and he gave as good as he got. “You were right after all,” he wrote to Pauli in 1931, conceding a point on quantum theory—and added: “you rascal [Sie Spitzbube].”[193] In Princeton, they prospered together.

PART 3
THE UNIVERSE

Physics, mathematics, and the universe—these three words form the angles of a tangled and intimate set of relations. Einstein and Pauli, the physicists, and Gödel and Russell, the mathematicians—each worked within a science that attempts to describe the actual world. That, at least, was the purpose of mathematics at its inception (Euclid's geometry) and the effect of physics in the nineteenth and early twentieth centuries.

THE LOGIC OF PARADOX

Before we turn to relativity, quantum mechanics, and the search for a unified theory, we shall take a brief detour into another world altogether—that of mathematical logic. Like physics, the world of mathematics underwent revolutionary changes throughout the nineteenth century and into the twentieth. In so doing, it virtually merged with the doctrines of analytical philosophy and logicism. Few players in the twin worlds of mathematics and logic were more influential than Russell and Gödel.

There is good reason for starting with mathematics. True, physics began from observations of the visible world. Yet it evolved through mathematics. From the late nineteenth century on, mathematics became an essential tool of the physicist. Though mathematics was never absent from early modern physics—Newton invented calculus, after all—in the twentieth century, mathematics overtook empiricism as the primary method for generating physics. What Newton could observe (albeit through eyes made keen by the imagination) in a falling apple or a setting moon no longer mattered in twentieth-century physics.

Mathematics does not describe the physical world per se. It does, however, problem-solve in the realms of space, number, form, and change. Through mathematics, Einstein explored four-dimensional geometries never seen on land or sea. Today, the mathematics of string theory yields nine space dimensions. These are not observable phenomena. The nine space dimensions cannot even be explained properly in nonmathematical terms.[1] We are led to these proposals not through observation, but through mathematics. Einstein, a born physicist schooled in nineteenth-century empiricism, approached mathematical formalism with trepidation: “As far as the laws of mathematics refer to reality, they are not certain; as far as they are certain, they do not refer to reality.”[2]

For pragmatic physicists, mathematical formalism either works or it does not. Mathematics is a tool. Why does it work? No one has a satisfactory answer. In his celebrated paper “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” the physicist Eugene Wigner pondered the seeming miracle of the mathematics–physics connection:

The mathematical formulation of the physicist's often crude experience leads in an uncanny number of cases to an amazingly accurate description of a large class of phenomena. This shows that the mathematical language has more to commend it than being the only language which we can speak; it shows that it is, in a very real sense, the correct language.[3]

Mathematics, like experimentation, sometimes yields surprising or even unwanted results, as if it, too, were beyond human control. In 1928, the British physicist Paul Dirac formulated an equation only to find that it predicted a hitherto unknown and startling particle, the antielectron (or positron). One might even say that it was not Dirac, but his equation (via a minus sign), that discovered antimatter.
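
The minus sign in question can be glossed schematically (a textbook gloss, not Dirac's own derivation). His equation was built to respect the relativistic relation between energy and momentum, which admits two roots:

\[
E^{2} = p^{2}c^{2} + m^{2}c^{4}
\quad\Longrightarrow\quad
E = \pm\sqrt{p^{2}c^{2} + m^{2}c^{4}}.
\]

The equation itself, \((i\hbar\gamma^{\mu}\partial_{\mu} - mc)\,\psi = 0\), could not discard the negative root, and its negative-energy solutions turned out to describe the positron.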

Alongside the brief and elegant Dirac equation, general relativity, with its phalanx of equations, is positively epical. It describes not a particle, but the structure of space-time in the universe. It is, nevertheless, a theory tied to observable phenomena, though the observation took place by way of Einstein's “thought experiment” as he imagined himself flying on a beam of light. But, as with Dirac, Einstein's relativity equations were wiser than their maker. As he pondered general relativity, Einstein realized, to his dismay, that the equations described an expanding universe. That was not his intent. To remedy matters, he proposed an emergency fix, a “cosmological term” or constant to keep the universe static. Not only was this fix ill received; it was also wrong. Twelve years later, Edwin Hubble proved that, far from being static, the universe was expanding. The cosmological constant was, in Einstein's view, the “greatest blunder” of his life.[4] It was a blunder born of his preference for the physical and observable over the mathematical. In time, Einstein grew more trusting of mathematical formalism, but only because there seemed no other way to pursue his unified field project.
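
The “emergency fix” amounted to a single added term. In modern notation (not the authors'), the field equations with Einstein's cosmological constant \(\Lambda\) read:

\[
R_{\mu\nu} - \tfrac{1}{2}\,R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu} = \frac{8\pi G}{c^{4}}\,T_{\mu\nu}.
\]

Setting \(\Lambda = 0\) recovers the original 1915 equations; Einstein tuned \(\Lambda\) so that his model universe would neither expand nor contract.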

For Russell and Gödel, no such “practical” matters intruded into mathematics. Still, their work would lead to very practical ends. Out of Russell's system of logical notation and, even more importantly, Gödel's incompleteness theorems emerged the foundations for the computer revolution.

By the time Russell came to Princeton in 1943, he had, by his own admission, left mathematics and mathematical logic far behind.[5] Still, his Principia Mathematica expressed the sheer prowess of predicate logic as much through its comprehensiveness as through its innovations. These included an improved notational system and a comprehensive “type” theory based on a hierarchy of “classes.” The Principia Mathematica inspired successive generations of twentieth-century philosophers: Wittgenstein, Rudolf Carnap, A. J. Ayer, W. V. Quine, and, indeed, the whole of twentieth-century analytical philosophy.

Twenty years later, Gödel, fresh from his dissertation on the completeness of first-order logic, formulated two proofs about formal systems strong enough to contain arithmetic. (First-order logic differs from second-order logic in its expressive power: first-order logic quantifies only over individuals; second-order logic also quantifies over properties of, and relations among, those individuals.) Gödel's two proofs became known as his “incompleteness” theorems. They brought about a paradigm shift as radical as those of “relativity” and “uncertainty.” At first, the shift was scarcely noticed. When Gödel made his announcement at a conference on epistemology, only one participant, the brilliant polymath John von Neumann, had an inkling of what the proofs implied. Only very slowly did their depth and breadth sink in. Still, true to the paradigm of paradigm shifts, Gödel's theorems met with resistance from a whole generation of logicians. No wonder: in two proofs, he had demonstrated without any doubt that (1) any consistent, effectively axiomatized formal system that includes the arithmetic of counting numbers (that is, arithmetic using simple cardinal numbers—1, 2, 3, etc.) is incomplete, in that there are statements of the system that can be neither proved nor disproved within it, and (2) such a system cannot be proved consistent by means formalizable within the system itself.
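
In modern textbook shorthand (a standard paraphrase, not Gödel's own wording), for any consistent, effectively axiomatized theory \(T\) containing basic arithmetic:

\[
\text{(1)}\quad \exists\, G_{T}\ \text{such that}\ T \nvdash G_{T}\ \text{and}\ T \nvdash \lnot G_{T};
\qquad
\text{(2)}\quad T \nvdash \mathrm{Con}(T),
\]

where \(G_{T}\) is a sentence in the language of arithmetic and \(\mathrm{Con}(T)\) is an arithmetical sentence expressing “\(T\) is consistent.”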

Gödel's proofs rocked the mathematical world, but it is not useful to exaggerate their effect. They did not cast logic or mathematics onto the garbage heap. On the contrary, logical systems were useful before Gödel's proofs, and they remained useful afterwards. Mathematics continued to depend on axioms and systems that, although “incomplete,” worked quite well. But absolute consistency and completeness, much sought as measures of the strength of mathematical systems, could not be found. Like “uncertainty,” Gödel's “incompleteness” suggests the limits of what can be formalized. It is always possible, according to Gödel's proofs, to find a statement that is true but that cannot be proved within the arithmetic system. “Incompleteness” has as its positive formulation “inexhaustibility,” argues the Swedish mathematician and computer scientist Torkel Franzén. Gödel himself recognized the philosophical implications, carefully italicizing what cannot be:

It is this theorem [the second incompleteness theorem] which makes the incompletability of mathematics particularly evident. For, it makes it impossible that someone should set up a certain well-defined system of axioms and rules and consistently make the following assertion about it: All of these axioms and rules I perceive (with mathematical certitude) to be correct, and moreover I believe that they contain all of mathematics. If somebody makes such a statement he contradicts himself. For if he perceives the axioms under consideration to be correct, he also perceives (with the same certainty) that they are consistent. Hence, he has a mathematical insight not derivable from his axioms.[6]

If a formal system of arithmetic can be neither complete nor proved consistent within itself (the negative formulation), then it must (in theory) always be open to another axiom, ad infinitum (the positive formulation).[7]

To explain “incompleteness,” we must look (briefly) at its place in philosophical history and (more briefly still) at what the proofs achieved.[8] Incompleteness takes its place in—or, more precisely, responds to—a line of philosophical thought that began with Gottfried Leibniz, a true polymath whose expertise ranged from Chinese history to library science. Leibniz was Newton's contemporary and greatest rival: they discovered calculus independently. Leibniz once postulated that space was relative; Newton won that one, and space remained absolute until Einstein. As a logician, though, Leibniz was a towering presence. Into logic, Leibniz injected mathematics. The result was a symbolic logic that would unite mathematics and philosophy.

Important though Leibniz seems to us today, his works fell into some obscurity after his death. He was rediscovered by Kant, the last great Enlightenment philosopher, and then again at the end of the nineteenth century. In 1900, Bertrand Russell published The Philosophy of Leibniz. It was, as Russell writes, a “reconstruction of the system which Leibniz should have written”[9] had he not been writing philosophy primers for princes. Thus did Russell identify Leibniz as the progenitor manqué of symbolic logic.

Logic became a vibrant and fashionable field towards the end of the nineteenth century: Giuseppe Peano, Ernst Schröder, and Charles Peirce, along with Gottlob Frege, were instrumental in its development. Their project was to rid mathematics of its cobwebs and clutter—ambiguities that in the past had worried no one. Now, in the era of science, mathematics must be systematized and established within the realm of logic. It must be demonstrated, as Russell said, that “all pure mathematics follows from purely logical premises and uses only concepts definable in logical terms.”[10] Logicism was born.

Much influenced by Peano, Frege, and nineteenth-century formalism, Russell and his fellow Englishman Alfred North Whitehead launched the ambitious, almost foolhardy project that Frege's system of symbols began. Their plan: to derive all mathematical truths from a set of axioms and inference rules written in symbolic language. The idea grew out of the International Congress of Mathematicians of 1900, held in Paris. At that same conference, though not noted by Russell in his autobiography, another germ was planted. It was the celebrated Hilbert “challenge” for young mathematicians: twenty-three mathematical problems that called out for solution. What Hilbert hoped for was a mathematics without paradox.

Paradox there was. By 1902, Gottlob Frege had published the first volume of his two-volume treatise Grundgesetze der Arithmetik (The Basic Laws of Arithmetic). In it, he proved that mathematics could be reduced to logic—or so he thought. Russell would prove otherwise by coming up with what was soon dubbed Russell's paradox.

The paradox came to him while he was at work on what would become his monumental Principia. Suddenly, he experienced “an intellectual setback.”[11] Thinking about sets, he began to wonder about the “sets which are not members of themselves.” That would seem a simple concept. The set of all red convertibles is not a member of itself. However, the set of sets with more than one member is a member of that very set. What, then, of the set of all sets that are not members of themselves? Is that set a member of itself? Either answer poses a contradiction: if the set of all sets that are not members of themselves is a member of itself, then it is not actually a set of nonmembers, and vice versa. He pondered the contradiction for months, hoping to find a way out. In the end, he broke the news of the paradox to Frege in a letter. Frege's response, though gracious, left no doubt that he felt his life's work had been cast into disarray.
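
In modern set-builder notation (ours, not Russell's), the paradox fits on one line. Write \(R\) for the set of all sets that are not members of themselves:

\[
R = \{\, x \mid x \notin x \,\}
\quad\Longrightarrow\quad
R \in R \iff R \notin R,
\]

a biconditional that no consistent assignment of truth values can satisfy. This is the contradiction Russell reported to Frege.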
