
Anthropic Reasoning

Almost everybody would agree that the question “Does extraterrestrial intelligent life exist?” is one of the most intriguing questions in science today. That this is a reasonable question to ask stems from an important truth: the properties of our universe, and the laws governing it, have allowed complex life to emerge. Obviously, the precise biological peculiarities of humans depend crucially on the Earth’s properties and its history, but some basic requirements would seem necessary for any form of intelligent life to materialize. For instance, galaxies composed of stars, and planets orbiting at least some of those stars, appear to be reasonably generic. Similarly, nucleosynthesis in stellar interiors had to forge the building blocks of life: atoms such as carbon, oxygen, and iron. The universe also had to provide a sufficiently hospitable environment—for a long enough time—that these atoms could combine and form the complex molecules of life, enabling primitive life to evolve to its “intelligent” phase.

In principle, one could imagine “counterfactual” universes that are not conducive to the appearance of complexity. For instance, consider a universe harboring the same laws of nature as ours, and the same values of all the “constants of nature” but one. That is, the strengths of the gravitational, electromagnetic, and nuclear forces are identical to those in our universe, as are the ratios of the masses of all the elementary particles. However, the value of one parameter—the cosmological constant—is a thousand times higher in this hypothetical universe. In such a universe, the repulsive force associated with the cosmological constant would have resulted in such a rapid expansion that no galaxies could have ever formed.
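
A rough way to see why (a back-of-the-envelope bound in the spirit of the anthropic argument discussed below; the numbers are illustrative, not from the text): density fluctuations can grow into galaxies only while matter dominates the expansion, and galaxies had to condense at redshifts of a few, when the mean matter density was (1+z)^3 times its present value. Requiring that the vacuum energy density not exceed the matter density at that epoch gives

\[
\rho_{\Lambda} \;\lesssim\; \rho_{m}(z_{\rm gal}) \;=\; \rho_{m,0}\,(1+z_{\rm gal})^{3} \;\approx\; 125\,\rho_{m,0}
\qquad \text{for } z_{\rm gal} \approx 4 .
\]

A cosmological constant a thousand times larger than ours would have overshot this bound, dominating the expansion long before any galaxy could condense.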

As we have seen, the question we have inherited from Einstein was this: Why should there be a cosmological constant at all? Modern physics transformed that question into: Why should empty space exert a repulsive force? However, owing to the discovery of accelerating expansion, we now ask: Why is the cosmological constant (or the force exerted by the vacuum) so small? In 1987, in the wake of all the previous failed attempts to put a cap on the energy of empty space, physicist Steven Weinberg came up with a bold “What if?” question. What if the cosmological constant is not truly fundamental—explicable within the framework of a “theory of everything”—but accidental? That is, imagine that there exists a vast ensemble of universes—a “multiverse”—and that the cosmological constant may assume different values in different universes. Some universes, such as the counterfactual one we discussed with a thousandfold larger lambda, would not have developed complexity and life. We humans find ourselves in one of those universes that are “biophilic.” In such a case, no grand unified theory of the basic forces would fix the value of the cosmological constant. Rather, the value would be determined by the simple requirement that it should fall within the range that would allow humans to evolve. In a universe with too large a cosmological constant, there would be no one to ask the question about its value. Physicist Brandon Carter, who first presented this type of argument in the 1970s, dubbed it the “anthropic principle.” The attempts to delineate the “pro-life” domains are accordingly described as anthropic reasoning. Under what conditions can we even attempt to apply this type of reasoning to explain the value of the cosmological constant?

In order to make any sense at all, anthropic reasoning has to rely on three basic assumptions:

 

1. Observations are subject to a “selection bias”—a filtering of physical reality—by the very fact that they are carried out by humans.

2. Some of the nominal “constants of nature” are accidental rather than fundamental.

3. Our universe is but one member of a gigantic ensemble of universes.

 

Let me briefly examine each of these points and attempt to assess its viability.

Statisticians always dread selection biases. These are distortions of the results, introduced either by the data-collecting tools or by the method of data accumulation. Here are a few simple examples to demonstrate the effect. Imagine that you want to test an investment strategy by examining the performance of a large group of stocks against twenty years’ worth of data. You might be tempted to include in the study only stocks for which you have complete information over the entire twenty-year period. However, eliminating stocks that stopped trading during this period would produce biased results, since these were precisely the stocks that did not survive the market.
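
A few lines of code make the distortion concrete. The sketch below is purely illustrative (the return distribution and the delisting rule are invented assumptions, not market data): it simulates a market in which badly performing stocks tend to stop trading, then compares the average return of all stocks with that of the survivors alone.

    import random

    random.seed(42)

    stocks = []
    for _ in range(10_000):
        annual_return = random.gauss(0.05, 0.20)  # assumed return distribution
        # Assume stocks that lose more than 25% are likely to be delisted.
        delisted = annual_return < -0.25 and random.random() < 0.8
        stocks.append((annual_return, delisted))

    mean_all = sum(r for r, _ in stocks) / len(stocks)
    survivors = [r for r, d in stocks if not d]
    mean_survivors = sum(survivors) / len(survivors)

    print(f"mean return, all stocks:     {mean_all:+.3f}")
    print(f"mean return, survivors only: {mean_survivors:+.3f}")  # biased upward

The survivors-only average comes out noticeably higher, even though both numbers describe exactly the same market.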

During World War II, the Jewish Austro-Hungarian mathematician Abraham Wald demonstrated a remarkable understanding of selection bias. Wald was asked to examine data on the location of enemy fire hits on the bodies of returning aircraft, to recommend which parts of the airplanes should be reinforced to improve survivability. To his superiors’ amazement, Wald recommended adding armor to the locations that showed no damage. His unique insight was that the bullet holes that he saw in surviving aircraft indicated places where an airplane could be hit and still endure. He therefore concluded that the planes that had been shot down were probably hit precisely in those places where the surviving planes were lucky enough not to have been hit.

Astronomers are very familiar with the Malmquist bias (named after the Swedish astronomer Gunnar Malmquist, who greatly elaborated upon it in the 1920s). When astronomers survey stars or galaxies, their telescopes are sensitive only down to a certain brightness. However, objects that are intrinsically more luminous can be observed to greater distances. This will create a false trend of increasing average intrinsic brightness with distance, simply because the fainter objects will not be seen.
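
The effect is easy to reproduce in a toy survey. In the sketch below (all numbers are invented assumptions, not real astronomical data), every object draws its luminosity from the same distribution regardless of distance, yet the flux-limited sample appears to brighten with distance:

    import math
    import random

    random.seed(1)

    FLUX_LIMIT = 1e-6  # assumed detection threshold, arbitrary units

    detected = []      # (distance, luminosity) pairs passing the flux cut
    for _ in range(200_000):
        d = random.uniform(100, 1000)        # distance, arbitrary units
        L = random.lognormvariate(0.0, 1.0)  # same luminosity law everywhere
        if L / (4 * math.pi * d**2) >= FLUX_LIMIT:
            detected.append((d, L))

    near = [L for d, L in detected if d < 400]
    far = [L for d, L in detected if d > 700]
    print(f"mean luminosity, d < 400: {sum(near)/len(near):.2f}")
    print(f"mean luminosity, d > 700: {sum(far)/len(far):.2f}")  # larger

Both samples come from the same underlying population; only the detection threshold differs, which is precisely Malmquist’s point.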

Brandon Carter pointed out that we shouldn’t take the Copernican principle—the notion that we occupy no special place in the cosmos—too far. He reminded astronomers that humans are the ones who make observations of the universe; consequently, they should not be too surprised to discover that the properties of the cosmos are consistent with human existence. For instance, we could not discover that our universe contains no carbon, since we are carbon-based life-forms. Initially, most researchers took Carter’s anthropic reasoning to be nothing more than a trivially obvious statement. Over the past couple of decades, however, the anthropic principle has gained some popularity. Today quite a few leading theorists accept that, in the context of a multiverse, anthropic reasoning can lead to a natural explanation for the otherwise perplexing value of the cosmological constant. To recapitulate the argument: if lambda were much larger (as some probabilistic considerations seem to require), then the cosmic acceleration would have overwhelmed gravity before galaxies had a chance to form. The fact that we find ourselves here in the Milky Way galaxy necessarily biases our observations toward low values of the cosmological constant in our universe.

But how reasonable is the assumption that some physical constants are “accidental”? A historical example can help clarify the concept. In 1597 the great German astronomer Johannes Kepler published a treatise known as Mysterium Cosmographicum (The Cosmic Mystery). In this book, Kepler thought that he had found the solution to two bewildering cosmic enigmas: Why were there precisely six planets in the solar system (only six were known at the time), and what determined the sizes of the planetary orbits? Even in Kepler’s time, his answers to these riddles were borderline crazy. He constructed a model for the solar system by embedding the five regular solids known as the Platonic solids (tetrahedron, cube, octahedron, dodecahedron, and icosahedron) inside each other. Together with an outer sphere corresponding to the fixed stars, the solids determined precisely six spacings, which to Kepler “explained” the number of the planets. By choosing a particular order for which solid to embed in which, Kepler was able to achieve approximately the correct relative sizes for the orbits in the solar system. However, the main problem with Kepler’s model was not in its geometrical details—after all, Kepler used the mathematics that he knew to explain existing observations. The key failure was that Kepler did not realize that neither the number of planets nor the sizes of their orbits were fundamental quantities—ones that can be explained from first principles. While the laws of physics indeed govern the general process of planet formation from a protoplanetary disk of gas and dust, the particular environment of any young stellar object determines the end result.
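
To see how nearly, and only nearly, the scheme worked, consider the cube, which Kepler placed between the spheres of Saturn and Jupiter. The check below uses modern orbital values and is offered as an illustration; it is not Kepler’s own computation. A sphere inscribed in a cube of side a has radius a/2, while the circumscribed sphere has radius a√3/2, so the model predicts

\[
\frac{a_{\rm Jupiter}}{a_{\rm Saturn}} \;=\; \frac{a/2}{a\sqrt{3}/2} \;=\; \frac{1}{\sqrt{3}} \;\approx\; 0.577,
\qquad \text{versus the observed} \quad
\frac{5.2\ \text{AU}}{9.5\ \text{AU}} \;\approx\; 0.55 .
\]

The agreement is close enough to be seductive, yet, as the following paragraphs explain, there was never a first-principles target for it to hit.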

We now know that there are billions of extrasolar planets in the Milky Way, and each planetary system is different in terms of its members and orbital properties. Both the number of the planets and the dimensions of their circuits are accidental, as is, for instance, the precise shape of any individual snowflake.

There is one particular quantity in the solar system that has been crucial for our existence: the distance between the Earth and the Sun. The Earth is in the Sun’s habitable zone—the narrow circumstellar band that allows for liquid water to exist on the planet’s surface. At much closer distances, water evaporates, and at much larger ones, it freezes. Water was essential for life to emerge on Earth, since molecules could combine easily in the young Earth’s “soup” and could form long chains while being sheltered from harmful ultraviolet radiation. Kepler was obsessed with the idea of finding a first-principles explanation for the Earth-Sun distance, but this obsession was misguided. There was nothing to prevent the Earth (in principle) from forming at a different distance. But had that distance been significantly larger or smaller, there would have been no Kepler to wonder about it. Among the billions of planetary systems in the Milky Way galaxy, many probably do not harbor life, since they don’t have the right planet in the habitable zone around the host star. Even though the laws of physics did determine the orbit of the Earth, there is no deeper explanation for its radius other than the fact that had it been very different, we wouldn’t be here.
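
The sensitivity to distance can be quantified with a standard textbook estimate (not a calculation from this chapter): a planet’s equilibrium temperature, set by the balance between absorbed sunlight and radiated heat, falls off as the inverse square root of its distance d from the star,

\[
T_{\rm eq} \;=\; T_{\odot}\,(1-A)^{1/4}\,\sqrt{\frac{R_{\odot}}{2d}} ,
\]

where T_⊙ ≈ 5,800 K is the Sun’s surface temperature, R_⊙ its radius, and A the planet’s albedo. For the Earth (A ≈ 0.3, d = 1 astronomical unit) this gives T_eq ≈ 255 K, which the greenhouse effect raises to a clement 288 K at the surface. Halving or doubling d changes T_eq by a factor of √2, enough, once greenhouse effects are included, to boil away or freeze surface water.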

This brings us to the last necessary ingredient of anthropic reasoning: For the explanation of the value of the cosmological constant in terms of an accidental quantity in a multiverse to hold any water, there must be a multiverse. Is there? We don’t know, but that has never stopped smart physicists from speculating. What we do know is that in one theoretical scenario known as “eternal inflation,” the dramatic stretching of space-time can produce an infinite and everlasting multiverse. This multiverse is supposed to continually generate inflating regions, which evolve into separate “pocket universes.” The big bang from which our own “pocket universe” came into existence is just one event in a much grander scheme of an exponentially expanding substratum. Some versions of “string theory” (now sometimes called “M-theory”) also allow for a huge variety of universes (more than 10^500!), each potentially characterized by different values of physical constants. If this speculative scenario is correct, then what we have traditionally called “the universe” could indeed be just one piece of space-time in a vast cosmic landscape.

One should not get the impression that all (or even most) physicists believe that the solution to the puzzle of the energy of empty space will come from anthropic reasoning. The mere mention of the “multiverse” and “anthropics” tends to raise the blood pressure of some physicists. There are two main reasons for this adverse reaction. First, as already mentioned in chapter 9, ever since the seminal work of philosopher of science Karl Popper, for a scientific theory to be worthy of its name, it has to be falsifiable by experiments or observations. This requirement has become the foundation of the “scientific method.” An assumption about the existence of an ensemble of potentially unobservable universes appears, at first glance at least, to be in conflict with this prerequisite and therefore to belong to the realm of metaphysics rather than physics.

Note, however, that the boundary between what we define as observable and what is not is unclear. Consider, for instance, the “particle horizon”: that surface around us from which radiation emitted at the big bang is just now reaching us. In the Einstein–de Sitter model—the model for a homogeneous, isotropic, constant-curvature universe with no cosmological constant—the cosmic expansion decelerates, and one could safely expect that all the objects currently lying beyond the horizon will eventually become observable in the distant future. But since 1998, we have known that we don’t live in an Einstein–de Sitter cosmos: our universe is accelerating. In such a universe, any object now beyond the horizon will stay beyond the horizon forever. Moreover, if the accelerating expansion continues, as anticipated for a cosmological constant, even galaxies that we can now see will become invisible to us! As their recession speed approaches the speed of light, their radiation will stretch (redshift) to the point where its wavelength will exceed the size of the observable universe. (There is no limit on how fast space-time can stretch, since no mass is really moving.) So even our own accelerating universe contains objects that neither we nor future generations of astronomers will ever be able to observe. Yet we would not consider such objects as belonging to metaphysics.

What, then, could give us confidence in potentially unobservable universes? The answer is a natural extension of the scientific method: We can believe in their existence if they are predicted by a theory that gains credibility because it is corroborated in other ways. We believe in the properties of black holes because their existence is predicted by general relativity—a theory that has been tested in numerous experiments. The rules should be a straightforward extrapolation of Popper’s ideas: If a theory makes testable and falsifiable predictions in the observable parts of the universe, we should be prepared to accept its predictions in those parts of the universe (or multiverse) that are not accessible to direct observations.
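
The finite reach of light in an accelerating universe can be made explicit with a one-line integral. The calculation below is a standard exercise rather than something from this chapter, and it assumes the simplest case: pure exponential (de Sitter) expansion with a constant Hubble parameter H, so that the scale factor grows as a(t) = e^{H(t − t₀)}. The comoving distance that light emitted today can ever cover is

\[
D \;=\; \int_{t_0}^{\infty} \frac{c\,dt}{a(t)}
  \;=\; \int_{t_0}^{\infty} c\,e^{-H(t-t_0)}\,dt
  \;=\; \frac{c}{H} ,
\]

a finite number. Any galaxy whose comoving distance exceeds c/H sits beyond our event horizon: no signal we send now, and none it sends after a certain moment, will ever bridge the gap, exactly the situation described above.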
