Perhaps it is hard to visualize space as something discrete. After all, why can something not be made to fit into half the volume of the smallest unit of space? The answer is that this is the wrong way to think, for to pose this question is to presume that space has some absolute existence into which things can fit. To understand what we mean when we say that space is
discrete, we must put our minds completely into the relational way of thinking, and really try to see and feel the world around us as nothing but a network of evolving relationships. These relationships are not among things situated in space - they are among the events that make up the history of the world. The relationships define the space, not the other way round.
From this relational point of view it makes sense to say that the world is discrete. Actually it is easier, because then we have to conceive of only a finite number of events. It is harder to visualize a smooth space constructed from a network of relationships, as this would require there to be an infinite number of relationships between the events in any volume of space, however small that volume. Even if we had no other evidence (and we do), the fact that discreteness makes the relational picture of spacetime so much easier to think about would be reason enough to imagine both space and time as discrete.
Of course, so far no one has ever observed an atom of space. Nor have any of the predictions that follow from the theories that predict that space is discrete been tested experimentally. So how is it that many physicists have already come to believe that space is discrete? This is indeed a good question, to which there is a good answer: the present situation is in some ways analogous to the period during which most physicists became convinced of the existence of atoms, during the twenty years spanning the last decade of the nineteenth century and the first decade of the twentieth. The first experiments that can be said to have detected atoms, which used the first, primitive elementary particle accelerators, were not done until just after this period, in 1911/12. By then most physicists were already convinced of the existence of atoms.
Presently we are in a crucial period during which the laws of physics are being rewritten - just as they were between 1890 and 1910, when the revolutions in twentieth-century physics that led to relativity and quantum physics began. The crucial arguments that led people to accept the existence of atoms were formulated during that period to resolve the paradoxes and contradictions that followed from the assumption
that matter and radiation were continuous. The experiments that detected atoms came later because their very conceptualization required ideas that were invented as part of the same process. Had the experiments been done twenty years earlier, the results might not even have been interpreted as evidence for the existence of atoms.
The crucial arguments that convinced people of the existence of atoms had to do with understanding the laws governing heat, temperature and entropy - the part of physics called thermodynamics. Among the laws of thermodynamics are the second law, which we have already discussed, which states that entropy never decreases, and the so-called zeroth law, which states that when the entropy of a system is as high as possible, it has a single uniform temperature. Between them comes the first law, which asserts that energy is never created or destroyed.
During most of the nineteenth century most physicists did not believe in atoms. It is true that the chemists had found that different substances combine in fixed ratios, which was suggestive of the existence of atoms. But the physicists were not very impressed. Until 1905 most of them thought either that matter was continuous, or that the question of whether there were atoms or not lay outside science, because even if they existed atoms would be forever unobservable. These scientists developed the laws of thermodynamics in a form that made no reference to atoms or their motions. They did not believe the basic definitions of temperature and entropy that I introduced in earlier chapters: that temperature is a measure of the energy of random motion, and that entropy is a measure of information. Instead, they understood temperature and entropy as essential properties of matter: matter was just a continuous fluid or substance, and temperature and entropy were among its basic properties.
Not only did the laws of thermodynamics make no reference to atoms, but the nineteenth-century founders of the theory even believed there was a reason why there could be no relation between atoms and thermodynamics. This is because the second law, by saying that entropy increases towards the future, introduces an asymmetry in time. According
to this law the future is different from the past because the future is the direction in which the entropy of the universe increases. On the other hand, these people reasoned that if there were atoms they would have to obey Newton’s laws. But these laws are reversible in time. Suppose you were to make a movie of a set of particles interacting according to Newton’s laws, and then show the movie twice to a group of physicists, once as it was made, and once running it backwards. As long as there were only a few particles in the movie, there would be no way for the physicists to determine which was the right way for time to go.
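The reversibility of Newton's laws can be made concrete in a few lines of code. The sketch below (mine, not the author's) evolves a particle in a spring potential with a time-reversible integrator, then flips the sign of the velocity and evolves again: the particle retraces its path back to where it started. The backwards movie is just as good a solution of the equations as the forwards one.

```python
# A particle on a spring, evolved with the velocity-Verlet integrator,
# which shares the exact time-reversal symmetry of Newton's laws.

def accel(x, k=1.0, m=1.0):
    return -k * x / m  # Hooke's law: F = -kx

def verlet(x, v, dt, steps):
    a = accel(x)
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = accel(x)
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, dt=0.01, steps=1000)   # run the movie forward
x2, v2 = verlet(x1, -v1, dt=0.01, steps=1000)  # flip velocities, run again
print(abs(x2 - x0))  # round-trip error: tiny, limited only by rounding
```

Flipping the velocities is exactly what playing the movie backwards does, which is why no observer of a few-particle film can tell which direction of time is the real one.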
Things are very different for large, macroscopic bodies. In the world we live in, the future is very different from the past, which is exactly what is captured in the law stating that entropy increases into the future. Because this seemed to contradict the fact that in Newton’s theory the future and the past are reversible, many physicists refused to believe that matter is made of atoms until the first few decades of the twentieth century, when conclusive experimental proof was obtained for their existence.
The ideas that temperature is a measure of energy in random motion and entropy is a measure of information underlie what is called the statistical formulation of thermodynamics. According to this view ordinary matter is made out of enormous numbers of atoms. This means that one has to reason statistically about the behaviour of ordinary matter. According to the founders of statistical mechanics, as the idea was called, one could explain the apparent paradox about the direction of time by deriving the laws of thermodynamics from Newton’s laws. The paradox was resolved by understanding that the laws of thermodynamics are not absolute: they describe what is most likely to happen, but there will always be a small probability of the laws being violated.
In particular, the laws assert that most of the time a large collection of atoms will evolve in such a way as to reach a more random - meaning more disorganized - state. This is just because the randomness of the interactions tends to wash out any organization or order that is initially present. But this need not happen, it is just what is most likely to happen. A
system which is very carefully prepared, or which incorporates structures that preserve a memory of what has happened to it - a complex molecule like DNA, for example - can be seen to evolve from a less ordered to a more ordered state.
The argument here is rather subtle, and it took several decades for most physicists to be convinced. The originator of the idea that entropy had to do with information and probability, Ludwig Boltzmann, committed suicide in 1906, before most physicists had accepted his arguments. (Whether or not his depression had anything to do with the failure of his colleagues to appreciate his reasoning, Boltzmann’s suicide had at least one far-reaching consequence: it convinced a young physics student named Ludwig Wittgenstein to give up physics and go to England to study engineering and philosophy.) In fact, the arguments that finally convinced most physicists of the existence of atoms had been published just the year before by the then patent office clerk Albert Einstein (‘Same Einstein’, as my physics teacher used to say). This argument had to do with the fact that the statistical point of view allowed the laws of thermodynamics to be violated from time to time. What Boltzmann had found was that the laws of thermodynamics would be exactly true only for systems that contained an infinite number of atoms. Of course, the number of atoms in a given system, such as the water in a glass, is very large, but it is not infinite. Einstein realized that for systems containing a finite number of atoms the laws of thermodynamics would be violated from time to time. Since the number of atoms in the glass is large, these effects are small, but in some circumstances they may still be observed. By making use of this fact Einstein was able to discover manifestations of the motions of atoms that could be observed. Some of these had to do with the fact that a grain of pollen, observed through a microscope, will dance around randomly because it is being jiggled by atoms colliding with it. As each atom has a finite size, and carries a finite amount of energy, the jiggles that result when they collide with the grain of pollen can be seen, even though the atoms themselves are far too small to be seen.
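The signature Einstein identified in the pollen grain's dance can be simulated directly. In the sketch below (my illustration, not from the text), the grain takes a random kick at each instant from unseen atoms; its mean squared displacement then grows linearly with time. The atoms are invisible, but this linear growth is visible and measurable, and it was from such measurements that the reality of atoms was finally read off.

```python
# Brownian motion as a random walk: each atomic collision kicks the
# grain one unit left or right. Averaged over many grains, the squared
# distance wandered grows in proportion to the elapsed time.
import random

random.seed(0)

def msd(steps, trials=2000):
    """Mean squared displacement of a 1-d random walk after `steps` kicks."""
    total = 0.0
    for _ in range(trials):
        x = 0
        for _ in range(steps):
            x += random.choice((-1, 1))
        total += x * x
    return total / trials

# Doubling the time roughly doubles the mean squared displacement.
print(msd(100), msd(200))
```

A grain moved by a smooth, continuous fluid would show no such jitter at all; the linear growth of the wander with time is precisely the fingerprint of discrete, finite kicks.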
The success of these arguments persuaded Einstein and a
few others, such as his friend Paul Ehrenfest, to apply the same reasoning to light. According to the theory published by James Clerk Maxwell in 1865, light consisted of waves travelling through the electromagnetic field, each wave carrying a certain amount of energy. Einstein and Ehrenfest wondered whether they could use Boltzmann’s ideas to describe the properties of light on the inside of an oven.
Light is produced when the atoms in the walls of the oven heat up and jiggle around. Could the light so produced be said to be hot? Could it have an entropy and a temperature? What they found was profoundly puzzling to them and to everyone else at the time. They found that horrible inconsistencies would arise unless the light were in a sense also to consist of atoms. Each atom of light, or quantum as they called it, had to carry a unit of energy related to the frequency of the light. This was the birth of quantum theory.
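In modern notation (this formula is standard physics, supplied here for reference rather than quoted from the text), the energy carried by each quantum of light is

```latex
E = h\nu
```

where $h$ is Planck's constant and $\nu$ is the frequency of the light - so bluer light arrives in larger packets of energy than redder light.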
I shall tell no more of this story, for it is indeed a very twisted one. Some of the results that Einstein and Ehrenfest employed in their reasoning had been found earlier by Max Planck, who had studied the problem of hot radiation five years earlier. It was in this work that the famous Planck’s constant first appeared. But Planck was one of those physicists who believed neither in atoms nor in Boltzmann’s work, so his understanding of his own results was confused and, in part, contradictory. He even managed to invent a convoluted argument that assured him that photons did not exist. For this reason the birth of quantum physics is more properly attributed to Einstein and Ehrenfest.
The moral of this story is that it was an attempt to understand the laws of thermodynamics that prompted two crucial steps in our understanding of atomic physics. These were the arguments that convinced physicists of the existence of atoms, and the arguments by which the existence of the photon was first uncovered. It was no coincidence that both these steps were taken by the same young Einstein in the same year.
We can now turn back to quantum gravity, and in particular to quantum black holes. For what we have seen in the last few chapters is that black holes are systems which may be
described by the laws of thermodynamics. They have a temperature and an entropy, and they obey an extension of the law of increase of entropy. This allows us to raise several questions. What does the temperature of a black hole actually measure? What does the entropy of a black hole really describe? And, most importantly, why is the entropy of a black hole proportional to the area of its horizon?
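For reference, the proportionality in question is the standard Bekenstein-Hawking formula (supplied here, not stated in the passage above):

```latex
S_{\mathrm{BH}} = \frac{k_B c^3}{4 G \hbar}\, A
```

where $A$ is the area of the horizon. Written in natural units, the entropy amounts to one quarter of a unit for every Planck area of horizon, which is one hint of why people suspect the horizon is built from discrete units of area.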
The search for the meaning of temperature and entropy of matter led to the discovery of atoms. The search for the meaning of the temperature and entropy of radiation led to the discovery of quanta. In just the same way, the search for the meaning of the temperature and entropy of a black hole is now leading to the discovery of the atomic structure of space and time.
Consider a black hole interacting with a gas of atoms and photons. The black hole can swallow an atom or a photon. When it does so, the entropy of the region outside the black hole decreases because the entropy is a measure of information about that region, and if there are fewer atoms or photons there is less to know about the gas. To compensate, the entropy of the black hole must increase, otherwise the law that entropy can never decrease would be violated. As the entropy of the black hole is proportional to the area of its horizon, the result must be that the horizon expands a little.
And indeed, this is what happens. The process can also go the other way: the horizon can shrink a little, which means that the entropy of the black hole will decrease. To compensate, the entropy outside the black hole must increase. To accomplish this, photons must be created just outside the black hole - photons that comprise the radiation that Hawking predicted should be emitted by a black hole. The photons are hot, so they can carry the entropy that must be created to compensate for the fact that the horizon shrinks.
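The temperature of this radiation has a standard expression (again given for reference, not quoted from the text): for a black hole of mass $M$,

```latex
T_H = \frac{\hbar c^3}{8 \pi G M k_B}
```

so, counter-intuitively, smaller black holes are hotter, and a hole that shrinks by radiating radiates ever more fiercely.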
What is happening is that, to preserve the law that entropy does not decrease, a balance is being struck between, on the one hand, the entropy of atoms and photons outside the black hole, and, on the other, the entropy of the black hole itself. But notice that two very different things are being balanced. The entropy outside the black hole we understand in terms of
the idea that matter is made out of atoms; it has to do with missing information. The entropy of the black hole itself seems to have nothing to do with either atoms or with information. It is a measure of a quantity which has to do with the geometry of space and time: it is proportional to the area of the black hole’s event horizon.
There is something incomplete about a law which asserts a balance or an exchange between two very dissimilar things. It is as though we had two kinds of currency, the first of which was exchangeable into a concrete entity such as gold, while the other was worth nothing more than the paper it was printed on. Suppose we were allowed to mix the two kinds of money freely in our bank accounts. Such an economy would be based on a contradiction, and could not survive for long. (In fact, communist governments experimented with two kinds of currency, one convertible into other currencies and one not, and discovered that the system is unstable in the absence of all sorts of complicated and artificial restrictions on the use of the two kinds of money.) Similarly, a law of physics that allows information to be converted into geometry, and vice versa, but gives no account of why, should not survive for long. There must be something deeper and simpler at the root of the equivalence.