The velocity–distance relation became known as “Hubble's law,” with the scaling between the two properties known as “Hubble's constant,” which is measured in units of kilometers per second per megaparsec (Mpc). (A megaparsec is one million parsecs; a parsec is a distance equivalent to 3.26 light years.) Hubble's constant gives a measure of the present rate of expansion of the universe, and with it an estimate of the universe's age. For any galaxy receding at constant speed, dividing the distance it has traveled by its velocity gives the time the journey has taken. Thus the inverse of Hubble's constant yields an estimate of the age of the universe, assuming the expansion has remained uniform. The first value for Hubble's constant was 500 km/s/Mpc, from which the universe was deduced to have an age of two billion years. The accurate evaluation of Hubble's constant was an important aim throughout 20th-century astronomy, and for many decades the accepted value was uncertain by more than 50 percent. Its determination was one of the key projects guiding observations with the Hubble Space Telescope. Only in the first years of the 21st century did measurements pin down the Hubble constant to a value of about 70 km/s/Mpc, revising the resulting estimate of the universe's age to close to 14 billion years.
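The arithmetic behind these age estimates is simple enough to check. The short sketch below (mine, not the book's; the unit conversions are standard values) inverts Hubble's constant to turn an expansion rate into a time:

    # A rough check of the "inverse Hubble constant" age estimate.
    # Assumes the expansion has stayed uniform, as the text notes.
    KM_PER_MPC = 3.086e19        # kilometers in one megaparsec
    SECONDS_PER_YEAR = 3.156e7

    def hubble_time_years(h0_km_s_mpc):
        """Age estimate 1/H0, in years, for H0 given in km/s/Mpc."""
        return KM_PER_MPC / h0_km_s_mpc / SECONDS_PER_YEAR

    print(hubble_time_years(500))  # ~2.0e9 years: Hubble's original estimate
    print(hubble_time_years(70))   # ~1.4e10 years: the modern value

Dividing distance by velocity cancels the mixed units, which is why a smaller Hubble constant implies an older universe.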
Hubble's discovery did not just revolutionize how astronomers viewed the history of the universe, overturning the idea of a static universe in favor of one that was evolving with time; it also pushed them to consider its future over the billions of years to come. If the universe were expanding, would it ever stop? There were three immediate possibilities, and which was correct would depend on the total mass density of the universe. The expansion of space was carrying the galaxies further and further from each other. If the total mass of the universe was high, then the combined gravity would eventually slow the expansion: at some instant the galaxies would cease moving, and then pull back toward each other in an inevitable “Big Crunch” marking the end of the universe. At the other extreme, if the universe were comparatively empty, there would be insufficient gravity ever to draw the galaxies back together, and the expansion would continue until the galaxies moved so far apart that they would no longer be visible to each other. The third option was the boundary point between these two eventualities: a critical mass density of the universe that would be enough to decelerate but not reverse the expansion, with the galaxy motions coming to rest only at infinity.
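The boundary case can be put in numbers. As an aside not in the original text: in the standard cosmological models the critical density depends only on Hubble's constant H₀ and Newton's gravitational constant G:

ρc = 3H₀² / 8πG ≈ 9 × 10⁻²⁷ kg per cubic meter (taking H₀ = 70 km/s/Mpc)

This is equivalent to only about five hydrogen atoms per cubic meter of space: a universe denser than this would eventually recollapse, while a less dense one would expand forever.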
Before Hubble's work suggested the universe started with the Big Bang, a theory for a static universe had been put forward by Sir James Jeans (1877–1946) in about 1920. After World War II the idea was developed at Cambridge University by Fred Hoyle (1915–2001), working alongside Hermann Bondi (1919–2005) and Thomas Gold (1920–2004), as a rival to the Big Bang theory; Hoyle called it the steady-state theory. He proposed a universe in which matter was continuously created from nothing: the production of only a few atoms of hydrogen in each cubic kilometer of space per year would be sufficient to keep the density of the universe constant even as it expanded in the way Hubble had observed. The idea of creating matter from nothing did not appeal to many astronomers, but the steady-state advocates pointed out that the Big Bang theory required a whole universe to be created out of nothing as well. The steady-state theory did at least encourage discussion about alternatives to the Big Bang as a way of explaining the origins of the universe. Hoyle and his collaborators raised it again in a revised form toward the end of the century, but by then the evidence in favor of the Big Bang was almost conclusive.
To truly comprehend the nature of the stars and the galaxies it was necessary to develop areas of science such as nuclear physics and quantum mechanics. This new knowledge provided astronomers with an insight into the way stars create such vast amounts of energy for so long and, ultimately, how they die.
Early in the 20th century the word “nebula” was applied to any object in the sky that had a nebulous or poorly defined appearance compared with the sharp points of light that corresponded to stars. It soon became apparent that there existed several kinds of nebulae. One kind was a cloud of gas and dust, such as the Horsehead Nebula in the constellation Orion. The other kind was a distant galaxy that had, for example, nebulous spiral arms. It soon became obvious that the two types of nebulae were very different objects. The distant nebulae then became known as galaxies, while the term “nebula” was retained for the interstellar clouds of gas and dust. These clouds are the places where stars are created.
If an interstellar cloud is too warm, the speed of the atoms is too great for stars to form; but at temperatures of around 10 kelvin the atoms can clump together under gravity. The clumps grow larger and can eventually combine and contract under gravity to form a protostar. A protostar cannot claim the status of a star until it begins to shine. A protostar consists mainly of the material of the nebula, namely the lighter elements hydrogen (about 74 percent) and helium (about 24 percent), with the remaining 2 percent or so comprising the heavier elements. As the protostar shrinks under its own gravity it heats up, and at some critical point it reaches a temperature at which nuclear reactions can take place. The most important reaction is the burning of hydrogen to produce helium, and once the temperature is high enough to fire this reaction the star begins to shine. The fusion of hydrogen into helium generates heat and light in the form of photons (“packets” of energy), and the radiation pressure produced by the hydrogen fusion is sufficient to prevent the star from collapsing under its own gravity. The young star continues to heat up until it settles into the stable state of a main sequence star.
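The tug-of-war between heat and gravity in such a cloud can be estimated. The sketch below is illustrative only: it uses the standard Jeans-mass formula with assumed, typical values for a cold cloud core (10 kelvin, roughly 10¹⁰ molecules per cubic meter), neither of which comes from the book:

    import math

    G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    K_B = 1.381e-23   # Boltzmann constant, J/K
    M_H = 1.673e-27   # mass of a hydrogen atom, kg

    def jeans_mass_kg(T, n, mu=2.33):
        """Roughly the smallest mass that collapses under its own gravity
        at temperature T (kelvin) and number density n (per cubic meter);
        mu is an assumed mean molecular weight for a molecular cloud."""
        rho = n * mu * M_H
        thermal = (5 * K_B * T / (G * mu * M_H)) ** 1.5
        return thermal * (3 / (4 * math.pi * rho)) ** 0.5

    M_SUN = 1.989e30
    print(jeans_mass_kg(10.0, 1e10) / M_SUN)   # ~5 solar masses

At 10 kelvin a clump of a few solar masses can collapse; warm the cloud by even a factor of ten and the required mass grows enormously, which is why star formation favors cold clouds.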
It is a curious fact that although astronomy deals with the science of very large objects and with the universe as a whole, in the 20th century it was found necessary to study the smallest entities in order to understand what was happening in the cosmos. The processes taking place inside the stars, the creation of new elements by the building up of atomic nuclei from protons and neutrons, could not be explained by chemistry; it required instead an understanding of atomic and nuclear physics.
There exist inside the stars conditions of temperature and pressure that could not possibly be created on Earth, and nuclear reactions are taking place. To begin with these were events that the astronomers could not fully understand, and it was necessary for them to enlist the help of the nuclear physicists to explain what was happening. A very good example concerned the Sun, our star. In the 19th century the British physicist Lord Kelvin (1824–1907) calculated that the Sun could not shine for more than 20,000 years, because by that time all its fuel would be used up. As we have mentioned in an earlier chapter, he had naturally assumed that the Sun was powered by chemical reactions when making this calculation of the star's lifespan. Kelvin knew nothing of nuclear physics. He was working before Albert Einstein proposed his theory of relativity, and thus did not know that the equation E = mc² showed that large amounts of energy could be created from small amounts of matter.
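The leverage in that equation comes from the factor c², the speed of light squared. A quick illustration (my arithmetic, not the book's): converting just one gram of matter entirely into energy yields

    C = 2.998e8              # speed of light, m/s
    energy = 0.001 * C**2    # E = mc^2 for one gram of matter, in joules
    print(energy)            # ~9.0e13 joules
    print(energy / 4.184e9)  # ~21,000 tonnes of TNT equivalent

No chemical reaction comes within a factor of a billion of this yield per unit mass.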
By 1900, knowledge of atomic theory was fairly well developed. The elements were tidily placed in their correct positions in the periodic table. The chemists did not at this time fully understand the internal structure of the atom, but they nevertheless had a good idea of the size of atoms and molecules. Then, around 1910, Ernest Rutherford (1871–1937), a New Zealander who at that time was working at the University of Manchester, directed his now-famous experiment in which alpha particles were fired at sheets of gold foil only a few atoms thick. The alpha particles were charged particles: the nuclei of helium generated by radioactive decay. The great majority of the particles went straight through the gold foil, but a small number were deflected in the process, and an even smaller number, about one in 20,000, bounced back again from the foil. Rutherford himself compared this rare event to firing a shell at a sheet of tissue paper and having it bounce back.

Rutherford was able to conclude that the atoms of gold consisted mostly of empty space, and that nearly all the mass of the atom was concentrated in a nucleus at the center. The nucleus carried a positive charge, but it was surrounded by orbiting electrons that carried negative charges. The electrons were attracted to the nucleus by the electrostatic force, but instead of falling into the nucleus they orbited round it rather like a minute planetary system. The alpha particles carried a positive charge and they were therefore repelled by the positive charges in the nuclei of the gold atoms. Usually this effect produced a small deflection, but in very rare cases an alpha particle collided almost head-on with a nucleus. Whenever this happened the particle could be detected as it bounced off the gold foil.
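The experiment even hints at how small the nucleus must be. A back-of-envelope sketch, assuming an alpha-particle energy of 5 MeV (an assumed figure, typical of alphas from radioactive decay): in a head-on collision the particle stops where all its energy of motion has been converted into electrostatic repulsion, which sets its distance of closest approach.

    K_COULOMB = 8.988e9    # Coulomb constant, N m^2 / C^2
    E_CHARGE = 1.602e-19   # elementary charge, C

    energy_j = 5e6 * E_CHARGE   # assumed 5 MeV alpha particle, in joules
    z_alpha, z_gold = 2, 79     # charges of the alpha particle and gold nucleus
    # Head-on collision: kinetic energy = K q1 q2 / d, solved for d
    d = K_COULOMB * (z_alpha * E_CHARGE) * (z_gold * E_CHARGE) / energy_j
    print(d)   # ~4.6e-14 m

That is thousands of times smaller than the atom itself (around 10⁻¹⁰ m across), which is why Rutherford concluded the atom is mostly empty space.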
Niels Bohr (1885–1962), a Danish physicist who was one of Rutherford's team in Manchester, formulated an alternative atomic structure. He suggested that the electrons in the atom were not like the planets in the solar system, but were instead confined to discrete orbits corresponding to specific energy levels. An electron could drop from a higher-level to a lower-level orbit and lose energy in the form of a photon. Conversely, the atom could absorb a photon and move an electron to a higher-energy orbit. The great benefit of the Bohr atomic model was that it could be used to explain the lines generated by hydrogen and other elements in the spectrum of the Sun and other stars. In the Bohr model, only photons of particular frequencies could be emitted or absorbed by a specific element, so that each element produced its own characteristic pattern of spectral lines. Astronomers could already identify elements by their spectral lines, but the Bohr atom explained how these lines were related to the atomic structure.
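The Bohr model reduces these spectral lines to arithmetic. A sketch using the standard hydrogen energy levels, Eₙ = −13.6/n² electronvolts (the formula is textbook physics; the code is illustrative):

    H_C_EV_NM = 1239.8   # Planck's constant times the speed of light, in eV nm

    def hydrogen_line_nm(n_upper, n_lower):
        """Wavelength of the photon emitted when a hydrogen electron
        drops from level n_upper to level n_lower."""
        delta_e = 13.6 * (1 / n_lower**2 - 1 / n_upper**2)   # energy gap, eV
        return H_C_EV_NM / delta_e

    print(hydrogen_line_nm(3, 2))   # ~656 nm: the red H-alpha line
    print(hydrogen_line_nm(4, 2))   # ~486 nm: the blue-green H-beta line

These are exactly the hydrogen lines astronomers had long seen in the spectra of the Sun and other stars.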
Another result of the new atomic theory was that it showed how it was possible to change atoms of one element into atoms of another. This could be achieved by the crude method of bombarding an atomic nucleus with fast-moving particles in the hope that one of them would strike the nucleus and knock out one or more of the protons, changing the atom into an atom of a different element. It seemed to offer a solution to the age-old problem of the philosopher's stone, whereby base metals could be transmuted into gold. One case of transmutation was already well known: the radium clock used by geologists to date the age of the rocks. Uranium ore spontaneously gave out radioactivity in the form of alpha particles, and in the radioactive process an atom of uranium was changed into one of thorium.
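The clock works because radioactive decay proceeds at a fixed, known rate. A sketch assuming uranium-238, whose half-life is about 4.47 billion years (a standard figure, not given in the book): measure what fraction of the original uranium survives in a rock, and its age follows.

    import math

    HALF_LIFE_U238 = 4.47e9   # years; assumed isotope and standard value

    def rock_age_years(fraction_remaining):
        """Age implied by the fraction of the original uranium still present."""
        return HALF_LIFE_U238 * math.log(1 / fraction_remaining, 2)

    print(rock_age_years(0.5))    # one half-life: ~4.5e9 years
    print(rock_age_years(0.99))   # ~6.5e7 years: even young rocks can be dated

In practice geologists measure the ratio of uranium to its stable decay products, but the principle is the same.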
To begin with, scientists struggled to understand the nature of light at the atomic level. Some experiments indicated that light consisted of particles, while others suggested it existed in the form of a wave. The Austrian-born physicist Erwin Schrödinger (1887–1961), later an Irish citizen, was one of several physicists who helped develop the theory of quantum mechanics. He formulated an equation, known as the wave equation, that attempted to describe mathematically the way in which electrons and atoms behave.
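In its time-independent form the wave equation can be stated quite compactly (a standard statement, not quoted in the book): for a particle of mass m moving in a potential V,

−(ℏ²/2m) ∇²ψ + Vψ = Eψ

where ψ is the wave function, whose squared magnitude gives the probability of finding the particle at each point, and E is the particle's energy. The discrete values of E that solve the equation for an atom reproduce Bohr's energy levels.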
The German physicist Werner Heisenberg (1901–76) developed a different theory called matrix mechanics. Later it was shown that wave mechanics and matrix mechanics are really different mathematical approaches to the same theory. Heisenberg went on to propose what became known as the uncertainty principle: certain pairs of a particle's properties, such as its position and its momentum, cannot both be known to arbitrary precision at the same time.
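The uncertainty principle also has a compact statement (standard notation, added here for reference): for the position x and momentum p of a particle,

Δx · Δp ≥ ℏ/2

where ℏ is Planck's constant divided by 2π. The more precisely the position is pinned down, the less precisely the momentum can be known, and vice versa.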
Physics at the atomic level was no longer an exact science, and it was this that caused Albert Einstein to make his famous remark that “God does not play dice with the universe.” But on this occasion the world's greatest living physicist was proved wrong. Quantum mechanics defied common sense. It seemed totally illogical, but in the years between the two world wars the theory advanced to make many valid predictions about the world of the atom.
It was soon established that there were theoretically two ways to produce energy from the atom. The first way was through nuclear fission, the process that powered the atomic bomb. Neutrons released by the splitting of uranium nuclei strike other uranium nuclei and split them in turn, producing a chain reaction that, if uncontrolled, results in a nuclear explosion. Nuclear fission is also harnessed in a more controlled fashion in a nuclear power station, whereby the heat emitted is used to produce steam to drive a turbine, thus generating electricity. The second way that atomic energy can be produced is through nuclear fusion, a process whereby light nuclei are fused together. At the temperature of the Sun's core hydrogen nuclei are continually fused together to form nuclei of helium, and in the process energy is released in the form of heat and light. Nuclear fusion is used in the hydrogen bomb, but there the reaction is an uncontrolled one; despite extensive research, no fusion reactor on Earth has yet sustained the reaction in a way that yields more energy than it consumes. If the problem of controlling nuclear fusion could be solved it would give us a clean method of generating large amounts of energy for conversion into electricity.
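The two processes can be compared by the energy they release per kilogram of fuel. A rough sketch using textbook figures: about 200 MeV per fission of uranium-235 and about 17.6 MeV per deuterium-tritium fusion (the choice of fuels is illustrative; neither number comes from the book):

    AVOGADRO = 6.022e23   # nuclei per mole
    MEV_TO_J = 1.602e-13  # joules per MeV

    def joules_per_kg(grams_per_mole, mev_per_reaction):
        """Energy released by one kilogram of fuel, in joules."""
        reactions = 1000 / grams_per_mole * AVOGADRO
        return reactions * mev_per_reaction * MEV_TO_J

    print(joules_per_kg(235, 200.0))  # U-235 fission: ~8e13 J per kg
    print(joules_per_kg(5, 17.6))     # D-T fusion: ~3e14 J per kg, ~4x more

Kilogram for kilogram, fusion yields several times more energy than fission, and millions of times more than any chemical fuel, which is why a working fusion reactor remains such a prize.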
The British astronomer Sir Arthur Eddington (1882–1944) was the first to suggest that the source of energy in the Sun was nuclear fusion. He asserted that the temperatures in the core were so high that hydrogen atoms were stripped of their electrons, leaving single protons. Four such protons were capable of fusing together to form a nucleus of helium (⁴He) by changing two of the protons into neutrons. Other particles, called positrons and neutrinos, were given off during the transition. The mass of the helium nucleus created by this process is less than the mass of the four protons that went into it. The missing mass is converted into energy as given by Einstein's law E = mc², and it manifests itself to us on Earth as visible light and heat carried by the photons. Inside the Sun some 600 million tonnes (590 million tons) of hydrogen are fused into helium every second, with about 4 million tonnes (3.9 million tons) of that mass converted into energy, but this is no great cause for immediate concern: there is sufficient hydrogen in the Sun to last for another five billion years. After that time, however, when the hydrogen is finally all used up, the Sun will be left with a core of helium, and other thermonuclear reactions will begin.
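The figure of four million tonnes per second is just Einstein's law run in reverse on the Sun's measured power output. A quick check using standard values (solar luminosity about 3.8 × 10²⁶ watts; the 0.7 percent figure is the standard fraction of mass released by hydrogen fusion):

    C = 2.998e8        # speed of light, m/s
    L_SUN = 3.828e26   # solar luminosity, watts (standard value)

    mass_to_energy = L_SUN / C**2      # kg of mass converted each second
    print(mass_to_energy)              # ~4.3e9 kg/s: about 4 million tonnes
    hydrogen_fused = mass_to_energy / 0.007   # fusion releases ~0.7% of the mass
    print(hydrogen_fused)              # ~6e11 kg/s: about 600 million tonnes

The Sun contains roughly 2 × 10³⁰ kilograms of material, so even at this prodigious rate its hydrogen supply lasts for billions of years.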