Quantum Theory Cannot Hurt You

Whereas quantum theory is based on uncertainty, the rest of physics is based on certainty. To say this is a problem for physicists is a bit of an understatement! “Physics has given up on the problem of trying to predict what would happen in a given circumstance,” said Richard Feynman. “We can only predict the odds.”

All is not lost, however. If the microworld were totally unpredictable, it would be a realm of total chaos. But things are not this bad. Although what atoms and their like get up to is intrinsically unpredictable, it turns out that the unpredictability is at least predictable!

PREDICTING THE UNPREDICTABILITY

Think of the window again. Each photon has a 95 per cent chance of being transmitted and a 5 per cent chance of being reflected. But what determines these probabilities?

Well, the two different pictures of light—as a particle and as a wave—must produce the same outcome. If half the wave goes through and half is reflected, the only way to reconcile the wave and particle pictures is if each individual particle of light has a 50 per cent
probability of being transmitted and a 50 per cent probability of being reflected. Similarly, if 95 per cent of the wave is transmitted and 5 per cent is reflected, the corresponding probabilities for the transmission and reflection of individual photons must be 95 per cent and 5 per cent.
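The odds described above can be mimicked with a coin-toss simulation. This is an illustrative Python sketch only: the 95/5 split is the windowpane example from the text, and the function name and photon count are made up for the demonstration.

```python
import random

def send_photons(n, p_transmit=0.95, seed=0):
    """Fire n photons at a pane that transmits each one, independently,
    with probability p_transmit; return (transmitted, reflected) counts."""
    rng = random.Random(seed)
    transmitted = sum(1 for _ in range(n) if rng.random() < p_transmit)
    return transmitted, n - transmitted

t, r = send_photons(100_000)
print(f"transmitted: {100 * t / 100_000:.1f}%  reflected: {100 * r / 100_000:.1f}%")
```

Run it and the counts hover around 95 per cent and 5 per cent; no individual photon's fate is predictable, but the statistics are.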

To get agreement between the two pictures of light, the particle-like aspect of light must somehow be “informed” about how to behave by its wavelike aspect. In other words, in the microscopic domain, waves do not simply behave like particles; those particles behave like waves as well! There is perfect symmetry. In fact, in a sense this statement is all you need to know about quantum theory (apart from a few details). Everything else follows unavoidably. All the weirdness, all the amazing richness of the microscopic world, is a direct consequence of this wave-particle “duality” of the basic building blocks of reality.

But how exactly does light’s wavelike aspect inform its particle-like aspect about how to behave? This is not an easy question to answer.

Light reveals itself either as a stream of particles or as a wave. We never see both sides of the coin at the same time. So when we observe light as a stream of particles, there is no wave in existence to inform those particles about how to behave. Physicists therefore have a problem in explaining the fact that photons do things—for instance, fly through windows—as if directed by a wave.

They solve the problem in a peculiar way. In the absence of a real wave, they imagine an abstract wave—a mathematical wave. If this sounds ludicrous, it was pretty much the reaction of physicists when the idea was first proposed by the Austrian physicist Erwin Schrödinger in the 1920s. Schrödinger imagined an abstract mathematical wave that spread through space, encountering obstacles and being reflected and transmitted, just like a water wave spreading on a pond. In places where the height of the wave was large, the probability of finding a particle was high, and in places where it was small, the probability was low. In this way Schrödinger’s wave of probability, christened the wave function, informed a particle what to do—and not just a photon but any microscopic particle, from an atom to a constituent of an atom such as an electron.

There is a subtlety here. Physicists could make Schrödinger’s picture accord with reality only if the probability of finding a particle at any point was related to the square of the height of the probability wave at that point. In other words, if the probability wave at some point in space is twice as high as it is at another point, the particle is four times as likely to be found there as at the other place.
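The “square of the height” rule can be checked with a toy calculation. The sketch below is illustrative only: the amplitude values are invented, and the function name is mine, not standard physics code.

```python
def probabilities(amplitudes):
    """Born rule: the probability of finding the particle at each point is
    the squared magnitude of the wave there, normalised to sum to 1."""
    squares = [abs(a) ** 2 for a in amplitudes]
    total = sum(squares)
    return [s / total for s in squares]

# A wave twice as high at point A as at point B...
p_a, p_b = probabilities([2.0, 1.0])
print(p_a / p_b)  # → 4.0: the particle is four times as likely to be at A
```

Doubling the wave height quadruples the probability, exactly as the text says.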

The fact that it is the square of the probability wave, and not the probability wave itself, that has real physical meaning causes debate to this day about whether the wave is a real thing lurking beneath the skin of the world or just a convenient mathematical device for calculating things. Most, but not all, physicists favour the latter.

The probability wave is crucially important because it makes a connection between the wavelike aspect of matter and familiar waves of all kinds, from water waves to sound waves to earthquake waves. All obey a so-called wave equation. This describes how they ripple through space and allows physicists to predict the wave height at any location at any time. Schrödinger’s great triumph was to find the wave equation that described the behaviour of the probability wave of atoms and their like.

By using the Schrödinger equation, it is possible to determine the probability of finding a particle at any location in space at any time. For instance, it can be used to describe photons impinging on the obstacle of a windowpane and to predict the 95 per cent probability of finding one on the far side of the pane. In fact, the Schrödinger equation can be used to predict the probability of any particle, be it a photon or an atom, doing just about anything. It provides the crucial bridge to the microscopic world, allowing physicists to predict everything that happens there—if not with 100 per cent certainty, at least with predictable uncertainty!

Where is all this talk of probability waves leading? Well, the fact that waves behave like particles in the microscopic world leads unavoidably to the realisation that the microscopic world dances to an entirely different tune than that of the everyday world. It is governed by random unpredictability. This in itself was a shocking, confidence-draining blow to physicists and their belief in a predictable, clockwork universe. But this, it turns out, is only the beginning. Nature had many more shocks in store. The fact that waves not only behave as particles but also that those particles behave as waves leads to the realisation that all the things that familiar waves, like water waves and sound waves, can do, so too can the probability waves that inform the behaviour of atoms, photons, and their kin.

So what? Well, waves can do an awful lot of different things. And each of these things turns out to have a semi-miraculous consequence in the microscopic world. The most straightforward thing waves can do is exist as superpositions. Remarkably, this enables an atom to be in two places at once, the equivalent of you being in London and New York at the same time.

1
Another interesting characteristic of the photoelectric effect is that no electrons at all are emitted by the metal if it is illuminated by light with a wavelength—a measure of the distance between successive wave crests—above a certain threshold. This, as Einstein realised, is because photons of light have an energy that goes down with increasing wavelength. And above that threshold wavelength the photons have insufficient energy to kick an electron out of the metal.
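Einstein’s reasoning can be put into numbers with the photon energy formula E = hc/λ. The sketch below is illustrative: the work function value (about 2.1 electronvolts) is a standard textbook figure for caesium, chosen here purely as an example metal.

```python
# Photon energy E = hc / wavelength; an electron is ejected only if
# E exceeds the metal's work function.
H = 6.626e-34   # Planck's constant, J·s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_m):
    return H * C / wavelength_m / EV

WORK_FUNCTION_EV = 2.1  # caesium, roughly (illustrative value)

for wavelength_nm in (400, 500, 600, 700):
    e = photon_energy_ev(wavelength_nm * 1e-9)
    verdict = "ejects electrons" if e > WORK_FUNCTION_EV else "no electrons"
    print(f"{wavelength_nm} nm: {e:.2f} eV -> {verdict}")
```

Blue light (400 nm, about 3.1 eV) kicks electrons out; red light (700 nm, about 1.8 eV) cannot, however bright it is—exactly the threshold behaviour the footnote describes.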

3

THE SCHIZOPHRENIC ATOM

HOW AN ATOM CAN BE IN MANY PLACES AT ONCE AND DO MANY THINGS AT ONCE

If you imagine the difference between an abacus and the world’s fastest supercomputer, you would still not have the barest inkling of how much more powerful a quantum computer could be compared with the computers we have today.

Julian Brown

It’s 2041. A boy sits at a computer in his bedroom. It’s not an ordinary computer. It’s a quantum computer. The boy gives the computer a task, and instantly it splits into thousands upon thousands of versions of itself, each of which works on a separate strand of the problem. Finally, after just a few seconds, the strands come back together and a single answer flashes on the computer display. It’s an answer that all the normal computers in the world put together would have taken a trillion trillion years to find. Satisfied, the boy shuts the computer down and goes out to play, his night’s homework done.

Surely, no computer could possibly do what the boy’s computer has just done? Not only could a computer do such a thing, crude versions are already in existence today. The only thing in serious dispute is whether such a quantum computer merely behaves like a vast multiplicity of computers or whether, as some believe, it literally exploits the computing power of multiple copies of itself existing in parallel realities, or universes.

The key property of a quantum computer—the ability to do many calculations at once—follows directly from two things that waves—and therefore microscopic particles such as atoms and photons, which behave like waves—can do. The first of those things can be seen in the case of ocean waves.

On the ocean there are both big waves and small ripples. But as anyone who has watched a heavy sea on a breezy day knows, you can also get big, rolling waves with tiny ripples superimposed on them. This is a general property of all waves. If two different waves can exist, so too can a combination, or superposition, of the waves. The fact that superpositions can exist is pretty innocuous in the everyday world. However, in the world of atoms and their constituents, its implications are nothing short of earth-shattering.

Think again of a photon impinging on a windowpane. The photon is informed about what to do by a probability wave, described by the Schrödinger equation. Since the photon can either be transmitted or reflected, the Schrödinger equation must permit the existence of two waves—one corresponding to the photon going through the window and another corresponding to the photon bouncing back. Nothing surprising here. However, remember that, if two waves are permitted to exist, a superposition of them is also permitted to exist. For waves at sea such a combination is nothing out of the ordinary. But here the combination corresponds to something quite extraordinary—the photon being both transmitted and reflected. In other words, the photon can be on both sides of the windowpane simultaneously!

And this unbelievable property follows unavoidably from just two facts: that photons are described by waves and that superpositions of waves are possible.

This is no theoretical fantasy. In experiments it is actually possible to observe a photon or an atom being in two places at once—the everyday equivalent of you being in San Francisco and Sydney at the same time. (More accurately, it is possible to observe the consequences of a photon or an atom being in two places at once.) And since there is no limit to the number of waves that can be superposed, a photon or an atom can be in three places, 10 places, a million places at once.

But the probability wave associated with a microscopic particle does more than inform it where it could be located. It informs it how to behave in all circumstances—telling a photon, for instance, whether to be transmitted or reflected by a pane of glass. Consequently, atoms and their like can not only be in many places at once, they can do many things at once—the equivalent of you cleaning the house, walking the dog, and doing the weekly supermarket shopping all at the same time. This is the secret behind the prodigious power of a quantum computer. It exploits the ability of atoms to do many things at once, to do many calculations at once.

DOING MANY THINGS AT ONCE

The basic elements of a conventional computer are transistors. These have two distinct voltage states, one of which is used to represent the binary digit, or bit, “0”, the other to represent a “1.” A row of such zeros and ones can represent a large number, which in the computer can be added, subtracted, multiplied, and divided by another large number.
1
But in a quantum computer the basic elements—which may be single atoms—can be in a superposition of states. In other words, they can represent a zero and a one simultaneously. To distinguish them from normal bits, physicists call such schizophrenic entities quantum bits, or qubits.

One qubit can be in two states (0 or 1), two qubits in four (00 or 01 or 10 or 11), three qubits in eight, and so on. Consequently, when you calculate with a single qubit, you can do two calculations simultaneously, with two qubits four calculations, with three eight, and so on. If this doesn’t impress you, with 10 qubits you could do 1,024 calculations all at once, and with 100 qubits more than a thousand billion billion billion! Not surprisingly, physicists positively salivate at the prospect of quantum computers. For some calculations, they could massively outperform conventional computers, making today’s machines look like abacuses by comparison.
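The counting in this paragraph is just powers of two. A quick sketch (the function name is mine; this enumerates classical bit-patterns, not real quantum states):

```python
from itertools import product

def basis_states(n):
    """All classical bit-patterns an n-qubit register can superpose over."""
    return ["".join(bits) for bits in product("01", repeat=n)]

print(basis_states(2))        # → ['00', '01', '10', '11']
print(len(basis_states(10)))  # → 1024

# The register grows as 2**n, which is why 100 qubits is so startling:
for n in (1, 2, 3, 10, 100):
    print(f"{n} qubits -> {2 ** n} simultaneous states")
```

Note that 2¹⁰⁰ is roughly 1.3 × 10³⁰—a number of parallel calculations far beyond any conceivable classical machine.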

But for a quantum computer to work, wave superpositions are not sufficient on their own. They need another essential wave ingredient: interference.

The interference of light observed by Thomas Young at the beginning of the 19th century was the key observation that convinced everyone that light was a wave. When, at the beginning of the 20th century, light was also shown to behave like a stream of particles, Young’s double slit experiment assumed a new and unexpected importance—as a means of exposing the central peculiarity of the microscopic world.

INTERFERENCE IS THE KEY

In the modern incarnation of Young’s experiment, a double slit in an opaque screen is illuminated with light that is undeniably a stream of particles. In practice, this means using a light source so feeble that it spits out photons one at a time. Sensitive detectors at different positions on a second screen, on the far side of the slits, count the arrival of photons. After the experiment has been running for a while, the detectors show something remarkable. Some places on the screen get peppered with photons while others are completely avoided. What is more, the peppered places and the avoided places alternate, forming vertical stripes—exactly as in Young’s original experiment.

But wait a minute! In Young’s experiment the dark and light bands are caused by interference. And a fundamental feature of interference is that it involves the mingling of two sets of waves from the same source—the light from one slit with the light from the other slit. But in this case the photons are arriving at the double slit one at a time. Each photon is completely alone, with no other photon to mingle with. How, then, can there be any interference? How does each photon know where its fellow photons will land?

There would appear to be only one way—if each photon somehow goes through both slits simultaneously. Then it can interfere with itself. In other words, each photon must be in a superposition of two states—one a wave corresponding to a photon going through the left-hand slit and the other a wave corresponding to a photon going through the right-hand slit.
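The self-interference described above can be mimicked numerically: add the wave from each slit as a complex amplitude, then take the square of the magnitude of the sum. The sketch below is a toy model—the wavelength, slit separation, and screen distance are arbitrary illustrative numbers, not any real apparatus.

```python
import cmath
import math

WAVELENGTH = 1.0        # all lengths in arbitrary units
SLIT_SEPARATION = 5.0
SCREEN_DISTANCE = 100.0

def intensity(x):
    """Brightness at screen position x: |wave from left slit + wave
    from right slit|^2, with each wave's phase set by its path length."""
    total = 0
    for slit_y in (-SLIT_SEPARATION / 2, SLIT_SEPARATION / 2):
        path = math.hypot(SCREEN_DISTANCE, x - slit_y)
        total += cmath.exp(2j * math.pi * path / WAVELENGTH)
    return abs(total) ** 2

print(round(intensity(0.0), 3))  # → 4.0: bright central fringe
```

Where the two paths differ by a whole number of wavelengths the waves reinforce (intensity near 4); where they differ by half a wavelength they cancel (intensity near 0). Sweeping `x` across the screen traces out exactly the stripes of bright and dark bands the detectors record.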

The double slit experiment can be done with photons or atoms or any other microscopic particles. It shows graphically how the behaviour of such particles—where they can and cannot strike the second screen—is orchestrated by their wavelike alter ego. But this is not all the double slit experiment demonstrates. Crucially, it shows that the individual waves that make up a superposition are not passive but can actively interfere with each other. It is this ability of the individual states of a superposition to interfere with each other that is the absolute key to the microscopic world, spawning all manner of weird quantum phenomena.

Take quantum computers. The reason they can carry out many calculations at once is that they can exist in a superposition of states. For instance, a 10-qubit quantum computer is simultaneously in 1,024 states and can therefore carry out 1,024 calculations at once. But all the parallel strands of a calculation are of absolutely no use unless they get woven together. Interference is the means by which this is accomplished. It is the means by which the 1,024 states of the superposition can interact and influence each other. Because of interference, the single answer coughed out by the quantum computer is able to reflect and synthesise what was going on in all those 1,024 parallel calculations.

Think of a problem divided into 1,024 separate pieces and one person working on each piece. For the problem to be solved, the 1,024 people must communicate with each other and exchange results. This is what interference makes possible in a quantum computer.

An important point worth making here is that, although superpositions are a fundamental feature of the microscopic world, it is a curious property of reality that they are never actually observed. All we ever see are the consequences of their existence—what results when the individual waves of a superposition interfere with each other. In the case of the double slit experiment, for instance, all we ever see is an interference pattern, from which we infer that a particle was in a superposition in which it went through both slits simultaneously. It is impossible to actually catch a particle going through both slits at once. This is what was meant by the earlier statement that it is possible only to observe the consequences of an atom being in two places at once, not it actually being in two places at once.

MULTIPLE UNIVERSES

The extraordinary ability of quantum computers to do enormous numbers of calculations simultaneously poses a puzzle. Though practical quantum computers are currently at a primitive stage, manipulating only a handful of qubits, it is nevertheless possible to imagine a quantum computer that can do billions, trillions, or quadrillions of calculations simultaneously. In fact, it is quite possible that in 30 or 40 years we will be able to build a quantum computer that can do more calculations simultaneously than there are particles in the Universe. This hypothetical situation poses a sticky question: Where exactly will such a computer be doing its calculations? After all, if such a computer can do more calculations simultaneously than there are particles in the Universe, it stands to reason that the Universe has insufficient computing resources to carry them out.

One extraordinary possibility, which provides a way out of the conundrum, is that a quantum computer does its calculations in parallel realities or universes. The idea goes back to a Princeton graduate student named Hugh Everett III, who, in 1957, wondered why quantum theory is such a brilliant description of the microscopic world of atoms but we never actually see superpositions. Everett’s extraordinary answer was that each state of the superposition exists in a totally separate reality. In other words, there exists a multiplicity of realities—a multiverse—where all possible quantum events occur.

Although Everett proposed his “Many Worlds” idea long before the advent of quantum computers, it can shed some helpful light on them. According to the Many Worlds idea, when a quantum computer is given a problem, it splits into multiple versions of itself, each living in a separate reality. This is why the boy’s quantum personal computer at the start of this chapter split into so many copies. Each version of the computer works on a strand of the problem, and the strands are brought together by interference. In Everett’s picture, therefore, interference has a very special significance. It is the all-important bridge between separate universes, the means by which they interact and influence each other.

Everett had no idea where all the parallel universes were located. And, frankly, nor do the modern-day proponents of the Many Worlds idea. As Douglas Adams wryly observed in The Hitchhiker’s Guide to the Galaxy: “There are two things you should remember when dealing with parallel universes. One, they’re not really parallel, and two, they’re not really universes!”

Despite such puzzles, half a century after Everett proposed the Many Worlds idea, it is undergoing an upsurge in popularity. An increasing number of physicists, most notably David Deutsch of the University of Oxford, are taking it seriously. “The quantum theory of parallel universes is not some troublesome, optional interpretation emerging from arcane theoretical considerations,” says Deutsch in his book The Fabric of Reality. “It is the explanation—the only one that is tenable—of a remarkable and counterintuitive reality.”

If you go along with Deutsch—and the Many Worlds idea predicts exactly the same outcome for every conceivable experiment as
more conventional interpretations of quantum theory—then quantum computers are something radically new under the Sun. They are the very first machines humans have ever built that exploit the resources of multiple realities. Even if you do not believe the Many Worlds idea, it still provides a simple and intuitive way of imagining what is going on in the mysterious quantum world. For instance, in the double slit experiment, it is not necessary to imagine a single photon going through both slits simultaneously and interfering with itself. Instead, a photon going through one slit interferes with another photon going through the other slit. What other photon, you may ask? A photon in a neighbouring universe, of course!

WHY ARE ONLY SMALL THINGS QUANTUM?
