Letters to a Young Mathematician

Nearly everyone makes use of number theory every day, if only because it forms the basis of Internet security codes and the data-compression methods employed by cable and satellite television. We don’t need to be able to do number theory to watch TV (otherwise ratings of many shows would be way down), but if nobody knew any number theory, crooks would be helping themselves to our bank accounts, and we’d be stuck with three channels. So the general area of math in which Fermat’s last theorem lives is undoubtedly useful.

The theorem itself, though, is unlikely to be of much use. Very few practical problems rest on adding two big powers together to get another such power. (Though I am told that at least one problem in physics does depend on this.) Wiles’s new methods, on the other hand, have opened up significant new connections between hitherto separate areas of our subject. Those methods will surely turn out to be important one day, very likely in fundamental physics, which is today’s biggest consumer of deep, abstract mathematical concepts and techniques.

Questions like Fermat’s last theorem are not important because we need to know the answer. In the end it probably doesn’t matter that the theorem was proved true rather than false. They are important because our efforts to find the answer reveal major gaps in our understanding of mathematics. What counts is not the answer itself but knowing how to get it. It can only go in the back of the book when someone has worked out what it is.

The further we push out the boundaries of mathematics, the bigger the boundary itself becomes. There is no danger that we will ever run out of new problems to solve.

5
Surrounded by Math

Dear Meg,

I’m not surprised that you’re “both excited and a little bit intimidated,” as you put it, by your imminent move to university. Let me commend your good intuition on both counts. You’ll find the competition tougher, the pace faster, the work harder, and the content far more interesting. You’ll be thrilled by your teachers (some of them) and the ideas they lead you to discover, and daunted that so many of your classmates seem to get there ahead of you. For the first six months you’ll wonder why the school ever let you in. (After that you’ll wonder how some of the others were let in.)

You asked me to tell you something inspirational. Nothing technical, just something to hold on to when the going gets tough.

Very well.

Like many mathematicians, I get my inspiration from nature. Nature may not look very mathematical; you don’t see sums written on the trees. But math is not about sums, not really. It’s about patterns and why they occur. Nature’s patterns are both beautiful and inexhaustible.

I’m in Houston, Texas, on a research visit, and I’m surrounded by math.

Houston is a huge, sprawling city. Flat as a pancake. It used to be a swamp, and when there’s a heavy thunderstorm, it tries to revert to its natural condition. Close by the apartment complex where my wife and I always stay when we visit, there is a concrete-lined canal that diverts a lot of the runoff from the rain. It doesn’t always divert quite enough; a few years ago the nearby freeway was thirty feet under water, and the ground floor of the apartment complex was flooded. But it helps. It’s called Braes Bayou, and there are paths along both sides of it. Avril and I like to go for walks along the bayou; the concrete sides are not exactly pretty, but they’re prettier than the surrounding streets and parking lots, and there’s quite a lot of wildlife: catfish in the river, egrets preying on the fish, lots of birds.

As I walk along Braes Bayou, surrounded by wildlife, I realize that I am also surrounded by math.

For instance . . .

Roads cross the bayou at regular intervals, and the phone lines cross there too, and birds perch on the phone lines. From a distance they look like sheet music, fat little blobs on rows of horizontal lines. There seem to be special places they like to perch, and it’s not at all clear to me why, but one thing stands out. If a lot of birds are perching on a wire, they end up evenly spaced.

That’s a mathematical pattern, and I think there’s a mathematical explanation. I don’t think the birds “know” they ought to space themselves out evenly. But each bird has its own “personal space,” and if another bird gets too close, it will sidle along the wire to leave a bit more room, unless there’s another bird crowding it from the other side.

When there are just a few birds, they end up randomly spaced. But when there are a lot, they get pushed close together. As each one sidles along to make itself feel more comfortable, the “population pressure” evens them out. Birds at the edge of denser regions get pushed into less densely populated regions. And since the birds are all of the same species (usually they’re pigeons), they all have much the same idea of what their personal space should be. So they space themselves evenly.

Not exactly evenly, of course. That would be a Platonic ideal. As such, it helps us to comprehend a more messy reality.

You could do the math on this problem if you wanted to. Write down some simple rules for how birds move when the neighbors get too close, plonk them down at random, run the rules, and watch the spacing evolve. But there’s an analogy with a common physical system, where that math has already been done, and the analogy tells you what to expect.
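Here is one way to carry out that experiment, sketched as a toy simulation. The rules below are my own illustrative choices, not rules from the text: each bird feels a push away from any neighbor that intrudes on its personal space, and we iterate until the spacing settles.

```python
import random

def simulate_birds(n=20, wire=100.0, space=6.0, step=0.2, iters=20000, seed=1):
    """Plonk n birds down at random on a wire, then let each one sidle
    away from neighbors that intrude on its personal space."""
    rng = random.Random(seed)
    birds = sorted(rng.uniform(0, wire) for _ in range(n))
    for _ in range(iters):
        moved = []
        for i, x in enumerate(birds):
            left = x - birds[i - 1] if i > 0 else float("inf")
            right = birds[i + 1] - x if i < n - 1 else float("inf")
            push = 0.0
            if left < space:   # crowded on the left: sidle right
                push += (space - left) / space
            if right < space:  # crowded on the right: sidle left
                push -= (space - right) / space
            moved.append(min(max(x + step * push, 0.0), wire))
        birds = sorted(moved)  # birds are interchangeable; keep them ordered
    return birds

final = simulate_birds()
gaps = [b - a for a, b in zip(final, final[1:])]
print(max(gaps) - min(gaps))  # small: population pressure has evened the spacing
```

With twenty birds wanting six units of space on a hundred-unit wire, there isn’t quite enough room, so every bird is crowded on both sides and the gaps relax toward a common value, exactly the “population pressure” argument above.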

It’s a bird crystal.

The same process that makes birds space themselves regularly makes the atoms in a solid object line up to form a repetitive lattice. The atoms also have a “personal space”: they repel each other if they’re too close together. In a solid, the atoms are forced to pack fairly tightly, but as they adjust their personal spaces, they arrange themselves in an elegant crystal lattice.

The bird lattice is one-dimensional, since they’re sitting on a wire. A one-dimensional lattice consists of equally spaced points. When there are just a few birds, arranged at random and not subject to population pressure, it’s not a crystal, it’s a gas.

This isn’t just a vague analogy. The same mathematical process that creates a regular crystal of salt or calcite also creates my “bird crystal.”

And that’s not the only math that you can find in Braes Bayou.

A lot of people walk their dogs along the paths. If you watch a walking dog, you quickly notice how rhythmic its movement is. Not when it stops to sniff at a tree or another dog, mind you; it’s rhythmic only when the dog is just bumbling happily along without a thought in its head. Tail wagging, tongue lolling, feet hitting the ground in a careless doggy dance.

What do the feet do?

When the dog is walking, there’s a characteristic pattern. Left rear, left front, right rear, right front. The footfalls are equally spaced in time, like musical notes, four beats to the bar.

If the dog speeds up, its gait changes to a trot. Now diagonal pairs of legs—left rear and right front, then the other two—hit the ground together, in an alternating pattern of two beats to the bar. If two people walked one behind the other, exactly out of step, and you put them inside a cow costume, the cow would be trotting.
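The two gaits can be written down as phase patterns: each foot strikes at some fraction of the full cycle. Here is a minimal sketch (the phase numbers just transcribe the description above):

```python
# Phase (fraction of the gait cycle) at which each foot hits the ground.
WALK = {"left rear": 0.0, "left front": 0.25, "right rear": 0.5, "right front": 0.75}
TROT = {"left rear": 0.0, "right front": 0.0, "right rear": 0.5, "left front": 0.5}

def beats(gait):
    """Group the feet by phase: each group is one 'beat' of the bar."""
    groups = {}
    for foot, phase in gait.items():
        groups.setdefault(phase, []).append(foot)
    return {phase: sorted(feet) for phase, feet in sorted(groups.items())}

print(beats(WALK))  # four beats to the bar, one foot per beat
print(beats(TROT))  # two beats, diagonal pairs striking together
```

Grouping feet by phase recovers the rhythm directly: the walk has four distinct beats, the trot two, each a diagonal pair.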

The dog is math incarnate. The subject of which it is an unwitting example is known as gait analysis; it has important applications in medicine: humans often have problems moving their legs properly, especially in infancy or old age, and an analysis of how they move can reveal the nature of the problem and maybe help cure it. Another application is to robotics: robots with legs can move in terrain that doesn’t suit robots with wheels, such as the inside of a nuclear power station, an army firing range, or the surface of Mars. If we can understand legged locomotion well enough, we can engineer reliable robots to decommission old power stations, locate unexploded shells and mines, and explore distant planets. Right now, we’re still using wheels for Mars rovers because that design is reliable, but the rovers are limited in where they can go. We’re not decommissioning nuclear power stations at all. But the U.S. Army does use legged robots for some tidying-up tasks on firing ranges.

If we learn to reinvent the leg, all that will change.

Egrets standing in the shallows with that characteristic alert posture, long beaks poised, muscles tensed, are hunting catfish. Together they form a miniature ecology, a predator–prey system. Ecology’s connection with mathematics goes all the way back to Leonardo of Pisa, also known as Fibonacci, who wrote about a rather simple model of the growth of rabbits in 1202, in his Liber Abaci. To be fair, the book is really about the Hindu-Arabic number system, the forerunner of today’s ten-symbol notation for numbers, and the rabbit model is mainly there as an exercise in arithmetic. Most of the other exercises are currency transactions; it was a very practical book.

More-serious ecological models arose in the 1920s, when the Italian mathematician Vito Volterra was trying to understand a curious effect that had been observed by Adriatic fishermen. During World War I, when the amount of fishing was reduced, the numbers of food fish didn’t seem to increase, but the population of sharks and rays did.

Volterra wondered why a reduction in fishing benefited the predators more than it benefited the prey. To find out, he devised a mathematical model, based on the sizes of the shark and food-fish populations and how each affected the other. He discovered that instead of settling down to steady values, populations underwent repetitive cycles: large populations became smaller but then increased, over and over again. The shark population peaked sometime after the food-fish population did.

You don’t need numbers to understand why. With a moderate number of sharks, the food fish can reproduce faster than they are eaten, so their population soars. This provides more food for the sharks, so their population also begins to climb; but they reproduce more slowly, so there is a delay. As the sharks increase in number, they eat more food fish, and eventually there are so many sharks that the food-fish population starts to decline. Now the food fish cannot support so many sharks, so the shark numbers also drop, again with a delay. With the shark population reduced, the food fish can once more increase . . . and so it goes.

The math makes this story crystal clear (within the assumptions built into the model) and also lets us work out how the average population sizes behave over a complete cycle, something the verbal argument can’t handle. Volterra’s calculations showed that a reduced level of fishing decreases the average number of food fish over a cycle but increases the average number of sharks. Which is just what happened during World War I.
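Volterra’s conclusion can be checked numerically. The sketch below uses the classic Lotka–Volterra equations with an extra fishing term that removes both species at the same rate; the coefficients are made-up illustrative values, not Volterra’s. Averaged over a long run, less fishing means fewer food fish and more sharks, just as the letter says.

```python
def averages(fishing, a=1.0, b=0.5, c=0.8, d=0.3,
             x0=3.0, y0=1.0, dt=0.001, t_end=200.0):
    """Integrate  dx/dt = x*(a - fishing) - b*x*y   (food fish)
                  dy/dt = -y*(c + fishing) + d*x*y  (sharks)
    with fourth-order Runge-Kutta; return time-averaged populations."""
    def f(x, y):
        return x * (a - fishing) - b * x * y, -y * (c + fishing) + d * x * y
    x, y = x0, y0
    sx = sy = 0.0
    for _ in range(int(t_end / dt)):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5 * dt * k1x, y + 0.5 * dt * k1y)
        k3x, k3y = f(x + 0.5 * dt * k2x, y + 0.5 * dt * k2y)
        k4x, k4y = f(x + dt * k3x, y + dt * k3y)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y += dt * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        sx += x * dt
        sy += y * dt
    return sx / t_end, sy / t_end

fish_low, sharks_low = averages(fishing=0.0)    # wartime: little fishing
fish_high, sharks_high = averages(fishing=0.2)  # peacetime: more fishing
print(fish_low < fish_high)      # less fishing: fewer food fish on average
print(sharks_low > sharks_high)  # less fishing: more sharks on average
```

The averages over a cycle land at the model’s equilibrium values, (c + fishing)/d for the food fish and (a − fishing)/b for the sharks, which is exactly Volterra’s calculation: fishing raises the first and lowers the second.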

All of the examples I’ve told you about so far involve “advanced” mathematics. But simple math can also be illuminating. I am reminded of one of the many stories mathematicians tell each other after all the nonmathematicians leave the room. A mathematician at a famous university went to look around the new auditorium, and when she got there, she found the dean of the faculty staring at the ceiling and muttering to himself, “. . . forty-five, forty-six, forty-seven . . .” Naturally she interrupted the count to find out what it was for. “I’m counting the lights,” said the dean. The mathematician looked up at the perfect rectangular array of lights and said, “That’s easy, there are . . . twelve that way, and . . . eight that way. Twelve eights are ninety-six.” “No, no,” said the dean impatiently. “I want the exact number.”

Even when it comes to something as simple as counting, we mathematicians see the world differently from other folk.

6
How Mathematicians Think

Dear Meg,

I would say you’ve lucked out. If you’re hearing about people like Newton, Leibniz, Fourier, and others, it means your freshman calculus teacher has a sense of the history of his subject; and your question “How did they think of these things?” suggests that he’s teaching calculus not as a set of divine revelations (which is how it’s too often done) but as real problems that were solved by real people.

But you’re right, too, that the answer “Well, they were geniuses” isn’t really adequate. Let me see if I can dig a little deeper. The general form of your question—which is a very important one—is “How do mathematicians think?”

You might reasonably conclude from looking at textbooks that all mathematical thought is symbolic. The words are there to separate the symbols and explain what they signify; the core of the description is heavily symbolic. True, some areas of mathematics make use of pictures, but those are either rough guides to intuition or visual representations of the results of calculations.

There is a wonderful book about mathematical creation, The Psychology of Invention in the Mathematical Field, by Jacques Hadamard. It was first published in 1945, and it’s still in print and extremely relevant today. I recommend you pick up a copy. Hadamard makes two main points. The first is that most mathematical thinking begins with vague visual images and is only later formalized with symbols. About ninety percent of mathematicians, he tells us, think that way. The other ten percent stick to symbols the entire time. The second is that ideas in mathematics seem to arise in three stages.

First, it is necessary to carry out quite a lot of conscious work on a problem, trying to understand it, exploring ways to approach it, working through examples in the hope of finding some useful general features. Typically, this stage bogs down in a state of hopeless confusion, as the real difficulty of the problem emerges.

At this point it helps to stop thinking about the problem and do something else: dig in the garden, write lecture notes, start work on another problem. This gives the subconscious mind a chance to mull over the original problem and try to sort out the confused mess that your conscious efforts have turned it into. If your subconscious is successful, even if all it manages is to get part way, it will “tap you on the shoulder” and alert you to its conclusions. This is the big “aha!” moment, when the little lightbulb over your head suddenly switches on.

Finally, there is another conscious stage of writing everything down formally, checking the details, and organizing it so that you can publish it and other mathematicians can read it. The traditions of scientific publication (and of textbook writing) require that the “aha!” moment be concealed, and the discovery presented as a purely rational deduction from known premises.

Henri Poincaré, probably my favorite among the great mathematicians, was unusually aware of his own thought processes and lectured about them to psychologists. He called the first stage “preparation,” the second “incubation followed by illumination,” and the third “verification.” He laid particular emphasis on the role of the subconscious, and it is worth quoting one famous section of his essay Mathematical Creation:

For fifteen days I strove to prove that there could not be any functions like those I have since called Fuchsian functions. I was then very ignorant; every day I seated myself at my table, stayed an hour or two, tried a great number of combinations and reached no results. One evening, contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination. By the next morning I had established the existence of a class of Fuchsian functions, those which come from the hypergeometric series; I had only to write out the results, which took but a few hours.

This was but one of several occasions on which Poincaré felt that he was “present at his own unconscious work.”

A recent experience of my own also fits Poincaré’s three-stage model, though I did not have the feeling that I was observing my own subconscious. A few years ago, I was working with my long-term collaborator Marty Golubitsky on the dynamics of networks. By “network” I mean a set of dynamical systems that are “coupled together,” with some influencing the behavior of others. The systems themselves are the nodes of the network—think of them as blobs—and two nodes are joined by an arrow if one of them (at the tail end) influences the other (at the head end). For example, each node might be a nerve cell in some organism, and the arrows might be connections along which signals pass from one cell to another.

Marty and I were particularly interested in two aspects of these networks: synchrony and phase relations. Two nodes are synchronous if the systems they represent do exactly the same thing at the same moment. That trotting dog synchronizes diagonally opposite legs: when the front left foot hits the ground, so does the back right. Phase relations are similar, but with a time lag. The dog’s front right foot (which is similarly synchronized with its back left foot) hits the ground half a cycle later than the front left foot. This is a half-period phase shift.

We knew that synchrony and phase shifts are common in symmetric networks. In fact, we had worked out the only plausible symmetric network that could explain all of the standard gaits of four-legged animals. And we’d sort of assumed, because we couldn’t think of any other reason, that symmetry was also necessary for synchrony and phase shifts to occur.

Then Marty’s postdoc Marcus Pivato invented a very curious network that had synchrony and phase shifts but no symmetry. It had sixteen nodes, which synchronized in clusters of four, and each cluster was separated from one of the others by a phase shift of one quarter of a period. The network was almost symmetric at first sight, but when you looked closely you could see that the apparent symmetry was imperfect.

To us, Marcus’s example made absolutely no sense. But there was no question that his calculations were correct. We could check them, and we did, and they worked. But we were left with a nagging feeling that we didn’t really understand why they worked. They involved a kind of coincidence, which definitely happened, but “shouldn’t have.”

While Marty and Marcus worked on other topics, I worried about Marcus’s example. I went to Poland for a conference and to give some lectures, and for the whole of that week I doodled networks on notepads. I doodled all the way from Warsaw to Krakow on the train, and two days later I doodled all the way back. I felt I was close to some kind of breakthrough, but I found it impossible to write down what it might be.

Tired and fed up, I abandoned the topic, shoved the doodles into a filing cabinet, and occupied my time elsewhere. Then one morning I woke up with a strange feeling that I should dig out the file and take another look at the doodles. Within minutes I had noticed that all the doodles that did what I wanted had a common feature, one that I’d totally missed when I was doodling them. Not only that; all of the doodles that didn’t do what I wanted lacked that feature. At that moment I “knew” what the answer to the puzzle was, and I could even write it down symbolically. It was neat, tidy, and very simple.

The trouble with that kind of knowledge, as my biologist friend Jack Cohen often says, is that it feels just as certain when you’re wrong. There is no substitute for proof. But now, because I knew what to prove and had a fair idea of why it was true, that final stage didn’t take very long. It was blindingly obvious how to prove that the feature that I had observed in my doodles was sufficient to make happen everything I thought should happen. Proving that it was also necessary was trickier, but not greatly so. There were several relatively obvious lines of attack, and the second or third worked.

Problem solved.

This description fits Poincaré’s scenario so perfectly that I worry that I have embroidered the tale and rearranged it to make it fit. But I’m pretty sure that it really did happen the way I’ve just told you.

What was the key insight? I’ve just looked through my notes from the Warsaw–Krakow train, and they are full of networks whose nodes have been colored. Red, blue, green . . . At some stage I had decided to color the nodes so that synchronous nodes got the same color. Using the colors, I could spot hidden regularities in the networks, and those regularities were what made Marcus’s example work. The regularities weren’t symmetries, not in the technical sense used by mathematicians, but they had a similar effect.

Why had I been coloring the networks? Because the colors made it easy to pick out the synchronous clusters. I had colored dozens of networks and never noticed what the colors were trying to tell me. The answer had been staring me in the face. But only when I stopped working on the problem did my subconscious have the freedom to sort it out.

It took a week or two to turn this insight into formal mathematics. But the visual thinking—the colors—came first, and my subconscious had to grapple with the problem before I was consciously aware of the answer. Only then did I start to reason symbolically.

There’s more to the tale. Once the formal system was sorted out, I noticed a deeper idea, which underlay the whole thing. The similarities between colored cells formed a natural algebraic structure. In our previous work on symmetric systems we had put a similar structure in from the very start, because all mathematicians know how to formalize symmetries. The concept concerned is called a group. But Marcus’s network has no symmetry, so groups won’t help. The natural algebraic structure that replaces the symmetry group in my colored diagrams is something less well known, called a “groupoid.”

Pure mathematicians have been studying groupoids for years, for their own private reasons. Suddenly I realized that these esoteric structures are intimately connected with synchrony and phase shifts in networks of dynamical systems. It’s one of the best examples, among the topics that I’ve been involved with, of the mysterious process that turns pure math into applications.

Once you understand a problem, many aspects of it suddenly become much simpler. As mathematicians the world over say, everything is either impossible or trivial. We immediately found lots of simpler examples than Marcus’s. The simplest has just two nodes and two arrows.
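Here is a sketch of a two-node example of that kind; the particular equations are my own construction for illustration, not necessarily the example the text refers to. Node 1 listens to node 2, and node 2 listens to itself. Swapping the nodes changes the arrows, so the network has no symmetry, yet the two nodes stay perfectly synchronized once they start out equal, because on the diagonal both obey the identical equation.

```python
import math

# Two nodes, two arrows: 2 -> 1 and 2 -> 2.  Swapping the nodes does not
# preserve the arrows, so there is no symmetry; synchrony persists anyway.
def step(x1, x2, dt=0.01):
    # Each node's rate depends on its own state and on its input node's state.
    def g(own, inp):
        return -own + 0.5 * math.tanh(3.0 * inp) + 0.4
    dx1 = g(x1, x2)  # node 1's input is node 2
    dx2 = g(x2, x2)  # node 2's input is itself
    return x1 + dt * dx1, x2 + dt * dx2

x1, x2 = 0.1, 0.1  # start synchronized
for _ in range(5000):
    x1, x2 = step(x1, x2)
print(abs(x1 - x2))  # zero: the diagonal x1 == x2 is invariant
```

Both nodes receive input of the same “color” (node 2’s state), which is the balanced-coloring idea in miniature: the invariance of the synchronous state comes from matching colors, not from symmetry.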

Research is an ongoing activity, and I think we have to go further than Hadamard and Poincaré to understand the process of invention, or discovery, in math. Their three-stage description applies to a single “inventive step” or “advance in understanding.” Solving most research problems involves a whole series of such steps. In fact, any step may break down into a series of substeps, and those substeps may also break down in a similar manner. So instead of a single three-stage process, we get a complicated network of such processes. Hadamard and Poincaré described a basic tactic of mathematical thought, but research is more like a strategic battle. The mathematician’s strategy employs that tactic over and over again, on different levels and in different ways.

How do you learn to become a strategist? You take a leaf from the generals’ book. Study the tactics and strategies of the great practitioners of the past and present. Observe, analyze, learn, and internalize. And one day, Meg—closer than you might think—other mathematicians will be learning from you.
