Against the Gods: The Remarkable Story of Risk

Jacob was not deterred by Leibniz's response, but he did change the manner in which he went about solving the problem. Leibniz's admonition in Greek would not be forgotten.

Jacob's effort to uncover probabilities from sample data appears in his Ars Conjectandi (The Art of Conjecture), the work that his nephew Nicolaus finally published in 1713, eight years after Jacob's death. His interest was in demonstrating where the art of thinking (objective analysis) ends and the art of conjecture begins. In a sense, conjecture is the process of estimating the whole from the parts.

Jacob's analysis begins with the observation that probability theory had reached the point where, to arrive at a hypothesis about the likelihood of an event, "it is necessary only to calculate exactly the number of possible cases, and then to determine how much more likely it is that one case will occur than another." The difficulty, as Jacob goes on to
point out, is that the uses of probability are limited almost exclusively to
games of chance. Up to that point, Pascal's achievement had amounted
to little more than an intellectual curiosity.

For Jacob, this limitation was extremely serious, as he reveals in a
passage that echoes Leibniz's concerns:

But what mortal ... could ascertain the number of diseases, counting
all possible cases, that afflict the human body ... and how much
more likely one disease is to be fatal than another-plague than
dropsy ... or dropsy than fever-and on that basis make a prediction
about the relationship between life and death in future generations?

... [W]ho can pretend to have penetrated so deeply into the
nature of the human mind or the wonderful structure of the body
that in games which depend ... on the mental acuteness or physical
agility of the players he would venture to predict when this or that
player would win or lose?

Jacob is drawing a crucial distinction between reality and abstraction in applying the laws of probability. For example, Paccioli's incomplete game of balla and the unfinished hypothetical World Series that
we analyzed in the discussion of Pascal's Triangle bear no resemblance
to real-world situations. In the real world, the contestants in a game of
balla or in a World Series have differing "mental acuteness or physical
agility," qualities that I ignored in the oversimplified examples of how
to use probability to forecast outcomes. Pascal's Triangle can provide
only hints about how such real-life games will turn out.

The theory of probability can define the probabilities at the gaming
casino or in a lottery: there is no need to spin the roulette wheel or
count the lottery tickets to estimate the nature of the outcome-but in
real life relevant information is essential. And the bother is that we never
have all the information we would like. Nature has established patterns,
but only for the most part. Theory, which abstracts from nature, is
kinder: we either have the information we need or else we have no need
for information. As I quoted Fischer Black as saying in the Introduction,
the world looks neater from the precincts of MIT on the Charles River
than from the hurly-burly of Wall Street by the Hudson.

In our discussion of Paccioli's hypothetical game of balla and our
imaginary World Series, the long-term records, the physical capabilities, and the I.Q.s of the players were irrelevant. Even the nature of
the game itself was irrelevant. Theory was a complete substitute for
information.

Real-life baseball fans, like aficionados of the stock market, assemble reams of statistics precisely because they need that information in
order to reach judgments about capabilities among the players and the
teams-or the outlook for the earning power of the companies trading
on the stock exchange. And even with thousands of facts, the track
record of the experts, in both athletics and finance, proves that their
estimates of the probabilities of the final outcomes are open to doubt
and uncertainty.

Pascal's Triangle and all the early work in probability answered only
one question: what is the probability of such-and-such an outcome? The
answer to that question has limited value in most cases, because it leaves
us with no sense of generality. What do we really know when we
reckon that Player A has a 60% chance of winning a particular game of
balla? Can that likelihood tell us whether he is skillful enough to win
60% of the time against Player B? Victory in one set of games is insufficient to confirm that expectation. How many times do Messrs. A and B
have to play before we can be confident that A is the superior player?
What does the outcome of this year's World Series tell us about the
probability that the winning team is the best team all the time, not just
in that particular series? What does the high proportion of deaths from
lung cancer among smokers signify about the chances that smoking will
kill you before your time? What does the death of an elephant reveal
about the value of going to an air-raid shelter?

But real-life situations often require us to measure probability in
precisely this fashion: from sample to universe. In only rare cases does
life replicate games of chance, for which we can determine the probability of an outcome before an event even occurs (a priori, as Jacob
Bernoulli puts it). In most instances, we have to estimate probabilities
from what happened after the fact (a posteriori). The very notion of a
posteriori implies experimentation and changing degrees of belief. There
were seven million people in Moscow, but after one elephant was
killed by a Nazi bomb, the professor decided the time had come to go
to the air-raid shelter.

Jacob Bernoulli's contribution to the problem of developing probabilities from limited amounts of real-life information was twofold.
First, he defined the problem in this fashion before anyone else had
even recognized the need for a definition. Second, he suggested a solution that demands only one requirement. We must assume that "under
similar conditions, the occurrence (or non-occurrence) of an event in
the future will follow the same pattern as was observed in the past."5

This is a giant assumption. Jacob may have complained that in real
life there are too few cases in which the information is so complete that
we can use the simple rules of probability to predict the outcome. But
he admits that an estimate of probabilities after the fact also is impossible
unless we can assume that the past is a reliable guide to the future. The
difficulty of that assignment requires no elaboration.

The past, or whatever data we choose to analyze, is only a fragment of
reality. That fragmentary quality is crucial in going from data to a generalization. We never have all the information we need (or can afford to
acquire) to achieve the same confidence with which we know, beyond a
shadow of a doubt, that a die has six sides, each with a different number,
or that a European roulette wheel has 37 slots (American wheels have 38
slots), again each with a different number. Reality is a series of connected
events, each dependent on another, radically different from games of
chance in which the outcome of any single throw has zero influence on the
outcome of the next throw. Games of chance reduce everything to a hard
number, but in real life we use such measures as "a little," "a lot," or "not
too much, please" much more often than we use a precise quantitative
measure.

Jacob Bernoulli unwittingly defined the agenda for the remainder
of this book. From this point forward, the debate over managing risk
will converge on the uses of his three requisite assumptions: full information, independent trials, and the relevance of quantitative valuation.
The relevance of these assumptions is critical in determining how successfully we can apply measurement and information to predict the
future. Indeed, Jacob's assumptions shape the way we view the past
itself: after the fact, can we explain what happened, or must we ascribe
the event to just plain luck (which is merely another way of saying we
are unable to explain what happened)?

Despite all the obstacles, practicality demands that we assume, sometimes explicitly but more often implicitly, that Jacob's necessary conditions are met, even when we know full well that reality differs from the
ideal case. Our answers may be sloppy, but the methodology developed
by Jacob Bernoulli and the other mathematicians mentioned in this
chapter provides us with a powerful set of tools for developing probabilities of future outcomes on the basis of the limited data provided by
the past.

Jacob Bernoulli's theorem for calculating probabilities a posteriori is
known as the Law of Large Numbers. Contrary to the popular view,
this law does not provide a method for validating observed facts, which
are only an incomplete representation of the whole truth. Nor does it
say that an increasing number of observations will increase the probability that what you see is what you are going to get. The law is not a
design for improving the quality of empirical tests: Jacob took Leibniz's
advice to heart and rejected his original idea of finding firm answers by
means of empirical tests.

Jacob was searching for a different probability. Suppose you toss a
coin over and over. The Law of Large Numbers does not tell you that
the average of your throws will approach 50% as you increase the number of throws; simple mathematics can tell you that, sparing you the
tedious business of tossing the coin over and over. Rather, the law states
that increasing the number of throws will correspondingly increase the
probability that the ratio of heads thrown to total throws will vary from
50% by less than some stated amount, no matter how small. The word
"vary" is what matters. The search is not for the true mean of 50% but
for the probability that the error between the observed average and the
true average will be less than, say, 2%; in other words, that increasing
the number of throws will increase the probability that the observed
average will fall within 2% of the true average.
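In modern notation, which is not Bernoulli's own, the claim above is what mathematicians now call the weak law of large numbers. For a fair coin, writing H_n for the number of heads in n throws and ε for the stated amount (these symbols are mine, not the book's),

\[
\lim_{n \to \infty} P\!\left( \left| \frac{H_n}{n} - \frac{1}{2} \right| < \varepsilon \right) = 1 .
\]

With ε set to 0.02, this is exactly the statement that enough throws make it as probable as we please that the observed average falls within 2% of the true average.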

That does not mean that there will be no error after an infinite
number of throws; Jacob explicitly excludes that case. Nor does it mean
that the errors will of necessity become small enough to ignore. All the
law tells us is that the average of a large number of throws will be more likely than
the average of a small number of throws to differ from the true average by less than
some stated amount. And there will always be a possibility that the observed result will differ from the true average by a larger amount than the specified bound. Seven million people in Moscow were apparently
not enough to satisfy the professor of statistics.

The Law of Large Numbers is not the same thing as the Law of
Averages. Mathematics tells us that the probability of heads coming up
on any individual coin toss is 50%-but the outcome of each toss is
independent of all the others. It is neither influenced by previous tosses
nor does it influence future tosses. Consequently, the Law of Large
Numbers cannot promise that the probability of heads will rise above
50% on any single toss if the first hundred, or million, tosses happen to
come up only 40% heads. There is nothing in the Law of Large
Numbers that promises to bail you out when you are caught in a losing
streak.
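A small simulation, not taken from the book, makes the distinction concrete: if we condition on a cold streak in which the first ten tosses come up 40% heads or fewer, the very next toss still lands heads about half the time. The streak length, seed, and variable names here are illustrative choices only.

```python
import random

random.seed(1)        # any fixed seed; chosen only to make the run repeatable
TRIALS = 100_000      # number of simulated ten-toss sequences

cold_streaks = 0      # sequences with 40% heads or fewer in the first ten tosses
heads_after = 0       # heads on the toss that follows such a streak

for _ in range(TRIALS):
    heads_in_ten = sum(random.random() < 0.5 for _ in range(10))
    if heads_in_ten <= 4:                      # a losing streak for heads
        cold_streaks += 1
        heads_after += random.random() < 0.5   # the eleventh toss

print(f"Heads on the toss after a cold streak: {heads_after / cold_streaks:.3f}")
# Prints a value close to 0.500: past tosses do not bail you out.
```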

To illustrate his Law of Large Numbers, Jacob hypothesized a jar
filled with 3000 white pebbles and 2000 black pebbles, a device that has
been a favorite of probability theorists and inventors of mind-twisting
mathematical puzzles ever since. He stipulates that we must not know
how many pebbles there are of each color. We draw an increasing
number of pebbles from the jar, carefully noting the color of each pebble before returning it to the jar. If drawing more and more pebbles can
finally give us "moral certainty"-that is, certainty as a practical matter
rather than absolute certainty-that the ratio is 3:2, Jacob concludes
that "we can determine the number of instances a posteriori with almost
as great accuracy as if they were known to us a priori."6 His calculations
indicate that 25,550 drawings from the jar would suffice to show, with
a chance exceeding 1000/1001, that the result would be within 2% of
the true ratio of 3:2. That's moral certainty for you.
