Antifragile: Things That Gain from Disorder

I have used all my life a wonderfully simple heuristic: charlatans are recognizable in that they will give you positive advice, and only positive advice, exploiting our gullibility and sucker-proneness for recipes that hit you in a flash as just obvious, then evaporate later as you forget them. Just look at the “how to” books with, in their title, “Ten Steps for—” (fill in: enrichment, weight loss, making friends, innovation, getting elected, building muscles, finding a husband, running an orphanage, etc.). Yet in practice it is the negative that’s used by the pros, those selected by evolution: chess grandmasters usually win by not losing; people become rich by not going bust (particularly when others do); religions are mostly about interdicts; the learning of life is about what to avoid. You reduce most of your personal risks of accident thanks to a small number of measures.

Further, a consequence of being fooled by randomness is that in most circumstances fraught with a high degree of randomness, one cannot really tell whether a successful person has skills, or whether a person with skills will succeed—but we can pretty much predict the negative: that a person totally devoid of skills will eventually fail.

Subtractive Knowledge
 

Now when it comes to knowledge, the same applies. The greatest—and most robust—contribution to knowledge consists in removing what we think is wrong—subtractive epistemology.

In life, antifragility is reached by not being a sucker. In Peri mystikes theologias, Pseudo-Dionysos did not use these exact words, nor did he discuss disconfirmation, nor did he get the idea with clarity, but in my view he figured out this subtractive epistemology and asymmetries in knowledge. I have called “Platonicity” the love of some crisp abstract forms, the theoretical forms and universals that make us blind to the mess of reality and cause Black Swan effects. Then I realized that there was an asymmetry. I truly believe in Platonic ideas when they come in reverse, like negative universals.

So the central tenet of the epistemology I advocate is as follows: we know a lot more about what is wrong than about what is right, or, phrased according to the fragile/robust classification, negative knowledge (what is wrong, what does not work) is more robust to error than positive knowledge (what is right, what works). So knowledge grows by subtraction much more than by addition—given that what we know today might turn out to be wrong but what we know to be wrong cannot turn out to be right, at least not easily. If I spot a black swan (not capitalized), I can be quite certain that the statement “all swans are white” is wrong. But even if I have never seen a black swan, I can never hold such a statement to be true. Rephrasing it again: since one small observation can disprove a statement, while millions can hardly confirm it, disconfirmation is more rigorous than confirmation.

This idea has been associated in our times with the philosopher Karl Popper, and I quite mistakenly thought that he was its originator (though he is at the origin of an even more potent idea on the fundamental inability to predict the course of history). The notion, it turned out, is vastly more ancient, and was one of the central tenets of the skeptical-empirical school of medicine of the postclassical era in the Eastern Mediterranean. It was well known to a group of nineteenth-century French scholars who rediscovered these works. And this idea of the power of disconfirmation permeates the way we do hard science.

As you can see, we can link this to the general tableaus of positive (additive) and negative (subtractive): negative knowledge is more robust. But it is not perfect. Popper has been criticized by philosophers for his treatment of disconfirmation as hard, unequivocal, black-and-white. It is not clear-cut: it is impossible to figure out whether an experiment failed to produce the intended results—hence “falsifying” the theory—because of the failure of the tools, because of bad luck, or because of fraud by the scientist. Say you saw a black swan. That would certainly invalidate the idea that all swans are white. But what if you had been drinking Lebanese wine, or hallucinating from spending too much time on the Web? What if it was a dark night, in which all swans look gray? Let us say that, in general, failure (and disconfirmation) is more informative than success and confirmation, which is why I claim that negative knowledge is just “more robust.”

Now, before starting to write this section, I spent some time scouring Popper’s complete works wondering how the great thinker, with his obsessive approach to falsification, completely missed the idea of fragility. His masterpiece, The Poverty of Historicism, in which he presents the limits of forecasting, shows the impossibility of an acceptable representation of the future. But he missed the point that if an incompetent surgeon is operating on a brain, one can safely predict serious damage, even the death of the patient. Yet such subtractive representation of the future is perfectly in line with his idea of disconfirmation, its logical second step. What he calls falsification of a theory should lead, in practice, to the breaking of the object of its application.

In political systems, a good mechanism is one that helps remove the bad guy; it’s not about what to do or whom to put in. For the bad guy can cause more harm than the collective actions of good ones. Jon Elster goes further; he recently wrote a book with the telling title Preventing Mischief in which he bases negative action on Bentham’s idea that “the art of the legislator is limited to the prevention of everything that might prevent the development of their [members of the assembly] liberty and their intelligence.”

And, as expected, via negativa is part of classical wisdom. For the Arab scholar and religious leader Ali Bin Abi-Taleb (no relation), keeping one’s distance from an ignorant person is equivalent to keeping company with a wise man.

Finally, consider this modernized version in a saying from Steve Jobs: “People think focus means saying yes to the thing you’ve got to focus on. But that’s not what it means at all. It means saying no to the hundred other good ideas that there are. You have to pick carefully. I’m actually as proud of the things we haven’t done as the things I have done. Innovation is saying no to 1,000 things.”

BARBELLS, AGAIN
 

Subtractive knowledge is a form of barbell. Critically, it is convex. What is known to be wrong is quite robust; what you don’t know is fragile and speculative, so you do not take it seriously and you make sure it does not harm you in case it turns out to be false.

Now another application of via negativa lies in the less-is-more idea.

Less Is More
 

The less-is-more idea in decision making can be traced to Spyros Makridakis, Robyn Dawes, Dan Goldstein, and Gerd Gigerenzer, who have all found in various contexts that simpler methods for forecasting and inference can work much, much better than complicated ones. Their simple rules of thumb are not perfect, but are designed to not be perfect; adopting some intellectual humility and abandoning the aim at sophistication can yield powerful effects. The pair of Goldstein and Gigerenzer coined the notion of “fast and frugal” heuristics that make good decisions despite limited time, knowledge, and computing power.

I realized that the less-is-more heuristic fell squarely into my work in two places. First, extreme effects: there are domains in which the rare event (I repeat, good or bad) plays a disproportionate share and we tend to be blind to it, so focusing on the exploitation of such a rare event, or protection against it, changes a lot, a lot of the risky exposure. Just worry about Black Swan exposures, and life is easy.

Less is more has proved to be shockingly easy to find and apply—and “robust” to mistakes and changes of mind. There may not be an easily identifiable cause for a large share of the problems, but often there is an easy solution (not to all problems, but good enough; I mean really good enough), and such a solution is immediately identifiable, sometimes with the naked eye rather than the use of complicated analyses and highly fragile, error-prone, cause-ferreting nerdiness.

Some people are aware of the eighty/twenty idea, based on the discovery by Vilfredo Pareto more than a century ago that 20 percent of the people in Italy owned 80 percent of the land, and vice versa. Of these 20 percent, 20 percent (that is, 4 percent of the total) would have owned around 80 percent of the 80 percent (that is, 64 percent). Iterating once more, we end up with less than 1 percent representing about 50 percent of the total. These describe winner-take-all Extremistan effects. These effects are very general, from the distribution of wealth to book sales per author.
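The recursive arithmetic above can be sketched in a few lines (a toy illustration of my own, not from the book; the function name is invented):

```python
# Apply Pareto's 80/20 rule recursively: at each level, the top 20% of
# the previous top group holds 80% of that group's share.
def pareto_iterate(levels):
    people, share = 1.0, 1.0
    for _ in range(levels):
        people *= 0.20  # fraction of the population
        share *= 0.80   # fraction of the total holdings
    return people, share

# One level:    20%  own 80%
# Two levels:    4%  own 64%
# Three levels:  0.8% own 51.2% -- "less than 1 percent
# representing about 50 percent of the total"
```

The concentration compounds geometrically: each extra level multiplies the group size by 0.2 but its share only by 0.8, which is why the tail dominates.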

Few realize that we are moving into the far more uneven distribution of 99/1 across many things that used to be 80/20: 99 percent of Internet traffic is attributable to less than 1 percent of sites, 99 percent of book sales come from less than 1 percent of authors … and I need to stop because numbers are emotionally stirring. Almost everything contemporary has winner-take-all effects, which includes sources of harm and benefits. Accordingly, as I will show, 1 percent modification of systems can lower fragility (or increase antifragility) by about 99 percent—and all it takes is a few steps, very few steps, often at low cost, to make things better and safer.

For instance, a small number of homeless people cost the states a disproportionate share of the bills, which makes it obvious where to look for the savings. A small number of employees in a corporation cause the most problems, corrupt the general attitude—and vice versa—so getting rid of these is a great solution. A small number of customers generate a large share of the revenues. I get 95 percent of my smear postings from the same three obsessive persons, all representing the same prototypes of failure (one of whom has written, I estimate, close to one hundred thousand words in posts—he needs to write more and more and find more and more stuff to critique in my work and personality to get the same effect). When it comes to health care, Ezekiel Emanuel showed that half the population accounts for less than 3 percent of the costs, with the sickest 10 percent consuming 64 percent of the total pie. Bent Flyvbjerg (of Chapter 18) showed in his Black Swan management idea that the bulk of cost overruns by corporations are simply attributable to large technology projects—implying that that’s what we need to focus on instead of talking and talking and writing complicated papers.

As they say in the mafia, just work on removing the pebble in your shoe.

There are some domains, like, say, real estate, in which problems and solutions are crisply summarized by a heuristic, a rule of thumb to look for the three most important properties: “location, location, and location”—much of the rest is supposed to be chickensh***t. Not quite and not always true, but it shows the central thing to worry about, as the rest takes care of itself.

Yet people want more data to “solve problems.” I once testified in Congress against a project to fund a crisis forecasting project. The people involved were blind to the paradox that we have never had more data than we have now, yet have less predictability than ever. More data—such as paying attention to the eye colors of the people around when crossing the street—can make you miss the big truck. When you cross the street, you remove data, anything but the essential threat.1 As Paul Valéry once wrote: que de choses il faut ignorer pour agir—how many things one should disregard in order to act.

Convincing—and confident—disciplines, say, physics, tend to use little statistical backup, while political science and economics, which have never produced anything of note, are full of elaborate statistics and statistical “evidence” (and you know that once you remove the smoke, the evidence is not evidence). The situation in science is similar to detective novels in which the person with the largest number of alibis turns out to be the guilty one. And you do not need reams of paper full of data to destroy the megatons of papers using statistics in economics: the simple argument that Black Swans and tail events run the socioeconomic world—and these events cannot be predicted—is sufficient to invalidate their statistics.

We have further evidence of the potency of less-is-more from the following experiment. Christopher Chabris and Daniel Simons, in their book The Invisible Gorilla, show how people watching a video of a basketball game, when diverted with attention-absorbing details such as counting passes, can completely miss a gorilla stepping into the middle of the court.

I discovered that I had been intuitively using the less-is-more idea as an aid in decision making (contrary to the method of putting a series of pros and cons side by side on a computer screen). For instance, if you have more than one reason to do something (choose a doctor or veterinarian, hire a gardener or an employee, marry a person, go on a trip), just don’t do it. It does not mean that one reason is better than two, just that by invoking more than one reason you are trying to convince yourself to do something. Obvious decisions (robust to error) require no more than a single reason. Likewise the French army had a heuristic to reject excuses for absenteeism when more than one reason was given, like death of grandmother, cold virus, and being bitten by a boar. If someone attacks a book or idea using more than one argument, you know it is not real: nobody says “he is a criminal, he killed many people, and he also has bad table manners and bad breath and is a very poor driver.”

I have often followed what I call Bergson’s razor: “A philosopher should be known for one single idea, not more” (I can’t source it to Bergson, but the rule is good enough). The French essayist and poet Paul Valéry once asked Einstein if he carried a notebook to write down ideas. “I never have ideas” was the reply (in fact he just did not have chickens***t ideas). So, a heuristic: if someone has a long bio, I skip him—at a conference a friend invited me to have lunch with an overachieving hotshot whose résumé “can cover more than two or three lives”; I skipped to sit at a table with the trainees and stage engineers.2 Likewise when I am told that someone has three hundred academic papers and twenty-two honorary doctorates, but no other single compelling contribution or main idea behind it, I avoid him like the bubonic plague.
