Antifragile: Things That Gain from Disorder

 

In the fall of 2009, I found myself in Korea with a collection of suit-and-tie-wearing hotshots. On a panel sat one Takatoshi Kato, then the deputy managing director of a powerful international institution. Before the panel discussion, he gave us a rapid PowerPoint presentation showing his and his department’s economic projections for 2010, 2011, 2012, 2013, and 2014.

These were the days before I decided to climb up the mountain, speak slowly and in a priestly tone, and try shaming people rather than insulting them. Listening to Kato’s presentation, I could not control myself and flew into a rage in front of two thousand Koreans—I was so angry that I almost started shouting in French, forgetting that I was in Korea. I ran to the podium and told the audience that the next time someone in a suit and tie gave them projections for some dates in the future, they should ask him to show what he had projected in the past—in this case, what he had been forecasting for 2008 and 2009 (the crisis years) two to five years earlier, in 2004, 2005, 2006, and 2007. They would then verify that Highly Venerable Kato-san and his colleagues are, to put it mildly, not very good at this predictionizing business. And it is not just Mr. Kato: our track record in figuring out significant rare events in politics and economics is not close to zero; it is zero. I improvised, on the spot, my solution. We can’t put all false predictors in jail; we can’t stop people from asking for predictions; we can’t tell people not to hire the next person who makes promises about the future. “All I want is to live in a world in which predictions such as those by Mr. Kato do not harm you. And such a world has unique attributes: robustness.”

The idea of proposing the Triad was born there and then as an answer to my frustration: Fragility-Robustness-Antifragility as a replacement for predictive methods.

Ms. Bré Has Competitors
 

What was getting me in that state of anger was my realization that forecasting was not neutral. It is all in the iatrogenics. Forecasting can be downright injurious to risk-takers—no different from giving people snake oil medicine in place of cancer treatment, or bleeding, as in the story of George Washington. And there was evidence. Danny Kahneman—rightfully—kept admonishing me for my fits of anger and outbursts at respectable members of the establishment (respectable for now), unbecoming of the wise member of the intelligentsia I was supposed to have become. Yet he stoked my frustration and sense of outrage the most by showing me the evidence of iatrogenics. There are ample empirical findings to the effect that providing someone with a random numerical forecast increases his risk taking, even if the person knows the projections are random.

All I hear is complaints about forecasters, when the next step is obvious yet rarely taken: avoidance of iatrogenics from forecasting. We understand childproofing, but not forecaster-hubris-proofing.

The Predictive
 

What makes life simple is that the robust and antifragile don’t have to have as accurate a comprehension of the world as the fragile—and they do not need forecasting. To see how redundancy is a nonpredictive, or rather a less predictive, mode of action, let us use the argument of Chapter 2: if you have extra cash in the bank (in addition to stockpiles of tradable goods such as cans of Spam and hummus and gold bars in the basement), you don’t need to know with precision which event will cause potential difficulties.¹ It could be a war, a revolution, an earthquake, a recession, an epidemic, a terrorist attack, the secession of the state of New Jersey, anything—you do not need to predict much, unlike those who are in the opposite situation, namely, in debt. Those, because of their fragility, need to predict with more, a lot more, accuracy.

Plus or Minus Bad Teeth
 

You can control fragility a lot more than you think. So let us refine in three points:

(i) Since detecting (anti)fragility—or, actually, smelling it, as Fat Tony will show us in the next few chapters—is easier, much easier, than prediction and understanding the dynamics of events, the entire mission reduces to the central principle of what to do to minimize harm (and maximize gain) from forecasting errors, that is, to have things that don’t fall apart, or even benefit, when we make a mistake.

(ii) We do not want to change the world for now (leave that to the Soviet-Harvard utopists and other fragilistas); we should first make things more robust to defects and forecast errors, or even exploit these errors, making lemonade out of the lemons.

(iii) As for the lemonade, it looks as if history is in the business of making it out of lemons; antifragility is necessarily how things move forward under the mother of all stressors, called time.

 

Further, after the occurrence of an event, we need to switch the blame from the inability to see an event coming (say a tsunami, an Arabo-Semitic spring or similar riots, an earthquake, a war, or a financial crisis) to the failure to understand (anti)fragility, namely, “why did we build something so fragile to these types of events?” Not seeing a tsunami or an economic event coming is excusable; building something fragile to them is not.

Also, as to the naive type of utopianism, that is, blindness to history, we cannot afford to rely on the rationalistic elimination of greed and other human defects that fragilize society. Humanity has been trying to do so for thousands of years and humans remain the same, plus or minus bad teeth, so the last thing we need is even more dangerous moralizers (those who look to be in a permanent state of gastrointestinal distress). Rather, the more intelligent (and practical) action is to make the world greed-proof, or even hopefully make society benefit from the greed and other perceived defects of the human race.

In spite of their bad press, some people in the nuclear industry seem to be among the rare ones to have gotten the point and taken it to its logical consequence. In the wake of the Fukushima disaster, instead of predicting failure and the probabilities of disaster, these intelligent nuclear firms are now aware that they should instead focus on exposure to failure—making the prediction or nonprediction of failure quite irrelevant. This approach leads to building small enough reactors and embedding them deep enough in the ground, with enough layers of protection around them, that a failure would not affect us much should it happen—costly, but still better than nothing.

Another illustration, this time in economics, is the Swedish government’s focus on total fiscal responsibility after their budget troubles in 1991—it makes them much less dependent on economic forecasts. This allowed them to shrug off later crises.²

The Idea of Becoming a Non-Turkey
 

It is obvious to anyone before drinking time that we can put a man, a family, a village with a mini town hall on the moon, and predict the trajectory of planets or the most minute effect in quantum physics, yet governments with equally sophisticated models cannot forecast revolutions, crises, budget deficits, climate change. Or even the closing prices of the stock market a few hours from now.

There are two different domains, one in which we can predict (to some extent), the other—the Black Swan domain—in which we should only let turkeys and turkified people operate. And the demarcation is as visible (to non-turkeys) as the one between the cat and the washing machine.

Social, economic, and cultural life lies in the Black Swan domain, physical life much less so. Further, the idea is to separate domains into those in which these Black Swans are both unpredictable and consequential, and those in which rare events are of no serious concern, either because they are predictable or because they are inconsequential.

I mentioned in the Prologue that randomness in the Black Swan domain is intractable. I will repeat it till I get hoarse. The limit is mathematical, period, and there is no way around it on this planet. What is nonmeasurable and nonpredictable will remain nonmeasurable and nonpredictable, no matter how many PhDs with Russian and Indian names you put on the job—and no matter how much hate mail I get. There is, in the Black Swan zone, a limit to knowledge that can never be reached, no matter how sophisticated statistical and risk management science ever gets.

The involvement of this author has not been so much in asserting this impossibility to ever know anything about these matters—the general skeptical problem has been raised throughout history by a long tradition of philosophers, including Sextus Empiricus, Algazel, Hume, and many more skeptics and skeptical empiricists—as in formalizing and modernizing it as a background and footnote to my anti-turkey argument. So my work is about where one should be skeptical, and where one should not be. In other words, focus on getting out of the f*** Fourth Quadrant—the Fourth Quadrant is the scientific name I gave to the Black Swan domain, the one in which we have a high exposure to rare, “tail” events and these events are incomputable.³

Now, what is worse, because of modernity, the share of Extremistan is increasing. Winner-take-all effects are worsening: success for an author, a company, an idea, a musician, an athlete is planetary, or nothing. These worsen predictability since almost everything in socioeconomic life now is dominated by Black Swans. Our sophistication continuously puts us ahead of ourselves, creating things we are less and less capable of understanding.

No More Black Swans
 

Meanwhile, over the past few years, the world has also gone the other way, upon the discovery of the Black Swan idea. Opportunists are now into predicting, predictioning, and predictionizing Black Swans with even more complicated models coming from chaos-complexity-catastrophe-fractal theory. Yet, again, the answer is simple: less is more; move the discourse to (anti)fragility.

1. From my experiences of the Lebanese war and a couple of storms with power outages in Westchester County, New York, I suggest stocking up on novels, as we tend to underestimate the boredom of these long hours waiting for the trouble to dissipate. And books, being robust, are immune to power outages.

2. A related idea is expressed in a (perhaps apocryphal) statement by the financier Warren Buffett that he tries to invest in businesses that are “so wonderful that an idiot can run them. Because sooner or later, one will.”

3. A technical footnote (to skip): What are the Quadrants? Combining exposures and types of randomness, we get four combinations: Mediocristan randomness, low exposure to extreme events (First Quadrant); Mediocristan randomness, high exposure to extreme events (Second Quadrant); Extremistan randomness, low exposure to extreme events (Third Quadrant); Extremistan randomness, high exposure to extreme events (Fourth Quadrant). The first three quadrants are ones in which knowledge or lack of it brings inconsequential errors. “Robustification” is the modification of exposures to make a switch from the fourth to the third quadrant.
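To make the quadrant scheme concrete, here is a minimal illustrative sketch in Python. It is not anything from the book, only one hypothetical way to encode the classification above; the function names (quadrant, robustify) and the string labels are my own.

    # Illustrative sketch only: a toy encoding of the four quadrants described
    # in the footnote above. All names and labels are hypothetical.

    def quadrant(randomness: str, exposure: str) -> str:
        """Classify a situation by type of randomness and exposure to extreme events."""
        table = {
            ("mediocristan", "low"): "First Quadrant",    # thin tails, little at stake
            ("mediocristan", "high"): "Second Quadrant",  # thin tails, high stakes, still tractable
            ("extremistan", "low"): "Third Quadrant",     # fat tails, little at stake
            ("extremistan", "high"): "Fourth Quadrant",   # fat tails, high stakes: the Black Swan domain
        }
        return table[(randomness, exposure)]

    def robustify(randomness: str, exposure: str) -> str:
        """'Robustification': change the exposure, not the forecast, to leave the Fourth Quadrant."""
        if quadrant(randomness, exposure) == "Fourth Quadrant":
            exposure = "low"  # clip exposure to tail events (redundancy, no debt, smaller reactors)
        return quadrant(randomness, exposure)

    print(quadrant("extremistan", "high"))   # Fourth Quadrant
    print(robustify("extremistan", "high"))  # Third Quadrant

The only point of the sketch is that the decision variable is the exposure, not the prediction.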

BOOK III
 
A Nonpredictive View of the World
 
 

Welcome, reader, to the nonpredictive view of the world.

Chapter 10 presents Seneca’s stoicism as a starting point for understanding antifragility, with applications from philosophy and religion to engineering. Chapter 11 introduces the barbell strategy and explains why the dual strategy of mixing high risks and highly conservative actions is preferable to just a simple medium-risk approach to things.

But first, we open Book III with the story of our two friends who derive some great entertainment from, and make a living by, detecting fragility and playing with the ills of fragilistas.

CHAPTER 9
 
