The book offers a distinctive viewpoint for understanding randomness and unpredictability in the world around us. Rather than trying to predict improbable Black Swans, it focuses on how not to be adversely impacted by them.
Mediocristan vs Extremistan
This is the central concept of the book. Let's illustrate it with human height vs. human wealth. If you try to find the average height of humans by sampling 100 people, no single sample will have a significant impact. The tallest human is about 8 ft tall; the shortest is about 2 ft, a ratio of only 4x. A single 8 ft sample in a group averaging 6 ft shifts the overall average by only (8 − 6)/100 = 0.02 ft, about a quarter of an inch. Once your sample set is large, no single instance will significantly change the aggregate.
Now consider wealth: you can encounter 99 humans with a net worth of ~$100 each, and if the next person you meet is Bill Gates, the average jumps to roughly $1 billion! A single sample can skew the average drastically. The average net worth of an American household is ~$100,000; Bill Gates's net worth is about a million times that. One single observation can disproportionately impact the aggregate.
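The height-vs-wealth contrast above can be checked with a few lines of arithmetic. This is a toy sketch; the specific numbers (99 people at 6 ft or $100, one outlier at 8 ft or $100B) are illustrative assumptions, not data:

```python
# Toy illustration (numbers are assumptions, not data): how a single extreme
# observation moves the sample average in Mediocristan vs. Extremistan.

def average(values):
    return sum(values) / len(values)

# Mediocristan: heights in feet. 99 people at 6 ft plus one 8 ft outlier.
heights = [6.0] * 99 + [8.0]
print(average(heights))    # 6.02 -- the outlier barely moves the mean

# Extremistan: net worth in dollars. 99 people at ~$100 plus one
# $100B-scale outlier (roughly Bill Gates's order of magnitude).
wealth = [100.0] * 99 + [100e9]
print(average(wealth))     # ~1e9 -- a single observation dominates the mean
```

The outlier contributes about 0.3% of the height average but 99.99999% of the wealth average, which is the whole distinction between the two regimes.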
- Wealth comes from Extremistan; height comes from Mediocristan. Most outcomes of biological processes, like height, weight, and life expectancy, belong to Mediocristan. Standard statistical tools like the Gaussian are valid here, and regression to the mean holds. That is not true of Extremistan. If you encounter someone 7 ft tall, the likelihood of encountering another 7 ft tall person is low. However, if the stock market, which belongs to Extremistan, goes down by 20%, the likelihood of another 20% fall the next day isn't any lower. Extremes barely matter in Mediocristan; they severely impact Extremistan.
- We see Black Swans in Extremistan: single incidents that exert a huge influence on history.
- The critical point is not whether Black Swans are more frequent than we imagine; it is that they are more consequential.
- Positive Black Swans take time to show up, while negative ones (destruction) happen very quickly. This asymmetry in how we perceive causation has a biological foundation.
- Mediocristan has a “typical” extreme. For example, a 7 ft human is a typical extreme in terms of height. There is no “typical” rich person: $100 million, $1 billion, and $10 billion imply very different lifestyles. Similarly, there is no “typical” war. Since we don’t know what a typical extreme in Extremistan looks like, we cannot estimate its impact.
- We overestimate the positive and negative effects that significant events, like buying a car or the death of a loved one, will have on our lives. But that’s OK; these errors are in Mediocristan and don’t cost much. However, such self-deceptions can cost a lot in Extremistan.
- In Mediocristan, samples regress to the mean. For example, if the total height of two men is 14 ft, you would imagine that both are about 7 ft (not 12 ft and 2 ft). But if two authors have combined book sales of 100 million, it is more likely that one of them sold close to 100 million copies and the other almost none. The effect is even more dominant if the total is 1 billion, since very few books reach that scale.
- Preferential attachment and cumulative advantage play a significant role in extremistan. People read a famous author even more, and the benefits cumulate. Winning today increases the chance of winning in the future.
- People love the Gaussian because, once assumed, it yields its properties with sufficient sampling. The Gaussian does not apply in Extremistan. It can be used in Mediocristan, and even when it is wrong there, the harm done is usually not devastating.
- Classical thermodynamics produces Gaussian variations, while informational variations come from Extremistan. For example, food is not just calories in, calories out; it is complex signaling to your body.
- When dealing with simple binary payoffs in Mediocristan, for example, casino bets, standard statistical methods work best. When dealing with complex payoffs in Mediocristan, for example, epidemics, the usual statistical methods might work with some limitations.
- When dealing with simple payoffs in Extremistan, since the harm is limited, it is fine to be wrong. The real danger is complex payoffs in Extremistan. Avoid any adverse outcomes here, and do not model the probability of the outcome: no model is better than a wrong model for forecasting. If possible, cut your exposure by, say, buying insurance.
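The claim in the bullets above, that Gaussian statistics stabilize with sampling while Extremistan statistics do not, can be demonstrated with a small simulation. The distribution choices are my assumptions: a Gaussian for Mediocristan, and a heavy-tailed Pareto (tail index 1.1, so its variance is infinite) as a stand-in for Extremistan:

```python
# A minimal sketch (distribution choices are assumptions): sample means are
# stable under a Gaussian (Mediocristan) but erratic under a heavy-tailed
# Pareto distribution (a stand-in for Extremistan).
import random

random.seed(0)

def sample_mean(draw, n=1000):
    return sum(draw() for _ in range(n)) / n

# Ten repeated sample means from a Gaussian: they all land close together.
gaussian_means = [sample_mean(lambda: random.gauss(6.0, 0.3)) for _ in range(10)]

# Ten repeated sample means from a Pareto with tail index 1.1: one huge draw
# can dominate an entire sample, so the means scatter widely.
pareto_means = [sample_mean(lambda: random.paretovariate(1.1)) for _ in range(10)]

gaussian_spread = max(gaussian_means) - min(gaussian_means)
pareto_spread = max(pareto_means) - min(pareto_means)
print(gaussian_spread)   # small spread
print(pareto_spread)     # much larger spread
```

No amount of extra sampling tames the Pareto case the way it tames the Gaussian one, which is why assuming a Gaussian in Extremistan is so dangerous.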
We are vulnerable to overinterpretation. We love compact stories over accepting raw truth. We should enjoy history as an enumeration of accounts instead of theorizing about it.
We not only form hypotheses early on, but we also look at data only in ways that confirm our existing beliefs.
Belief perseverance is our tendency to discard as outliers any data that discredit our existing hypothesis. Combine this with confirmation bias, and it can quickly create a scenario where someone presented with more data (and hence more noise) ends up forming strong incorrect opinions that are harder to refute later. For example, consider people who spin a wheel and are then asked to estimate the number of countries in Africa. Despite knowing the two are unrelated, those who spun a lower number estimated fewer countries, and vice versa.
We only look at the evidence that is visible to us. For example, we look at successful authors and try to create a narrative of what traits lead to success. We don’t stop to think that the other 99% of authors, who failed, might have the same characteristics. We may enjoy what we see, but there is no point in reading too much into success stories, since we don’t see the full picture. Consider, for example, irradiating rats: the radiation kills the weak ones, and the strong survivors are left to roam New York City. This might make people believe the radiation made those rats strong, when in fact it merely filtered out all but the strongest. Or consider the bubonic plague: it is “the most dangerous epidemic we know of”, not “the most dangerous epidemic”. There might have been a more dangerous one, but no observer survived to narrate it.
Beginner’s luck does exist, and it’s because of silent evidence. If 100 people try gambling for the first time, the ones who win big will try again, while the ones who lose drop out almost immediately. Those who continue will narrate the story of the good old days when they were lucky.
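The beginner's-luck story is easy to reproduce in a toy simulation. All parameters here (1000 gamblers, a perfectly fair coin-flip game, quitting after one losing session) are my assumptions, chosen only to make the selection effect visible:

```python
# A toy simulation (all parameters are assumptions) of beginner's luck as
# silent evidence: gamblers who lose their first session quit, so the
# population we still observe looks luckier than the fair game actually is.
import random

random.seed(42)

def play_session():
    """One session of a fair game: win (+1) or lose (-1) with equal odds."""
    return 1 if random.random() < 0.5 else -1

gamblers = 1000
survivor_totals = []
for _ in range(gamblers):
    first = play_session()
    if first == 1:   # only first-session winners keep playing (10 sessions total)
        survivor_totals.append(first + sum(play_session() for _ in range(9)))

# The game has zero expected value, yet every surviving gambler remembers
# a winning start, and their average total winnings are positive.
print(len(survivor_totals) / gamblers)                     # roughly half survive
print(sum(survivor_totals) / len(survivor_totals))         # positive on average
```

The survivors are not skilled; they are the filtered remainder of a fair process, which is exactly the silent-evidence effect.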
The ludic fallacy is the mistaken belief that one can use gamified models with well-defined probabilities for modeling real-life situations.
Most problems we encounter in life are inverse problems. For example, imagine a few ice cubes melting on a floor; by the time you come in, all you see is a puddle of water, and your job is to infer the number and shapes of the original ice cubes. This inverse problem is much harder than imagining the puddle that a given set of ice cubes will form. Similarly, we don’t see the underlying rules of the stock market; we only see a pattern, and we try to form a thesis about what could be happening.
- When you are employed, and hence dependent on other people’s judgment, looking busy can help you claim the results of a random outcome.
- The more information you give someone, the more hypotheses they will form along the way, and the worse off they will be. They see more random noise and mistake it for information.
- The more routine the task is, the better you learn to forecast.
- Professions that deal with the future but base their studies on the non-repeatable past have an expert problem.
- There is a difference between techne (technical) and episteme (epistemic) knowledge. The first can have experts, for example, brain surgeons; the second does not: it is not obvious who will make a better economic forecast, a finance Ph.D. or a business writer. Tetlock’s studies concluded that those with a more prominent reputation in economics were worse predictors. Experts underestimate the duration of wars. While human history is full of violent conflicts, large-scale wars are new to human history.
- Corporations survive not because they make good forecasts but because they may have been the lucky ones.
- Luck both made and unmade Carthage. It made and unmade Rome.
- Randomness in Mediocristan is predictable. It isn’t in Extremistan.
- Mother nature did not attend high school courses on Geometry or read the books of Euclid at the library of Alexandria.
- Mother nature does not like anything too big for resilience. For example, killing an elephant does not impact nature, while a falling bank can crumble the whole financial system.
- Mother nature does not like too much connectivity or globalization.
- Mother nature loves redundancies: defensive redundancy, which allows one to survive under adversity, and functional redundancy, which allows multiple different structures to perform the same task.
There are three significant issues with forecasting.
- The accuracy or the variance of the projection matters more than just the projected value.
- The duration matters. The longer the duration of the forecast, the lower the accuracy.
- The unexpected outcomes. Randomness in Extremistan allows for far more extreme optimistic or pessimistic scenarios than one can imagine, especially when the worst-case outcome is more consequential than the forecast itself.
- Generalizing a theory out of samples requires intuition and common sense. We don’t have a well-developed intuition for Black Swans.
- In practice, randomness due to lack of information is the same as randomness due to unpredictability.
- Borrowing money makes you more vulnerable to forecast errors.
- Regular events from the past can predict regular events. Extreme events, however, are so rare and acute that the past can never be used to predict them. The next bestseller might beat all existing ones, and the next stock market loss might be worse than any loss seen before.
- A Gray Swan is a modelable extreme, while a Black Swan is an unknown unknown.
- The 1987 stock market crash was a Gray Swan. The 9/11 attack on the World Trade Center was a Black Swan.
- Fractals are scale-invariant measures (unlike Gaussian), which can be used for modeling Gray Swans.
- A rug viewed at eye level corresponds to Mediocristan. A coastline, however, is a fractal; it looks the same from ground level or from an airplane.
- Black Swan has three attributes – unpredictability, consequences, and retrospective explainability. The last one allows experts to look back and pretend that they expected it.
- History is going to be dominated by an improbable event; I just don’t know which one.
- We build toys. Some of them, like the laser, the Internet, and the compact disc, change the world.
- Rank beliefs not by their plausibility but by the harm they may cause. It is fine to trust weather predictions for a picnic even if they might turn out to be wrong, but don’t trust the government’s forecast for Social Security in 2040.
- It is better to govern society based on awareness of ignorance than knowledge.
- The government keeps putting money in the market, and nothing happens, and then suddenly, one day, it is hyperinflation.
- We need to avoid exposure to small probabilities in specific domains. We simply cannot compute them.
- Contagion, not validity, determines the fate of a theory in social science.
- Life exists in the pre-asymptote. Many theories that are right in the long run (at the asymptote) are meaningless from an applicability perspective.
- Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive. It prevents people from considering the fuzziness of boundaries, let alone revising those categories.
- Good news is good, first; exactly how good matters rather little.
- Most of the world around us is non-linear. We study linearity in the classroom since it is easier.
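The fractal bullets above (a coastline looking the same at every scale, unlike a rug-smooth Mediocristan shape) can be sketched numerically. The Koch curve and the chord-measured circle below are my stand-ins, not the book's examples: as the measuring ruler shrinks, the smooth shape's measured length converges, while the fractal's keeps growing without bound:

```python
# A numeric sketch (my example, not the book's) of scale invariance:
# refine the measurement scale and compare a smooth curve with a fractal.
import math

def circle_length(n_chords):
    """Approximate a unit circle's circumference with n straight chords."""
    return n_chords * 2 * math.sin(math.pi / n_chords)

def koch_length(depth):
    """Koch curve from a unit segment: each refinement shrinks the ruler 3x
    and multiplies the measured length by 4/3."""
    return (4 / 3) ** depth

for depth in (0, 5, 10, 20):
    # The circle's length settles near 2*pi; the Koch curve's length diverges.
    print(depth, circle_length(3 * 2 ** depth), koch_length(depth))
```

The circle is Mediocristan-like (finer measurement changes nothing after a point), while the Koch curve, like a coastline, reveals new structure at every scale, which is what makes fractals useful for modeling Gray Swans.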