
The Black Swan


Charlottesville, Virginia

April 26, 2020


Wow. This book was great in that it taught me a lot and made me think. This summary is another long one - sorry. The punchline is that due to the way our brains work, we fundamentally misunderstand the world around us. Taleb walks us through our thinking flaws (similar to what Kahneman did in Thinking, Fast and Slow) and explains how wrong we are to use the normal distribution to understand uncertainty in many areas.


What my summary leaves out are many of the "intellectual" discussions throughout the book, in which he engages with many famous "thinkers" from history. I actually found those discussions distracted me from what I was trying to learn. This summary contains the "kernels of wisdom" that I identified in the book.


If you are not up to reading my entire summary, I have a section at the beginning entitled "Key Takeaways." I'd like to think I capture at least 80 percent of the highlights in summary form in that section.


Overview


The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb was published in 2007. The book focuses on the extreme impact of certain kinds of rare and unpredictable events (outliers) and humans' tendency to find simplistic explanations for these events retrospectively. The author is a Lebanese-American essayist, scholar, statistician, and risk analyst. He holds an MBA from the Wharton School (1982) and a PhD from the University of Paris. He spent 20 years on Wall Street as a derivatives trader before changing careers to focus on the nature of risk, uncertainty, and probability. He is currently a professor of Risk Engineering at NYU's Polytechnic Institute, a scientific adviser at Universa Investments and the IMF, and co-editor in chief of the journal Risk and Decision Analysis. Taleb is the author of the Incerto, a five-volume philosophical essay on uncertainty published between 2001 and 2018, the best known volumes of which are The Black Swan and Antifragile.

Taleb defines a "Black Swan" as a highly improbable event with three characteristics:

1. it is unpredictable to the observer (rarity);

2. it carries a massive impact, which can be positive or negative (extreme impact); and

3. after the fact, people concoct explanations that make the Black Swan event appear less random, and more predictable, than it was (retrospective predictability).

Taleb borrowed the term "black swan" from history. Prior to 1697, no one in Western civilization had seen a black swan; all swans were assumed to be white. This gave rise to the notion that such creatures didn't exist. "Black swan" became a term used to describe situations of impossibility. After a black swan was observed in Western Australia in 1697, the notion was disproved. Since then, "black swan" has come to describe situations where perceived impossibilities have been disproven and paradigms have been shattered.

Examples of Black Swan events that Taleb cites include the rise of the Internet, the personal computer, World War I, the dissolution of the Soviet Union and the September 11, 2001 terrorist attacks. He emphasizes that the Black Swan event depends upon the observer. For example, the Thanksgiving turkey sees his demise as a Black Swan event, but the butcher does not.

Key Takeaways

Why do we not identify Black Swan events until after they occur?

- We humans are hardwired to learn specifics when we should be focused on generalities.

- We concentrate on things we already know and fail to take into consideration what we don't know.

Our objective with respect to Black Swan events should be to "avoid being the turkey" by identifying areas of vulnerability in order to turn the Black Swans white (or at least gray).

Except in the near term, predicting the future is in many cases impossible, although we continue to try. It is in our nature to want to predict the future.

People greatly underestimate the likelihood and impact of extreme events (Black Swans) when considering the future; we extrapolate from the past to predict the future. The future is non-linear, and the past has little bearing on the future with respect to major events.

We want to, and therefore do, construct simplistic explanations for largely random events after the fact. We accept our explanations as "true" when they may not be, and they are certainly only part of the story.

Rare events occur much more often than we expect. Our minds are programmed to deal with what we’ve seen before, to “expect the expected.”

Our tendency to discard rare events happens in part because we underestimate our ignorance. There is a great deal we don’t know, but since feeling ignorant isn’t pleasant, we tend to put those rare events out of our minds.

We tend to invent explanations where there are none. In other words, after the fact, we like to invent explanations for why things happened the way they did, which is much more comforting than staring at sheer randomness.

"Platonicity" is Taleb's term for "our tendency to mistake the map for the territory, to focus on pure and well-defined 'forms,' whether objects, like triangles, or social notions, like utopias (societies built according to some blueprint of what 'makes sense'), even nationalities." When these ideas and crisp constructs inhabit our minds, we privilege them over other, less elegant objects, those with messier and less tractable structures. The world is messy.

Platonicity makes us think that we understand more than we actually do.

The “Platonic fold” is the boundary where the Platonic mindset enters in contact with messy reality, where the gap between what you know and what you think you know becomes dangerously wide. It is here that the Black Swan event is produced.

Black Swan logic makes what you don't know far more relevant than what you do know.

Experts: The inability to predict outliers implies the inability to predict the course of history and that makes experts very uncomfortable.

Disproportionate Payoff by Black Swans: In some domains such as scientific discovery and venture capital investments, there is a disproportionate payoff from the unknown, since you typically have little to lose and plenty to gain from a rare event.

The strategy for discoverers and entrepreneurs is to rely less on top-down planning and focus on maximum tinkering and recognizing opportunities when they present themselves.

Learning to Learn – We tend to learn the precise, not the general. We don't learn that we don't learn. Our minds do not seem made to think and introspect. We do much less thinking than we believe we do.

Our intuitions are made for an environment with simpler causes and effects and slowly moving information, but we live in an environment where information flows very rapidly, making our intuitions that much more inaccurate.

We need more prevention than treatment, but few reward acts of prevention. Recognition can be quite a pump and such recognition is reserved for those who treat.


By searching, you can always find someone who made a well-sounding statement that confirms your point of view -- and, on every topic, it is possible to find another dead thinker who said the exact opposite.

The central idea of this book concerns our blindness with respect to randomness, particularly the large deviations.

What is surprising is not the magnitude of our forecast errors, but our absence of awareness of it.

The purpose of this book is not to attempt to predict Black Swan events, but to build robustness against negative ones and to be able to exploit positive ones.

Banks and trading firms are very vulnerable to hazardous Black Swan events and are exposed to losses beyond those that are predicted by their defective financial models.

Black Swans being unpredictable, we need to adjust to their existence.

Among many other benefits, you can set yourself up to collect serendipitous Black Swans (of the positive kind) by maximizing your exposure to them.

Contrary to social-science wisdom, almost no discovery, no technologies of note, came from design and planning -- they were Black Swans.

The problem lies in the structure of our minds: we don't learn rules, just facts, and only facts.

To understand a phenomenon, one needs first to consider the extremes -- particularly if, like the Black Swan, they carry an extraordinary cumulative effect.

Almost everything in social life is produced by rare but consequential shocks and jumps.

The bell curve, i.e., the Gaussian (normal) distribution, ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty. While the normal distribution is appropriate for analyzing some areas, it is completely inappropriate for others. The normal distribution does not account for the outliers that we have experienced. In those areas (like the stock market), we need to use fractals and power-law distributions.
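
To make that point concrete, here is a minimal sketch (mine, not Taleb's) of how thin a Gaussian tail is compared with a power-law tail. The tail exponent of 1.5 and the loose treatment of the power-law variable as if it were measured in "sigma" units are illustrative assumptions only.

# Minimal sketch (not from the book): Gaussian vs. power-law tail probabilities.
# The comparison is loose and purely illustrative.
from math import erfc, sqrt

def gaussian_tail(x):
    # P(Z > x) for a standard normal variable
    return 0.5 * erfc(x / sqrt(2))

def power_law_tail(x, alpha=1.5):
    # P(X > x) for a Pareto-style variable with tail exponent alpha (x_min = 1)
    return x ** -alpha if x > 1 else 1.0

for k in (3, 5, 10):
    print(f"event of size {k}: Gaussian tail {gaussian_tail(k):.2e}, "
          f"power-law tail {power_law_tail(k):.2e}")

At size 10, the Gaussian assigns a probability on the order of 10 to the minus 23, while the power law still assigns about 3 percent; that gap is what "ignores large deviations" means in practice.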

Detailed Summary:

Prologue

A small number of Black Swans explain almost everything in our world, from the success of ideas and religions, to the dynamics of historical events, to elements of our own personal lives. Ever since we left the Pleistocene, some ten millennia ago, the effect of these Black Swans has been increasing.

Fads, epidemics, fashion, ideas, the emergence of art genres and schools: all follow these Black Swan dynamics. Literally, just about everything of significance around you might qualify.

The central idea of this book concerns our blindness with respect to randomness, particularly the large deviations: Why do we, scientists or nonscientists, hotshots or regular Joes, tend to see the pennies instead of the dollars? Why do we keep focusing on the minutiae, not the possible significant large events, in spite of the obvious evidence of their huge influence? And, if you follow my argument, why does reading the newspaper actually decrease your knowledge of the world?

Black Swan logic makes what you don’t know far more relevant than what you do know.

The next killing in the restaurant industry needs to be an idea that is not easily conceived of by the current population of restaurateurs. It has to be at some distance from expectations.

The more unexpected the success of such a venture, the smaller the number of competitors, and the more successful the entrepreneur who implements the idea.


The payoff of a human venture is, in general, inversely proportional to what it is expected to be.

We produce 30-year projections of social security deficits and oil prices without realizing that we cannot even predict these for next summer -- our cumulative prediction errors for political and economic events are so monstrous that every time I look at the empirical record I have to pinch myself to verify that I am not dreaming. What is surprising is not the magnitude of our forecast errors, but our absence of awareness of it.

Our inability to predict in environments subjected to the Black Swan, coupled with a general lack of the awareness of this state of affairs, means that certain professionals, while believing they are experts, are in fact not. Based on their empirical record, they do not know more about their subject matter than the general population, but they are much better at narrating -- or, worse, at smoking you with complicated mathematical models. They are also more likely to wear a tie.

The strategy for the discoverers and entrepreneurs is to rely less on top-down planning and focus on maximum tinkering and recognizing opportunities when they present themselves. So I disagree with the followers of Marx and those of Adam Smith: the reason free markets work is because they allow people to be lucky, thanks to aggressive trial and error, not by giving rewards or “incentives” for skill. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can.

Almost everything in social life is produced by rare but consequential shocks and jumps; all the while almost everything studied about social life focuses on the “normal,” particularly with “bell curve” methods of inference that tell you close to nothing. Why? Because the bell curve ignores large deviations, cannot handle them, yet makes us confident that we have tamed uncertainty.

The Black Swan idea is based on the structure of randomness in empirical reality. To summarize: in this book I make a claim, against many of our habits of thought, that our world is dominated by the extreme, the unknown, and the very improbable (improbable according to our current knowledge) -- and all the while we spend our time engaged in small talk, focusing on the known, and the repeated. This implies the need to use the extreme event as a starting point and not treat it as an exception to be pushed under the rug. I also make the bolder claim that in spite of our progress and the growth in knowledge, or perhaps because of such progress and growth, the future will be increasingly less predictable, while both human nature and social “science” seem to conspire to hide the idea from us.

The Black Swan is the result of collective and individual epistemic limitations (or distortions), mostly confidence in knowledge; it is not an objective phenomenon. The most severe mistake made in the interpretation of my Black Swan is to try to define an “objective Black Swan” that would be invariant in the eyes of all observers. The events of September 11, 2001, were a Black Swan for the victims, but certainly not to the perpetrators.

😀😀😀😀😀

Part One

---------------------------------------------------------------------------------------------------------------------------

Part One, "Umberto Eco's Antilibrary," is mostly about how we perceive historical and current events and what distortions are present in such perception. Most of this discussion deals with psychology.

Chapter 1 - Empirical Skeptic

Taleb discusses his approach to historical analysis. He describes history as opaque, essentially a black box of cause and effect. One sees events go in and events go out, but one has no way of determining which produced what effect. Taleb argues this is due to "The Triplet of Opacity."

The human mind suffers from three ailments as it comes into contact with history, what I call the triplet of opacity. They are: (1) the illusion of understanding, or how everyone thinks he knows what is going on in a world that is more complicated (or random) than they realize; (2) the retrospective distortion, or how we can assess matters only after the fact, as if they were in a rearview mirror (history seems clearer and more organized in history books than in empirical reality); and (3) the overvaluation of factual information and the handicap of authoritative and learned people, particularly when they create categories—when they “Platonify.”

Our minds are wonderful explanation machines, capable of making sense out of almost anything, capable of mounting explanations for all manner of phenomena, and generally incapable of accepting the idea of unpredictability. These events were unexplainable, but intelligent people thought they were capable of providing convincing explanations for them -- after the fact. Furthermore, the more intelligent the person, the better sounding the explanation. What’s more worrisome is that all these beliefs and accounts appeared to be logically coherent and devoid of inconsistencies.

History and societies do not crawl. They make jumps. They go from fracture to fracture, with a few vibrations in between. Yet we (and historians) like to believe in the predictable, small incremental progression. We are just a great machine for looking backward; humans are great at self-delusion.

Events present themselves to us in a distorted way. Consider the nature of information: of the millions, maybe even trillions, of small facts that prevail before an event occurs, only a few will turn out to be relevant later to your understanding of what happened. Because your memory is limited and filtered, you will be inclined to remember those data that subsequently match the facts.

Very intelligent and informed persons were at no advantage over cabdrivers in their predictions, but there was a crucial difference. Cabdrivers did not believe that they understood as much as learned people -- they were not the experts and they knew it. Nobody knew anything, but elite thinkers thought that they knew more than the rest because they were elite thinkers, and if you’re a member of the elite, you automatically know more than the nonelite.

I also noticed during the Lebanese war that journalists tended to cluster not necessarily around the same opinions but frequently around the same framework of analyses. They assign the same importance to the same sets of circumstances and cut reality into the same categories -- once again the manifestation of Platonicity, the desire to cut reality into crisp shapes.

Categorizing is necessary for humans, but it becomes pathological when the category is seen as definitive, preventing people from considering the fuzziness of boundaries, let alone revising their categories. Contagion was the culprit.

Categorizing always produces reduction in true complexity. It is a manifestation of the Black Swan generator, that unshakable Platonicity. Any reduction of the world around us can have explosive consequences since it rules out some sources of uncertainty; it drives us to a misunderstanding of the fabric of the world.

During the one or two years after my arrival at Wharton, I had developed a precise but strange specialty: betting on rare and unexpected events, those that were on the Platonic fold, and considered “inconceivable” by the Platonic “experts.” Recall that the Platonic fold is where our representation of reality ceases to apply -- but we do not know it.

I was convinced that I was totally incompetent in predicting market prices -- but that others were generally incompetent also but did not know it, or did not know that they were taking massive risks.

Chapter 2

Taleb discusses a neuroscientist named Yevgenia and her book A Story of Recursion. She published her book on the web and was discovered by a small publishing company; they published her unedited work and the book became an international bestseller. The small publishing firm became a big corporation, and Yevgenia became famous. This incident is described as a Black Swan event. (Yevgenia is a fictional character.)

Chapter 3 – Winner-Take-All Effect

Taleb introduces the concepts of Extremistan and Mediocristan. He uses them as guides to describe how predictable the environment one is studying is. In Mediocristan environments, the Gaussian distribution can safely be used. In Extremistan environments, a Gaussian distribution is used at one's peril (later in the book he discusses how power-law distributions are more appropriate in many cases).

A second-year Wharton student told me to get a profession that is “scalable,” that is, one in which you are not paid by the hour and thus subject to the limitations of the amount of your labor. It was a very simple way to discriminate among professions and, from that, to generalize a separation between types of uncertainty -- and it led me to the major philosophical problem, the problem of induction, which is the technical name for the Black Swan. (As used here, induction is the inference of a general law from specifics).

In a scalable job, if you are an idea person, you do not have to work hard, only think intensely. You do the same work whether you produce a hundred units or a thousand.

So the distinction between writer and baker, speculator and doctor, fraudster and prostitute, is a helpful way to look at the world of activities. It separates those professions in which one can add zeroes of income with no greater labor from those in which one needs to add labor and time (both of which are in limited supply) -- in other words, those subjected to gravity.

But….

I would recommend someone pick a profession that is not scalable. Scalable professions are good only if you are successful; they are more competitive, produce monstrous inequalities, and are far more random, with huge disparities between efforts and rewards -- a few can take a large share of the pie, leaving others out entirely at no fault of their own. One category of profession is driven by the mediocre, the average, and the middle-of-the-road. In it, the mediocre is collectively consequential. (This is Mediocristan.) The other has either giants or dwarves -- more precisely, a very small number of giants and a huge number of dwarves. (This is Extremistan.)

In the arts -- say the cinema -- things are far more vicious. What we call “talent” generally comes from success, rather than its opposite. A great deal of empiricism has been done on the subject. Art De Vany studied the wild uncertainty in the movies and showed that much of what we ascribe to skills is an after-the-fact attribution. The movie makes the actor, he claims -- and a large dose of nonlinear luck makes the movie…. This discussion shows the difficulty in predicting outcomes in an environment of concentrated success.

America is currently far, far more creative than nations of museumgoers and equation solvers. It is also far more tolerant of bottom-up tinkering and undirected trial and error. And globalization has allowed the United States to specialize in the creative aspect of things, the production of concepts and ideas, that is, the scalable part of the products, and, increasingly, by exporting jobs, separate the less scalable components and assign them to those happy to be paid by the hour. There is more money in designing a shoe than in actually making it: Nike, Dell, and Boeing can get paid for just thinking, organizing, and leveraging their know-how and ideas while subcontracted factories in developing countries do the grunt work and engineers in cultured and mathematical states do the noncreative technical grind. The American economy has leveraged itself heavily on the idea generation, which explains why losing manufacturing jobs can be coupled with a rising standard of living. Clearly the drawback of a world economy where the payoff goes to ideas is higher inequality among the idea generators together with a greater role for both opportunity and luck.

The supreme law of Mediocristan is as follows: When your sample is large, no single instance will significantly change the aggregate or the total. The largest observation will remain impressive, but eventually insignificant, to the sum. (This is a description of the Gaussian or normal distribution.)

In Extremistan, inequalities are such that one single observation can disproportionately impact the aggregate, or the total. So while weight, height, and calorie consumption are from Mediocristan, wealth is not. Almost all social matters are from Extremistan (and therefore do not conform to the normal distribution).

Extremistan can produce Black Swans, and does, since a few occurrences have had huge influences on history. This is the main idea of this book.

If you are dealing with quantities from Extremistan, you will have trouble figuring out the average from any sample since it can depend so much on one single observation. The idea is not more difficult than that. In Extremistan, one unit can easily affect the total in a disproportionate way. In this world, you should always be suspicious of the knowledge you derive from data. This is a very simple test of uncertainty that allows you to distinguish between the two kinds of randomness (Mediocristan and Extremistan).

What you can know from data in Mediocristan augments very rapidly with the supply of information. But knowledge in Extremistan grows slowly and erratically with the addition of data, some of it extreme, possibly at an unknown rate.

Matters that seem to belong to Mediocristan (subjected to what we call type 1 randomness): height, weight, calorie consumption, income for a baker, a small restaurant owner, a prostitute, or an orthodontist; gambling profits (in the very special case, assuming the person goes to a casino and maintains a constant betting size), car accidents, mortality rates, “IQ” (as measured).

Matters that seem to belong to Extremistan (subjected to what we call type 2 randomness): wealth, income, book sales per author, book citations per author, name recognition as a “celebrity,” number of references on Google, populations of cities, uses of words in a vocabulary, numbers of speakers per language, damage caused by earthquakes, deaths in war, deaths from terrorist incidents, sizes of planets, sizes of companies, stock ownership, height between species (consider elephants and mice), financial markets (but your investment manager does not know it), commodity prices, inflation rates, economic data. The Extremistan list is much longer than the prior one.
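
Here is a minimal simulation sketch (mine, not the book's) of the "supreme law" above, using Gaussian heights as a stand-in for Mediocristan and Pareto-distributed wealth as a stand-in for Extremistan; the specific parameters are assumptions chosen only for illustration.

# Minimal sketch (not from the book): share of the sample total contributed by
# the single largest observation, Mediocristan vs. Extremistan.
import random

random.seed(1)
n = 10_000
heights = [random.gauss(170, 10) for _ in range(n)]     # Mediocristan-like (cm)
wealth = [random.paretovariate(1.1) for _ in range(n)]  # Extremistan-like (heavy tail)

for name, xs in (("height", heights), ("wealth", wealth)):
    print(f"{name}: largest single observation is {max(xs) / sum(xs):.4%} of the total")

No single height moves the total; a single fortune can account for a noticeable slice of it, which is why sample averages from Extremistan are so untrustworthy.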

Mediocristan is where we must endure the tyranny of the collective, the routine, the obvious, and the predicted; Extremistan is where we are subjected to the tyranny of the singular, the accidental, the unseen, and the unpredicted.

Extremistan does not always imply Black Swans. Some events can be rare and consequential, but somewhat predictable, particularly to those who are prepared for them and have the tools to understand them (instead of listening to statisticians, economists, and charlatans of the bell-curve variety).

You can still experience severe Black Swans in Mediocristan, though not easily. How? You may forget that something is random, think that it is deterministic, then have a surprise. Or you can "tunnel" and miss a source of uncertainty, whether mild or wild, owing to lack of imagination -- most Black Swans result from this “tunneling” disease.

Chapter 4 - More on Mediocristan and Extremistan

Taleb ties together the topics discussed earlier in the narrative through the story of a turkey. He uses it to illustrate the philosophical problem of induction (the inference of a general law from particular instances) and how past performance is no indicator of future performance. He then takes the reader into the history of skepticism.

There are traps built into any kind of knowledge gained from observation.

This chapter will outline the Black Swan problem in its original form: How can we know the future, given knowledge of the past; or, more generally, how can we figure out properties of the (infinite) unknown based on the (finite) known? What can a turkey learn about what is in store for it tomorrow from the events of yesterday? A lot, perhaps, but certainly a little less than it thinks, and it is just that “little less” that may make all the difference.

Induction’s most worrisome aspect: learning backward.

Something has worked in the past, until -- well, it unexpectedly no longer does, and what we have learned from the past turns out to be at best irrelevant or false, at worst viciously misleading.

We worry too late -- ex post. Mistaking a naïve observation of the past as something definitive or representative of the future is the one and only cause of our inability to understand the Black Swan.

Those who believe in the unconditional benefits of past experience should consider this pearl of wisdom allegedly voiced by a famous ship’s captain:

But in all my experience, I have never been in any accident… of any sort worth speaking about. I have seen but one vessel in distress in all my years at sea. I never saw a wreck and never have been wrecked nor was I ever in any predicament that threatened to end in disaster of any sort. E. J. Smith, 1907, Captain, RMS Titanic

Regarding banks … I have no problem with risk taking, just please, please, do not call yourself conservative and act superior to other businesses who are not as vulnerable to Black Swans.


Taleb tells the story of a turkey that eats and lives for 1,000 days and on the 1,001st day is killed by the butcher.


From the standpoint of the turkey, the nonfeeding of the one thousand and first day is a Black Swan. For the butcher, it is not, since its occurrence is not unexpected. So you can see here that the Black Swan is a sucker’s problem. In other words, it occurs relative to your expectation. You realize that you can eliminate a Black Swan by science (if you’re able), or by keeping an open mind. Of course, you can create Black Swans with science, by giving people confidence that the Black Swan cannot happen -- this is when science turns normal citizens into suckers.
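
Here is a minimal sketch (mine, not Taleb's) of the turkey's inference, using a simple Laplace-style rule of succession as a stand-in for whatever the turkey's "risk department" computes.

# Minimal sketch (not from the book): the turkey's confidence that it will be
# fed tomorrow, estimated naively from its own feeding history.
def confidence_fed_tomorrow(days_fed):
    # Laplace rule of succession: (successes + 1) / (trials + 2)
    return (days_fed + 1) / (days_fed + 2)

for day in (1, 10, 100, 1000):
    print(f"after day {day}: estimated probability of being fed = {confidence_fed_tomorrow(day):.4f}")
# On day 1,001 the butcher shows up; the estimate was at its maximum
# precisely when the risk was greatest.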

Matters should be seen on some relative, not absolute, timescale: earthquakes last minutes, 9/11 lasted hours, but historical changes and technological implementations are Black Swans that can take decades. In general, positive Black Swans take time to show their effect while negative ones happen very quickly -- it is much easier and much faster to destroy than to build.

This turkey problem (a.k.a. the problem of induction) is a very old one, but for some reason it is likely to be called “Hume’s problem” by your local philosophy professor.

While the ancient skeptics advocated learned ignorance as the first step in honest inquiries toward truth, later medieval skeptics, both Moslems and Christians, used skepticism as a tool to avoid accepting what today we call science.

I am interested in deeds and true empiricism. So, this book was not written by a Sufi mystic, or even by a skeptic in the ancient or medieval sense, or even (we will see) in a philosophical sense, but by a practitioner whose principal aim is to not be a sucker in things that matter, period.

All I will be showing you in this book is how to avoid crossing the street blindfolded.

I have just presented the Black Swan problem in its historical form: the central difficulty of generalizing from available information, or of learning from the past, the known, and the seen. You can see that it is extremely convenient for us to assume that we live in Mediocristan. Why? Because it allows you to rule out these Black Swan surprises! The Black Swan problem either does not exist or is of small consequence if you live in Mediocristan. Such an assumption magically drives away the problem of induction, which since Sextus Empiricus has been plaguing the history of thinking. The statistician can do away with epistemology. Wishful thinking! We do not live in Mediocristan, so the Black Swan needs a different mentality. As we cannot push the problem under the rug, we will have to dig deeper into it. This is not a terminal difficulty -- and we can even benefit from it.

Now, there are other themes arising from our blindness to the Black Swan:

- We focus on preselected segments of the seen and generalize from it to the unseen: the error of confirmation.

- We fool ourselves with stories that cater to our Platonic thirst for distinct patterns: the narrative fallacy.

- We behave as if the Black Swan does not exist: human nature is not programmed for Black Swans.

- What we see is not necessarily all that is there. History hides Black Swans from us and gives us a mistaken idea about the odds of these events: this is the distortion of silent evidence.

- We “tunnel”: that is, we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of the others that do not easily come to mind).

The main tragedy of the high impact-low probability event comes from the mismatch between the time taken to compensate someone and the time one needs to be comfortable that he is not making a bet against the rare event. People have an incentive to bet against it, or to game the system since they can be paid a bonus reflecting their yearly performance when in fact all they are doing is producing illusory profits that they will lose back one day. Indeed, the tragedy of capitalism is that since the quality of the returns is not observable from past data, owners of companies, namely shareholders, can be taken for a ride by the managers who show returns and cosmetic profitability but in fact might be taking hidden risks.

Chapter 5 – Confirmation Bias

Taleb introduces the round-trip fallacy, the human tendency to conflate two similar-sounding statements that in fact convey very different information. Taleb uses the round-trip fallacy primarily to underscore his point that no evidence of something -- say, a black swan -- is not the same as evidence of no black swans. Lack of precedent for an occurrence (an earthquake hitting Washington, D.C., perhaps) is not the same thing as evidence that that thing will not occur. He also points to "our natural tendency to look only for corroboration." Corroborating evidence is always readily abundant, and by focusing on those things that support our view we ignore a wealth of information that does not support it. It is more valuable, and yields more conclusive findings, to look for evidence that does not support our line of reasoning rather than evidence that does.

“There is no evidence of Black Swans” and “There is evidence of no Black Swans.” I call this confusion the round-trip fallacy, since these statements are not interchangeable.

Unless we concentrate very hard, we are likely to unwittingly simplify the problem because our minds routinely do so without our knowing it.

This problem is chronic: if you tell people that the key to success is not always skills, they think that you are telling them that it is never skills, always luck. Our inferential machinery, that which we use in daily life, is not made for a complicated environment in which a statement changes markedly when its wording is slightly modified. Consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers. There is an error here, but it is almost inconsequential. Our statistical intuitions have not evolved for a habitat in which these subtleties can make a big difference.

This inability to automatically transfer knowledge and sophistication from one situation to another, or from theory to practice, is a quite disturbing attribute of human nature. Let us call it the domain specificity of our reactions. By domain-specific I mean that our reactions, our mode of thinking, our intuitions, depend on the context in which the matter is presented, what evolutionary psychologists call the “domain” of the object or the event. The classroom is a domain; real life is another. We react to a piece of information not on its logical merit, but on the basis of which framework surrounds it, and how it registers with our social-emotional system. Logical problems approached one way in the classroom might be treated differently in daily life. Indeed they are treated differently in daily life. Knowledge, even when it is exact, does not often lead to appropriate actions because we tend to forget what we know, or forget how to process it properly if we do not pay attention, even when we are experts.

By a mental mechanism I call naïve empiricism, we have a natural tendency to look for instances that confirm our story and our vision of the world -- these instances are always easy to find. Alas, with tools, and fools, anything can be easy to find. You take past instances that corroborate your theories and you treat them as evidence. For instance, a diplomat will show you his “accomplishments,” not what he failed to do. Mathematicians will try to convince you that their science is useful to society by pointing out instances where it proved helpful, not those where it was a waste of time, or, worse, those numerous mathematical applications that inflicted a severe cost on society owing to the highly unempirical nature of elegant mathematical theories.

A series of corroborative facts is not necessarily evidence. Seeing white swans does not confirm the nonexistence of black swans.

We can get closer to the truth by negative instances, not by verification. It is misleading to build a general rule from observed facts. Contrary to conventional wisdom, our body of knowledge does not increase from a series of confirmatory observations, like the turkey’s. But there are some things I can remain skeptical about, and others I can safely consider certain. This makes the consequences of observations one-sided. It is not much more difficult than that.

Popper’s far more powerful and original idea is the “open” society, one that relies on skepticism as a modus operandi, refusing and resisting definitive truths. He accused Plato of closing our minds, according to the arguments I described in the Prologue. But Popper’s biggest idea was his insight concerning the fundamental, severe, and incurable unpredictability of the world.

It remains the case that you know what is wrong with a lot more confidence than you know what is right. All pieces of information are not equal in importance.

Cognitive scientists have studied our natural tendency to look only for corroboration; they call this vulnerability to the corroboration error the confirmation bias.

The notion of corroboration is rooted in our intellectual habits and discourse.

It seems that we are endowed with specific and elaborate inductive instincts showing us the way. Contrary to the opinion held by the great David Hume, and that of the British empiricist tradition, that belief arises from custom, as they assumed that we learn generalizations solely from experience and empirical observations, it was shown from studies of infant behavior that we come equipped with mental machinery that causes us to selectively generalize from experiences (i.e., to selectively acquire inductive learning in some domains but remain skeptical in others). By doing so, we are not learning from a mere thousand days, but benefiting, thanks to evolution, from the learning of our ancestors -- which found its way into our biology.

Back to Mediocristan

And we may have learned things wrong from our ancestors. I speculate here that we probably inherited the instincts adequate for survival in the East African Great Lakes region where we presumably hail from, but these instincts are certainly not well adapted to the present, post-alphabet, intensely informational, and statistically complex environment.


Indeed our environment is a bit more complex than we (and our institutions) seem to realize. How? The modern world, being Extremistan, is dominated by rare -- very rare -- events. It can deliver a Black Swan after thousands and thousands of white ones, so we need to withhold judgment for longer than we are inclined to. It is impossible -- biologically impossible -- to run into a human several hundred miles tall, so our intuitions rule these events out. But the sales of a book or the magnitude of social events do not follow such strictures. It takes a lot more than a thousand days to accept that a writer is ungifted, a market will not crash, a war will not happen, a project is hopeless, a country is “our ally,” a company will not go bust, a brokerage-house security analyst is not a charlatan, or a neighbor will not attack us. In the distant past, humans could make inferences far more accurately and quickly. Furthermore, the sources of Black Swans today have multiplied beyond measurability. In the primitive environment they were limited to newly encountered wild animals, new enemies, and abrupt weather changes. These events were repeatable enough for us to have built an innate fear of them. This instinct to make inferences rather quickly, and to “tunnel” (i.e., focus on a small number of sources of uncertainty, or causes of known Black Swans) remains rather ingrained in us. This instinct, in a word, is our predicament.

Chapter 6 – The Narrative Fallacy

Taleb explores our inclination to assign a narrative of causality to a collection of otherwise random occurrences.

We like stories, we like to summarize, and we like to simplify, i.e., to reduce the dimension of matters. One of the problems of human nature is what I call the narrative fallacy. The fallacy is associated with our vulnerability to overinterpretation and our predilection for compact stories over raw truths. It severely distorts our mental representation of the world; it is particularly acute when it comes to the rare event.

The narrative fallacy addresses our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship, upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense. Where this propensity can go wrong is when it increases our impression of understanding.

Summary: in studying the problem of induction in the previous chapter, we examined what could be inferred about the unseen, what lies outside our information set. Here, we look at the seen, what lies within the information set, and we examine the distortions in the act of processing it. The angle I take concerns narrativity’s simplification of the world around us and its effects on our perception of the Black Swan and wild uncertainty.

It takes considerable effort to see facts (and remember them) while withholding judgment and resisting explanations. And this theorizing disease is rarely under our control: it is largely anatomical, part of our biology, so fighting it requires fighting one’s own self. So the ancient skeptics’ precepts to withhold judgment go against our nature.

From an anatomical perspective, it is impossible for our brain to see anything in raw form without some interpretation. We may not even always be conscious of it.

There appears to be a sense-making organ in us -- though it may not be easy to zoom in on it with any precision.

We have a biological basis for this tendency toward causality.

Our propensity to impose meaning and concepts blocks our awareness of the details making up the concept.

Why is it hard to avoid interpretation? It is key that brain functions often operate outside our awareness. You interpret pretty much as you perform other activities deemed automatic and outside your control, like breathing.

Our minds are like inmates, captive to our biology, unless we manage a cunning escape. It is the lack of our control of such inferences that I am stressing.

There is another, even deeper reason for our inclination to narrate, and it is not psychological. It has to do with the effect of order on information storage and retrieval in any system, and it’s worth explaining here because of what I consider the central problems of probability and information theory. The first problem is that information is costly to obtain. The second problem is that information is also costly to store -- like real estate in New York. The more orderly, less random, patterned, and narratized a series of words or symbols, the easier it is to store that series in one’s mind or jot it down in a book so your grandchildren can read it someday. Finally, information is costly to manipulate and retrieve.

Consider that our working memory has difficulty holding a mere phone number longer than seven digits.

We, members of the human variety of primates, have a hunger for rules because we need to reduce the dimension of matters so they can get into our heads. The more random information is, the greater the dimensionality, and thus the more difficult to summarize. The more you summarize, the more order you put in, the less randomness. Hence the same condition that makes us simplify pushes us to think that the world is less random than it actually is. And the Black Swan is what we leave out of simplification. Both the artistic and scientific enterprises are the product of our need to reduce dimensions and inflict some order on things. Think of the world around you, laden with trillions of details. Try to describe it and you will find yourself tempted to weave a thread into what you are saying. A novel, a story, a myth, or a tale, all have the same function: they spare us from the complexity of the world and shield us from its randomness. Myths impart order to the disorder of human perception and the perceived “chaos of human experience.” Indeed, many severe psychological disorders accompany the feeling of loss of control of -- being able to “make sense” of -- one’s environment.

Our tendency to perceive -- to impose -- narrativity and causality are symptoms of the same disease -- dimension reduction. Moreover, like causality, narrativity has a chronological dimension and leads to the perception of the flow of time. Causality makes time flow in a single direction, and so does narrativity. But memory and the arrow of time can get mixed up. Narrativity can viciously affect the remembrance of past events as follows: we will tend to more easily remember those facts from our past that fit a narrative, while we tend to neglect others that do not appear to play a causal role in that narrative. Consider that we recall events in our memory all the while knowing the answer of what happened subsequently. It is literally impossible to ignore posterior information when solving a problem. This simple inability to remember not the true sequence of events but a reconstructed one will make history appear in hindsight to be far more explainable than it actually was -- or is.

The predisposition to impose narratives is an ingrained, biologically based behavior that helps the brain assimilate information. By creating patterns and, by extension, narratives, we are able to condense a collection of individual details into a single, unified story.

By a process called reverberation, a memory corresponds to the strengthening of connections from an increase of brain activity in a given sector of the brain -- the more activity, the stronger the memory. While we believe that the memory is fixed, constant, and connected, all this is very far from truth. What makes sense according to information obtained subsequently will be remembered more vividly. We invent some of our memories -- a sore point in courts of law since it has been shown that plenty of people have invented child-abuse stories by dint of listening to theories.

If you work in a randomness-laden profession, as we see, you are likely to suffer burnout effects from that constant second-guessing of your past actions in terms of what played out subsequently. Keeping a diary is the least you can do in these circumstances.

We harbor a crippling dislike for the abstract. Whenever there is a market move, the news media feel obligated to give the “reason.” A cause is proposed to make you swallow the news and make matters more concrete.

We want to be told stories, and there is nothing wrong with that -- except that we should check more thoroughly whether the story provides consequential distortions of reality.

How is it that some Black Swans are overblown in our minds when the topic of this book is that we mainly neglect Black Swans? The answer is that there are two varieties of rare events: (a) the narrated Black Swans, those that are present in the current discourse and that you are likely to hear about on television, and (b) those nobody talks about, since they escape models -- those that you would feel ashamed discussing in public because they do not seem plausible. It is entirely compatible with human nature that the incidences of Black Swans would be overestimated in the first case, but severely underestimated in the second one.

We learn from repetition -- at the expense of events that have not happened before. Events that are nonrepeatable are ignored before their occurrence, and overestimated after (for a while). After a Black Swan, such as September 11, 2001, people expect it to recur when in fact the odds of that happening have arguably been lowered. We like to think about specific and known Black Swans when in fact the very nature of randomness lies in its abstraction.

We are effectively not skilled at intuitively gauging the impact of the improbable, such as the contribution of a blockbuster to total book sales. In one experiment, subjects underestimated the effect of a rare event by a factor of thirty-three.

These researchers have mapped our activities into (roughly) a dual mode of thinking, which they separate as “System 1” and “System 2,” or the experiential and the cogitative. The distinction is straightforward. System 1, the experiential one, is effortless, automatic, fast, opaque (we do not know that we are using it), parallel-processed, and can lend itself to errors. It is what we call “intuition,” and performs these quick acts of prowess that became popular under the name blink, after the title of Malcolm Gladwell’s bestselling book. System 1 is highly emotional, precisely because it is quick. It produces shortcuts, called “heuristics,” that allow us to function rapidly and effectively. Dan Goldstein calls these heuristics “fast and frugal.” Others prefer to call them “quick and dirty.” Now, these shortcuts are certainly virtuous, since they are rapid, but, at times, they can lead us into some severe mistakes. This main idea generated an entire school of research called the heuristics and biases approach (heuristics corresponds to the study of shortcuts, biases stand for mistakes). System 2, the cogitative one, is what we normally call thinking. It is what you use in a classroom, as it is effortful, reasoned, slow, logical, serial, progressive, and self-aware (you can follow the steps in your reasoning). It makes fewer mistakes than the experiential system, and, since you know how you derived your result, you can retrace your steps and correct them in an adaptive manner. Most of our mistakes in reasoning come from using System 1 when we are in fact thinking that we are using System 2. How? Since we react without thinking and introspection, the main property of System 1 is our lack of awareness of using it!

Emotions are assumed to be the weapon System 1 uses to direct us and force us to act quickly. It mediates risk avoidance far more effectively than our cognitive system. Indeed, neurobiologists who have studied the emotional system show how it often reacts to the presence of danger long before we are consciously aware of it -- we experience fear and start reacting a few milliseconds before we realize that we are facing a snake. Much of the trouble with human nature resides in our inability to use much of System 2, or to use it in a prolonged way without having to take a long beach vacation. In addition, we often just forget to use it.

Beware the Brain: Note that neurobiologists make, roughly, a similar distinction to that between System 1 and System 2, except that they operate along anatomical lines. Their distinction differentiates between parts of the brain, the cortical part, which we are supposed to use for thinking, and which distinguishes us from other animals, and the fast-reacting limbic brain, which is the center of emotions, and which we share with other mammals.

How to Avert the Narrative Fallacy:

Our misunderstanding of the Black Swan can be largely attributed to our using System 1, i.e., narratives, and the sensational -- as well as the emotional -- which imposes on us a wrong map of the likelihood of events. On a day-to-day basis, we are not introspective enough to realize that we understand what is going on a little less than warranted from a dispassionate observation of our experiences. We also tend to forget about the notion of Black Swans immediately after one occurs -- since they are too abstract for us -- focusing, rather, on the precise and vivid events that easily come to our minds. We do worry about Black Swans, just the wrong ones. Let me bring Mediocristan into this. In Mediocristan, narratives seem to work -- the past is likely to yield to our inquisition. But not in Extremistan, where you do not have repetition, and where you need to remain suspicious of the sneaky past and avoid the easy and obvious narrative.

The way to avoid the ills of the narrative fallacy is to favor experimentation over storytelling, experience over history, and clinical knowledge over theories. Certainly the newspaper cannot perform an experiment, but it can choose one report over another -- there is plenty of empirical research to present and interpret from -- as I am doing in this book. Being empirical does not mean running a laboratory in one’s basement: it is just a mind-set that favors a certain class of knowledge over others. I do not forbid myself from using the word cause, but the causes I discuss are either bold speculations (presented as such) or the result of experiments, not stories.

Chapter 7 – Our Preference for the Linear

Taleb explores the contradiction between pursuing activities that depend on Black Swans -- endeavoring to become a bestselling novelist, for example -- and what Taleb asserts is a biological need for tangible, regular results. Taleb further posits that a series of small rewards often brings greater happiness than a single, extreme reward. Here Taleb also draws the distinction between linear, incremental results and nonlinear results that occur in leaps and bounds. While we might prefer to believe that the world operates in a linear fashion, Taleb assures us this is not so. Says Taleb, “Nonlinear relationships are ubiquitous in life. Linear relationships are truly the exception; we focus on them in classrooms and textbooks because they are easier to understand (p. 89).”

Intellectual, scientific, and artistic activities belong to the province of Extremistan, where there is a severe concentration of success, with a very small number of winners claiming a large share of the pot.

The world has changed too fast for our genetic makeup. We are alienated from our environment.

Our intuitions are not cut out for nonlinearities. Consider our life in a primitive environment where process and result are closely connected. You are thirsty; drinking brings you adequate satisfaction. Or even in a not-so-primitive environment, when you engage in building, say, a bridge or a stone house, more work will lead to more apparent results, so your mood is propped up by visible continuous feedback.

Our emotional apparatus is designed for linear causality.

These nonlinear relationships are ubiquitous in life. Linear relationships are truly the exception; we only focus on them in classrooms and textbooks because they are easier to understand. Yesterday afternoon I tried to take a fresh look around me to catalog what I could see during my day that was linear. I could not find anything, no more than someone hunting for squares or triangles could find them in the rain forest -- or any more than someone looking for bell-shape randomness finding it in socioeconomic phenomena.

We favor the sensational and the extremely visible. This affects the way we judge heroes. There is little room in our consciousness for heroes who do not deliver visible results -- or those heroes who focus on process rather than results.

Some blindness to the odds or an obsession with their own positive Black Swan is necessary for entrepreneurs to function.

Our happiness depends far more on the number of instances of positive feelings, what psychologists call “positive affect,” than on their intensity when they hit. In other words, good news is good news first; how good matters rather little. So to have a pleasant life you should spread these small “affects” across time as evenly as possible. Plenty of mildly good news is preferable to one single lump of great news.

Mother Nature destined us to derive enjoyment from a steady flow of pleasant small, but frequent, rewards.

It is unfortunate that the right strategy for our current environment may not offer internal rewards and positive feedback. The same property in reverse applies to our unhappiness. It is better to lump all your pain into a brief period rather than have it spread out over a longer one. But some people find it possible to transcend the asymmetry of pains and joys, escape the hedonic deficit, set themselves outside that game -- and live with hope. There is some good news, as we see next.

One of the attributes of a Black Swan is an asymmetry in consequences -- either positive or negative.

It may be a banality that we need others for many things, but we need them far more than we realize, particularly for dignity and respect. Indeed, we have very few historical records of people who have achieved anything extraordinary without such peer validation -- but we have the freedom to choose our peers.


If you engage in a Black Swan–dependent activity, it is better to be part of a group.

Let us separate the world into two categories. Some people are like the turkey, exposed to a major blowup without being aware of it, while others play reverse turkey, prepared for big events that might surprise others. In some strategies and life situations, you gamble dollars to win a succession of pennies while appearing to be winning all the time. In others, you risk a succession of pennies to win dollars. In other words, you bet either that the Black Swan will happen or that it will never happen, two strategies that require completely different mind-sets.

We have seen that we (humans) have a marked preference for making a little bit of income at a time.

Some business bets in which one wins big but infrequently, yet loses small but frequently, are worth making if others are suckers for them and if you have the personal and intellectual stamina. But you need such stamina.

[He] discovered that the losses went to his emotional brain, bypassing his higher cortical structures and slowly affecting his hippocampus and weakening his memory. The hippocampus is the structure where memory is supposedly controlled. It is the most plastic part of the brain; it is also the part that is assumed to absorb all the damage from repeated insults like the chronic stress we experience daily from small doses of negative feelings -- as opposed to the invigorating “good stress” of the tiger popping up occasionally in your living room. You can rationalize all you want; the hippocampus takes the insult of chronic stress seriously, incurring irreversible atrophy. Contrary to popular belief, these small, seemingly harmless stressors do not strengthen you; they can amputate part of your self.


Chapter 8 – The Distortion Bias (Silent Evidence)

Taleb introduces the concept of silent evidence. Silent evidence emphasizes what is not known over what is. Essentially, silent evidence comprises those instances that do not produce a Black Swan and thus do not receive acknowledgement. As an example, Taleb points to the many talented writers who never get their big break and whose work, therefore, is never inducted into the literary canon. Since such works are generally inaccessible, we tend to discount their relevance and focus solely on the Black Swan works that did, through some combination of talent and luck, secure their place in literature. In essence we tend to give disproportionate weight to the stories of those who succeed in some manner or another -- by making a medical breakthrough, by becoming a millionaire, or, in some cases, by simply surviving. The tendency to ignore silent evidence (the failures), Taleb says, results in a distortion bias, “the difference between what you see and what is there” (p. 102).

Another fallacy in the way we understand events is that of silent evidence. History hides both Black Swans and its Black Swan–generating ability from us.

The drowned worshippers, being dead, would have a lot of trouble advertising their experiences from the bottom of the sea. This can fool the casual observer into believing in miracles.

This bias extends to the ascription of factors in the success of ideas and religions, to the illusion of skill in many professions, to success in artistic occupations, to the nature versus nurture debate, to mistakes in using evidence in the court of law, to illusions about the “logic” of history -- and of course, most severely, in our perception of the nature of extreme events.

It is a problem with the way we construct samples and gather evidence in every domain. We shall call this distortion a bias, i.e., the difference between what you see and what is there. By bias I mean a systematic error consistently showing a more positive, or negative, effect from the phenomenon, like a scale that unfailingly shows you a few pounds heavier or lighter than your true weight, or a video camera that adds a few sizes to your waistline. This bias has been rediscovered here and there throughout the past century across disciplines, often to be rapidly forgotten. As drowned worshippers do not write histories of their experiences (it is better to be alive for that), so it is with the losers in history, whether people or ideas. Remarkably, historians and other scholars in the humanities who need to understand silent evidence the most do not seem to have a name for it (and I looked hard). As for journalists, fuhgedaboudit! They are industrial producers of the distortion. The term bias also indicates the condition’s potentially quantifiable nature: you may be able to calculate the distortion, and to correct for it by taking into account both the dead and the living, instead of only the living. Silent evidence is what events use to conceal their own randomness, particularly the Black Swan type of randomness.

The neglect of silent evidence is endemic to the way we study comparative talent, particularly in activities that are plagued with winner-take-all attributes. We may enjoy what we see, but there is no point reading too much into success stories because we do not see the full picture.

Consider the number of actors who have never passed an audition but would have done very well had they had that lucky break in life.

I mentioned earlier that to understand successes and analyze what caused them, we need to study the traits present in failures. It is to a more general version of this point that I turn next.

The entire notion of biography is grounded in the arbitrary ascription of a causal relation between specified traits and subsequent events. Now consider the cemetery. The graveyard of failed persons will be full of people who shared the following traits: courage, risk taking, optimism, et cetera. Just like the population of millionaires. There may be some differences in skills, but what truly separates the two is for the most part a single factor: luck. Plain luck.

Recall the distinction between Mediocristan and Extremistan. I said that taking a “scalable” profession is not a good idea, simply because there are far too few winners in these professions. Well, these professions produce a large cemetery: the pool of starving actors is larger than the one of starving accountants, even if you assume that, on average, they earn the same income.

There is a vicious attribute to the bias: it can hide best when its impact is largest.

A ramification of the idea concerns our decision making under a cloud of possibilities. We see the obvious and visible consequences, not the invisible and less obvious ones. Yet those unseen consequences can be -- nay, generally are -- more meaningful.

Often an action’s positive consequences benefit only its author, since they are visible, while the negative consequences, being invisible, apply to others, with a net cost to society.

This brings us to the gravest of all manifestations of silent evidence, the illusion of stability. The bias lowers our perception of the risks we incurred in the past, particularly for those of us who were lucky to have survived them. Your life came under a serious threat but, having survived it, you retrospectively underestimate how risky the situation actually was.

Consider the restaurant business in a competitive place like New York City. One has indeed to be foolish to open one, owing to the enormous risks involved and the harrying quantity of work to get anywhere in the business, not counting the finicky fashion-minded clients. The cemetery of failed restaurants is very silent: walk around Midtown Manhattan and you will see these warm patron-filled restaurants with limos waiting outside for the diners to come out with their second, trophy, spouses. The owner is overworked but happy to have all these important people patronize his eatery. Does this mean that it makes sense to open a restaurant in such a competitive neighborhood? Certainly not, yet people do it out of the foolish risk-taking trait that pushes us to jump into such adventures blinded by the outcome.

Casanova Bias - Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility. This can lead to false conclusions in several different ways. It is a form of selection bias.

The reference point argument is as follows: do not compute odds from the vantage point of the winning gambler (or the lucky Casanova, or the endlessly bouncing back New York City, or the invincible Carthage), but from all those who started in the cohort.
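
To make the reference-point idea concrete, here is a minimal simulation sketch (the odds and payoffs are hypothetical, not from the book): a cohort of gamblers each plays a slightly losing game, and we compare the average outcome among the survivors with the average across everyone who started.

import random

random.seed(1)

def run_gambler(n_rounds=100, start=100.0):
    # One gambler: each round wins +10 or loses -11 with equal odds -- a slightly
    # negative edge. A gambler whose bankroll hits zero drops out of the visible record.
    bankroll = start
    for _ in range(n_rounds):
        bankroll += 10.0 if random.random() < 0.5 else -11.0
        if bankroll <= 0:
            return False, 0.0
    return True, bankroll

cohort = [run_gambler() for _ in range(20_000)]
survivors = [bankroll for survived, bankroll in cohort if survived]

# Vantage point of the winners versus the whole starting cohort
print(f"share that survived:            {len(survivors) / len(cohort):.1%}")
print(f"average bankroll, survivors:    {sum(survivors) / len(survivors):,.0f}")
print(f"average bankroll, whole cohort: {sum(b for _, b in cohort) / len(cohort):,.0f}")

The survivors' average looks healthy even though the game loses money on average; only the full-cohort average tells the real story.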

We are explanation-seeking animals who tend to think that everything has an identifiable cause and grab the most apparent one as the explanation. Yet there may not be a visible because; to the contrary, frequently there is nothing, not even a spectrum of possible explanations. But silent evidence masks this fact. Whenever our survival is in play, the very notion of because is severely weakened. The condition of survival drowns all possible explanations. The Aristotelian “because” is not there to account for a solid link between two items, but rather to cater to our hidden weakness for imparting explanations.

The main identifiable reason for our survival of such diseases might simply be inaccessible to us: we are here since, Casanova-style, the “rosy” scenario played out, and if it seems too hard to understand it is because we are too brainwashed by notions of causality and we think that it is smarter to say "because" than to accept randomness.

I am not saying causes do not exist; do not use this argument to avoid trying to learn from history. All I am saying is that it is not so simple; be suspicious of the “because” and handle it with care -- particularly in situations where you suspect silent evidence.

We have seen several varieties of the silent evidence that cause deformations in our perception of empirical reality, making it appear more explainable (and more stable) than it actually is. In addition to the confirmation error and the narrative fallacy, the manifestations of silent evidence further distort the role and importance of Black Swans. In fact, they cause a gross overestimation at times (say, with literary success), and underestimation at others (the stability of history; the stability of our human species). Our perceptual system may not react to what does not lie in front of our eyes, or what does not arouse our emotional attention. We are made to be superficial, to heed what we see and not heed what does not vividly come to mind. We wage a double war against silent evidence. The unconscious part of our inferential mechanism (and there is one) will ignore the cemetery, even if we are intellectually aware of the need to take it into account. Out of sight, out of mind: we harbor a natural, even physical, scorn of the abstract.

Silent evidence can actually bias matters to look less stable and riskier than they actually are. Take cancer. We are in the habit of counting survival rates from diagnosed cancer cases -- which should overestimate the danger from cancer. Many people develop cancer that remains undiagnosed, and go on to live a long and comfortable life, then die of something else, either because their cancer was not lethal or because it went into spontaneous remission. Not counting these cases biases the risks upward.

Chapter 9 – The Ludic Fallacy

Taleb outlines the multiple topics he previously has described and connects them as a single basic idea. He asserts that all of the topics discussed in Part One are essentially the same: confirmation bias, the narrative fallacy, silent evidence, and so on all underscore the same basic point that human thinking overemphasizes the visible and discounts that which is not immediately obvious. Says Taleb, “It is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not” (p. 131). The ludic fallacy refers to the misguided application of classroom reasoning to a world that is considerably messier and more random than the sterile classroom setting.

Platonicity – the mistake of simplifying complex topics with the result of missing key information


The ludic fallacy is the misuse of games to model real-life situations -- basing the study of chance on the narrow world of games and dice. The fallacy is a rebuttal of the mathematical models used to predict the future, as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. Statistics is applicable only in some domains, for instance casinos, in which the odds are visible and defined. Predictive models are based on Platonified forms, gravitating toward mathematical purity and failing to take various aspects into account:

· It is impossible to be in possession of the entirety of available information.

· Small unknown variations in the data could have a huge impact. Taleb differentiates his idea from that of mathematical notions in chaos theory (e.g., the butterfly effect).

· Theories or models based on empirical data are claimed to be flawed as they may not be able to predict events which are previously unobserved, but have tremendous impact (e.g., the 9/11 terrorist attacks or the invention of the automobile).

In real life you do not know the odds; you need to discover them, and the sources of uncertainty are not defined. Economists, who do not consider what was discovered by noneconomists worthwhile, draw an artificial distinction between Knightian risks (which you can compute) and Knightian uncertainty (which you cannot compute).

What can be mathematized is usually not Gaussian, but Mandelbrotian (fractal, power-law distribution).
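
As a rough illustration of the difference (the parameters below are illustrative, not Taleb's), compare how quickly the tails thin out under a Gaussian versus a power-law (Pareto) distribution:

import math

def gaussian_tail(k):
    # P(X > k) for a standard normal variable, k measured in standard deviations
    return 0.5 * math.erfc(k / math.sqrt(2))

def pareto_tail(k, alpha=1.5, x_min=1.0):
    # P(X > k) for a Pareto (power-law) variable with minimum x_min and tail index alpha
    return (x_min / k) ** alpha if k >= x_min else 1.0

for k in (2, 5, 10, 20):
    print(f"{k:>2} units out:  Gaussian {gaussian_tail(k):.1e}   power law {pareto_tail(k):.1e}")

Twenty units out, the Gaussian assigns essentially zero probability, while the power law still leaves meaningful room for the event -- which is why modeling Extremistan with a bell curve hides the Black Swans.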

By the confirmation error, we use the example of games, which probability theory was successful at tracking, and claim that this is a general case. Furthermore, just as we tend to underestimate the role of luck in life in general, we tend to overestimate it in games of chance. “This building (a casino) is inside the Platonic fold; life stands outside of it,” I wanted to shout.

A back-of-the-envelope calculation shows that the dollar value of these Black Swans, the off-model hits and potential hits I’ve just outlined, swamp the on-model risks by a factor of close to 1,000 to 1. The casino spent hundreds of millions of dollars on gambling theory and high-tech surveillance while the bulk of their risks came from outside their models.

This is also the problem of silent evidence. It is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not. It is why we Platonify, liking known schemas and well-organized knowledge -- to the point of blindness to reality. It is why we fall for the problem of induction, why we confirm. It is why those who “study” and fare well in school have a tendency to be suckers for the ludic fallacy. And it is why we have Black Swans and never learn from their occurrence, because the ones that did not happen were too abstract. We love the tangible, the confirmation, the palpable, the real, the visible, the concrete, the known, the seen, the vivid, the visual, the social, the embedded, the emotionally laden, the salient, the stereotypical, the moving, the theatrical, the romanced, the cosmetic, the official, the scholarly-sounding verbiage (b******t), the pompous Gaussian economist, the mathematicized crap, the pomp, the Académie Française, Harvard Business School, the Nobel Prize, dark business suits with white shirts and Ferragamo ties, the moving discourse, and the lurid. Most of all we favor the narrated. Alas, we are not manufactured, in our current edition of the human race, to understand abstract matters—we need context. Randomness and uncertainty are abstractions. We respect what has happened, ignoring what could have happened. In other words, we are naturally shallow and superficial—and we do not know it. This is not a psychological problem; it comes from the main property of information. The dark side of the moon is harder to see; beaming light on it costs energy. In the same way, beaming light on the unseen is costly in both computational and mental effort.

I propose that if you want a simple step to a higher form of life, as distant from the animal as you can get, then you may have to denarrate, that is, shut down the television set, minimize time spent reading newspapers, ignore the blogs. Train your reasoning abilities to control your decisions; nudge System 1 (the heuristic or experiential system) out of the important ones. Train yourself to spot the difference between the sensational and the empirical. This insulation from the toxicity of the world will have an additional benefit: it will improve your well-being. Also, bear in mind how shallow we are with probability, the mother of all abstract notions. You do not have to do much more in order to gain a deeper understanding of things around you. Above all, learn to avoid “tunneling.” A bridge here to what is to come. The Platonic blindness I illustrated with the casino story has another manifestation: focusing. To be able to focus is a great virtue if you are a watch repairman, a brain surgeon, or a chess player. But the last thing you need to do when you deal with uncertainty is to “focus” (you should tell uncertainty to focus, not us). This “focus” makes you a sucker; it translates into prediction problems, as we will see in the next section. Prediction, not narration, is the real test of our understanding of the world.

Tunneling - we focus on a few well-defined sources of uncertainty, on too specific a list of Black Swans (at the expense of the others that do not easily come to mind).

😃😃😄😀😀


Part Two

--------------------------------------------------------------------------------------------------------------------------


Part Two, "We Just Can't Predict," is about our errors in dealing with the future and the unadvertised limitations of some "sciences" -- and what to do about these limitations. This section is a combination of psychology, business and natural science. Part Two continues to explore the mental processes which inhibit our ability to understand and prepare for Black Swans. While Part One mainly investigates our inability to perceive Black Swans, Part Two focuses on our inability to assess the limits of our own knowledge, which, in turn, limits our ability to prepare for the unexpected.

The world is far, far more complicated than we think, which is not a problem, except when most of us don’t know it. We tend to “tunnel” while looking into the future, making it business as usual, Black Swan–free, when in fact there is nothing usual about the future. It is not a Platonic category! We have seen how good we are at narrating backward, at inventing stories that convince us that we understand the past. For many people, knowledge has the remarkable power of producing confidence instead of measurable aptitude. Another problem: the focus on the (inconsequential) regular, the Platonification that makes the forecasting “inside the box.”

In spite of the empirical record, we continue to project into the future as if we were good at it, using tools and methods that exclude rare events. Prediction is firmly institutionalized in our world. We are suckers for those who help us navigate uncertainty, whether the fortune-teller or the “well-published” (dull) academics or civil servants using phony mathematics.

“The future ain’t what it used to be,” Yogi Berra said. He seems to have been right: the gains in our ability to model (and predict) the world may be dwarfed by the increases in its complexity -- implying a greater and greater role for the unpredicted. The larger the role of the Black Swan, the harder it will be for us to predict. Sorry. Before going into the limits of prediction, we will discuss our track record in forecasting and the relation between gains in knowledge and the offsetting gains in confidence.

Chapter 10 - Epistemic Arrogance

Taleb introduces the concept of epistemic arrogance: “Our hubris concerning the limits of our knowledge; we are demonstrably arrogant about what we think we know.” The result of epistemic arrogance is a limited ability to account for the unforeseeable. “Epistemic arrogance bears a double effect: we overestimate what we know, and underestimate uncertainty, by compressing the range of possible uncertain states. Our human race is affected by a chronic underestimation of the future straying from the course initially envisioned.” Taleb frequently refers to the instinct to compress the range of possible outcomes as tunneling, an inability to account for factors outside of a pre-defined set of parameters. Taleb warns against trusting the predictions of so-called experts, who are especially susceptible to the double effect. His assessment of the value of expertise can be neatly summed up by a quote from Zen master Suzuki, who noted, “In the beginner’s mind there are many possibilities, but in the expert’s there are few.”

To our failure to predict, to plan, and to come to grips with our unknowledge of the future—our systematic underestimation of what the future has in store.

This chapter has two topics. First, we are demonstrably arrogant about what we think we know. We certainly know a lot, but we have a built-in tendency to think that we know a little bit more than we actually do, enough of that little bit to occasionally get into serious trouble. We shall see how you can verify, even measure, such arrogance in your own living room. Second, we will look at the implications of this arrogance for all the activities involving prediction. Why on earth do we predict so much? Worse, even, and more interesting: Why don’t we talk about our record in predicting? Why don’t we see how we (almost) always miss the big events? I call this the scandal of prediction.

Our knowledge does grow, but it is threatened by greater increases in confidence, which make our increase in knowledge at the same time an increase in confusion, ignorance, and conceit.

Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned (in addition to other biases that sometimes exert a compounding effect).

The simple test above suggests the presence of an ingrained tendency in humans to underestimate outliers -- or Black Swans. Left to our own devices, we tend to think that what happens every decade in fact only happens once every century, and, furthermore, that we know what’s going on.

The longer the odds, the larger the epistemic arrogance.

Note here one particularity of our intuitive judgment: even if we lived in Mediocristan, in which large events are rare (and, mostly, inconsequential), we would still underestimate extremes—we would think that they are even rarer. We underestimate our error rate even with Gaussian variables. Our intuitions are sub-Mediocristani. But we do not live in Mediocristan. The numbers we are likely to estimate on a daily basis belong largely in Extremistan, i.e., they are run by concentration and subjected to Black Swans.

There is no effective difference between my guessing a variable that is not random, but for which my information is partial or deficient, and predicting a random one, like tomorrow’s unemployment rate or next year’s stock market. In this sense, guessing (what I don’t know, but what someone else may know) and predicting (what has not taken place yet) are the same thing.

When you are employed, hence dependent on other people’s judgment, looking busy can help you claim responsibility for the results in a random environment. The appearance of busyness reinforces the perception of causality, of the link between results and one’s role in them.

The more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information. The problem is that our ideas are sticky: once we produce a theory, we are not likely to change our minds -- so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate. Two mechanisms are at play here: the confirmation bias, and belief perseverance, the tendency not to reverse opinions you already have. Remember that we treat ideas like possessions, and it will be hard for us to part with them.

The more detailed knowledge one gets of empirical reality, the more one will see the noise (i.e., the anecdote) and mistake it for actual information. Remember that we are swayed by the sensational. Listening to the news on the radio every hour is far worse for you than reading a weekly magazine, because the longer interval allows information to be filtered a bit.

No matter what anyone tells you, it is a good idea to question the error rate of an expert’s procedure. Do not question his procedure, only his confidence. (As someone who was burned by the medical establishment, I learned to be cautious, and I urge everyone to be: if you walk into a doctor’s office with a symptom, do not listen to his odds of its not being cancer.)

Experts who tend to be experts: livestock judges, astronomers, test pilots, soil judges, chess masters, physicists, mathematicians (when they deal with mathematical problems, not empirical ones), accountants, grain inspectors, photo interpreters, insurance analysts (dealing with bell curve–style statistics). Experts who tend to be … not experts: stockbrokers, clinical psychologists, psychiatrists, college admissions officers, court judges, councilors, personnel selectors, intelligence analysts (the CIA’s record, in spite of its costs, is pitiful), unless one takes into account some great dose of invisible prevention.

Professions that deal with the future and base their studies on the nonrepeatable past have an expert problem (with the exception of the weather and businesses involving short-term physical processes, not socioeconomic ones).

Another way to see it is that things that move are often Black Swan–prone. Experts are narrowly focused persons who need to “tunnel.” In situations where tunneling is safe, because Black Swans are not consequential, the expert will do well.

The problem with experts is that they do not know what they do not know. Lack of knowledge and delusion about the quality of your knowledge come together—the same process that makes you know less also makes you satisfied with your knowledge.

The data vendors allow you to take a peek at forecasts by “leading economists,” people (in suits) who work for the venerable institutions, such as J. P. Morgan Chase or Morgan Stanley. You can watch these economists talk, theorizing eloquently and convincingly. Most of them earn seven figures and they rank as stars, with teams of researchers crunching numbers and projections. But the stars are foolish enough to publish their projected numbers, right there, for posterity to observe and assess their degree of competence.

The problem with prediction is a little more subtle. It comes mainly from the fact that we are living in Extremistan, not Mediocristan. Our predictors may be good at predicting the ordinary, but not the irregular, and this is where they ultimately fail. All you need to do is miss one interest-rates move, from 6 percent to 1 percent in a longer-term projection (what happened between 2000 and 2001) to have all your subsequent forecasts rendered completely ineffectual in correcting your cumulative track record. What matters is not how often you are right, but how large your cumulative errors are.
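
A tiny numerical sketch of that point, using made-up rate paths: forecaster A is "right" most of the time but misses the one big move, while forecaster B is sloppier every period yet tracks the regime change.

# Hypothetical interest-rate path (percent) and two hypothetical forecasters
actual       = [6.0, 6.1, 6.0, 5.9, 1.0, 1.1, 1.0, 1.2]   # one large, unannounced move
forecaster_a = [6.0, 6.1, 6.0, 5.9, 5.9, 5.8, 5.9, 6.0]   # precise until the move, then blind
forecaster_b = [5.5, 5.6, 5.5, 5.4, 2.0, 1.5, 1.3, 1.4]   # always a bit off, but follows the break

def cumulative_abs_error(forecast, observed):
    return sum(abs(f - o) for f, o in zip(forecast, observed))

print("A cumulative error:", cumulative_abs_error(forecaster_a, actual))   # dominated by one miss
print("B cumulative error:", cumulative_abs_error(forecaster_b, actual))

A's single miss swamps everything else, which is Taleb's point: how often you are right is the wrong scorecard.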

And these cumulative errors depend largely on the big surprises, the big opportunities. Not only do economic, financial, and political predictors miss them, but they are quite ashamed to say anything outlandish to their clients—and yet events, it turns out, are almost always outlandish. Furthermore, as we will see in the next section, economic forecasters tend to fall closer to one another than to the resulting outcome. Nobody wants to be off the wall.

A few researchers have examined the work and attitude of security analysts, with amazing results, particularly when one considers the epistemic arrogance of these operators. In a study comparing them with weather forecasters, Tadeusz Tyszka and Piotr Zielonka document that the analysts are worse at predicting, while having a greater faith in their own skills. Somehow, the analysts’ self-evaluation did not decrease their error margin after their failures to forecast.

What it showed was that these brokerage-house analysts predicted nothing—a naïve forecast made by someone who takes the figures from one period as predictors of the next would not do markedly worse. Yet analysts are informed about companies’ orders, forthcoming contracts, and planned expenditures, so this advanced knowledge should help them do considerably better than a naïve forecaster looking at the past data without further information. Worse yet, the forecasters’ errors were significantly larger than the average difference between individual forecasts, which indicates herding. Normally, forecasts should be as far from one another as they are from the predicted number. But to understand how they manage to stay in business, and why they don’t develop severe nervous breakdowns (with weight loss, erratic behavior, or acute alcoholism), we must look at the work of the psychologist Philip Tetlock.
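
Here is one way to set up that comparison -- a "naïve" persistence forecast that simply repeats the last observed figure, scored against a hypothetical analyst series. All numbers are invented to illustrate the comparison, not to reproduce the study.

# Hypothetical quarterly figures and a hypothetical analyst forecast for each quarter
earnings = [1.00, 1.05, 0.98, 1.10, 1.40, 1.35, 0.60, 0.65]
analyst  = [1.06, 1.12, 1.04, 1.02, 1.15, 1.45, 1.40, 0.95]

# The naive benchmark: next quarter's forecast is simply this quarter's figure
naive = [None] + earnings[:-1]

def mean_abs_error(forecast, observed):
    pairs = [(f, o) for f, o in zip(forecast, observed) if f is not None]
    return sum(abs(f - o) for f, o in pairs) / len(pairs)

print("analyst MAE:", round(mean_abs_error(analyst, earnings), 3))
print("naive MAE:  ", round(mean_abs_error(naive, earnings), 3))

With these invented numbers the naive benchmark does about as well as the "analyst," which is the shape of the empirical finding Taleb cites.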

The study revealed that experts’ error rates were clearly many times what they had estimated. His study exposed an expert problem: there was no difference in results whether one had a PhD or an undergraduate degree. Well-published professors had no advantage over journalists. The only regularity Tetlock found was the negative effect of reputation on prediction: those who had a big reputation were worse predictors than those who had none. But Tetlock’s focus was not so much to show the real competence of experts (although the study was quite convincing with respect to that) as to investigate why the experts did not realize that they were not so good at their own business, in other words, how they spun their stories. There seemed to be a logic to such incompetence, mostly in the form of belief defense, or the protection of self-esteem.

We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. We feel responsible for the good stuff, but not for the bad. This causes us to think that we are better than others at whatever we do for a living.

The hedgehog knows one thing, the fox knows many things -- foxes are the adaptable types you need in daily life. Many of the prediction failures come from hedgehogs who are mentally married to a single big Black Swan event, a big bet that is not likely to play out. The hedgehog is someone focusing on a single, improbable, and consequential event, falling for the narrative fallacy that makes us so blinded by one single outcome that we cannot imagine others.

Most fail to get my point about the error of specificity, the narrative fallacy, and the idea of prediction. Contrary to what people might expect, I am not recommending that anyone become a hedgehog -- rather, be a fox with an open mind. I know that history is going to be dominated by an improbable event, I just don’t know what that event will be.

When an economist fails to predict outliers he often invokes the issue of earthquakes or revolutions, claiming that he is not into geodesics, atmospheric sciences, or political science, instead of incorporating these fields into his studies and accepting that his field does not exist in isolation. Economics is the most insular of fields; it is the one that quotes least from outside itself! Economics is perhaps the subject that currently has the highest number of philistine scholars—scholarship without erudition and natural curiosity can close your mind and lead to the fragmentation of disciplines.

We will now address another constant in human nature: a systematic error made by project planners, coming from a mixture of human nature, the complexity of the world, or the structure of organizations. In order to survive, institutions may need to give themselves and others the appearance of having a “vision.” Plans fail because of what we have called tunneling, the neglect of sources of uncertainty outside the plan itself.


The unexpected has a one-sided effect with projects. Consider the track records of builders, paper writers, and contractors. The unexpected almost always pushes in a single direction: higher costs and a longer time to completion.

With projects of great novelty, such as a military invasion, an all-out war, or something entirely new, errors explode upward. In fact, the more routine the task, the better you learn to forecast.

We are too narrow-minded a species to consider the possibility of events straying from our mental projections, but furthermore, we are too focused on matters internal to the project to take into account external uncertainty, the “unknown unknown,” so to speak, the contents of the unread books. There is also the nerd effect, which stems from the mental elimination of off-model risks, or focusing on what you know. You view the world from within a model. Consider that most delays and cost overruns arise from unexpected elements that did not enter into the plan—that is, they lay outside the model at hand -- such as strikes, electricity shortages, accidents, bad weather, or rumors of Martian invasions. These small Black Swans that threaten to hamper our projects do not seem to be taken into account. They are too abstract -- we don’t know how they look and cannot talk about them intelligently. We cannot truly plan, because we do not understand the future -- but this is not necessarily bad news. We could plan while bearing in mind such limitations. It just takes guts.

Once on a page or on a computer screen, or, worse, in a PowerPoint presentation, the projection takes on a life of its own, losing its vagueness and abstraction and becoming what philosophers call reified, invested with concreteness; it takes on a new life as a tangible object.

A classical mental mechanism, called anchoring, seems to be at work here. You lower your anxiety about uncertainty by producing a number, then you “anchor” on it, like an object to hold on to in the middle of a vacuum. This anchoring mechanism was discovered by the fathers of the psychology of uncertainty, Danny Kahneman and Amos Tversky, early in their heuristics and biases project.

We use reference points in our heads, say sales projections, and start building beliefs around them because less mental effort is needed to compare an idea to a reference point than to evaluate it in the absolute (System 1 at work!). We cannot work without a point of reference.

With human projects and ventures we have another story. These are often scalable. With scalable variables, the ones from Extremistan, you will witness the exact opposite effect. Let’s say a project is expected to terminate in 79 days, the same expectation in days as the newborn female has in years. On the 79th day, if the project is not finished, it will be expected to take another 25 days to complete. But on the 90th day, if the project is still not completed, it should have about 58 days to go. On the 100th, it should have 89 days to go. On the 119th, it should have an extra 149 days. On day 600, if the project is not done, you will be expected to need an extra 1,590 days. As you see, the longer you wait, the longer you will be expected to wait.
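
Taleb's specific day counts come from the book; the sketch below reproduces only the qualitative pattern, using an assumed Pareto (power-law) distribution of project durations. Conditional on still being unfinished at day t, the expected remaining time grows with t rather than shrinking.

import random

random.seed(0)

ALPHA, MIN_DAYS = 1.5, 30.0   # assumed tail index and minimum duration (illustrative)
durations = [random.paretovariate(ALPHA) * MIN_DAYS for _ in range(500_000)]

for t in (79, 100, 200, 600):
    unfinished = [d for d in durations if d > t]
    expected_remaining = sum(d - t for d in unfinished) / len(unfinished)
    print(f"still unfinished at day {t:>3}: expect roughly {expected_remaining:,.0f} more days")

In Mediocristan (remaining human life expectancy, say) the opposite holds: the longer you have already waited, the less additional waiting you expect.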

Forecasting without incorporating an error rate uncovers three fallacies, all arising from the same misconception about the nature of uncertainty. The first fallacy: variability matters. Don’t cross a river if it is four feet deep on average. The policies we need to make decisions on should depend far more on the range of possible outcomes than on the expected final number. The second fallacy lies in failing to take into account forecast degradation as the projected period lengthens. We do not realize the full extent of the difference between near and far futures. Yet the degradation in such forecasting through time becomes evident through simple introspective examination -- without even recourse to scientific papers, which on this topic are suspiciously rare. Forecasting by bureaucrats tends to be used for anxiety relief rather than for adequate policy making. The third fallacy, and perhaps the gravest, concerns a misunderstanding of the random character of the variables being forecast. Owing to the Black Swan, these variables can accommodate far more optimistic—or far more pessimistic—scenarios than are currently expected. Recall from my experiment with Dan Goldstein testing the domain-specificity of our intuitions, how we tend to make no mistakes in Mediocristan, but make large ones in Extremistan as we do not realize the consequences of the rare event.

I would go even further and, using the argument about the depth of the river, state that it is the lower bound of estimates (i.e., the worst case) that matters when engaging in a policy -- the worst case is far more consequential than the forecast itself. This is particularly true if the bad scenario is not acceptable. Yet the current phraseology makes no allowance for that. None. It is often said that “wise is he who can see things coming.” Perhaps the wise one is the one who knows that he cannot see things far away.

Chapter 11 - Serendipity and the Problem with Predicting the Future

Taleb emphasizes the importance of serendipity. “The classic model of discovery is that you search for something you know (say, a new way to reach India) and find something you didn’t know was there (America).” He argues that remaining aware of the possibility of the unintended, serendipitous results of our activities can help us recognize and take advantage of positive Black Swans.

The element of serendipity present in most major discoveries illustrates a fundamental problem in attempting to predict the future, known in statistics as the law of iterated expectations, which essentially states that the expectation of knowledge is equivalent to the knowledge itself. “If you know about the discovery you are about to make in the future, then you have almost made it.”

Taleb explores the massive amounts of data that would need to be accounted for in order to predict something. Even before factoring in trickier elements such as free will, the data requirements are overwhelming. All of these observations ultimately serve to underscore the futility of attempting to predict the future.

We’ve seen that (a) we tend to both tunnel and think “narrowly” (epistemic arrogance), and (b) our prediction record is highly overestimated -- many people who think they can predict actually can’t (at least not accurately!). We will now go deeper into the unadvertised structural limitations on our ability to predict. These limitations may arise not from us but from the nature of the activity itself -- too complicated, not just for us, but for any tools we have or can conceivably obtain. Some Black Swans will remain elusive, enough to kill our forecasts.

Being an executive does not require very developed frontal lobes, but rather a combination of charisma, a capacity to sustain boredom, and the ability to shallowly perform on harrying schedules.

You find something you are not looking for and it changes the world, while wondering after its discovery why it “took so long” to arrive at something so obvious.

Sir Francis Bacon commented that the most important advances are the least predictable ones, those “lying out of the path of the imagination.”

The law of iterated knowledge -- To understand the future to the point of being able to predict it, you need to incorporate elements from this future itself.

Prediction requires knowing about technologies that will be discovered in the future. But that very knowledge would almost automatically allow us to start developing those technologies right away. Ergo, we do not know what we will know.

We see flaws in others and not in ourselves. Once again we seem to be wonderful self-deceit machines.

Corporations survive not because they have made good forecasts, but because, like the CEOs visiting Wharton, they may have been the lucky ones. And, like a restaurant owner, they may be hurting themselves, not us -- perhaps helping us and subsidizing our consumption by giving us goods in the process, like cheap telephone calls to the rest of the world funded by the overinvestment during the dotcom era. We consumers can let them forecast all they want if that’s what is necessary for them to get into business. Let them go hang themselves if they wish.

To clarify, Platonic is top-down, formulaic, closed-minded, self-serving, and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.

Legions of empirical psychologists of the heuristics and biases school have shown that the model of rational behavior under uncertainty is not just grossly inaccurate but plain wrong as a description of reality.

This is what the philosopher Nelson Goodman called the riddle of induction: We project a straight line only because we have a linear model in our head -- the fact that a number has risen for 1,000 days straight should make you more confident that it will rise in the future. But if you have a nonlinear model in your head, it might confirm that the number should decline on day 1,001.

Our brains are “anticipation machines”; on this view, the human mind and consciousness are emergent properties, those properties necessary for our accelerated development.

We have a natural tendency to listen to the expert, even in fields where there may be no experts.

Chapter 12 - Epistemocracy

Taleb describes his vision of utopia as an epistemocracy, “[A] society governed from the basis of awareness of ignorance, not knowledge.” He reiterates his cautions against assuming the past or present is somehow an indicator of the future and explores the concept of future blindness. Our future blindness is, in part, a symptom of our blindness to the past. Taleb notes that interpretation of the past relies on a backward process in which we attempt to reconstruct the past from its outcome, which is far harder than a forward process. Finally, Taleb distinguishes between “true randomness” and “deterministic chaos”: a system without predictable properties and a system whose properties are predictable in principle but difficult to identify, respectively. In practice, Taleb argues, the two are indistinguishable, as they both result in incomplete knowledge. The fact that something is predictable in principle is not useful if we do not know how to predict it.


Epistemology - the theory of knowledge, especially with regard to its methods, validity, and scope. Epistemology is the investigation of what distinguishes justified belief from opinion.


An epistemocrat - someone who holds his own knowledge to be suspect. The province where the laws are structured with this kind of human fallibility in mind I will call an epistemocracy.

The only way you can imagine a future “similar” to the past is by assuming that it will be an exact projection of it, hence predictable. Just as you know with some precision when you were born, you would then know with equal precision when you will die. The notion of future mixed with chance, not a deterministic extension of your perception of the past, is a mental operation that our mind cannot perform. Chance is too fuzzy for us to be a category by itself. There is an asymmetry between past and future, and it is too subtle for us to understand naturally. The first consequence of this asymmetry is that, in people’s minds, the relationship between the past and the future does not learn from the relationship between the past and the past previous to it. There is a blind spot: when we think of tomorrow we do not frame it in terms of what we thought about yesterday on the day before yesterday. Because of this introspective defect we fail to learn about the difference between our past predictions and the subsequent outcomes. When we think of tomorrow, we just project it as another yesterday.

Psychologists have studied this kind of misprediction with respect to both pleasant and unpleasant events. We overestimate the effects of both kinds of future events on our lives. We seem to be in a psychological predicament that makes us do so. This predicament is called “anticipated utility” by Danny Kahneman and “affective forecasting” by Dan Gilbert. The point is not so much that we tend to mispredict our future happiness, but rather that we do not learn recursively from past experiences. We have evidence of a mental block and distortions in the way we fail to learn from our past errors in projecting the future of our affective states. We grossly overestimate the length of the effect of misfortune on our lives. You think that the loss of your fortune or current position will be devastating, but you are probably wrong. More likely, you will adapt to anything, as you probably did after past misfortunes. You may feel a sting, but it will not be as bad as you expect. This kind of misprediction may have a purpose: to motivate us to perform important acts (like buying new cars or getting rich) and to prevent us from taking certain unnecessary risks. And it is part of a more general problem: we humans are supposed to fool ourselves a little bit here and there. According to Trivers’s theory of self-deception, this is supposed to orient us favorably toward the future. But self-deception is not a desirable feature outside of its natural domain. It prevents us from taking some unnecessary risks -- but we saw in Chapter 6 how it does not as readily cover a spate of modern risks that we do not fear because they are not vivid, such as investment risks, environmental dangers, or long-term security.

Our problem is not just that we do not know the future, we do not know much of the past either.

A true random system is in fact random and does not have predictable properties. A chaotic system has entirely predictable properties, but they are hard to know.
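
A standard toy example of the distinction (not from the book) is the logistic map: the rule is fully deterministic, yet two starting points a millionth apart soon diverge so completely that, to an observer with imperfect measurement, the output is indistinguishable from randomness.

# Deterministic chaos: x -> r * x * (1 - x) with r in the chaotic regime
r = 3.9
x, y = 0.400000, 0.400001          # initial conditions differing by one part in a million

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:>2}:  x = {x:.6f}   y = {y:.6f}   gap = {abs(x - y):.6f}")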

Learn to read history, get all the knowledge you can, do not frown on the anecdote, but do not draw any causal links, do not try to reverse engineer too much -- but if you do, do not make big scientific claims. Remember that the empirical skeptics had respect for custom: they used it as a default, a basis for action, but not for more than that. This clean approach to the past they called epilogism.


Epilogism is a style of inference invented by the ancient Empiric school of medicine. It is a theory-free method of looking at history by accumulating fact with minimal generalization and being conscious of the side effects of making causal claims. Epilogism is an inference which moves entirely within the domain of visible and evident things, it tries not to invoke unobservables.

After this discussion about future (and past) blindness, let us see what to do about it. Remarkably, there are extremely practical measures we can take.

Chapter 13 - What Do You Do If You Cannot Predict?

Taleb offers some practical guidelines for coping with life’s randomness. He encourages us to focus on the potential consequences of the unexpected rather than on the perceived probability of the unexpected occurring. Taleb advises us to “[k]now how to rank beliefs not according to their plausibility but to the harm they may cause.” Instead of putting your money in “medium risk” investments (how do you know it is medium risk? By listening to tenure-seeking “experts”?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills -- as safe a class of instruments as you can manage to find on this planet. The remaining 10 to 15 percent you put in extremely speculative bets -- preferably venture capital-style portfolios. That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your “floor,” the nest egg that you have in maximally safe investments.

In essence, the barbell strategy advocates for, “taking maximum exposure to the positive Black Swans while remaining paranoid about the negative ones.”
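
A stylized sketch of the barbell's payoff asymmetry (the numbers are illustrative only, not investment advice): the safe portion caps the downside at a known floor, while the small speculative portion is the only part exposed to a positive Black Swan.

def barbell_outcome(capital, safe_rate, speculative_multiple, safe_share=0.90):
    # 90% in an (assumed) near-riskless instrument, 10% in highly speculative bets
    safe_part = capital * safe_share * (1 + safe_rate)
    speculative_part = capital * (1 - safe_share) * speculative_multiple
    return safe_part + speculative_part

capital = 100_000
scenarios = [("speculative bets wiped out", 0.0),
             ("nothing much happens", 1.0),
             ("one positive Black Swan", 20.0)]
for label, multiple in scenarios:
    total = barbell_outcome(capital, safe_rate=0.02, speculative_multiple=multiple)
    print(f"{label:<27} -> {total:,.0f}")

Whatever happens to the speculative sleeve, the outcome never falls below the safe 90 percent plus interest; the upside, by contrast, is open-ended.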

My advice is to keep an open mind about both negative and positive possibilities, and to not discount the possibility of something happening just because it seems unlikely. Maximize your exposure to potential positive Black Swans by putting yourself in situations (like social gatherings) that have the potential to produce fortuitous encounters. While I advocate skepticism toward any definitive (or even generalized) predictions of the future, trying to deter those making the predictions is a waste of energy.

😃😃😄😀😀

Part Three

---------------------------------------------------------------------------------------------------------------------------

Part Three goes deeper into the topic of extreme events, explains how the bell curve is generated, and reviews the ideas in the natural and social sciences loosely lumped under the label "complexity." This section deals mostly with business and natural science.

It’s time to deal in some depth with four final items that bear on our Black Swan. First, I have said earlier that the world is moving deeper into Extremistan, that it is less and less governed by Mediocristan -- in fact, this idea is more subtle than that. I will show how and present the various ideas we have about the formation of inequality. Second, I have been describing the Gaussian bell curve as a contagious and severe delusion, and it is time to get into that point in some depth. Third, I will present what I call Mandelbrotian, or fractal, randomness. Remember that for an event to be a Black Swan, it does not just have to be rare, or just wild; it has to be unexpected, has to lie outside our tunnel of possibilities. You must be a sucker for it. As it happens, many rare events can yield their structure to us: it is not easy to compute their probability, but it is easy to get a general idea about the possibility of their occurrence. We can turn these Black Swans into Gray Swans, so to speak, reducing their surprise effect. A person aware of the possibility of such events can come to belong to the non-sucker variety. Finally, I will present the ideas of those philosophers who focus on phony uncertainty.

Chapter 14 - From Mediocristan to Extremistan, and Back


Taleb explores preferential-attachment theories, which deal with inequalities that result from a cumulative effect. Sociologist Robert K. Merton proposed a model called the Matthew Effect, noting that those at the top are more likely to continue to gain resources, accolades, or whatever advantage is relevant to the scenario. Taleb points out that these models fail to take into account the role of luck in one’s ascent to the top and “do not account for the possibility of being supplanted by newcomers.” Such entrenchment does not hold in Extremistan, Taleb says, where everyone is vulnerable to the effects of the Black Swan, which can manifest itself as a stroke of extremely good or extremely bad luck, depending on one’s perspective.


The discounting of Black Swans becomes a more dire problem in today’s globalized economy. It is here that Taleb’s insights border on the prophetic: “[Globalization] creates interlocking fragility … Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated … When one falls, they all fall. The increased concentration among banks seems to have the effect of making financial crisis less likely, but when they happen they are more global in scale and hit us very hard.”


The Matthew effect of accumulated advantage, Matthew principle, or Matthew effect for short, is sometimes summarized by the adage "the rich get richer and the poor get poorer.” The concept is applicable to matters of fame or status, but may also be applied literally to cumulative advantage of economic capital. In the beginning, Matthew effects were primarily focused on the inequality in the way scientists were recognized for their work. However, Norman Storer led a new wave of research. He believed he discovered that the inequality that existed in the social sciences also existed in other institutions. The term was coined by sociologist Robert K. Merton in 1968 and takes its name from the Parable of the talents in the Gospel of Matthew.

The role of luck is missing. The problem here is the notion of “better,” this focus on skills as leading to success. Random outcomes, or an arbitrary situation, can also explain success, and provide the initial push that leads to a winner-take-all result. A person can get slightly ahead for entirely random reasons; because we like to imitate one another, we will flock to him. The world of contagion is so underestimated.


In sociology, Matthew effects bear the less literary name “cumulative advantage.” This theory can easily apply to companies, businessmen, actors, writers, and anyone else who benefits from past success. If you get published in The New Yorker because the color of your letterhead attracted the attention of the editor, who was daydreaming of daisies, the resultant reward can follow you for life. More significantly, it will follow others for life. Failure is also cumulative; losers are likely to also lose in the future, even if we don’t take into account the mechanism of demoralization that might exacerbate it and cause additional failure.


Note that art, because of its dependence on word of mouth, is extremely prone to these cumulative-advantage effects.


A preferential attachment process is any of a class of processes in which some quantity, typically some form of wealth or credit, is distributed among a number of individuals or objects according to how much they already have, so that those who are already wealthy receive more than those who are not. "Preferential attachment" is only the most recent of many names that have been given to such processes. They are also referred to under the names "Yule process", "cumulative advantage", "the rich get richer", and, less correctly, the "Matthew effect". They are also related to Gibrat's law. The principal reason for scientific interest in preferential attachment is that it can, under suitable circumstances, generate power law distributions.
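
A minimal sketch of such a process (the parameters are invented for illustration): each new unit of "credit" goes to an existing recipient with probability proportional to what that recipient already holds, and occasionally a newcomer enters. The result is the characteristic concentration of a power law.

import random

random.seed(7)

def preferential_attachment(n_steps=50_000, p_newcomer=0.02):
    holdings = [1]        # one recipient starts with one unit
    urn = [0]             # recipient index repeated once per unit held, for proportional sampling
    for _ in range(n_steps):
        if random.random() < p_newcomer:
            holdings.append(1)                       # a newcomer appears with one unit
            urn.append(len(holdings) - 1)
        else:
            winner = random.choice(urn)              # chosen in proportion to current holdings
            holdings[winner] += 1
            urn.append(winner)
    return holdings

holdings = sorted(preferential_attachment(), reverse=True)
total = sum(holdings)
top_one_percent = holdings[: max(1, len(holdings) // 100)]
print(f"recipients: {len(holdings)}, share held by the top 1%: {sum(top_one_percent) / total:.1%}")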


The more you use a word, the less effortful you will find it to use that word again, so you borrow words from your private dictionary in proportion to their past use. This explains why out of the 60,000 main words in English, only a few hundred constitute the bulk of what is used in writings, and even fewer appear regularly in conversation. Likewise, the more people aggregate in a particular city, the more likely a stranger will be to pick that city as his destination. The big get bigger and the small stay small, or get relatively smaller.


Preferential-attachment theories are intuitively appealing, but they do not account for the possibility of being supplanted by newcomers -- what every schoolchild knows as the decline of civilizations. Consider the logic of cities: How did Rome, with a population of 1.2 million in the first century A.D., end up with a population of twelve thousand in the third? How did Baltimore, once a principal American city, become a relic? And how did Philadelphia come to be overshadowed by New York?

If you leave companies alone, they tend to get eaten up. Those in favor of economic freedom claim that beastly and greedy corporations pose no threat because competition keeps them in check. What I saw at the Wharton School convinced me that the real reason includes a large share of something else: chance. But when people discuss chance (which they rarely do), they usually only look at their own luck. The luck of others counts greatly. Another corporation may luck out thanks to a blockbuster product and displace the current winners.


I said earlier that randomness is bad, but it is not always so. Luck is far more egalitarian than even intelligence. If people were rewarded strictly according to their abilities, things would still be unfair—people don’t choose their abilities. Randomness has the beneficial effect of reshuffling society’s cards, knocking down the big guy.


The dynamics of fractal concentration has another layer of randomness.


True, the Web produces acute concentration. A large number of users visit just a few sites, such as Google, which, at the time of this writing, has total market dominance. At no time in history has a company grown so dominant so quickly -- Google can service people from Nicaragua to southwestern Mongolia to the American West Coast, without having to worry about phone operators, shipping, delivery, and manufacturing. This is the ultimate winner-take-all case study.


The idea of the long tail seems to be the exact opposite of the concentration implied by scalability. The long tail implies that the small guys, collectively, should control a large segment of culture and commerce, thanks to the niches and subspecialties that can now survive thanks to the Internet. But, strangely, it can also imply a large measure of inequality: a large base of small guys and a very small number of supergiants, together representing a share of the world’s culture -- with some of the small guys, on occasion, rising to knock out the winners. (This is the “double tail”: a large tail of the small guys, a small tail of the big guys.) The role of the long tail is fundamental in changing the dynamics of success, destabilizing the well-seated winner, and bringing about another winner. In a snapshot this will always be Extremistan, always ruled by the concentration of type-2 randomness; but it will be an ever-changing Extremistan.

By subverting the big structures we also get rid of the Platonified one way of doing things—in the end, the bottom-up theory-free empiricist should prevail. In sum, the long tail is a by-product of Extremistan that makes it somewhat less unfair: the world is made no less unfair for the little guy, but it now becomes extremely unfair for the big man. Nobody is truly established. The little guy is very subversive.


Globalization -- it is here, but it is not all for the good: it creates interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans.


We have never lived before under the threat of a global collapse. Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks (often Gaussianized in their risk measurement)—when one falls, they all fall. The increased concentration among banks seems to have the effect of making financial crises less likely, but when they happen they are more global in scale and hit us very hard. We have moved from a diversified ecology of small banks, with varied lending policies, to a more homogeneous framework of firms that all resemble one another. True, we now have fewer failures, but when they occur … I shiver at the thought. I rephrase here: we will have fewer but more severe crises. The rarer the event, the less we know about its odds. It means that we know less and less about the possibility of a crisis.


Researchers who study networks understand Extremistan mathematics and the inadequacy of the Gaussian bell curve. They have uncovered the following property of networks: there is a concentration among a few nodes that serve as central connections. Networks have a natural tendency to organize themselves around an extremely concentrated architecture: a few nodes are extremely connected; others barely so. The distribution of these connections has a scalable structure of the kind we will discuss in Chapters 15 and 16. Concentration of this kind is not limited to the Internet; it appears in social life (a small number of people are connected to others), in electricity grids, in communications networks. This seems to make networks more robust: random insults to most parts of the network will not be consequential since they are likely to hit a poorly connected spot. But it also makes networks more vulnerable to Black Swans.
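
As a rough illustration of that last point (my own sketch, not Taleb's, and assuming the third-party networkx library is installed), you can grow a scale-free network by preferential attachment and compare random node failures with the targeted loss of the most connected hubs:

```python
# Scale-free networks shrug off random failures but are fragile to losing their hubs.
# Assumes the third-party `networkx` package is installed; all sizes are arbitrary.
import random
import networkx as nx

random.seed(1)
N = 2000
G = nx.barabasi_albert_graph(n=N, m=2, seed=1)   # grown by preferential attachment

def giant_component_share(graph):
    """Fraction of the original nodes still in the largest connected cluster."""
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / N

# "Random insults": knock out 5% of the nodes at random.
randomly_hit = G.copy()
randomly_hit.remove_nodes_from(random.sample(list(G.nodes), k=N // 20))

# A Black Swan for the hubs: knock out the 5% most connected nodes instead.
hubs = [node for node, _ in sorted(G.degree, key=lambda nd: nd[1], reverse=True)[: N // 20]]
hubs_hit = G.copy()
hubs_hit.remove_nodes_from(hubs)

print(f"giant component after random failures: {giant_component_share(randomly_hit):.0%}")
print(f"giant component after losing the hubs: {giant_component_share(hubs_hit):.0%}")
```

The same number of nodes is removed in both cases; only the second removal does serious damage, because the network's connectivity lives in a handful of hubs.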

Chapter 15 - The Bell Curve, The Great Intellectual Fraud

Taleb fleshes out his revulsion for the bell curve, or rather for the practice of applying the bell curve to scenarios that fall within the realm of Extremistan randomness, i.e., those scenarios that define our lives. Taleb writes, “The main point of the Gaussian (bell curve) is that most observations hover around the average; the odds of a deviation decline faster and faster (exponentially) as you move away from the average.”

Taleb’s main qualm with the bell curve is that it “allows you to ignore outliers” by reducing their likelihood to nearly zero. With Mediocristan matters, such as human height and weight, the bell curve is applicable. But applying the same principle to Extremistan matters, such as finances, carries considerable danger. In these matters, outliers carry significant consequences, and ignoring the possibility of such outliers can have disastrous results. Moreover, what occurs on average in Extremistan is not a reliable indicator of what might occur tomorrow; outliers may not be as unlikely as the bell curve would lead us to believe.

This precipitous decline in the odds of encountering something (using the bell curve) is what allows you to ignore outliers. Only one curve can deliver this decline, and it is the bell curve (and its nonscalable siblings).

Scalability means that there is no headwind to slow you down.

Another term for the scalable is the power law.

Remember this: the Gaussian–bell curve variations face a headwind that makes probabilities drop at a faster and faster rate as you move away from the mean, while “scalables,” or Mandelbrotian variations, do not have such a restriction. That’s pretty much most of what you need to know.
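
Here is a small numeric sketch of that headwind (my own illustration; the Pareto exponent of 1.5 is an arbitrary choice): tail odds for the Gaussian collapse ever faster the farther out you go, while the scalable tail keeps declining at the same proportional rate.

```python
# Tail odds: Gaussian vs. a scalable (power-law) distribution.
import math

def gaussian_tail(k):
    """P(X > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def power_law_tail(k, alpha=1.5):
    """P(X > k) for a Pareto variable with minimum value 1."""
    return k ** -alpha

for k in (2, 4, 6, 10):
    print(f"{k:>2} units out:  gaussian {gaussian_tail(k):.1e}   power law {power_law_tail(k):.1e}")
```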

The 80/20 rule is only metaphorical; it is not a rule, even less a rigid law. In the U.S. book business, the proportions are more like 97/20 (i.e., 97 percent of book sales are made by 20 percent of the authors); it’s even worse if you focus on literary nonfiction (twenty books of close to eight thousand represent half the sales).
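
A bit of standard Pareto arithmetic (my own back-of-the-envelope, not a calculation from the book) shows how fragile the famous 80/20 split is: for a Pareto tail with exponent alpha, the top fraction q of participants captures roughly q ** (1 - 1/alpha) of the total, so shaving a little off the exponent pushes 80/20 toward splits as lopsided as the book-sales example.

```python
# Share of the total captured by the top 20% for a few Pareto tail exponents.
for alpha in (1.16, 1.06, 1.01):
    top_20_share = 0.20 ** (1 - 1 / alpha)
    print(f"alpha = {alpha}: top 20% capture about {top_20_share:.0%}")
# Roughly 80%, 91%, and 98% respectively: small moves in the exponent, very different worlds.
```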

I’ll summarize here and repeat the arguments previously made throughout the book. Measures of uncertainty that are based on the bell curve simply disregard the possibility, and the impact, of sharp jumps or discontinuities and are, therefore, inapplicable in Extremistan. Using them is like focusing on the grass and missing out on the (gigantic) trees. Although unpredictable large deviations are rare, they cannot be dismissed as outliers because, cumulatively, their impact is so dramatic.

The traditional Gaussian way of looking at the world begins by focusing on the ordinary, and then deals with exceptions or so-called outliers as ancillaries. But there is a second way, which takes the exceptional as a starting point and treats the ordinary as subordinate. I have emphasized that there are two varieties of randomness, qualitatively different, like air and water. One does not care about extremes; the other is severely impacted by them. One does not generate Black Swans; the other does. We cannot use the same techniques to discuss a gas as we would use with a liquid. And if we could, we wouldn’t call the approach “an approximation.” A gas does not “approximate” a liquid.

We can make good use of the Gaussian approach in variables for which there is a rational reason for the largest not to be too far away from the average. If there is gravity pulling numbers down, or if there are physical limitations preventing very large observations, we end up in Mediocristan. If there are strong forces of equilibrium bringing things back rather rapidly after conditions diverge from equilibrium, then again you can use the Gaussian approach. Otherwise, fuhgedaboudit. This is why much of economics is based on the notion of equilibrium: among other benefits, it allows you to treat economic phenomena as Gaussian.

Note that I am not telling you that the Mediocristan type of randomness does not allow for some extremes. But it tells you that they are so rare that they do not play a significant role in the total. The effect of such extremes is pitifully small and decreases as your population gets larger.

Taleb argues that the law of large numbers (LLN) only applies in Mediocristan. LLN is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the law only applies (as the name indicates) when a large number of observations is considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy).
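
To see why the reassurance of the law of large numbers evaporates in Extremistan, here is a small simulation of my own (the distributions and sample sizes are arbitrary): the running mean of a thin-tailed Gaussian settles down quickly, while that of a fat-tailed Pareto with exponent 1.1 keeps getting jolted by outliers long after you might think it had converged.

```python
# Running means: thin-tailed (Mediocristan) vs. fat-tailed (Extremistan) samples.
import random
random.seed(7)

def running_means(samples, checkpoints):
    """Return the sample mean at each checkpoint as we accumulate draws."""
    total = 0.0
    out = []
    for i, x in enumerate(samples, start=1):
        total += x
        if i in checkpoints:
            out.append(round(total / i, 3))
    return out

n = 100_000
checkpoints = {1_000, 10_000, 100_000}

thin = [random.gauss(0, 1) for _ in range(n)]            # Mediocristan-style draws
fat = [random.paretovariate(1.1) for _ in range(n)]      # Extremistan-style draws (alpha = 1.1)

print("gaussian running mean at 1k/10k/100k draws:", running_means(thin, checkpoints))
print("pareto(1.1) running mean at 1k/10k/100k draws:", running_means(fat, checkpoints))
# The Gaussian mean hugs 0 almost immediately; the Pareto mean (true value 11) wanders
# for a very long time because rare, huge draws keep dragging it around.
```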


The notion of standard deviation is meaningless outside of Mediocristan.


The Gaussian family (which includes various friends and relatives, such as the Poisson law) is the only class of distributions that the standard deviation (and the average) is sufficient to describe. You need nothing else. The bell curve satisfies the reductionism of the deluded. There are other notions that have little or no significance outside of the Gaussian: correlation and, worse, regression. Yet they are deeply ingrained in our methods; it is hard to have a business conversation without hearing the word correlation.


If you use the term "statistically significant," beware of the illusions of certainties. Odds are that someone has looked at his observation errors and assumed that they were Gaussian, which necessitates a Gaussian context, namely, Mediocristan, for it to be acceptable.

But if you are dealing with aggregates, where magnitudes do matter, such as income, your wealth, return on a portfolio, or book sales, then you will have a problem and get the wrong distribution if you use the Gaussian, as it does not belong there. One single number can disrupt all your averages; one single loss can eradicate a century of profits. You can no longer say “this is an exception.” The statement “Well, I can lose money” is not informational unless you can attach a quantity to that loss. You can lose all your net worth or you can lose a fraction of your daily income; there is a difference.
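
The arithmetic behind that warning is brutal and worth seeing once (a toy example of mine, not the book's):

```python
# Ninety-nine steady years followed by one catastrophic one.
wealth = 1.0
for year in range(99):
    wealth *= 1.05            # a 5% gain every year for nearly a century
print(f"after 99 good years: {wealth:.1f}x the starting stake")

wealth *= 0.0                 # a single year in which the position is wiped out
print(f"after one total loss: {wealth:.1f}x")
```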

The main point of the Gaussian bell curve is that most observations hover around the mediocre, the mean, while the odds of a deviation decline faster and faster (exponentially) as you move away from the mean. If you need to retain one single piece of information, just remember this dramatic speed of decrease in the odds as you move away from the average. Outliers are increasingly unlikely, so the Gaussian invites you to ignore them safely; that license, as the rest of the chapter argues, is exactly what fails in Extremistan.

This ubiquity of the Gaussian is not a property of the world, but a problem in our minds, stemming from the way we look at it.

Many people accepted my Black Swan idea but could not take it to its logical conclusion, which is that you cannot use one single measure for randomness called standard deviation (and call it “risk”); you cannot expect a simple answer to characterize uncertainty. To go the extra step requires courage, commitment, an ability to connect the dots, a desire to understand randomness fully. It also means not accepting other people’s wisdom as gospel.

Chapter 16 - The Aesthetics of Randomness

Taleb introduces us to Benoit Mandelbrot, whom he has made reference to throughout the book. Mandelbrot introduced the idea of fractals. Fractality is the repetition of geometric patterns at different scales, revealing smaller and smaller versions of themselves. Small parts resemble, to some degree, the whole … This character of self-affinity implies that one deceptively short and simple rule of iteration can be used, either by a computer or, more randomly, by Mother Nature, to build shapes of seemingly great complexity … The shapes are never the same, yet they bear an affinity to one another, a strong family resemblance. Fractals contradict the traditional concept of circles and squares, neat geometric shapes that rarely occur in nature.

To demonstrate the relevance of fractal randomness to the Black Swan problem, Taleb explains, “the fractal has numerical or statistical measures that are (somewhat) preserved across scales -- the ratio is the same, unlike the Gaussian [bell curve].” Fractal randomness is imprecise. Its usefulness lies in the fact that, unlike the bell curve, it does not reduce the likelihood of a dramatic event to outlier status. To be extremely simplistic, it implies that future events will mimic -- though not repeat -- past ones. If something has occurred before then there is a possibility that something of a similar nature will occur in the future. By emphasizing what is possible over what is likely, fractal randomness can help us be better prepared for Black Swans. And since these events are no longer inconceivable, they are no longer true Black Swans; they are gray swans.

We are either blind, or illiterate, or both. That nature’s geometry is not Euclid’s was so obvious, and nobody, almost nobody, saw it. This (physical) blindness is identical to the ludic fallacy that makes us think casinos represent randomness.

What does fractal geometry have to do with the distribution of wealth, the size of cities, returns in the financial markets, the number of casualties in war, or the size of planets? Let us connect the dots. The key here is that the fractal has numerical or statistical measures that are (somewhat) preserved across scales -- the ratio is the same, unlike the Gaussian.
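
That "same ratio across scales" property is easy to check numerically (again my own sketch, assuming a Pareto tail with exponent 1.5): doubling the threshold costs the power law the same proportional drop in odds no matter how far out you already are, whereas for the Gaussian the drop becomes savagely larger at each step.

```python
# Scale-invariance check: how do the tail odds change when you double the threshold?
import math

def gaussian_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2))    # P(X > x), standard normal

def power_law_tail(x, alpha=1.5):
    return x ** -alpha                           # P(X > x), Pareto with minimum 1

for x in (2, 4, 8):
    g_ratio = gaussian_tail(2 * x) / gaussian_tail(x)
    p_ratio = power_law_tail(2 * x) / power_law_tail(x)
    print(f"x = {x}: P(>2x)/P(>x)  gaussian {g_ratio:.1e}   power law {p_ratio:.3f}")
# The power-law ratio is about 0.354 at every scale; the Gaussian ratio collapses toward zero.
```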

The world, epistemologically, is literally a different place to a bottom-up empiricist. We don’t have the luxury of sitting down to read the equation that governs the universe; we just observe data and make an assumption about what the real process might be, and “calibrate” by adjusting our equation in accordance with additional information. As events present themselves to us, we compare what we see to what we expected to see. It is usually a humbling process, particularly for someone aware of the narrative fallacy, to discover that history runs forward, not backward. As much as one thinks that businessmen have big egos, these people are often humbled by the differences between decisions and results, between precise models and reality. What I am talking about is opacity, incompleteness of information, the invisibility of the generator of the world. History does not reveal its mind to us -- we need to guess what’s inside of it.

The above idea links all the parts of this book.

You will see the sneaky manifestations of the narrative fallacy, the ludic fallacy, and the great errors of Platonicity, of going from representation to reality.

Calibration of that kind can work when the attributes of the data are known and stable, but not for historical data of unknown attributes and not for matters from Extremistan.

Thus Mandelbrot’s fractals allow us to account for a few Black Swans, but not all. I said earlier that some Black Swans arise because we ignore sources of randomness. Others arise when we overestimate the fractal exponent. A gray swan concerns modelable extreme events; a Black Swan is about unknown unknowns.

Mandelbrot deals with gray swans; I deal with the Black Swan. So Mandelbrot domesticated many of my Black Swans, but not all of them, not completely. But he shows us a glimmer of hope with his method, a way to start thinking about the problems of uncertainty. You are indeed much safer if you know where the wild animals are.

Chapter 17 - Bell Curves In The Wrong Places

Taleb laments the problems of domain specificity and how quickly Black Swan lessons are forgotten. He recalls how, following the stock market crash of 1987, an event that illustrated the inappropriateness of applying the bell curve to economic matters, “people accepted that rare events take place and are the main source of uncertainty. They were just unwilling to give up on the Gaussian as a central measurement tool.” Taleb also expounds on his disdain for the Nobel Prize in economics, which he derides on numerous occasions throughout the book. He notes how many winners of the prize have based their ideas on the Gaussian model and essentially blames the prize for the widespread application of the bell curve to business and economic matters. Taleb recounts the satisfaction he’s derived from needling so-called experts and devoted supporters of Gaussian economics.

We handle matters that belong to Extremistan but treat them as if they belonged to Mediocristan, as an “approximation.”

In the last fifty years the ten most extreme days in the financial markets represent half the returns. Ten days in fifty years.

Dozens of papers show the inadequacy of the Gaussian family of distributions and the scalable nature of markets.

Owing to the circumstances of 1987, people accepted that rare events take place and are the main source of uncertainty.

The Gaussian pervaded our business and scientific cultures, and terms such as sigma, variance, standard deviation, correlation, R square, the Sharpe ratio, all directly linked to it, pervaded the lingo.

The Gaussian bell curve disallows large deviations, but the tools of Extremistan, the alternative, do not disallow long quiet stretches.

Since their models ruled out the possibility of large deviations, they (the partners of the hedge fund Long-Term Capital Management, in Taleb's telling) allowed themselves to take a monstrous amount of risk.

Chapter 18 - The Uncertainty of The Phony

Taleb proposes that the most easily recognizable trait of a phony is that he attempts to take models of uncertainty that apply to one domain -- such as the behavior of subatomic particles -- and apply them to other domains, such as finance, that the models were never designed to describe. Taleb observes a similar problem of domain specificity in those who can ponder and find fault with theological matters but blindly accept standard models of uncertainty as presented to them by the “experts” (tunneling revisited). He argues that philosophers have an additional responsibility to question any and all accepted standards, including those which govern statistics and finance, because “these people are professionally employed in the business of questioning what we take for granted.”


😃😃😄😀😀

Part Four

---------------------------------------------------------------------------------------------------------------------------

Part Four is a brief discussion of how the knowledge of Black Swan events should influence our philosophy.

Chapter 19 - How To Get Even With The Black Swan

Taleb sums up his guiding principles regarding skepticism and uncertainty and offers readers some final words of advice. Namely, he advocates taking control of one’s life, saying, “You are exposed to the improbable only if you let it control you. You always control what you do; so make this your end.”

