Knowledge is (probably) toxic

We would do well to remember that knowledge is (probably) toxic.

In his book The Black Swan, N. N. Taleb plays the glib prophet, warning us about the dangers of information: not just too much information, and not simply wrong information, but the danger inherent in the very act of acquiring information.

Just as handling radioactive material without proper protective gear is dangerous, being around "knowledge" without safeguards in place is likely to leave us more vulnerable to the world, not less.

Consider Taleb's parable of the turkey: every single day of its life so far, the turkey has woken up surrounded by its turkey friends and been fed to its heart's content. On the basis of the information it has acquired up to this point in its life, the turkey might feel justified in believing that tomorrow will be exactly the same.

But what if tomorrow is Thanksgiving Day?

This is the long tail of the probability distribution. The unknown unknown.

This is the black swan that destroys our neat little theories.

Taleb's genealogy of knowledge

From my reading, Taleb's insight that knowledge is toxic relies on an underlying biological story about the development of the brain and its function. While he hints at this story in places, I believe that he leaves it underdeveloped, so I've decided to creatively flesh out that account here.

In order to survive, organisms need to acquire information about their environment, process that information, and adapt accordingly. Even very simple single-celled organisms can sense that they are being touched and recoil in response, taking in a constant stream of data through the numerous cilia covering their outer membranes.

This data requires a certain interpretive paradigm in order to be meaningful, which is to say a framework within which the organism can exercise judgment, valuing a phenomenon as good or bad. A piece of data must be deemed conducive or hostile to survival before the organism can react properly, and that deeming presupposes a system of valuation and judgment.

At the single-celled level, this paradigm remains instinctual and simple, though nonetheless necessary. But as both prey and predator became more complex, they needed more sophisticated apparatuses for acquiring and evaluating data about their environments.

Karl Friston has theorized the "free energy principle," in which an organism's central processing center constantly projects a model of the world that it uses to orient itself, and constantly adjusts that model based on information coming in through the senses. Like a cybernetic learning system updating itself through a dynamic stream of action and feedback, the organism endeavors to reduce the "free energy" in the system, which Friston identifies with the mismatch between the organism's projected model and the incoming data.
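To make this concrete, here is a deliberately minimal sketch of the idea, far cruder than Friston's actual formalism: an "organism" that keeps a one-number model of its world and continually nudges it toward what its senses report. Every name and number in it is invented for illustration.

```python
import random

# A toy model of prediction-error minimization, loosely inspired by
# (and far simpler than) the free energy principle. The organism keeps
# a one-number model of a hidden state and nudges it toward each new
# noisy observation. All values here are invented for illustration.

true_state = 20.0       # hidden state of the environment
model = 0.0             # the organism's current projection of the world
learning_rate = 0.1     # how aggressively the model chases the data

for step in range(200):
    observation = true_state + random.gauss(0, 1.0)   # noisy sense data
    prediction_error = observation - model            # model/world mismatch
    model += learning_rate * prediction_error         # update toward the data

print(f"final model: {model:.2f} (true state: {true_state})")
```

Run for enough steps, the model settles near the true state; the interesting cases, as we'll see, are the ones where the world refuses to hold still.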

Consider the example of a monkey suddenly becoming aware of a nearby rustling bush. The rustling might be a fellow monkey, and thus no threat at all. But if it isn't, the monkey needs to ascertain that fact lightning-fast in order to react accordingly, if it has any hope of survival. The more sophisticated the apparatus at the monkey's disposal, the better the models it can work with and the more information it can process to make that critical judgment.

Synthesizing information takes time, but it can't take too much time, or the monkey becomes tiger food. It's a delicate balancing act. If the monkey wrongly concludes that the rustling bush conceals a friendly monkey when it actually conceals a tiger, he's done for. If he deliberates too long in search of the most accurate conclusion, he's also done for. So the monkey's brain has to make the right call while using the least possible time and energy.

This is where probability enters the scene, and Taleb's insights start to flow. In order to make the right call in the least amount of time, the brain takes a shortcut.

By employing an interpretive paradigm that allows it to make the right choice more often than not, the brain can satisfy the necessary conditions for survival. It bakes some assumptions into its synthesis process, and while these assumptions aren't 100% accurate 100% of the time, they are accurate enough, often enough, that the net benefit is positive.
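A hypothetical toy simulation makes the trade-off vivid. Every probability and cost below is invented for illustration; the point is only the shape of the comparison: a crude always-flee heuristic versus slow, perfectly accurate deliberation.

```python
import random

# Invented numbers: a rustle is rarely a tiger, fleeing is cheap,
# being eaten is catastrophic, and deliberation takes long enough
# that a real tiger sometimes strikes first.
P_TIGER = 0.05                # chance the rustle is actually a tiger
FLEE_COST = 1                 # energy wasted fleeing a harmless rustle
DEATH_COST = 1000             # cost of being eaten
P_CAUGHT_DELIBERATING = 0.5   # chance a real tiger strikes mid-deliberation

def average_cost(use_heuristic, trials=100_000):
    total = 0
    for _ in range(trials):
        is_tiger = random.random() < P_TIGER
        if use_heuristic:
            # Crude baked-in assumption: treat every rustle as a tiger
            # and flee. Often wrong, never fatal.
            total -= FLEE_COST
        elif is_tiger:
            # Deliberation is perfectly accurate but slow; a real tiger
            # sometimes strikes before the conclusion is reached.
            total -= DEATH_COST if random.random() < P_CAUGHT_DELIBERATING else FLEE_COST
    return total / trials

print("always-flee heuristic:", average_cost(True))    # about -1
print("careful deliberation: ", average_cost(False))   # about -25
```

The heuristic is wrong 95% of the time and still wins, because its errors are cheap while its rival's errors are fatal. That is the logic of the shortcut.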

Notice, then, that what we call "knowledge" developed in order to achieve a certain goal within a certain set of parameters. The monkey's brain had to make the right call to survive, but it also had a number of other conditions to satisfy. It needed to make its judgment quickly; otherwise it would be eaten while pondering the optimal decision.

The monkey's brain has a certain processing power, and that processing power depends on the monkey gathering enough nutrients to support a brain of that capacity. Thus, the brain's software could only develop in tandem with its hardware (and the limitations of that hardware!).

This means that the development of knowledge was constrained by the need to make accurate judgment calls in life-or-death situations while also respecting the energy budget of the brain doing the processing.

Modern Maladaptation

Taleb wants us to see that the situations in which our ancestors first developed and employed knowledge were very simple, or rather, much, much simpler than the situations we modern humans inhabit on a daily basis. He points out that in these scenarios no single variable could emerge that would cause the utter breakdown of the organism's interpretive paradigm. These situations do not have "long probability tails."

To illustrate this difference, Taleb offers two examples. In the first, you are asked to guess the total weight of a crowd of 100 people. No single person in that group is likely to compose a significant enough percentage of the total to throw your guess wildly off; in fact, you could go into the problem already possessing a fairly accurate range of possible guesses. Even an extreme outlier like the world's heaviest man (roughly 1,400 lbs) would make up less than 10% of the total weight of a crowd of otherwise average adults.
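The arithmetic checks out. Assuming a round 180 lbs for an average adult (an assumption of mine, not a figure from Taleb):

```python
# Checking the crowd-weight arithmetic. The 180 lb average is an assumed
# round number; 1,400 lbs is the outlier figure from the text.

avg_weight = 180                      # assumed average adult weight, lbs
outlier = 1400                        # world's heaviest man, lbs
total = 99 * avg_weight + outlier     # 99 ordinary people plus the outlier

print(f"outlier's share of the total: {outlier / total:.1%}")  # ~7.3%
print(f"crowd average with outlier:   {total / 100:.0f} lbs")  # ~192 lbs
```

Even the most extreme human who has ever lived barely moves the average.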

Like the example above, the situations in which our evolutionary ancestors operated, and in which their knowledge apparatuses developed, were ones in which the most disastrous variable likely fell within a defined and predictable range of possibilities, and was never bad enough to truly confound their models. This meant that the monkey brain could employ crude probabilistic paradigms with a high degree of effectiveness.

But what if the organism finds itself in a situation where the most disastrous variable doesn't fall within a predictable range? Further, what if the effect of that disastrous variable is amplified by feedback loops or complex interlocking systems? The organism will find that its probabilistic paradigms are maladapted for such fields of interaction. It will find itself waltzing into minefields it thought were just fields.

Imagine the crowd of humans again, but this time we're measuring the crowd's collective net worth (again, I am employing Taleb's example). In this situation, it's entirely possible for a single individual to compose 99% of the crowd's total. If Bill Gates were standing in a crowd of 99 Chilean peasants, his share would approach 100% of the crowd's net worth.
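The same check, with illustrative figures I've assumed (roughly $100 billion for Gates, $10,000 for each other person):

```python
# The net-worth version of the arithmetic. Both figures are assumptions
# for illustration: ~$100B for Bill Gates, $10,000 per other person.

gates = 100_000_000_000
others = 99 * 10_000
total = gates + others

print(f"one person's share of the total: {gates / total:.5%}")   # ~99.999%
print(f"'average' net worth in the crowd: ${total / 100:,.0f}")  # ~$1 billion
```

The "average" person in this crowd is a billionaire, a statistic that describes exactly no one in it.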

In the first situation, a missing piece of information would not significantly alter how you act; in the second, a single missing piece of information changes everything.

Taleb wants us to see that our world is increasingly structured like the second example, Bill Gates standing among the Chilean peasants, and that our knowledge functions are maladapted for reasoning and making judgment calls in situations like these.

Taleb calls these paradigm-shattering variables "black swans," because a single black swan is all it takes to disprove an entire hypothesis.

For a long time, many people believed that all swans were white, because all the swans they had ever seen were white. But all it took was one documented sighting of a black swan to undo this long-held theory.

The asymmetry here is palpable. A paradigm requires a mountain of data to be convincing, but only a single data point to be entirely disproven.

Thus, no paradigm can be definitively proven, but a paradigm can certainly be definitively disproven.
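One classical way to put numbers on this asymmetry is Laplace's rule of succession, a standard illustration from probability theory rather than anything Taleb invokes here. After observing $n$ white swans and zero black ones, a cautious estimate of the probability that the next swan is also white is

$$P(\text{white} \mid n \text{ white observed}) = \frac{n+1}{n+2},$$

which creeps toward 1 as $n$ grows but never reaches it, while a single black swan makes the universal claim "all swans are white" false with certainty.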

In a complex and interconnected global society, we have put ourselves at the mercy of monstrously unpredictable and fragile systems that our mental models are inadequate to properly conceptualize. Think of the financial crash of 2008. Think of the Ever Given, the container ship that blocked the Suez Canal, and by extension much of the international transportation system, for six days in 2021. Examples abound of how the interconnected systems and feedback loops of our civilization can produce wild (and disastrous) outcomes.

We have become more susceptible to black swans than ever, but our brains are bad at accounting for them, and really good at discounting their probability.

We are maladapted to the world which we have created for ourselves.
