Safety Insights

Focusing on the human contributions to risk

Vic Riley

2. Some models are harmful

The British statistician George Box famously said, “All models are wrong but some are useful.” This overlooks the possibility that some models can actually be harmful.

In The Black Swan, Taleb refers to a Lehman Brothers employee who claimed, in 2007, that the developing housing crisis was a “one in ten thousand year event.” Now, which is more likely? That it really was a one-in-ten-thousand-year event? Or that the models that said it was were wrong?

Because it was those very same models that enabled the crisis in the first place. Those models, used to price derivative financial products built from bundles of subprime loans, assumed that the loans were independently exposed to failure, so that the expected failure distribution would follow a normal bell curve. But in fact the underlying loans were highly coupled, because they were almost all exposed to the same set of risks:

  • A large set of borrowers who had been encouraged to take on the most debt they could qualify for;
  • Mortgage issuers with lax borrower qualification standards;
  • Rating agencies that rated the bundled assets as prime when the underlying loans were subprime;
  • A rising interest rate environment that caused many loans to fail all at once.
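The independence assumption is the crux of the failure. A toy Monte Carlo sketch makes the point; the numbers here (bundle size, default probabilities, the shared "shock" scenario) are invented for illustration and have nothing to do with the actual pricing models. Under independence, a bundle of loans almost never loses twice its expected number; add one shared risk factor and extreme losses become routine:

```python
import random

random.seed(1)

N_LOANS = 1_000   # loans in the (hypothetical) bundle
P_DEFAULT = 0.05  # baseline per-loan default probability
TRIALS = 2_000    # simulated bundles

def independent_defaults():
    # Each loan fails on its own independent draw, as the pricing
    # models assumed; losses cluster tightly around the mean.
    return sum(random.random() < P_DEFAULT for _ in range(N_LOANS))

def coupled_defaults(shock_prob=0.05, shock_p=0.5):
    # All loans share one common risk factor (say, rising rates):
    # in a "shock" scenario, every loan's default probability jumps.
    p = shock_p if random.random() < shock_prob else P_DEFAULT
    return sum(random.random() < p for _ in range(N_LOANS))

def tail_frequency(simulate, threshold=100):
    # How often do defaults exceed twice the expected number (50)?
    return sum(simulate() > threshold for _ in range(TRIALS)) / TRIALS

print("independent:", tail_frequency(independent_defaults))
print("coupled:    ", tail_frequency(coupled_defaults))
```

With independent loans, exceeding 100 defaults is a seven-sigma event and essentially never occurs; with the shared shock, it happens in roughly one trial in twenty, exactly as often as the common risk factor fires. Same loans, same average default rate, radically different tails.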

Finance companies had such faith in these models, which overlooked the dependencies, that some of them leveraged their own assets thirty or forty times over in order to amplify their profits. And when the loan failures started cascading, that leverage amplified their losses instead, to the extent that several large firms, including Lehman Brothers, failed as well.

Why start with this topic? Because a model is a set of beliefs, assumptions, and expectations about the world: in other words, a theory about it. Every organization that designs, builds, approves, operates, or administers products necessarily has expectations for, and assumptions about, the end users of those products. And when such an organization tries to assess the risk revealed by a serious incident, it will likely do so using those existing expectations and assumptions.

Expressing these expectations in formal models can give organizations a false sense of security that they fully understand the issues and have contained the risks. This can cause them to focus on managing the risks they recognize and stop considering whether they’ve really even recognized them all. And this is why so many accidents are surprising: because they reveal risks that previously weren’t recognized.

For example, you might think that a casino would be the most competent organization to manage risk, since that’s essentially their business model. So what are the existential risks a casino might face? A large lucky bet by a high roller? A highly improbable run of outcomes against the house?

Taleb points to some of the truly existential risks casinos have experienced but hadn’t anticipated:

  • The tiger attack on performer Roy Horn, which was estimated to cost the hosting casino on the order of a hundred million dollars;
  • A casino owner’s daughter who was kidnapped and held for ransom;
  • A disgruntled contractor who tried to blow up the casino he had worked at;
  • An administrator who was supposed to file tax forms with the IRS but instead filed them in his desk, exposing the casino to the loss of its gambling license and large fines.

I suspect that many accidents in transportation, process control, medicine, and other domains that were ultimately attributed to human error were similar surprises to their industries when they happened, because the prevailing expectations didn't account for those "errors", and those expectations rest on what each industry thinks its risks are and how it models them. Once such surprises happen, the next important decision is whether to dismiss them as anomalies and persist in one's prior beliefs, or to learn from them. In other words, which do we believe more: the model, or the reality?
