5. Probabilities and possibilities
This is why it’s usually impractical to predict human behaviors in complex environments such as aviation probabilistically. Not only is there an unbounded number of variables that might influence behavior, but which of those variables will weigh more heavily than others is also unknown. While you can take a piece of equipment into a lab and subject it to stressors until it fails, you usually can’t do that with people. And while the range of variables that can affect equipment is usually small (temperature, vibration, cycles or hours of use), the range of variables that can affect human behavior is far too large to test with any acceptable statistical power. Trying to put a number on the probability of a human behavior invites the overconfidence in a model discussed earlier.
Furthermore, trying to predict human behaviors probabilistically can tempt one into favoring expectations over evidence (or models over reality). If someone does something unexpected in an incident or accident, it’s tempting to dismiss it as an anomaly and go on believing the prior expectations. In these cases, one could examine the behavior and try to explain it in order to determine whether it actually presents a systemic risk, or one could dismiss it until enough similar events happen to persuade a skeptic that it’s real. But since a statistical analysis requires more than one data point, and every accident (in aviation, at least) is unique, it’s illogical to try to extrapolate a trend or make a probabilistic prediction from a single event.
Taleb illustrates this tension between theory and evidence in his contrast between economists (who favor theory) and traders (who favor evidence). With no real skin in the game, it’s easy for economists to carry theoretical expectations past the point where they stop being valid, while traders are penalized if they ignore immediate evidence. (This may be why it took economics so long to evolve from assuming everyone was a rational, optimizing economic actor to recognizing the tenets of behavioral economics.)
As an example of this, he asks us to imagine a fair coin being tossed 99 times with the result being “heads” on every toss. What’s the probability of getting “tails” on the 100th toss? A frequentist would say it’s 50%, because the coin is fair and every toss is independent, and anyone who thinks otherwise is subject to the “gambler’s fallacy”. But now we have a conflict between the premise (the coin is fair) and the observation of a highly improbable result. Which do we believe? Our prior expectation, or the evidence?
The evidence shows that the coin almost certainly isn’t fair, and the real probability of “tails” on the next toss is essentially zero. (A Bayesian calculation bears this out, as sketched below.) The statement that the coin was fair sets up an assumption that the evidence is contradicting, and revising one’s conclusion requires recognizing and questioning one’s assumptions.
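Here is a minimal sketch of that Bayesian calculation in Python. The one-in-a-million prior on the coin being double-headed is an illustrative assumption of mine, not a figure from Taleb; the point is that the exact prior barely matters against evidence this strong.

```python
# Two hypotheses about the coin: it's fair, or it's double-headed.
# The 1e-6 prior on "double-headed" is an illustrative assumption;
# 99 straight heads overwhelms any remotely reasonable prior.
prior_fair = 1 - 1e-6
prior_biased = 1e-6

# Likelihood of observing 99 consecutive heads under each hypothesis.
lik_fair = 0.5 ** 99    # ~1.6e-30
lik_biased = 1.0        # a double-headed coin always lands heads

# Bayes' theorem: posterior is proportional to prior times likelihood.
evidence = prior_fair * lik_fair + prior_biased * lik_biased
post_fair = prior_fair * lik_fair / evidence

# Probability of tails on the 100th toss, averaged over the hypotheses.
p_tails = post_fair * 0.5 + (1 - post_fair) * 0.0

print(f"P(coin is fair | 99 heads) = {post_fair:.2e}")  # ~1.6e-24
print(f"P(tails on toss 100)       = {p_tails:.2e}")    # effectively zero
```

Even someone who starts out all but certain the coin is fair ends up, after 99 heads, giving “fair” odds of roughly one in 10²⁴. The evidence doesn’t just outweigh the prior assumption; it buries it.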
Similarly, we may expect that, for example, all airline pilots are fully trained and proficient, so we’re surprised when an accident occurs that challenges this assumption. Do we continue with our expectations and treat the accident as an anomaly? Or do we question the assumption? And if we intend to question the assumption, how do we recognize when to do so without waiting for overwhelming evidence against it?
To me, this has deep implications for how we should treat evidence from incidents and accidents. Even if there’s only one unexpected event, it probably has something to tell us. It shows us a new possibility, or demonstrates that something we may have thought highly improbable is actually plausible enough to be taken seriously.
If you follow aviation, you’ve probably been surprised by the pilot actions that led to some high-profile accidents. You might never have expected a pilot to raise the nose in response to the stick shaker, as the Colgan Air 3407 pilot did in 2009. You might never have expected an experienced transport pilot to crash a 777 during a visual approach into a large airport on a clear day, as the pilot of Asiana flight 214 did in 2013. When you’re surprised by such an event, it’s because that event violates your assumptions.