10. Predictions
When we analyze a condition to decide whether it presents a safety risk, we’re essentially making a prediction about its potential future outcomes. The same is true when a regulator approves a product in the belief that it has been satisfactorily proven safe. The people making these decisions are subject to many of the same influences that affect how people operate systems: expectation bias, confirmation bias, channeled attention, erroneous assumptions or beliefs, and so on. So I think it’s useful to consider how people in the organizations that design, build, approve, and operate complex systems decide whether those systems should be considered safe.
Two of the biggest influences are the personalities and internal politics of those organizations, and here the work of Philip Tetlock is relevant. Tetlock is a psychologist who, several decades ago, became interested in how good the predictions made by various pundits on TV actually were. After all, the role of an expert pundit is typically to predict how the news of the day will drive the news of tomorrow. To study this, Tetlock ran a decades-long experiment with over a thousand people, measuring both how well they made predictions, within and outside their professed areas of expertise, and what personality traits they exhibited.
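For concreteness, prediction quality in studies like this is typically scored with something like the Brier score: the mean squared difference between the probability a forecaster assigned to an event and whether the event actually occurred. The sketch below is illustrative only; the forecasters and numbers are invented, not Tetlock’s data.

```python
# A minimal sketch of Brier scoring, one standard way to measure the
# accuracy of probabilistic predictions. All names and numbers here
# are invented for illustration.

def brier_score(forecasts: list[tuple[float, bool]]) -> float:
    """Mean squared error between predicted probability and outcome.

    Each forecast is (probability assigned to the event, whether the
    event happened). 0.0 is a perfect score; always guessing 50%
    yields 0.25; 1.0 is maximally wrong.
    """
    return sum((p - float(happened)) ** 2
               for p, happened in forecasts) / len(forecasts)

# Both forecasters call the direction correctly on 2 of 4 events,
# but the confident "hedgehog" is penalized far more heavily for
# its misses than the hedged "fox".
hedgehog = [(0.95, True), (0.95, False), (0.95, False), (0.95, True)]
fox      = [(0.60, True), (0.60, False), (0.60, False), (0.60, True)]

print(f"hedgehog: {brier_score(hedgehog):.2f}")  # ~0.45, worse than chance
print(f"fox:      {brier_score(fox):.2f}")       # ~0.26
```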
To summarize his findings at a very high level, he distinguishes between two primary personality types: foxes and hedgehogs, following the ancient Greek saying that “the fox knows many things, but the hedgehog knows one big thing.” Foxes tend to seek out information from more diverse sources and are more self-critical, and therefore less self-confident in their predictions. Hedgehogs consider themselves experts; they are less self-critical and more self-confident in their predictions. Where foxes may waffle, hedgehogs are decisive. The “one big thing” the hedgehog knows is a global theory through which they interpret events and make predictions. Examples in the world of punditry include neoconservatism and trickle-down economics.
The elevator speech version of Tetlock’s most interesting finding is this: in areas of their expertise, the more a fox knew about their subject, the better their predictions. But the more a hedgehog knew about their subject, the worse their predictions. Since hedgehogs tend to be more confident, one implication is that the more confident you are about your predictions, the more likely they are to be wrong.
The reason for this is that where foxes favor evidence, hedgehogs favor theory. Their commitment to a strong, global theory of the case shapes their predictions to the exclusion of countervailing evidence. The more they rely on that worldview, and the more they interpret new evidence through its lens, the worse their predictions become.
So who gets on TV? The fox, with their caution and caveats, or the firm, decisive, compelling, self-confident hedgehog? Of course it’s the latter, which is why pundit predictions so consistently turn out to be wrong.
Tetlock doesn’t extend this thought to organizational dynamics, but I think it applies. Depending on organizational culture, the self-confident hedgehogs may advance into leadership positions while the more cautious foxes get left in the trenches. I think this may partly account for the common disconnect between the lower and upper layers of large organizations: why the lower layers often resent the upper layers, and the upper layers often discount the lower layers.
I suggest that it’s very useful for anyone involved in safety in a large organization to be aware of this dynamic and be able to identify when a strong personality may be driving a decision based more on their own beliefs and expectations than on all the available evidence. I think a hedgehog is more likely to dismiss a surprising, unexpected action in an incident or accident as an anomaly because their global theory doesn’t allow them to accept that such an action should have been possible, let alone that it might happen again.