The hot hand in airline pilots
I read the following quote in an article about the near-crash of a 747 in San Francisco:
"In the past months, we have had several operational incidents," airline jargon for close calls, W.J. Carter, chief of United's Honolulu-based pilots, wrote in a Feb. 23 internal memo to his flight crews. "Major accidents historically are preceded by a series of these seemingly unrelated incidents. This disturbing trend is cause for concern"I was immediately skeptical, because patterns like this are often not real. The "hot hand" effect - often seen in sports, especially basketball - is a kind of momentum effect, where a player who has scored lots of baskets in the last few minutes is thought to be more likely to score again. It intuitively makes sense that someone could be "on a streak" where they are playing at the peak of their ability - those times when every shot you attempt seems to go in. Equally there could be times when you just keep missing and missing.
The only problem is that - according to Gilovich, Vallone & Tversky, who studied the effect - it doesn't exist. They found that, statistically, a player who has just made a series of successful shots is no more likely than usual to make the next one.
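To make that finding concrete, here's a minimal simulation sketch in Python (a toy example of mine, not GVT's data or method; the 50% hit rate and streak length of three are arbitrary choices): for a shooter whose shots are genuinely independent, the hit rate immediately after a run of hits is no different from the overall hit rate.

```python
import random

# Toy illustration (not GVT's data or methodology): simulate a shooter whose
# shots are independent, each with a fixed 50% chance of going in, then compare
# the overall hit rate with the hit rate right after three hits in a row.
random.seed(1)

P_HIT = 0.5
N_SHOTS = 200_000
shots = [random.random() < P_HIT for _ in range(N_SHOTS)]

overall = sum(shots) / len(shots)

after_streak = [shots[i] for i in range(3, N_SHOTS)
                if shots[i - 3] and shots[i - 2] and shots[i - 1]]
conditional = sum(after_streak) / len(after_streak)

print(f"overall hit rate:          {overall:.3f}")
print(f"hit rate after 3 straight: {conditional:.3f}")
# Both numbers come out near 0.5: in independent data, a streak tells you
# nothing about the next shot, even though the streaks themselves look striking.
```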
Lots of people have researched this area since, and though the results aren't completely conclusive, most psychologists agree that the effect is in the mind of the viewer. We seek to understand the world, and this leads us to see patterns in random events, whether in basketball, coin tosses or roulette spins. It may seem natural that a hot hand effect could exist, but professional sports players appear to operate consistently near the limit of their ability, and any fluctuations from this are random. Most players, close as they are to their peak, are competing against other players also at their peak, so small and unconnected changes in conditions can lead to apparent randomness in outcomes.
Back to the airlines: my assumption was that the quoted pilot was seeing patterns where none really existed. When there is a crash, it's easy to look back and see a series of "seemingly unrelated" incidents in recent times - and draw the conclusion that they are related after all.
But in fact it might not be an illusion. The illusion hypothesis is most likely when the events are genuinely random - that is, where there is no real underlying cause which could link them together. This might not be true in the case of pilot error.
The linked article suggests what that cause might be. Pilot training procedures are likely to be correlated across a company. If there is a weakness in these procedures (such as a lack of experience in 747 takeoffs, or training that is not frequent enough), it could cause multiple incidents which have no obvious connection to one another.
If the weakness is specific to 747s, then one would expect the incidents to cluster around 747 operations. But if the cause is a more general training failure - for instance, pilots being trained only once a year, with their skills declining over the year - then the pattern would be different, perhaps with incidents showing up near the end of an individual pilot's yearly cycle.
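As an illustration only - the incident timings below are invented, and a real analysis would need the airline's actual records - here is a sketch of how one could check for that second pattern: if skills don't decay between training sessions, incidents should fall roughly uniformly across the training cycle, so we can ask how surprising the observed lateness would be under that assumption.

```python
import random

# Hypothetical sketch: suppose we knew, for each incident, how many days had
# passed since the pilot's last recurrent training. These numbers are invented.
days_since_training = [300, 340, 275, 355, 190, 320, 310, 290]
CYCLE = 365  # assume a yearly training cycle

# If skills don't decay over the cycle, incidents should be spread roughly
# uniformly across it, so the mean elapsed fraction should sit near 0.5.
observed = sum(d / CYCLE for d in days_since_training) / len(days_since_training)

# Monte Carlo test: how often would a uniform spread of incidents produce a
# mean this late or later, purely by chance?
random.seed(1)
N_SIM = 100_000
n = len(days_since_training)
as_late = sum(
    1 for _ in range(N_SIM)
    if sum(random.random() for _ in range(n)) / n >= observed
)

print(f"mean fraction of cycle elapsed at incident: {observed:.2f}")
print(f"Monte Carlo p-value for late-in-cycle clustering: {as_late / N_SIM:.4f}")
```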
This means there are no instant answers - when we think we see a phenomenon in the world it might be a result of a cognitive bias, or it might be real. Cognitive biases can give us one explanation, but if the effect is important, it's worth checking against real factors too. Statistics is a powerful tool to distinguish between these two possibilities: if we have enough data, and we find correlations in it that are statistically convincing, these can often show us where to look.
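As one example of what "statistically convincing" can mean in practice, here is a sketch of a permutation test on invented incident counts (the grouping and the numbers are hypothetical): shuffle the group labels many times and ask how often chance alone produces a gap as large as the one observed.

```python
import random

# Sketch of a permutation test, the kind of check that can separate a
# "statistically convincing" correlation from noise. The data are invented:
# incident counts for pilots trained within the last 6 months vs. longer ago.
recent = [0, 1, 0, 0, 1, 0, 0]
stale  = [2, 1, 3, 0, 2, 1, 2]

def mean(xs):
    return sum(xs) / len(xs)

observed_gap = mean(stale) - mean(recent)

random.seed(1)
pooled = recent + stale
N_SIM = 100_000
extreme = 0
for _ in range(N_SIM):
    random.shuffle(pooled)
    # Re-split the shuffled pool into two groups of the original sizes.
    gap = mean(pooled[len(recent):]) - mean(pooled[:len(recent)])
    if gap >= observed_gap:
        extreme += 1

print(f"observed difference in mean incidents: {observed_gap:.2f}")
print(f"permutation p-value: {extreme / N_SIM:.4f}")
# A small p-value says the gap is unlikely to be a shuffling accident - it
# points at where to look, though it doesn't by itself prove the cause.
```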
That said, a statistical correlation doesn't guarantee that mental factors are not involved. If the correlations seem tied to the context or manner in which we make the observations, then a cognitive bias remains plausible. If we can look at the data in a different way and the effect still stands, we may be able to rule out observation, interpretation and other cognitive factors as part of the correlation. At that point it's more likely that something real is going on underneath.
Comments
I enjoyed this post for exposing typical muddled human thinking (including my own).