Posted: January 4, 2015
Article Summary: When you do something that might have caused an accident but didn’t, there are two ways to interpret the near miss: as a warning that you should be more careful, or as evidence that the behavior in question is actually pretty safe. Safety professionals tend toward the first interpretation; everybody else favors the second. This column discusses several factors that affect near miss perception: hindsight bias, the gambler’s fallacy, learned overconfidence, “resilient near misses” versus “vulnerable near misses,” and vividness. It hypothesizes that a crucial distinction is whether you know the behavior in question often causes an accident (so you see the near miss as a warning) or you don’t know how dangerous the behavior is (so you see the near miss as evidence the risk is low). The column ends with advice for safety communicators trying to use near misses as warnings.

Warning, or False Alarm:
Why Safety Professionals See Near Misses
Differently than Everybody Else

This is the 30th in a series of risk communication columns I have been asked to write for The Synergist, the journal of the American Industrial Hygiene Association. The columns appear both in the journal and on this website. This one can be found (significantly condensed, and with minor copyediting changes) in the January 2015 issue of The Synergist, pp. 18–19.

Every industrial hygienist knows that near misses teach important safety lessons. An organization that harvests those lessons is likelier to be able to avoid actual accidents. A near miss is a warning that deserves to be heeded.

But not everybody sees it that way. In fact, safety professionals tend to see near misses differently than everybody else. In this column I want to explore the differences, and their implications for safety communication.

As I write this in mid-October 2014, Americans are still getting used to the new and scary risk of Ebola. In recent days, Ebola fears have led to a number of airline passengers being yanked off planes because they exhibited flu-like symptoms and had some connection, however remote, to Africa. So far they’ve all tested negative for Ebola. If that remains true, the number of such disruptions will soon decline precipitously.

So are these “near misses” that we should continue to take seriously, “casting a wide net” to reduce the odds of missing an actual Ebola case onboard? Or are they “false alarms” that we should learn to stop worrying about? Most experts, officials, and journalists say they’re false alarms. But that answer will change in hindsight if a traveler from West Africa ever infects some fellow passengers with Ebola.

Hindsight bias and the gambler’s fallacy

Hindsight is part of the problem – what psychologists call “hindsight bias.” After every serious accident, experts, officials, and journalists look for precursors – near misses whose lessons weren’t learned. They almost always find some.

But other near misses that weren’t followed by accidents don’t get catalogued. Precursor events that didn’t look bad beforehand look bad in hindsight. Did the tank that blew up yesterday have more near misses in prior years than the tanks that didn’t? If not, were those near misses really meaningful warnings that this particular tank might blow up? Did tanks in general have more near misses than other aspects of your operation? If not, why should you have known to focus on tanks?

After something goes wrong, most people, including safety professionals, interpret precursor near misses as warnings we should have heeded – sometimes wisely, and sometimes unwisely, under the undue influence of hindsight bias. But if nothing has gone wrong, the rest of us may very well see these near misses (precursors-to-be) as false alarms, while safety professionals tend to see them as warnings.

Figuring out whether near misses are actually warnings or false alarms is surprisingly difficult, both before an accident and after one.

Hindsight bias isn’t the only fallacy affecting how people interpret near misses. Another is the so-called “gambler’s fallacy.” There are actually two gambler’s fallacies, which distort judgment in opposite directions.

One – the one to which safety professionals are prone – says a series of near misses means you’re on borrowed time, “overdue” for an accident. At the roulette wheel, this takes the form of “Number 27 hasn’t come up all night. It’s due!”

The opposite gambler’s fallacy says a series of near misses means you’re bound to keep having near misses, rather than accidents. At the roulette table: “Number 27 hasn’t come up all night, so it’s not worth betting on. Number 19 is hot!”

Learned overconfidence

Absolutely nobody in your current workforce has ever been the victim of a fatal accident. That may sound like a foolish point to make, but it has real psychological implications. Everybody’s life experience, until the day we die, tells us that we are immortal.

Similarly, if we’ve never had a serious accident – and most people haven’t – then our experience with safety precautions tells us that the precautions are unnecessary. We’re snapping our fingers to keep away the elephants, but there aren’t any elephants.

Until something bad happens, in short, near misses look to most people like false alarms. Safety professionals are nearly alone in routinely seeing near misses as warnings even beforehand, not just afterwards.

Take hospital infection control, for example. Dallas has so far discovered two nurses who were infected with Ebola while treating an Ebola sufferer at their hospital. Questions are swirling. Did the nurses breach PPE protocols? Were the protocols insufficiently protective in the first place? Is it realistic to expect healthcare workers to be 100% meticulous in following such protocols?

One relevant fact: Every nurse has considerable experience with breaches of infection control protocols that didn’t end in infection. And all too often the lesson learned isn’t that “We need to be more meticulous.” It is that “Infection control is pretty forgiving. Even when we mess up, it doesn’t usually do any harm.” Then along comes a much less forgiving pathogen, Ebola, and learned overconfidence becomes life-threatening.

Near miss perception

The research on near miss perception is unequivocal: People are much likelier to find near misses reassuring than alarming.

The leading researchers in this area – Robin Dillon-Merrill, Catherine H. Tinsley, and Matthew A. Cronin – distinguish “resilient near misses” (you did a good job of preventing the accident) from “vulnerable near misses” (dumb luck prevented the accident). These aren’t so much kinds of near misses as they are ways of seeing the near miss: Does your information about it stress resilience or vulnerability?

Resilient near misses, their research shows, reduce people’s precaution-taking substantially. Vulnerable near misses don’t. But they don’t help either. People told about a vulnerable near miss are about as likely to take precautions as people with no near-miss information at all.

If you’re trying to use a near miss as a lesson to convince people to take more precautions, the odds are against you. At the very least, you need to stress the vulnerability aspect of the near miss, and downplay its resilience aspect.

Vivid near misses

Of course a really vivid, really scary near miss is a different story. One of the first crises I ever worked on was the 1979 Three Mile Island (TMI) nuclear power plant accident. Or should I say the 1979 Three Mile Island nuclear power near miss? Or even the 1979 Three Mile Island false alarm?

Because it was so vivid and so scary, the public saw TMI as a serious accident. It wasn’t just a warning that nuclear technology could go badly wrong; it was an example of nuclear technology actually going badly wrong.

For the nuclear industry, on the other hand, TMI was a false alarm. Despite lots of errors and lots of damage to the plant itself, very little radiation escaped and nobody died. The “lesson” to be learned, industry spokespeople say, is how safe nuclear power really is.

I suspect most safety professionals would see TMI as a near miss, a warning to be harvested for lessons. If those lessons had been learned better, might we have prevented Fukushima? That’s a question for another day.

Near miss statistics

Why do safety professionals see near misses differently than everybody else? I think I know the answer, though I haven’t found a study that addresses my hypothesis.

Consider two scenarios.

In Scenario One, you know (or at least you have an intuitive rough idea) what percentage of the time a particular dangerous behavior ends in a near miss and what percentage of the time it ends in an accident. The ratio of near misses to accidents resulting from that behavior is a constant you have in mind. And that ratio is a pretty small number.

It follows that an increase in the number of near misses is a warning: the more near misses, the higher the probability of an accident. “We almost exploded the tank!”

It also follows that reducing the number of near misses meaningfully reduces the probability of an accident (since the ratio between the two is a known, fairly low constant). So it’s worth the effort to strive to prevent near misses, even though near misses don’t do any actual harm.

That’s the way safety professionals think. They know the behavior is dangerous, so the near miss is a powerful reminder of the danger.
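To make the Scenario One arithmetic concrete, here is a minimal sketch in Python. The one-accident-per-50-near-misses ratio is purely hypothetical, not a figure from any study; the point is only that once the ratio is treated as a known constant, near-miss counts translate directly into expected accidents, so cutting near misses cuts accidents.

    # Scenario One sketch: the near-miss-to-accident ratio for this behavior
    # is treated as a known, fairly low constant. (The ratio is hypothetical.)
    NEAR_MISSES_PER_ACCIDENT = 50

    def expected_accidents(near_misses_per_year: float) -> float:
        """With a known ratio, near-miss counts translate directly
        into expected accidents per year."""
        return near_misses_per_year / NEAR_MISSES_PER_ACCIDENT

    print(expected_accidents(200))  # 200 near misses a year -> 4.0 expected accidents
    print(expected_accidents(100))  # halve the near misses -> 2.0 expected accidents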

In Scenario Two, on the other hand, you have little or no prior knowledge about how often the behavior leads to a near miss and how often it leads to a real accident. The ratio is unknown. Every near miss you experience without an accident is data about that ratio.

If you’re trying to figure out how high the accident risk is, it’s natural and rational to see each near miss as another piece of evidence that the behavior in question rarely leads to an accident. “We’ve ‘almost exploded the tank’ hundreds of times and yet the tank has never exploded. Defense-in-depth must be working. Those ‘near misses’ aren’t so near after all!”

And if that’s true, then reducing the number of near misses by modifying the behavior is a low-priority task.

That’s the way most people think. They don’t know whether the behavior is dangerous, so the near miss is evidence that it’s not.
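And here is the companion sketch of Scenario Two, again in Python. The vague starting prior and the counts are my own illustrative assumptions, not anything from the near-miss research; the point is that when the ratio is unknown, every harmless near miss rationally drags the estimated accident risk downward.

    # Scenario Two sketch: the near-miss-to-accident ratio is unknown, so each
    # harmless near miss is evidence about it. A simple Beta-Binomial update,
    # starting from a vague Beta(1, 1) prior on P(accident per near miss),
    # shows the estimate falling as harmless near misses accumulate.
    alpha, beta = 1.0, 1.0  # vague prior: accident and no-accident equally plausible

    for harmless_near_misses in (0, 10, 100, 1000):
        # n near misses with zero accidents updates Beta(alpha, beta) to
        # Beta(alpha, beta + n); the posterior mean is the estimated risk.
        estimate = alpha / (alpha + beta + harmless_near_misses)
        print(f"{harmless_near_misses:5d} harmless near misses -> "
              f"estimated P(accident per near miss) = {estimate:.4f}")

After a few hundred uneventful near misses, the estimated risk is under one percent. To someone starting from scratch, the behavior really does look safe, which is exactly the lesson the workforce takes away.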

Learning the wrong lesson

When a safety professional tells employees about a near miss, the professional is probably talking about Scenario One. But the workforce may be hearing Scenario Two.

What is intended by the communicator as a warning can easily be experienced as reassuring by the audience.

The safety expert knows the ratio of near misses to accidents, and thus rationally sees each near miss as a symptom of a systemic problem that may well cause an accident if it isn’t corrected.

The workforce doesn’t know that ratio, and thus just as rationally sees each near miss as evidence that near misses happen a lot but rarely cause an accident.

What’s a warning to the safety professional may feel like a false alarm to the workforce.

At home too

The same is true outside the workplace, of course. Suppose I text on my cell phone while I’m driving and briefly lose concentration – but I don’t crash the car; I don’t even swerve into the adjacent lane; no other car swerves into mine while I’m not looking. What do I learn?

If I already know that texting-and-driving is dangerous, the near miss – just the lapse in attention – reminds me that I should leave my cell phone in my pocket while I’m driving. But if I don’t know that already, the near miss is quite likely to “teach” me that texting-and-driving isn’t as dangerous as they say. (Of course my reaction depends on other things too, such as whether I tend to go into denial when I’m frightened.)

And what if I do swerve, but I regain attention just in time and pull back into my lane, and nothing bad happens? Maybe I’ll see the experience as a vulnerable near miss – especially if I know how dangerous texting-and-driving is, and doubly so if I knew someone who died in a texting-and-driving accident. But there’s a good chance I’ll see it as a resilient near miss instead. “Wow,” I conclude. “I’m a really good driver!”

The bottom line

If you want people to see near misses as warnings, you need to do at least two things:

1. Convince them first that the ratio of near misses to accidents resulting from the behavior in question is a known, low constant (if it’s true). In other words, prove that the behavior in question results in actual accidents often enough to justify seeing the near miss as a warning.

2. Emphasize the ways the near miss demonstrates how vulnerable your audience is – how close we came, how lucky we were. And deemphasize the ways it demonstrates resilience – how well we coped, how skillful we were.

I’m not trying to take anything away from the analysis of near misses as a crucial tool of safety management (though I do think we need to pay more attention to the problems of hindsight bias and the gambler’s fallacy). Learning from near misses is a lot less costly than learning from disasters.

But as a tool of safety communication, near misses are all too likely to backfire.

Copyright © 2015 by Peter M. Sandman
