Posted: August 15, 2005
Article Summary: This short column discusses some words that have different meanings to risk professionals than to normal people – and that are therefore likely to be misunderstood when used to explain risk to a nonprofessional audience. Among the words: conservative, significant/insignificant, positive/negative, bias, anecdotal, safe, prepared, confident, and even the word “risk” itself.

Risk Words You Can’t Use

This is the eighth in a series of risk communication columns I have been asked to write for The Synergist, the journal of the American Industrial Hygiene Association. The columns appear both in the journal and on this web site. This one can be found (more or less identical except for copyediting details) in the August 2005 issue of The Synergist, pp. 58–59.

Communication is about shared meaning. Successful communication requires that the words you choose mean the same thing to you as they mean to the people you’re talking to. Here are some risk words that are likely not to communicate.


Conservative

Most risk professionals know that a “conservative” risk estimate or risk management strategy is one that errs on the side of caution. Conservativeness is a close cousin of the Precautionary Principle; we intentionally overestimate uncertain risks in order to be confident we are not underestimating them.

But to laypeople a “conservative” estimate is a low estimate. (Leave aside those who think it means a politically right-wing estimate.) When the media talk about a conservative estimate of how much damage an accident has caused, for example, they mean the accident probably caused more damage than the estimate suggests. So to the public a conservative estimate of the size of a risk is an estimate that probably understates the risk – exactly the opposite of what the word means to professionals.

Don’t bother trying to explain this; just don’t use the word. (The same goes for all the words that follow.)


Significant, Insignificant

To statisticians, a significant finding is unlikely to have occurred by chance; it means something. To public health experts, a significant outbreak affects lots of people; it is widespread (or likely to spread) as well as serious. Neither of these meanings sits well with ordinary (that is, normal) people, for whom personal, emotional significance is the significance that matters.

The problem usually arises in the negative. Your analysis of the correlation between exposure to an industrial solvent and incidence of migraine shows an insignificant relationship – that is, no evidence that the solvent gives people headaches. If you use the word “insignificant,” your audience is bound to think you’re minimizing their pain. People don’t want to hear that their headache (or, worse yet, their mother’s cancer) is insignificant. Most statistical abstractions, in fact, are off-putting when talking about illness and death. People don’t want to hear that their mother’s cancer is “an outlier in a data array” either.

Similarly, a disease that kills only a few people and isn’t infectious may not be very significant from a public health standpoint, but it is terribly significant to those grieving for the dead.


Positive, Negative

In real life, something is positive if it’s good news. But in statistics, a positive relationship means that when one variable goes up, so does the other. So what’s a positive conclusion about whether dimethylmeatloaf causes pancreatic cancer? An emotionally positive finding that it doesn’t, or a statistically positive finding that it does? And what’s a negative outcome in your study of cardiac disease impacts of working in the packing room? The bad news that there’s an effect, or the good news that there isn’t?

Bias, Anecdotal

All “bias” means in statistics is non-randomness; a biased study is one that can’t be relied on because the sample might not be representative. But to normal people “bias” implies cheating. So when plant neighbors collect information about family members’ diseases, don’t say their study is biased.

“Anecdotal” has a slightly different problem. Again, all you mean is that the neighbors’ study is nonrandom; it sounds like you think it’s chock-full of amusing little stories about those sick family members.


Risk

You can’t avoid using “risk” – but you do need to be careful. What risk assessors mean by risk is the magnitude of a bad outcome times its frequency. The rest of the world focuses on the frequency half of the definition; when people ask how big a risk is, they usually mean how likely, not how bad. People also use risk to mean uncertainty; a course of action may be called “risky” either because the probability of a bad outcome is high or because the probability is unknown.

Not to mention that what most people really mean by risk is what I call outrage – is it unfamiliar, memorable, involuntary, controlled by others, morally wrong, imposed by people you can’t trust, etc.


Safe

Saying something is “safe” means that it is risk-free. But nothing is risk-free, so nothing should ever be called safe. Safer than something else? Sure. Or safer than a standard. Or safer than it used to be. Or even “pretty safe” (though not “acceptably safe” – what’s acceptable to me isn’t up to you). But not “safe.”

Your goal in discussing risk and safety should always be to replace the question “Is it safe?” with the much better question “How safe is it?” What if people insist on asking “Is it safe?” anyway? Say no. Then pause. They’ll do a double-take, think for a minute, then ask you what you mean. Now you can explain: If you’re forced to dichotomize risk at zero – to choose between “safe” and “unsafe” – you’ll have to go with “unsafe.” Another pause, and this time they’ll get the question right: “How safe is it?”

By contrast, claiming that something is “safe” allows people to imagine (or pretend) that you’re promising zero risk. Then when the risk turns out greater than zero, they can protest: “But you said it was safe!” “There is no such thing as zero risk,” you belatedly explain. “Now you tell us!”


Prepared

Prepared, like safe, should never be a dichotomy. We may be better prepared for an accident or a terrorist attack than we were last year, but we are not “prepared.”

Health and safety officials spend much of their professional lives arguing that they’re not prepared enough and, therefore, we’re not safe enough. If only they could buy this or that piece of equipment, preparedness and safety would be much improved. Then a critic makes the same point, arousing some natural defensiveness, and suddenly they find themselves insisting that they’re “prepared” after all. This is yielding to the risk communication seesaw instead of managing it.


Confident

Try “hopeful” instead. A confident tone is okay, as long as it doesn’t stray into arrogance. But most risk situations have a lot of uncertainty; say so – confidently. Don’t say you’re confident unless (a) the news is bad, not good, so if you’re wrong we’ll feel relieved instead of blindsided; and (b) you’re really pretty confident.


Sorry, Sympathize

These words have lost their meaning through insincere overuse. They no longer convey the emotions they describe. Try “I feel terrible that…” or “I can’t imagine how you must have felt when…”

Copyright © 2005 by Peter M. Sandman
