Posted: May 25, 2018
Article Summary

For decades I have maintained a file of articles in which experts claimed some precaution they disapproved of could give people a “false sense of security.” But until recently I didn’t focus enough on false sense of security as a genuine risk communication and risk management problem. Every precaution is also a communication; it tells people that they’re safer than they were previously, and thus implies that alternative precautions may be superfluous. Sometimes a precaution does a better job of making people feel safer than it does of making them actually safer – thus inculcating a false sense of security and potentially undermining other precautions. This column looks at both intentional and unintentional inculcation of a false sense of security. It tries to make a case for not overselling precautions. It also addresses some related phenomena: false sense of insecurity (precautions that work better than they seem); risk homeostasis and compensation; etc. The column makes substantial use of flu vaccination as an example, so I’ve indexed it under Infectious Diseases as well as Precaution Advocacy.

False Sense of Security

This would have been the 38th in a series of risk communication columns I have written since 1995 for The Synergist, the journal of the American Industrial Hygiene Association. The columns always appeared both in the journal and on this website.

Because this column uses flu vaccination as an example for several points it makes about “false sense of security,” the editor sent it for review by “an AIHA member who is an occupational nurse and is involved with AIHA’s Healthcare Working Group.” She objected to most of what I had to say about flu vaccination. Based on her objections, The Synergist decided to reject the article (a first since 1995).

I thought a couple of the reviewer’s points were valid. Several others steered me to better ways to express what I was saying so it would be less vulnerable to misunderstanding, real or feigned. But mostly I thought her objections were “the thing itself.” The column criticizes misleadingly one-sided promotion of flu vaccination that could give vaccinees a false sense of security. Her objections didn’t so much defend what I was criticizing as they embodied it.

My Synergist columns are often a good deal longer on this website than in The Synergist. To meet the magazine’s length specifications, I have typically written two versions. So what’s below isn’t the article The Synergist rejected. I changed what I had sent to the editor, taking the reviewer’s comments into account, and I reinstated the paragraphs and sections I had never sent the editor at all.

The dictionary definition of “false sense of security” is simply the belief that some situation is safer than it actually is. If you underestimate any risk, you’re experiencing a false sense of security vis-à-vis that risk. Thus every justified warning (in my jargon, every justified piece of precaution advocacy) aims at diminishing the audience’s false sense of security.

More often than not, the term “false sense of security” is linked to a verb like “give.” Some behavior or message or precaution is accused of giving people a false sense of security. I have no quarrel with this accusation when it’s used to criticize over-reassuring risk communication. Though over-reassurance frequently backfires, when it succeeds it does indeed give people a false sense of security.

Rhetorical ammunition

The accusation of giving people a false sense of security is also deployed as rhetorical ammunition against precautions – but only against precautions of which the speaker disapproves. For example, a gun control advocate is much likelier than a gun rights advocate to point out that owning a gun may give the possessor a false sense of security. A gun rights advocate is likelier to point out that making guns illegal may give the public a false sense of security.

Critics who believe a regulation focuses on the wrong problem are exceedingly likely to complain that it could lead to a false sense of security. They think the regulation will erroneously convince people that they’re safe now and that the other regulation – the “right” regulation – is therefore unnecessary.

Similarly, critics who believe a regulation at least addresses the right problem but doesn’t go far enough to remedy that problem might well complain that it could lead to a false sense of security. They think the regulation needs to be tougher so the sense of security it gives people will be true rather than false.

Those two are obvious. But this one is less obvious: Critics who believe a regulation goes too far might also complain that it could lead to a false sense of security. With a less stringent regulation, they think, or even no regulation at all, people wouldn’t be lulled into imagining they were safe when they’re not.

In 2014, for example, California repealed a law requiring food service workers to wear gloves. The rationale was that “glove use can create a false sense of security among food workers, leading to more high-risk behaviors and eventual cross-contamination.” Bare-handed food handlers, the thinking went, would know they were all too likely to contaminate customers’ food, so they’d be more careful.

Supporters of a regulation, on the other hand, are exceedingly unlikely to point out that it might give people a false sense of security.

It’s not just about regulations, of course. All precautions are vulnerable to the “false sense of security” accusation. As I was starting to draft this column, the U.S. Surgeon General urged people whose loved ones have opiate addiction problems to keep the overdose antidote naloxone on hand. Some opponents responded that the ready availability of an antidote might reassure opiate users that an overdose wouldn’t necessarily kill them. By creating a false sense of security, they said, it could paradoxically exacerbate the addiction problem.

Recent uses of the term “false sense of security” in The Synergist and other AIHA publications have included warnings that exoskeletons have “the potential for creating a ‘false sense of security’ for wearers when handling heavy loads”; that “inexpensive cloth masks worn by people who hope to reduce their exposure to air pollution … could be giving users a false sense of security”; that a proposed standard for preventing Legionnaires’ disease needs revision because “there is a potential the building operator will have a false sense of security”; that imposing tougher rules for personal protective equipment “avoids that possibility of employees having a false sense of security when using their own” unofficial PPE; etc.

I wrote about the rhetorical use of “false sense of security” in a 2013 entry in my website Guestbook. An Australian physician had asked whether I thought getting a flu shot – a notoriously ineffective vaccine compared to most – might actually do more harm than good by deterring vaccinees from other precautions like covering their coughs, washing their hands, or staying home when they were sick. I didn’t think so, for reasons I’ll get back to later in this column. But I took advantage of the question to write at length about how public health professionals and others disparage precautions they disapprove of by accusing those precautions of giving people a false sense of security, while ignoring that same possibility vis-à-vis precautions they recommend.

I found examples of anti-vaxxers asserting that the flu vaccine could give people a false sense of security that they won’t catch the flu, or that their flu-like illness couldn’t be the flu. I found the same claim in the sales literature for hand sanitizers, and in blogs promoting various naturopathic nostrums. The public health profession, on the other hand, virtually never worries aloud that the flu vaccine might lead to a false sense of security. To the contrary, since public health professionals (rightly) want people to get vaccinated, they (maybe not so rightly) hesitate to point out to their target audience of potential vaccinees that the flu vaccine is a crappy vaccine as vaccines go. Instead they promote and often oversell the flu shot, apparently calculating that in the case of flu vaccination a false sense of security can increase vaccine uptake and save some lives.

Flu prevention campaigns typically recommend covering your cough, washing your hands, and staying home when you’re sick right along with getting a flu shot. So they don’t disparage any of those flu precautions as likely to give people a false sense of security. But the flu prevention leadership generally disapproves of mask-wearing as a flu precaution. Masks are de rigueur for healthcare workers in close proximity to respiratory disease patients, to protect the healthcare worker from the patient. But in most other contexts they’re discouraged, mostly because they’re expensive, inconvenient, and uncomfortable. So there are lots of warnings in the flu literature about a false sense of security from masks. If you don’t believe me, do a Google search for “flu ‘false sense of security’ mask” and see for yourself.

But these warnings disappear when healthcare workers refuse the flu shot. Then many hospital managements punish their unvaccinated employees by making them wear masks instead – and temporarily stop intoning their false-sense-of-security mantra about masks.

For more examples of this rhetorical use of “false sense of security,” check out my 2013 Guestbook response. I want to focus here on false sense of security as a genuine risk management and risk communication problem. I now think I dismissed this substantive issue too easily in the Guestbook response. So I want to take a second crack at it.

The risk communication ideal

Let me start with one of the basic goals of risk communication as I see it: to achieve a level of audience outrage (fear, concern, anger, etc.) commensurate with what the speaker believes to be the extent of the hazard. You should want people to be as upset about a risk as you honestly think the risk actually deserves. A corollary: You should want people to see a precaution as reducing the risk as much as you honestly think the precaution actually reduces the risk.

Not everyone endorses these standards. More to the point, not everyone who endorses these standards in principle tries to adhere to them under all circumstances. Some examples:

- Intentionally overstating a risk, in hopes of scaring people into taking it more seriously.
- Intentionally understating a risk, in hopes of keeping people calm.
- Intentionally understating the effectiveness of an existing precaution, in hopes of sustaining support for additional precautions.
- Intentionally overstating the effectiveness of a precaution, in hopes of inculcating a false sense of security.

Needless to say, industrial hygienists sometimes find these four sorts of misrepresentation tempting, and sometimes give in to the temptation.

Debates can get both hot and complicated about whether/when it’s acceptable to promote these under- and overestimates of risks and precautions. Some commentators argue that fear itself is a threat to health and wellbeing, so downplaying a risk and overstating existing precautions can serve the public interest by suppressing fear (even when the fear is empirically justified). Others argue that mobilizing public outrage is by far the most effective way to achieve change, so overstating a risk and downplaying existing precautions can serve the public interest by building a head of steam that forces useful corporate and governmental reform (even when the grievances that do the trick are empirically unjustified).

Although risk communication expertise conveys no special wisdom about the ethics of misrepresenting risks and precautions, it does have knowledge to offer about the likely repercussions of misleading people in these ways. If your audience suspects at the outset or finds out later that you have misled them, the predictable outcomes include increased mistrust and decreased credibility: for you, for your organization, for whatever policy option you were promoting, for the remainder of your organization’s policy agenda, and for your entire profession. Another predictable outcome: People’s assessment of the risk or precaution you misrepresented tends to boomerang to the opposite extreme from the side you dishonestly endorsed.

The probability and magnitude of these bad effects of risk communication dishonesty depend substantially on who you are and which side you’re on. The alarming side of any risk controversy is “allowed” to exaggerate more than the reassuring side. And good guys are “allowed” to exaggerate more than bad guys. If you’re an oil company over-reassuring the audience that your refinery won’t pollute the neighborhood, the likelihood of getting caught and the outrage after you’re caught are disastrous. If you’re an anti-smoking activist overstating how much a tobacco habit reduces a smoker’s life expectancy, you’re far less likely to get exposed, and even if you are exposed, you’re far less likely to pay a high reputational price.

For purposes of this column, the misrepresentation of interest is the last one on the bullet list above: intentionally overstating the effectiveness of a precaution in hopes of inculcating a false sense of security.

In my long career as a risk communication consultant, I invariably opposed my clients’ plans to intentionally give people a false sense of security about any precaution. Before you join me in my opposition, think again about flu vaccination.

The seasonal flu vaccine is usually less than 50% effective. And it’s least effective – way under 50% – for the people who need it most, the elderly who are likeliest to die if they catch the flu. In recent years the CDC has been scrupulously honest about these facts in technical publications, and sometimes even admits them in news releases. But for obvious reasons, you’ll rarely if ever see this information in flu vaccination promotion campaigns; people being urged to get their flu shots are typically told simply that the vaccine is “safe and effective,” sometimes (but not usually) accompanied by a grudging acknowledgment that like every vaccine the flu vaccine “isn’t perfect.”

Why not be more up-front about the flu vaccine’s unique deficiencies? Vaccination proponents reasonably figure that if everyone realized how ineffective the flu vaccine really is, probably fewer people would bother getting vaccinated. But despite the vaccine’s deficiencies, getting vaccinated is good for our health. So arguably thinking the vaccine works better than it does – a false sense of security – is good for our health too … even if it might deter us from other flu precautions and even if there might be a credibility price to pay if we eventually find out the truth about flu vaccination.

Can creating the impression that a precaution works better than it actually does, thus giving people a false sense of security, still do more good than harm? The false sense of security is certainly a downside: a disincentive for other precautions and a threat to credibility. But promotional hype may have an upside too: more people supporting or taking the precaution. It is far from obvious how to balance the scales.

The downside of precautions

Every precaution induces some sense of security. The only exception I can imagine is a secret precaution, one that we’re unaware of. Whatever else it is, a precaution we know about is a piece of risk communication. It tells us that we are safer than we would have been if the precaution hadn’t been implemented.

It follows that every precaution makes other precautions feel less necessary.

Of course every precaution that isn’t useless also makes us objectively safer, and makes other precautions objectively less necessary.

These truths are undeniable. Let me say it again:

- Every precaution that isn’t useless makes us objectively safer, and thus makes other precautions objectively less necessary.
- Every precaution we know about makes us feel safer, and thus makes other precautions feel less necessary.

Here’s what’s crucial: The two effects aren’t necessarily equal in magnitude.

With this in mind, we can more explicitly define what we mean by false sense of security. Consider three possible outcomes of a precaution.

Outcome A: The second effect (feel safer) is stronger than the first (objectively safer), so we experience a false sense of security. We worry less than we did and less than we should about the risk, and we lose interest in other possible precautions that we would be wise to stay interested in.

Outcome B: The first effect (objectively safer) is stronger than the second (feel safer). We don’t realize how much safer the risk has become. So we remain more worried and more interested in other possible precautions than is appropriate to the new situation. We’re experiencing a false sense of insecurity.

Outcome C: The two effects are exactly equal. The precaution has communicated perfectly how much additional safety it has provided us. We are as attentive to the remaining risk and as interested in other possible precautions as the remaining risk justifies, neither too apathetic nor too anxious.

Outcome C is the risk communication ideal. But Outcome A and Outcome B happen. Precautions often miscommunicate their own effectiveness; they feel more or less protective than they actually are.

Sometimes this is the communicator’s intention, whether for honorable reasons or not-so-honorable reasons. Sometimes it’s totally accidental. And sometimes it’s somewhere in the middle: You didn’t consciously intend to give people a false sense of security or a false sense of insecurity, but you knew or should have known that you were misrepresenting the effectiveness of a precaution.

Whatever the intentions of risk managers and risk communicators, Outcome A and Outcome B happen.
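For readers who like to see distinctions pinned down, here is a minimal sketch (my own illustration, not part of the column’s argument) that classifies a precaution’s outcome by comparing how much safer it actually makes us with how much safer it makes us feel. The example reductions are invented numbers, there only to show the three cases.

```python
# A toy formalization of Outcomes A, B, and C (my own illustration, not the
# column's). "actual" is the fraction by which a precaution really reduces the
# risk; "felt" is the fraction by which people perceive it to reduce the risk.
def classify_outcome(actual: float, felt: float) -> str:
    if felt > actual:
        return "Outcome A: false sense of security (feels safer than it really is)"
    if felt < actual:
        return "Outcome B: false sense of insecurity (really is safer than it feels)"
    return "Outcome C: the ideal (feels exactly as much safer as it really is)"

# Invented example numbers, purely to show the three cases:
print(classify_outcome(actual=0.40, felt=0.80))  # Outcome A
print(classify_outcome(actual=0.40, felt=0.20))  # Outcome B
print(classify_outcome(actual=0.40, felt=0.40))  # Outcome C
```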

When is a false sense of security a significant problem?

When is an unintended or not-fully-intended false sense of security most likely to be a significant risk management and risk communication problem? My answer: when the hazard is pretty serious and there are lots of available precautions, all of which are way less than perfect.

It makes sense in such a situation for people to take several precautions. But if one precaution feels much more effective than it actually is, the result will be what I called Outcome A: a false sense of security. Other precautions will feel less needed than they actually are. People will be less likely to take these other precautions, and will therefore be less safe than they could have been – and less safe than they would have chosen to be if they had realized how ineffective that first precaution actually was.

This sort of situation is quite common. And it’s quite commonly exacerbated by overselling that first precaution.

You want your audience to do X. You know that X isn’t that great a precaution, but it’s better than nothing and it’s the best you can offer. You’re afraid that if you tell people about X’s deficiencies, they won’t be motivated to do enough X. So you tell them that X is a great precaution. They do X, feel adequately protected, and therefore resist doing Y and Z, additional precautions that would have helped overcome the deficiencies of X. You weren’t especially trying to discourage Y and Z; that was an unintended but predictably likely side-effect of your efforts to encourage X.

I haven’t seen a study that proves my point, but I’ll bet it’s more effective to say, simply, “X isn’t great but it’s the best precaution we’ve got. Doing Y and Z as well will help overcome the deficiencies of X.”

Years ago my wife and colleague Jody Lanard was training a group of Southeast Asian government officials about risk communication vis-à-vis the risk of H5N1 bird flu – a type of influenza that is far deadlier to humans than most flu viruses, and against which there is no available human vaccine. (Fortunately, H5N1 doesn’t often infect humans in the first place – but experts feared and still fear that it could mutate in ways that would make it much more transmissible from bird to human and human to human.) The discussion turned to hand-washing. As a precaution against any influenza virus, hand-washing has some value, but not much; flu transmission is mostly face-to-face through the air. Participants agreed that enthusiastic endorsement of hand-washing risked inculcating a false sense of security. But they worried that acknowledging the deficiencies of hand-washing might deter audience members from washing their hands enough.

So Jody and the officials conducted several informal focus groups of staff at the Malaysian hotel where the meeting was taking place. Some groups were given enthusiastic messages about how marvelously hand-washing would reduce the risk of bird flu. Others were given messages that recommended hand-washing as “not all that great against bird flu but readily available, and the best we’ve got at present.” Then the groups were asked about their hand-washing intentions in the event of a bird flu outbreak. Those in the “best we’ve got” groups planned to wash their hands more than those in the “enthusiastic messages” groups.

In other words, the risk communication approach that worked best to encourage hand-washing was the approach that didn’t oversell hand-washing. It maximized the target audience’s intentions to wash their hands while minimizing the twin risks of overselling: a false sense of security that might deter learning about other precautions and a loss in credibility when people later learned how insufficient hand-washing really is against flu.

The guesses of a few hotel staff members in Malaysia about how they would respond to hypothetical messages about hand-washing as a bird flu precaution aren’t proof that candor about imperfect precautions is always wiser than overselling. But absent a methodologically sound study – or better still several methodologically sound studies of different precautions and different audiences – Jody’s informal focus groups are (to coin a phrase) the best we’ve got.

There’s one important factor that partially protects people from a false sense of security about an oversold precaution. It is the psychological urge to be consistent (the universal effort to avoid cognitive dissonance). People who take one precaution are often likelier to take others. That’s partly because the same concern that motivates one precaution against a particular risk motivates other precautions against that risk as well. But it’s also because people’s decision to take that first precaution “communicates” to themselves that they consider the risk serious – so they try to be consistent and take other precautions too.

If you’re worried enough about intruders to buy a burglar alarm and turn it on, you’re likelier to lock the doors and windows. People with good anti-intrusion software on their computers are likelier than others – not less likely, likelier – to also have strong passwords, think twice before opening up weird-looking attachments from strangers, etc.

As I wrote in my 2013 website Guestbook response, “people who got their flu shot last year are likelier than others to get their flu shot this year too. And people who get their flu shot every year take more of the other recommended anti-flu precautions than people who don’t.” Study after study confirms this pattern of consistency in influenza precaution-taking – and in precaution-taking more generally.

But I don’t want to give you a false sense of security about false sense of security! That is, I don’t want to oversell the value of cognitive dissonance avoidance as surefire protection against overestimating the value of one precaution and neglecting other possible precautions. That’s what I now think I got wrong in that Guestbook response.

A 2017 study in the American Journal of Infection Control examined “presenteeism” (going to work sick) in healthcare workers. Not surprisingly, the study found that 41% of those who said they had a flu-like respiratory illness during the most recent flu season admitted they went to work anyway. The surprise: For healthcare workers who had been vaccinated against the flu that season, the figure was 45%, while for unvaccinated respondents it was 29%. (The difference was statistically significant at p = .03.)
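For readers curious about the arithmetic behind a phrase like “statistically significant at p = .03,” here is a minimal sketch of a two-proportion z-test. Only the 45% and 29% figures come from the study as summarized above; the group sizes in the code are hypothetical, chosen so that the calculation lands near the reported p-value. Treat it as a demonstration of the method, not a reanalysis of the study’s data.

```python
# Minimal two-proportion z-test sketch. The 45% vs. 29% proportions are from the
# 2017 AJIC presenteeism study as summarized in the column; the sample sizes
# below are hypothetical, chosen only to illustrate the calculation.
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                      # pooled proportion under H0
    se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return p1, p2, z, p_value

# Hypothetical group sizes (not from the study):
p1, p2, z, p = two_proportion_z_test(45, 100, 20, 70)   # 45% of 100 vs. ~29% of 70
print(f"vaccinated: {p1:.0%}  unvaccinated: {p2:.0%}  z = {z:.2f}  p = {p:.3f}")
```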

We should never put too much trust in a single study, but it looks like vaccinated healthcare workers are likelier than unvaccinated healthcare workers to come to work with a flu-like illness. As the study authors theorized, “Influenza vaccination receipt might reduce [healthcare workers’] perceived risk of having influenza and transmitting it to others.”

I’m guessing that many vaccinated healthcare workers thought their flu shot was a lot more effective than flu shots actually are. They figured their vaccinations made them highly unlikely to get the flu, so they assumed their respiratory symptoms were probably “just a cold.” That made them likelier than their unvaccinated colleagues to go to work sick – ignoring the pleas of public health professionals that everybody, but especially people in the patient-care business, should stay home with flu symptoms.

Flu vaccination, it seems, can give healthcare workers a false sense of security. Of course lots of factors, from morale to sick pay policy, affect a flu-stricken healthcare worker’s decision to go to work or stay home. Flu vaccination is only one of those factors.

Michael Osterholm, head of the University of Minnesota’s Center for Infectious Disease Research & Policy, put it this way: “Some health care workers who get vaccinated, assuming that they are protected, they may actually in fact come to work when they shouldn’t be working, with fever, and chills, and actually have a false sense of security, and transmit the virus to these patients.” Osterholm called mandatory flu vaccination of healthcare workers an “overreach” that “science does not support” in part because of the false sense of security such a policy might inculcate.

A false sense of security about flu vaccination could have societal effects as well as individual effects. In a 2012 report entitled “The Compelling Need for Game-Changing Influenza Vaccines,” Osterholm and colleagues argued that the overselling of flu vaccine efficacy had undermined the incentive to fund research toward a “universal flu vaccine” that would last longer than a year and work against a wide range of flu strains, including newly evolved pandemic strains.

(Note that there’s a bit of circularity here. I was a member of the Expert Advisory Group for this report, though I was not one of its authors. So I’m citing in support of my opinion a report that might have been influenced by my opinion. See “We’d Be Likelier to Develop a Better Flu Vaccine If Public Health Officials Didn’t Keep Misleading Everyone about the Flu Vaccine We Have.”)

As one article on the Osterholm et al. report put it:

In findings that have stirred some controversy, the report takes the Centers for Disease Control and Prevention to task for widely promoting vaccine uptake while downplaying the limited efficacy of the seasonal vaccine. As a result, there is little incentive to develop a much-needed new vaccine that could meet the threat of future pandemics and provide immunity to seasonal viruses for up to a decade, the Center for Infectious Disease Research & Policy (CIDRAP) at the University of Minnesota in Minneapolis reports….

“[They] have overstated the benefits of this vaccine,” says Michael Osterholm, PhD, MPH, CIDRAP director and lead author of the report. “And while we strongly urge people to get vaccinated, what [public health officials] have done by suggesting that this vaccine is the answer – we just need to get more people vaccinated – is unintentionally dampen any appetite in the private sector for investing in these new vaccines.”

In the several years since this report was published, the CDC and the public health profession have become more candid about the deficiencies of the seasonal flu vaccine, at least in policy documents though rarely in “Get Your Flu Shot” promotions. And there has been a modest increase in funding for research on a universal flu vaccine. I can’t prove these two developments are connected.

Notwithstanding all of the above, I still believe (as I said in my 2013 website Guestbook response) that cognitive dissonance tends to help prevent precautions (including flu shots) from inculcating a false sense of security. A precaution teaches you two things, in short. It teaches you that you’re safer than you were, so maybe you’re safe enough and other precautions would be superfluous; that’s the false sense of security effect. But it also teaches you that you’re serious about this risk, so maybe you should be consistent and take some additional precautions against the same risk; that’s the cognitive dissonance effect. They’re both real. In my Guestbook response I focused on the cognitive dissonance effect and wrote that flu shots were unlikely to give anybody a false sense of security. Now I think that both effects deserve attention, and that sometimes flu shots do give people a false sense of security.

There are times, moreover, when the cognitive dissonance effect can exacerbate people’s false sense of security instead of protecting against it. Here again flu vaccination may be a case in point. Most Americans make a free choice to get vaccinated against the flu or not to bother. So for them the cognitive dissonance effect and the false sense of security effect work in opposite directions. But think about healthcare workers.

Healthcare workers haven’t just been told that the flu shot is effective. Often they have been pressured or even required to take it. Most willingly got the shot and didn’t object to the pressure. But the not-so-small minority who got the shot despite feeling skeptical about flu vaccine efficacy might dislike accepting that they knuckled under to pressure. It might arouse considerable cognitive dissonance to see themselves as the sort of person who gives in to management demands. It might feel more self-respecting to tell themselves that they must think the flu shot works really well after all – so well that other flu precautions aren’t needed.

Pressure to do something we’d rather not do can cut either way. Sometimes the fact that we knuckled under arouses cognitive dissonance – so we rationalize knuckling under by telling ourselves we actually wanted to do what we were pressured to do – so the pressure turns us into proponents. Other times we resent the pressure (even as we give in to it), and the resentment makes us even more fervent opponents than we were before.

And sometimes, of course, we give in to the pressure without feeling either resentment or cognitive dissonance. We just comfortably do what we’re told. Consider for example a law that requires motorcycle helmets. Some motorcyclists resent having to wear a helmet, and their resentment makes them ever more anti-helmet. Some motorcyclists can’t bear to see themselves as the sort of person who knuckles under, so they decide they actually approve of motorcycle helmets. And some motorcyclists simply obey the law, and doing so has essentially no effect on their opinion about helmets as a motorcycle safety precaution.

What happens when precautions are taken under pressure is an especially relevant question for industrial hygienists, who are often the ones pressuring their workforce to take precautions. Leaving aside the “no effect” possibility, I’m suggesting four effects:

  1. The precaution itself makes us feel safer, which makes us less inclined to take other precautions (the false sense of security effect).
  2. The precaution itself makes us feel like this is a risk we take seriously, which makes us more inclined to take other precautions (the cognitive dissonance effect).
  3. We give in to the pressure but we resent having to give in, which makes us opponents of the precaution we were forced to take (call it the resentment effect). The resentment might logically diminish our false sense of security about the pressured precaution, leaving us more open to other precautions. Or the resentment might logically make us more convinced that the risk isn’t worth worrying about, leaving us less open to other precautions.
  4. We give in to the pressure but we resist feeling like the sort of person who gives in to pressure, which makes us supporters of the precaution we were forced to take (another cognitive dissonance effect). That might logically increase our false sense of security about the pressured precaution, diminishing our interest in other precautions.

Two of these effects (#1 and #4) make us less likely to take other precautions. One (#2) makes us more likely to take other precautions. And one (#3) could cut either way.

In the absence of pressure, only the first two of these four effects are in play. Taking your first precaution against a particular risk can convince you you’re safe enough now, and diminish your interest in further precautions. That’s the “false sense of security” outcome. Or the precaution can convince you you’re really worried about this risk, and increase your interest in further precautions. That’s the cognitive dissonance reduction outcome.

People who solicit charitable contributions for a living know that potential contributors who have given before sometimes resist giving again because they feel they have done enough already. Other times that first contribution is experienced as a behavioral commitment, a “foot in the door” that makes later contributions likelier (thanks to cognitive dissonance). Considerable research has gone into teasing out what factors affect which of these two phenomena occurs. I summarized some of the conclusions from this research at the end of my Guestbook response.

It’s the same for a partially effective precaution that you hope will lead to other precautions, not preempt them. Sometimes you get a foot in the door; the first precaution convinces people they should take other precautions too. Other times you get a false sense of security; the first precaution convinces people they have done enough already.

Five bottom lines

  1. Rhetoric about a “false sense of security” is often just a way of disparaging a precaution the speaker disapproves of. But a false sense of security is also a genuine problem in risk management and risk communication. Taking a precaution or having one taken on our behalf sometimes makes us feel safer more than it makes us actually safer. This can lead us to downplay, avoid, or shrug off other precautions we ought to take as well.
  2. Risk communicators sometimes intentionally try to inculcate a false sense of security (or a false sense of insecurity). Sometimes, for example, they want the audience to overestimate the effectiveness of a precaution, so more people will take, support, or tolerate that precaution. I don’t favor such a strategy, but I concede that it is tempting and sometimes works.
  3. Unintentionally giving people a false sense of security does the most harm when a risk is serious enough and the available precautions are weak enough that taking several precautions is the right course of action. If we ought to be doing everything we can to protect ourselves, it’s a bad mistake to encourage us to see one particular precaution as so effective it’s enough in itself.
  4. The best way to avoid a false sense of security is candor about how effective/ineffective a precaution actually is, trying not to preempt the case for additional precautions. I believe that candidly endorsing a precaution as “not that great but the best we’ve got” (when that is the truth) probably works at least as well as overselling the precaution – without the twin downsides of (a) loss of credibility if people don’t believe you or later learn you misled them; and (b) a false sense of security and diminished interest in other precautions if they do believe you. But I haven’t got proof.
  5. Even when a precaution is oversold, a false sense of security isn’t the inevitable outcome. Sometimes taking a precaution or having one taken on our behalf convinces us the risk must be serious and makes us likelier to take other precautions as well – the “foot in the door” effect motivated by cognitive dissonance. But don’t count on it.

 

[The version of the column I submitted to The Synergist ended here (and had a lot less detail). Below are three additional sections I held back – partly for reasons of length and partly because they were relevant but somewhat peripheral and interrupted the flow of the column. Think of them as postscripts.]

 

Risk communication from a “false sense of security” perspective

Much of my career as a risk communication professional has been devoted to decreasing or increasing people’s sense of security. If people feel more secure about a risk than you believe they should feel, precaution advocacy is a toolkit for decreasing their sense of security: “Watch out!” If they feel less secure about a risk than you believe they should feel, outrage management is an entirely different toolkit for increasing their sense of security: “Calm down.”

Precaution advocacy aims to correct what the communicator considers the audience’s insufficient concern – or in other words, its false sense of security. Outrage management aims to correct what the communicator considers the audience’s excessive concern – in terms of this column, its false sense of insecurity.

Other communicators may disagree, of course. It is far from rare for activists to be doing precaution advocacy while a company is doing outrage management about precisely the same risk, each of them convinced that the other is misleading the audience.

Precaution advocacy is often needed to convince people to take, support, or tolerate a precaution. But the precaution itself, when viewed as a piece of risk communication, is a kind of outrage management. Remember, every precaution has two effects. It makes people safer, and thus decreases the objective need for additional precautions. And it makes people feel safer, and thus decreases their appetite for additional precautions. To one extent or another, every precaution increases both people’s actual security and their sense of security.

As I have already noted, there’s one major exception to this generalization. Sometimes a precaution arouses cognitive dissonance (“why did I bother to take this precaution?”). And then, in their effort to avoid or reduce the cognitive dissonance, people may amp up their concern about the risk (“I guess I must think this is a serious risk”) and their interest in other precautions (“I guess I should do more to protect myself”). That’s the foot-in-the-door phenomenon. Instead of calming people down, sometimes a precaution constitutes a foot in the door that makes them more rather than less interested in taking other precautions as well.

Cognitive dissonance aside, precautions can be seen as a kind of outrage management. They make people feel safer.

Intentionally inculcating a false sense of security

I have talked about flu vaccination as an example of good guys intentionally trying to inculcate a false sense of security about a precaution in hopes of persuading more people to take the precaution despite its only partial effectiveness. I think this is unwise. But I’m frankly not sure if I think it’s also unethical. Certainly the public health professionals who do it have a pretty convincing defense: that reducing influenza mortality and morbidity is a higher priority than aggressively broadcasting the whole truth about the deficiencies of the flu vaccine.

There’s another situation in which inculcating a false sense of security about a precaution may be appealing, and perhaps even ethical. Sometimes people are more upset about a risk than you want them to be. So you implement a precaution not so much to protect them as to reassure them.

Is it dishonorable to make people feel safer without making them actually safer? That’s certainly dishonorable if the hazard is serious. Giving people a false sense of security about a useless faux precaution that purports to remedy a serious hazard, and thus deterring them from taking real precautions that are effective and necessary, is one of the worst things an industrial hygienist could do.

But suppose people are unduly frightened about a hazard you know is trivial. Telling them they’re foolish to be so frightened is unlikely to work. So instead why not take, recommend, or even require an unnecessary but harmless and reassuring precaution? When people are more upset than a situation objectively justifies (what I call a low-hazard, high-outrage situation), they’re burdened by a false sense of insecurity. They’re safer than they think. Is it honorable to deploy a useless precaution in order to give them a “false” sense of security so they’ll consider themselves properly protected from a risk they don’t realize is trivial?

We do this with young children all the time. We give kids magic words to keep the (nonexistent) goblins at bay. Is it okay to treat adults that way?

Thimerosal, for example, is a mercury-based preservative that used to be added to many vaccines. Despite compelling evidence of thimerosal’s safety, anti-vaccine activists often focused on what they saw as its dangers, especially to young children. So in the late 1990s U.S. public health authorities decided to remove the thimerosal from most vaccines – not so much to make the vaccines safer, but in hopes of making them feel safer (if not to antivax activists, then at least to worried parents).

Whether or not it’s ethical, taking an unnecessary precaution in order to reassure people often backfires. The goal is for the precaution to convince people that they’re safe. Instead, the fact that somebody in authority took, recommended, or required the precaution frequently convinces people that the danger must be serious.

I wrote about this at length in a 2003 report entitled “Because People Are Concerned: How Should Public Outrage Affect Application of the Precautionary Principle?” Commissioned by Vodafone, the report analyzed the pros and cons of proposed policies regulating electromagnetic fields from mobile telephones and towers. Vodafone asked me to assume that these regulations were technically unnecessary, and to ponder whether they would nonetheless be effective as a way to calm public concern about cell phone EMFs. On balance, I judged, the answer was no.

I am not opposed to taking extra precautions to add a margin of safety to a situation that’s probably safe enough already. There’s nothing wrong with an abundance of caution, a belt-and-suspenders approach to a risk even when one or the other is probably sufficient. But when authorities say they’re doing something “out of an abundance of caution,” they’re usually signaling that they’re taking a precaution they themselves consider silly in an effort to placate critics and/or reassure the public. They’re trying to create a false sense of security about an unnecessary precaution as their antidote to people’s false sense of insecurity about the risk that the unnecessary precaution purports to mitigate.

This approach doesn’t usually work. It has other downsides as well: It wastes time and money; it often imposes pointless burdens on third parties; it undermines the claim that safety policy is science-based. But the main problem is that it doesn’t usually work. As the saying goes, actions speak louder than words. You may think your use of a phrase like “abundance of caution” is enough to clue people in that the risk isn’t really serious. Usually it’s not enough. If the government takes the thimerosal out of most vaccines, lots of people are going to figure that thimerosal must be dangerous! If international agencies make companies reduce cell phone EMF emissions, lots of people are going to figure that cell phone EMFs must be dangerous!

Whether it works or not, sometimes you have honorable reasons to want your audience to overestimate the effectiveness of a precaution, either so they’ll take/support/tolerate that precaution or so they’ll stop overestimating the risk that they think the precaution mitigates.

The opposite is also true: Sometimes you have honorable reasons to want your audience to underestimate the effectiveness of a precaution, so their false sense of insecurity will motivate them to take, support, or tolerate additional precautions.

As a risk communication professional, I am not a fan of intentionally inculcating a false sense of security (or a false sense of insecurity) in your audience. Even in the short term, I think dishonesty works less well than many risk managers assume. And in the longer term, I think lying, misleading, or even just intentionally giving people false impressions backfires. I spent my career advising clients to be totally candid about risks and precautions, even when it looked like being less than totally candid might be good safety management … and even when it looked like being less than totally candid might save lives. But I have to concede that my clients often had honorable reasons to disregard my advice.

(Of course sometimes risk communicators have dishonorable reasons to be less than totally candid about risks and precautions. Drug promotion campaigns, for example, often try to inculcate a false sense of insecurity about a comparatively minor risk – and then try to inculcate a false sense of security about the product they’re promoting. Their ideal customers are people they have persuaded to believe two things: I’m almost sure to catch this disease; the drug is almost sure to cure me.)

Here are two more examples of good guys exaggerating the benefits of a precaution or understating its downsides in hopes of encouraging the precaution by inculcating a false sense of security:

Good guys also intentionally exaggerate the seriousness of risks when they think the end justifies the means. Smoking, for example, is incredibly dangerous. Making people think it’s even more dangerous than it actually is might help them decide to quit or not to start. Research consistently shows that smokers do in fact believe that smoking is even deadlier than it is – a misperception public health professionals nurture at every opportunity.

Sometimes a risk is also a precaution – and good guys may decide to focus on one half of this two-sided truth rather than the other half or both halves. For example, an extensive body of research demonstrates that moderate consumption of alcohol (not just red wine) has significant health benefits. But of course too much alcohol does enormous health damage – and there’s a concern that alcoholics and others who drink too much might see the health benefits of alcohol as reassurance that it’s okay to drink too much. So news articles and even scientific writing about studies that clearly show alcohol’s health benefits routinely deny that that’s what the studies show.

Throughout this section and at a few points elsewhere in this column, I have referred to good guys who “intentionally” are less than totally honest. I have to qualify what I mean here by “intentionally.”

It is comparatively unusual, I think, for industrial hygienists or public health professionals or even corporate communicators to say to themselves in so many words: “I plan to be dishonest about this because I have a good reason.” It’s far more common for them to focus on their good reason and not let themselves notice that what they’re writing or saying for that good reason isn’t exactly the truth, the whole truth, and nothing but the truth. With occasional exceptions they don’t actually lie. They just craft their words to give useful misimpressions. And they do it without focusing on the fact that that’s what they’re doing.

When they’re accused of being dishonest – which doesn’t happen all that often if they’re the good guys – they’re genuinely taken aback. They’re sincere when they deny that what they wrote or said was dishonest. If confronted with specifics detailing how what they wrote or said was systematically misleading, they offer excuses like “But I didn’t actually lie” or “The public wouldn’t understand the technical details” or “The other side is dishonest and we have to fight fire with fire.” Only after long and painful dialogue have I ever gotten good guys to confess, “Okay, yes what I wrote or said wasn’t strictly honest. But it was in a good cause.” Even then, I’m quite sure they reverted almost immediately to seeing themselves as honest while continuing to be dishonest in a good cause. This is yet another example of cognitive dissonance at work.

I have long since despaired of being able to engage good guys in a fruitful discussion of when dishonesty is justified in a good cause, since sustainable self-awareness about the dishonesty itself would be a prerequisite for such a discussion.

Risk homeostasis, compensation, and apathy

Note that this last section is pretty complicated and pretty abstract. You can understand the rest of the column without it. Feel free to skip it if you like.

The core argument of this column is that precautions not only make you safer and thus make other precautions less necessary. They also make you feel safer and thus make other precautions seem less necessary.

The extreme version of this argument is the risk homeostasis hypothesis, originally proposed in 1982 by Canadian academic Gerald J.S. Wilde. (Wilde is still at it; his website is http://riskhomeostasis.org/.) According to risk homeostasis, everybody has a target level of risk. Or several target levels: the level we enjoy; the level we are willing to endure to accomplish what we’re trying to accomplish; etc. We adjust our behavior to keep our risk at exactly the level we have chosen, neither higher nor lower.

Since a precaution makes us feel safer, Wilde claims, it motivates us to do something that will make us feel less safe, thus reestablishing precisely the risk level we have chosen.

Under some circumstances, risk homeostasis is pretty well documented. How fast you drive, for example, typically results from a calculus of the expected upsides of going faster (e.g. reaching your destination sooner) versus its expected downsides (e.g. increased risk of crashing). So if a highway is reengineered to make it safer, people using that highway drive faster. It is surprisingly difficult to make a highway safer; safety improvements often make the highway faster instead.

Years ago, I worked with a mining company in Australia that told me that many miners viewed the riskiness of mining as a plus, not a minus. They resented safety rules, and sometimes ignored them, because they saw the rules as taking the fun out of the job. Only half-kidding, I recommended that management should subsidize weekend hang-gliding and bungee-jumping for its employees. If the company encouraged weekend activities that at least felt risky, even if they weren’t, those activities might satisfy the miners’ risk-seeking and make them more willing to comply with safety rules on workdays. I don’t know if the company ever tried this “faux risk” strategy, and I don’t know if it would have worked. But it was certainly consistent with Wilde’s risk homeostasis hypothesis.

The research evidence on risk homeostasis is mixed, and Wilde’s theory remains controversial. But a more moderate version of risk homeostasis, usually labeled compensation, is just about universally accepted. We all drive faster on roads that feel safe than on roads that feel dangerous. If a road is reengineered to feel safer, we’ll tend to drive on it at least a little faster. The new road may still have a lower accident rate than the old road, but it won’t be as much lower as it would have been if we hadn’t compensated for its increased safety with increased speed.

Compensation is of course a response to how much safer a precaution makes us feel, not to how much safer it actually makes us. And of course the two effects aren’t necessarily equal in magnitude, as I illustrated earlier in this column with my Outcomes A, B, and C. To minimize compensation and thus maximize the safety benefit of a safety improvement, you should look for an improvement that doesn’t feel like an improvement at all. Make your road safer without making it feel safer. In fact, make it feel more dangerous if you can, giving people a false sense of insecurity that will motivate them to slow down. Make your overpasses graceful and translucent, for example, so they look fragile. (Consider how cautiously most people walk on glass sightseeing walkways.) Obviously it would be possible to go too far in this direction; you don’t want people to crawl on your highway or avoid it altogether. I’ve already said that inculcating a false sense of insecurity is bad risk communication because it isn’t honest. But by minimizing compensation or even eliminating compensation altogether, it is arguably good safety management in dangerous situations.

Risk homeostasis and compensation are grounded in the assumption that people’s perceived risk level is already the way they want it. So they try to keep it that way. Risk homeostasis and compensation are responses to something (a precaution, for example) that disrupts a previously achieved equilibrium.

You’re driving as fast as you choose to drive given how safe/risky the road feels to you. If the road is made to feel safer, you will drive faster. If the road is made to feel a lot safer but it’s actually only a little safer, you’ll overcompensate. You’ll increase your speed more than you would if you realized how risky the road still is. Contrariwise, if the road is made a lot safer but feels only a little safer, you’ll undercompensate. You’ll increase your speed less than you would if you realized how safe the road now is.
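To make these over- and undercompensation cases concrete, here is a toy numerical model. It is purely my own illustration, not Wilde’s formal theory, and every safety factor in it is an invented assumption: felt crash risk is assumed to rise with the square of speed, and the driver is assumed to speed up or slow down until felt risk returns to a fixed target, while the actual crash rate depends on how much safer the road really is.

```python
# Toy compensation model (illustrative only; my own construction, not Wilde's
# formal theory). Assumptions: felt crash risk scales with (speed/60)^2 divided
# by how much safer the road FEELS; drivers adjust speed until felt risk is back
# at their target; the actual crash rate divides by how much safer the road IS.

def chosen_speed(felt_safety_factor, target_felt_risk=1.0):
    # Solve (v / 60)**2 / felt_safety_factor == target_felt_risk for v.
    return 60 * (target_felt_risk * felt_safety_factor) ** 0.5

def actual_risk(speed, actual_safety_factor):
    return (speed / 60) ** 2 / actual_safety_factor

scenarios = [
    ("old road (baseline)",                         1.0, 1.0),
    ("feels 2x safer, actually 1.5x safer (over)",  2.0, 1.5),
    ("feels 1.2x safer, actually 2x safer (under)", 1.2, 2.0),
]

for name, felt, actual in scenarios:
    v = chosen_speed(felt)
    print(f"{name}: drive {v:.0f} mph, relative crash risk {actual_risk(v, actual):.2f}")
```

With these invented numbers, the road that feels much safer than it is ends up with a higher crash rate than the old road, while the road that is much safer than it feels keeps most (though not all) of its safety gain. The specific figures mean nothing; the point is simply that speed responds to the felt safety factor while the crash rate responds to the actual one, so whenever the two diverge, so do the outcomes.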

More often than not, nothing disrupts the equilibrium. You feel safe enough already, so you’re not interested in taking any precautions you’re not already taking. You’re not even interested in hearing about any precautions you’re not already taking. This isn’t risk homeostasis or compensation. It’s plain, ordinary, garden-variety apathy. Your perceived risk level and your ideal risk level are roughly equal.

Apathy is fine if your actual risk level is roughly equal to the other two. You feel as safe as you want to feel, and you are as safe as you feel. But if your actual risk level is higher than the other two, your apathy is dangerous. You’re experiencing a false sense of security.

Apathy is also relevant when people’s perceived risk level starts out different than their preferred level.

If you crave more risk than you feel like you’re getting, you’re likely to look for activities that are risky (or at least feel risky). You’re not apathetic. You’re bored! I already illustrated that with my Australian mining story. Risk-seeking isn’t limited to miners, of course. It’s characteristic of some (but obviously not all) soldiers, cops, pilots, and – unfortunately – teenagers.

Apathy can come into the picture when people start out feeling like they’re enduring more risk than they’d ideally like. The road feels a little dangerous. Then something happens that makes you feel safer. Maybe the road is reengineered. Maybe you simply slow down a bit. Or maybe somebody comes along and convinces you that the road is safer than you thought, or that you’re a really good driver, or that it’s the slowpoke drivers who get into the most auto accidents.

One way or another, your perceived risk level has been adjusted so it’s now compatible with your ideal risk level. Now you’re comfortable. So you feel no need to take additional precautions. Your optimum perceived risk level has been achieved, and you therefore become inattentive to the risk.

This is apathy rather than compensation. Compensation happens when a precaution disrupts an equilibrium, so you act to increase your perceived risk and reestablish the equilibrium. Apathy happens when a precaution establishes an equilibrium, so you lose interest in considering further precautions.

But like compensation, apathy is a response to how safe you feel – not to how safe you are.

Again, apathy is appropriate if your perceived risk level, your ideal risk level, and your actual risk level are all roughly the same. Your sense of security is true, not false. Of course someone else – your spouse, your parent, your employer – might think your ideal risk level is too damn high. Their quarrel isn’t with your risk perception, it’s with your risk goal.

But if your perceived risk level and your ideal risk level are the same, while your actual risk level is higher, then the problem really is your risk perception. You’re unduly apathetic. If you knew the actual extent of the risk, you would be more worried than you are and more interested in additional precautions. Precaution advocacy is needed to rouse you out of your false sense of security.

Copyright © 2018 by Peter M. Sandman
