2012 Guestbook
Comments and Responses

Stigmatizing smokeless tobacco – and how to fight back

name:Joel L. Nitzkin, MD, MPH, FACPM
This guestbook entry is categorized as: Outrage Management

field:Public health physician
date:December 27, 2012
email:jln@jln-md.com
location:Louisiana, U.S.

comment:

The approach of the federal agencies (CDC, FDA, and NIH) to smoke-free tobacco products raises the question of when and under what circumstances it is appropriate for a federal agency (or any public agency or organization) to purposely mislead the American public.

The issue here is purposely misleading warnings on chewing tobacco, snus, other snuff products, and dissolvables (sticks, strips and orbs), collectively referred to as smoke-free tobacco products.

Four warnings are mandated. One of the four is technically wrong – the smoke-free products on the American market since the 1980s do not increase the risk of mouth cancer. Two are misleading. One warns that the smoke-free product is not a safe alternative to cigarettes. What it does not say is that the smoke-free products on the American market since the 1980s pose a risk of tobacco-attributable illness and death less than 2% of the risk posed by cigarettes. The other misleading warning concerns tooth and gum disease. What it does not say is that, again, for the products on the American market since the 1980s, this risk is trivial, and the tooth and gum disease is reversible by quitting.

CDC justifies these warnings on the basis of international data, which, in turn, reflect the extreme risks from locally made, heavily alkaline and heavily contaminated mixtures of tobacco with other ingredients in common use in India, but not available in the USA.

Spokespersons from all three of these federal agencies reference the goal of a “tobacco-free society,” which, in turn, rules out any consideration of any non-pharmaceutical tobacco/nicotine product in any public health initiative. They are also quick to point out their perception that any suggestion that any tobacco product might be safer than cigarettes would likely balloon the numbers of teens initiating tobacco/nicotine use, and, once so initiated, they would then transition to cigarettes. This means the message of lower risk for smoke-free tobacco products would ultimately result in more tobacco-attributable illness and death than would have occurred without that message.

The science behind this perception is highly questionable. All of the data were gathered with the warnings noted above fully in place, leaving open the question of whether teens would transition from smoke-free products without warnings to cigarettes with warnings.

The practical effect of these warnings is to convince current smokers to continue smoking if they are unable or unwilling to quit, rather than even consider switching to lower-risk and, more likely than not, less addictive alternate sources of nicotine.

The only warning that is accurate, relative to the American products, is the fourth of the four warnings, that the product is addictive.

Peter responds:

Your assertion that public health agencies are distorting the scientific evidence about smokeless tobacco isn’t shocking to me. There are plenty of precedents. A haphazard partial list:

  • The reluctance of the CDC and other agencies to acknowledge the low efficacy of the flu vaccine, for fear of deterring prospective vaccinees.
  • The reluctance of the FDA to allow latex glove manufacturers to advertise lower allergenicity than competitors even when they had solid data, because it disliked the implication of insufficient regulation in the fact that not all legal latex products were equally safe.
  • The reluctance of WHO to acknowledge that the swine flu pandemic was mild – so mild that it arguably saved lives (albeit elderly lives) by crowding out more virulent flu strains.
  • The reluctance of international polio eradication campaigners to acknowledge (as part of informed consent) that the oral polio vaccine occasionally causes polio, not just in vaccinees themselves but even in non-vaccinees as a side-effect of shedding.

There are tobacco-related precedents as well. Antismoking campaigns, for example, have been reluctant to acknowledge that smokers actually overestimate rather than underestimate the years of life lost to smoking, or that smokers (who die young) cost their employers and society less on average in healthcare and pension costs than nonsmokers (who live longer). Needless to say, the dishonesty of antismoking campaigns is more than matched, historically, by the dishonesty of cigarette ads.

Another tobacco-related example: In the late 1980s and early 1990s, I worked with the U.S. Environmental Protection Agency and other federal and state agencies on radon risk mitigation campaigns. Radon is a natural risk; it’s a decay product of uranium in the rock and soil that collects inside buildings and causes lung cancer. The lion’s share of radon risk is to smokers and people who live with smokers, because radon daughters (decay products of radon) attach themselves to smoke particles and ride them deep into people’s lungs. In less smoky environments radon is a lot less deadly. EPA knew this from the outset. But it went out of its way to underplay the relationship between radon risk and smoking. It didn’t want to give nonsmokers a sense that radon wasn’t really very serious for them; and it didn’t want to give smokers a sense that if they coped with their radon problem it was okay to keep smoking. So all its materials for the general public averaged smoker and nonsmoker risk, thereby understating the risk to smokers by a little and overstating the risk to nonsmokers by a lot.

It has been quite a while since I thought public health (or environmental protection) was reliably a science-based enterprise.

I want to draw a distinction here between dishonest public health communications that save lives, at least in the short term, and dishonest public health communications that cost lives. I oppose both. I believe that the credibility of public health requires acknowledging inconvenient truths, even truths that may deter audiences from wise precautions (e.g. the truth that the flu vaccine is only about 60% effective in healthy adults or the truth that the oral polio vaccine can give you polio). Even more tendentiously, I believe that communicators almost invariably distort the truth when they are convinced that they won’t get caught and that distorting the truth will therefore serve their goals – so I believe the only reliable path to scrupulously honest public health communications is public skepticism about those communications.

But it is certainly possible to disagree with these contentions with regard to public health dishonesty that saves lives and still oppose public health dishonesty that costs lives.

You are arguing that the stigmatization of smokeless tobacco costs lives. I’m not technically qualified to have an opinion about whether you’re right or wrong (or the likeliest option: mostly right but overstating your case). But assuming you’re right or mostly right, the risk communication question is how best to make your pitch to Congress, the media, and the public in order to overturn the misleading warnings and other regulations that deter smokers from switching to smokeless tobacco products. (I’m guessing that the inside-the-Beltway regulatory lobbying approach has already been tried and hasn’t worked, and going public now looks like your best course.)

I haven’t thought through the complex risk communication issues this question raises, but at first glance it seems to me that there are three main options:

  • The “turn the tables” option. The smokeless tobacco industry is the victim of a dastardly plot, a genuine conspiracy to stigmatize smokeless tobacco by people who know perfectly well that they are distorting the truth. The cigarette companies joined the conspiracy because they want to sell cigarettes, and smokeless tobacco threatens their sales. Big Pharma joined the conspiracy because it wants to sell non-tobacco smoking cessation products, and smokeless tobacco threatens their sales too. Anti-smoking campaigners, government and non-government, joined the conspiracy because … well, I’m not sure why. Maybe they just got outmaneuvered and went along. Or maybe they got caught up in all-or-nothing health Puritanism.
  • The “focus on data” option. You assume the agencies and organizations that are saying misleading things about smokeless tobacco believe what they’re saying. You’re not accusing them of dishonesty. They’re simply mistaken. The weight of the evidence says smokeless tobacco is far safer than smoking, and you’re sure that once they review the data they’ll alter their messaging to match. And if there are data holes that need filling, you’d be delighted to participate in a collaborative, transparent research project to fill them, with all relevant interest groups represented and no way for any side to bias, misrepresent, or suppress the results.
  • The “share the dilemma” option. The good is the enemy of the best. You understand the dilemma of public health agencies that are rightly reluctant to recommend a lifetime of addiction to smoke-free nicotine. You get it that the ideal public health goal is to wean smokers from all forms of tobacco, and that criticism of smokeless tobacco, even exaggerated criticism, might help achieve that goal. But after decades of anti-smoking advocacy, there are still many millions of smokers. The odds of persuading them to quit tobacco altogether are slim, but the odds of talking them into a much safer addiction – smokeless tobacco – are pretty good. Most public health agencies have come to accept methadone and other addictive drugs as alternatives to heroin addiction (or quitting cold turkey). Can’t they come to terms with smokeless tobacco too?

The first option – turn the tables – is emotionally very appealing to the smokeless tobacco industry, whose executives genuinely feel like victims of a dastardly plot. It satisfies their outrage.

But it’s a tough sell. I have worked for several decades with corporate clients that wanted to turn the tables: “Don’t think of us as a polluting chemical company. We’re the innocent victims of dishonest, overzealous regulators.” There is often some truth to the claim, in their case and no doubt in yours as well. But in any fight between a chemical company and an environmental regulator, everyone “knows” without looking that the company is the bad guy and the regulator is the good guy – and we resist learning that we’ve got it backwards. A tobacco company (even a smokeless tobacco company) is even more a consensus bad guy than a chemical company – and a public health agency is more a consensus good guy than an environmental regulator. Your chances of turning the tables with regard to smokeless tobacco are pretty poor, I think.

My objection to the second option – focus on data – is threefold.

  1. It’s not true. You think the federal agencies are intentionally misstating the data about smokeless tobacco, and you shouldn’t have to pretend to think they’re simply mistaken.
  2. It’s not effective. Risk controversies are mostly about outrage, not hazard. Any risk communication strategy that’s grounded in explaining the data to people who have already chosen up sides is a strategy that’s unlikely to accomplish its goal.
  3. It doesn’t address what I consider the main underlying problem. If your top priority is to save lives or to nurture smokeless tobacco sales, you might rationally be willing to let public health officials save face if they’ll shift to more rational policies. But my top priority is to achieve more candid public health communications, and more skeptical audiences for those communications. Exposing prior dishonesty and establishing systems that inhibit future dishonesty matter more to me than quietly replacing specific dishonest claims with more accurate alternatives.

So I don’t see much merit in a “look at the data” approach. But I do like the possibility of offering to do joint research to nail down any open data questions. Public health agencies want to be seen (and see themselves) as science-driven, so it’s seriously embarrassing for them to have to turn down such an offer. And if you’re right about the science, it’s even more embarrassing for them to accept and have to live with the results.

The third option – share the dilemma – is probably truer than the first two, and it is almost certainly more salable. At least until I give the matter more thought (and maybe see some focus group data), it would be my choice.

By the way, you consistently use the phrase “smoke-free tobacco.” I prefer “smokeless tobacco.” It’s more common, according to Google. More importantly, it’s more neutral. The people who say “smoke-free” are mostly proponents.

Additional Comment:

name:Farrell Delman
field:President, Tobacco Merchants Association, Inc.
date:December 27, 2012
location:New Jersey, U.S.

The “tobacco control” movement is not called the “smoking control” movement since those most eager to sell medicinal nicotine replacement, Big Pharma, wish to vilify not only all tobacco products but all tobacco companies as part of a broader effort to “denormalize” the consumption of tobacco in any form.

The fact that smokeless is less than 2% as harmful as combustible cigarettes and has roughly the same safety profile as their nicotine replacement therapy (NRT) is a problem for them, especially since their NRT products are being shown to be ineffective for smoking cessation purposes. As shown by the epidemiological data from many decades of snus and other smokeless use in Sweden, the way to really lower mortality and morbidity from cigarette smoking is to replace cigarettes with a lifestyle form of nicotine consumption, at nicotine dosages high enough to appeal to combustible cigarette smokers.

Further, based on the data that Dr. Riccardo Polosa is generating from trials of e-cigarette use among cigarette smokers not motivated to quit, e-cigarettes are now becoming the favored alternative nicotine device for cigarette smokers – largely because these products replicate a typical smoker’s behavioral rituals, beyond delivering nicotine. No wonder e-cigarettes are the fastest growing form of alternative nicotine product in the U.S. and around the world.

So if alternative lifestyle nicotine products are shown to be both safe and more effective than Big Pharma products, pharmaceutical companies may well lose their consumers to tobacco companies offering these other products.

Furthermore, FDA approval for the Big Pharma products is currently contingent on a defined period of use: 12 weeks. And FDA labeling regulations bar them from selling nicotine lozenges or gum to a consumer who is also using the patch. And their dosage levels are too low for cigarette smokers to feel. No wonder at a recently concluded FDA workshop on NRT and alternatives, the representatives from GlaxoSmithKline and Johnson & Johnson both argued that the term of use, labeling, dosage, and pack size should be altered – even with no data to buttress their claims that such changes would enable NRT to realize its full potential.

What all of this points to is that in a nation like the U.S., which continues to suffer from its puritanical upbringing, nicotine addiction is still viewed as a disease even though nicotine itself has not been shown to cause significant harm in any of its forms.

Joel responds:

I have a few technical notes in response to Farrell’s posting:

With the nicotine replacement therapy (NRT) products, in most studies, about 40% of smokers are able to abstain from cigarettes while on the 12 weeks of product administration. The problem is that such short-term use does not lead to long-term abstinence. Many physicians and patients have figured this out and are using these products on a long-term basis, as a substitute for cigarettes, in harm reduction mode. This seems to work pretty well for those who were able to abstain during the original 12 weeks. According to Bill Godshall, long-term use currently accounts for about 90% of NRT sales. (I have asked Bill for his data sources.)

The problem is that such long-term use is technically illegal, as an “off-label” use of NRT medication. The doctors and patients like it because they figure that with FDA approval, they can at least rely on the products’ purity and consistency of dosage. The doctors and patients are under the misimpression that e-cigarettes and other smoke-free tobacco products have not been approved by FDA because of impurities, contaminants, and inconsistency in dose. This is incorrect. They have not been approved by FDA because FDA has not yet established standards for their review.

Nicotine is not totally innocuous. There are issues with use of any nicotine-containing product in pregnant women (re premature delivery) and also issues relative to nicotine and wound healing. (We don’t know if it is the nicotine or other ingredients in cigarette smoke that cause these problems, but most suspect at least some of it is due to the nicotine.) In addition, nicotine is highly addictive. In very high doses, nicotine can also be very toxic, especially to agricultural workers. As an aside, high-dose nicotine is sometimes used as rat poison.

There is substantial non-puritanical reason to prohibit sales of tobacco/nicotine products to teens (and even young adults through about 24 years of age). Experience has shown that if people do not become addicted by then, they are unlikely ever to become addicted to nicotine.

Nicotine addiction is, indeed, a disease in its own right, if the person is unable to function normally without it and/or must endure substantial withdrawal symptoms to discontinue its use.

Peter responds:

Farrell’s response strikes me as an example of my first strategy, turning the tables. And I don’t think it works, though I can tell it’s heartfelt. To me, at least, it reads as special pleading: a bad guy trying to sound like (and no doubt feeling like) a good guy. Big Pharma, he argues, produces a less effective product than e-cigarettes, snus, and other forms of smokeless tobacco. The rest of the turn-the-tables argument isn’t explicit here, but it’s deducible: If people have a more positive impression of nicotine replacement therapy products than of smokeless tobacco products, they’ve been intentionally misled by a cabal of Big Pharma, Big (cigarette) Tobacco, and Big Government.

As victims tend to do, Farrell also overstates his case – which Joel makes clear in his follow-up comment, pointing out that NRT works better than Farrell suggested and that nicotine isn’t as safe as Farrell claimed.

A certain amount of overstatement is tolerable, even acceptable, in the messaging of victims; you get to exaggerate how badly you’ve been treated. Overstatement also goes down reasonably well in warnings; you get to exaggerate how dangerous something is, whether the “something” in question is cigarettes or smokeless tobacco products or pharmaceutical cigarette replacements.

But the chief messaging goal of the smokeless tobacco industry is to convince people – consumers, journalists, legislators, and eventually even regulators – that smokeless tobacco products are safer than cigarettes and more attractive to smokers than NRT or quitting cold turkey … and that many lives can therefore be saved by changing the regulatory regime so smokeless tobacco is less disfavored. That’s basically a message of reassurance: “We do less harm and more good than most people think.”

And messages of reassurance cannot afford to exaggerate.

Farrell responds:

Whether NRT works or does not work depends in part on what we mean by “work.” From the pooled data provided by Stead and Lancaster in a Cochrane Review on smoking cessation – 40 studies with more than 15,000 participants – we can deduce that in clinical trials of NRT’s effectiveness, when NRT is coupled with behavioral support, 12.5% of participants remained cigarette-free after 12 weeks. Other studies, with follow-up periods varying from six months to one year, show success rates of 12–16% for subjects selected for the clinical trials.

The real world, composed of people not selected for clinical trials, however, tells a different story – with 12-month quit rates falling to below 10%. Zhu et al. conclude that based on data from 1991–2010, during which time the use of NRT quintupled, “there has been no corresponding increase in the population cessation rate.” Alpert et al. reported no significant differences in the relapse rates between smokers who used NRT and those who quit cold turkey.

Such a low success rate is due, I believe, to the fact that cigarette smoking has a very strong behavioral component that includes moving hands, blowing smoke, etc. Nicotine addiction is only part of the problem. This may explain why the fastest growing form of nicotine consumption is through electronic cigarettes, which give the smoker a smoking experience without the toxins, as I reported before.

Now, I never meant to imply in my post that nicotine poses no harm. What I wrote is that based on most studies it does not cause “significant harm.” In fact, it is frequently compared to caffeine, which in a large enough concentrated quantity could also be used as a rat poison, I would guess.

Where Joel and I may differ is that I do believe that a major stumbling block to the health authorities’ admitting that smokeless tobacco is 98% less harmful than cigarettes is a fear that they will be courting a new round of nicotine addiction by encouraging consumers to migrate to another addictive substance, smokeless tobacco. This is clearly what has driven the European Union to continue its ban on snus throughout the EU, except in Sweden, which, owing to its long history of snus use, was exempted from the ban when it joined the EU. Now in the latest incarnation of the EU’s Tobacco Product Directive, issued a week ago, the snus ban remains in effect, since the EU’s public health authorities do not want consumers in nations without a history of smokeless consumption to be offered an alternative way to remain addicted to nicotine – even if migrating to smokeless tobacco means a much longer and healthier life for those now addicted to nicotine through cigarettes. It is this fear of continued nicotine addiction, I believe, coupled with a fear that non-nicotine consumers will then think that some forms of tobacco are okay to consume, that drives these policy makers to ban snus. So until the issue of nicotine addiction as a disease can be addressed (along with the question of how lower risk should be communicated so that it is not understood as no risk), I doubt that public health authorities will change their minds on smokeless.

And it appears that Joel and I differ as to whether nicotine addiction is a disease in its own right. This may have to do with what we call a “disease.” Some may wish to call a caffeine addict who has not had his morning coffee diseased when he reacts aggressively. My definition is more related to morbidity and mortality.

It is encouraging, however, to learn that the United Kingdom, through its MHRA, has eliminated the 12-week term for NRT and is itself no longer treating nicotine addiction as a disease. If the choice is between smoking to get nicotine and getting nicotine in some other form, MHRA now believes it is less harmful to encourage permanent NRT use. So does Big Pharma.

All of this argues for Peter’s “share the dilemma” approach, one that is especially applicable to tobacco since “joint research” is not a real option. Even universities housing independent researchers stay away from tainted tobacco company money for fear that they would then lose NIH funds. FDA, which under Dr. [Margaret] Hamburg has taken “evidence-based” policy-making to higher and higher levels, has made no effort to embrace industry offers to share its decades of scientific research in a cooperative spirit. FDA shows not the slightest embarrassment at its refusal to make use of the industry’s data.

Now since smokeless tobacco’s permissible messaging, under the law, cannot in any way be health-related, and certainly cannot make a relative risk claim vis-à-vis cigarettes, how are smokeless tobacco companies to get out the “we do less harm and more good” message? The Tobacco Control Act, as Joel clearly states in his post, compels warnings that are factually false and contrary to this claim. Based on the last survey conducted, 85% or so of the American public believe smokeless tobacco to be more harmful than cigarettes. Is there no ethical requirement that government correct such a misunderstanding among its citizens, a misunderstanding that is maiming and killing 433,000 of them a year?

Peter responds:

I’m not qualified to chime in on the health effects debate, but let me summarize what I think I have learned from Joel and Farrell:

  • There’s a huge public health benefit when a smoker “converts” to a nicotine source that’s not combustible – whether it’s smokeless tobacco or a nicotine replacement therapy (NRT).
  • There’s an even huger public health benefit when a smoker abandons nicotine altogether, since nicotine has some negative health effects and no addiction is better than any addiction.
  • There’s a significant (if not huge) public health cost when a non-smoker takes up smokeless tobacco (or NRT), but the cost is smaller than if that non-smoker had taken up cigarettes instead.
  • When NRT is a way station from smoking to being nicotine-free, it’s terrific. When it’s a permanent replacement for smoking, it’s still pretty good (even if it’s an off-label use). When it’s a failed effort to quit that ends up with the smoker smoking again, it’s close to valueless. When it’s a new habit of a previous non-smoker, it’s modestly harmful. And when it’s a way station from non-smoking to smoking, it’s horrible.
  • Exactly the same is true for smokeless tobacco: terrific when it’s a stop en route to quitting; pretty good when it’s a permanent smoking replacement; close to valueless when the smoker resumes smoking; modestly harmful when it’s a new habit of someone not previously addicted; horrible when it’s a step toward becoming a smoker.
  • We know a fair amount about the relative harm of smoking versus smokeless tobacco versus NRT versus nothing. But to formulate sensible policies, we also need to know the relative frequency of the various pathways – and there don’t appear to be much data on that. So if you work for the smokeless tobacco industry, you tend to want to tell people (accurately) that a smoker who can’t or won’t quit benefits enormously from switching to smokeless tobacco. If you hate the smokeless tobacco industry, you tend to want to tell people (accurately) that smokers should quit, not just find a less harmful addiction; and that a non-smoker who starts with smokeless tobacco and graduates to cigarettes would have been far better off not starting.
  • No side in this debate is trustworthy. The cigarette, smokeless tobacco, and NRT industries have an obvious financial stake in the outcome. Public health agencies are biased for less obvious reasons, most notably their strong emotional and ideological opposition to tobacco in any form. If you want to know “the truth about smokeless tobacco,” you have to listen to all sides and weigh the evidence yourself. I haven’t done that – I’ve been listening to Joel and Farrell – so everything in this list has to be considered tentative.

The other thing I’ve learned from Joel and Farrell is that the debate isn’t a fair fight. The smokeless tobacco industry is pretty powerless compared to the other players: the cigarette industry, the pharmaceutical (NRT) industry, and the public health agencies. Smokeless tobacco companies are forbidden to advertise what they see as their product’s principal benefits (it’s less dangerous than cigarettes and a more satisfying cigarette substitute than NRT). And they’re required to advertise what their critics see as their product’s principal drawbacks (it’s addictive, it’s not a perfectly “safe” alternative to cigarettes, and in some formulations it has been linked to oral cancer and to tooth and gum disease).

The fact that you’re oppressed doesn’t necessarily mean you’re right, of course. But if Joel is right that smokeless tobacco is 98% less dangerous than cigarettes, and if Farrell is right that 85% of Americans wrongly believe that smokeless tobacco is more dangerous than cigarettes, then I’d have to conclude that the smokeless tobacco industry is the victim of a successful smear campaign. And it’s hard for me to imagine that the harm done by seducing (some number of) non-smokers into nicotine addiction could possibly outweigh the good done by converting (some other number of) confirmed cigarette addicts to a much less deadly addiction.

I’m not an attorney, but I believe that the messaging on smokeless tobacco packages and in smokeless tobacco product advertisements is much more vulnerable to regulatory restrictions than the industry’s messaging in other venues, where its First Amendment rights are less shackled. I’m thinking about newspaper op-eds, social media campaigns, and even ads – issue ads as opposed to product ads. Regulators can forbid a smokeless tobacco company to put comparative risk claims on its packaging. But I don’t think they can forbid the company’s CEO to cite comparative risk data in an article arguing that the industry has been unfairly smeared by public health agencies that want to give smokers no options but quitting or dying.

And then it would be fun (and might be legal) to put something like this on the packaging and product advertising: “The law forbids us to tell you the relative risk of this product compared to cigarettes. If you want to know, check out www.saferthancigarettes.com.”

Of course the smokeless tobacco industry would incur the wrath of its critics (even more wrath than it incurs now) as soon as it started exercising its First Amendment rights. Declaring war is an option worth considering only if negotiating peace is a nonstarter. And using data to prove that public health agencies are perpetrating a smear campaign is probably not optimal messaging. As noted earlier, I think “share the dilemma” is a better strategic guideline for the smokeless tobacco industry than “turn the tables” or “focus on data.”

My point here is simply that there are venues less regulated than product ads and packages (and less invisible than this website) for people like Joel and Farrell to start making the case for smokeless tobacco.

From what I’ve heard so far, it’s a case worth making.

Does the public care about the H5N1 research controversy? How can officials involve the public? Do they really want to?

name:Leslie
This guestbook entry is categorized as: Pandemic and Other Infectious Diseases

field:Vaccination proponent
date:December 12, 2012
location:Canada

comment:

Are you aware of any public opinion survey on the H5N1 controversy? I would be interested in trying to arrange such a survey if nobody has conducted one. In that case, I would welcome your thoughts on the design of the survey. It would have to be very carefully constructed.

My interest in broadening the discussion on the H5N1 controversy to include civil society stems from the negative impact of the controversy on public trust in the life sciences. I believe it is very important to build public trust in vaccines, vaccine research and biomedical research in general at a time when we have falling immunization rates and growing public distrust. The fallout of the H5N1 controversy undercuts all our efforts in this department – and provides fodder for the anti-vaccine movement.

Peter responds:

Before introducing readers to the H5N1 research controversy and offering my thoughts about “broadening the discussion,” I want to suggest adding a word to your description of your goal. Instead of working “to build public trust in vaccines, vaccine research and biomedical research,” I’d suggest saying you hope “to build earned public trust in vaccines” etc. As I will suggest later in my response, I think vaccination proponents sometimes seek the public’s trust without earning it.

For readers unfamiliar with the controversy you’re raising, H5N1 (“bird flu”) is an especially deadly strain of influenza that could pose a huge human health risk if it ever acquired the ability to spread easily in humans – which so far it has not done. But in early 2012 a controversy arose over research aimed at bioengineering a new kind of H5N1 that would be more readily transmissible in mammals. The debate focused on the potential value of the research (for example, it might help scientists better understand how to stop H5N1 from becoming transmissible) versus its potential risks (an accident or an intentional release might launch an H5N1 pandemic). While the debate raged, research teams headed by two scientists, Ron Fouchier and Yoshihiro Kawaoka, saw publication of their papers put on hold. After a few months the two papers were belatedly published, but a voluntary moratorium on similar research remains in effect while scientists, policymakers, and interested citizens try to thrash out what rules, if any, should govern this sort of research in the future.

Earlier this year when the controversy was hot, I read a lot of casual references to how H5N1 bioengineering research was arousing high levels of public concern. Both sides said so. Opponents of the Fouchier and Kawaoka research suggested that the public’s concern was evidence that the research should stop. Supporters suggested that the public’s concern was evidence of the need for public education, and perhaps a moratorium to allow time for public education. No one suggested publicly that the public’s concern was evidence that the public is an idiot and that research of this sort should therefore be done quietly if not quite secretly – though of course that’s what many researchers quietly/secretly believe.

But I don’t recall seeing any formal surveys of public reaction to the controversy or public opinion about the relevant policy questions. If one had been published, I’m pretty sure I’d have seen it.

A quick look at Google Trends suggests that the controversy was of only brief and modest interest to the general public, as measured by search volume. H5N1 Google searches peaked in 2006, when a bird flu pandemic looked like it might be imminent, then fell off precipitously; search volume remained fairly constant and fairly low between 2007 and 2009, and then got lower still. A tiny blip in early 2012 coincided with the research controversy; it’s barely detectable, and never approached even the level of 2007–2009, much less the level of 2006.

The Google Trends news coverage graph goes back only to 2008. The controversy generated more H5N1 news coverage in early 2012 than had been typical in the previous few years (though surely less than in 2006) – but since the controversy has abated the amount of H5N1 coverage is now lower than in the 2008–2011 baseline.
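For anyone who wants to replicate this sort of quick check programmatically rather than by eyeballing the Google Trends site, here is a minimal sketch. It relies on the unofficial pytrends Python library – an assumption for illustration, not something I actually used – and it bears remembering that Trends reports relative search interest on a 0–100 scale, not absolute search counts:

    # Minimal sketch: pull relative search interest for "H5N1" from Google
    # Trends via the unofficial pytrends library (pip install pytrends).
    # Keyword and timeframe are illustrative assumptions.
    from pytrends.request import TrendReq

    pytrends = TrendReq(hl="en-US", tz=0)
    pytrends.build_payload(["H5N1"], timeframe="2004-01-01 2012-12-31")
    interest = pytrends.interest_over_time()  # pandas DataFrame, 0-100 scale

    # Compare the 2006 peak, the 2007-2009 baseline, and the early-2012 blip.
    print("2006 peak:        ", interest.loc["2006", "H5N1"].max())
    print("2007-2009 average:", round(interest.loc["2007":"2009", "H5N1"].mean(), 1))
    print("early-2012 peak:  ", interest.loc["2012-01":"2012-06", "H5N1"].max())

If the pattern described above holds, the early-2012 figure should come in well below both the 2006 peak and the 2007–2009 average.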

I suspect it will be hard to learn much from a public opinion survey regarding this controversy (especially a year after the controversy was hot). I’m guessing that most people barely paid attention to it then and barely recall it now.

For those (few) who were paying attention, I agree with you that the controversy further eroded public trust in science – public trust in the wisdom (and good sense) of bench scientists, in the wisdom of letting them police themselves without outside scrutiny, and in the likelihood that they’ll defer readily to such outside scrutiny. I think that erosion in trust is justified by what actually happened. In fact, I think H5N1 researchers did surprisingly well in hiding from the public the most distrust-worthy aspects of the controversy. (See especially “Science versus Spin: How Ron Fouchier and Other Scientists Miscommunicated about the Bioengineered Bird Flu Controversy.”)

Even for those (few) who were paying attention, I rather doubt that the controversy would have impacted attitudes toward vaccination. But I could be wrong, of course. If your intuitions and mine diverge, that might be reason enough to do a survey!

Study vaccination hype instead

For what it’s worth, if I were asked to state my highest priority for vaccine-related social science research, I would urge a study to assess the impact of vaccination hype on public mistrust. I believe that public health professionals not infrequently exacerbate the public’s skepticism about vaccination by overselling its virtues and under-acknowledging its defects. Among my favorite examples: the reluctance of polio vaccination campaigners in the underdeveloped world to acknowledge VAPP and VDPV (that is, the fact that the oral polio vaccine can give you polio) and the reluctance of flu vaccination campaigners in the U.S. to acknowledge the low efficacy of the flu vaccine, especially in the elderly.

Together with my wife and colleague Jody Lanard, I have written about these phenomena extensively – though mostly on my website rather than in the refereed literature. We don’t have much evidence of a causal relationship between the dishonesty of vaccination proponents and the public’s vaccination skepticism, though it would be surprising if there weren’t such a relationship. We have a great deal of evidence about the dishonesty itself.

You might also want to look at my three-part video interview, “Vaccination Safety Skepticism: Public Health’s Self-Inflicted Wound.”

For years my corporate clients would tell me their main risk communication problem was that they weren’t trusted, and I would reply that I thought it was a much bigger problem that they weren’t trustworthy. The same applies to public health, I think. Jody and I are strong proponents of vaccination, but strong critics of vaccination hype.

When I argue these issues with colleagues and clients in public health, I get some pushback on my claim that vaccine promotion campaigns are systematically dishonest. But I get more pushback on my claim that the dishonesty does harm, that it is a significant threat to the credibility of vaccination and even to the credibility of the entire public health enterprise. I’d love to have more evidence that this is so. (Evidence that it isn’t so would put me in the same quandary that vaccination campaigners are in with regard to evidence about vaccine inadequacies!)

I apologize for responding skeptically to your proposed survey focus, and for trying to make a case for a different focus … about which you may be equally skeptical.

Leslie responds:

Like you, I was stunned by the about-turn by Fouchier, and the NIH’s rush to embrace Fouchier’s new line that his mutated viruses were not as lethal as first reported. In fact, I was appalled by the manipulation. Claiming that the media had misreported the facts and stoked public fears was disingenuous. I was even more distressed that so many seasoned newspaper reporters who cover science and medicine fell for it. (I won’t name them.) That aspect of the H5N1 saga has never really been told, nor has the way in which the NIH and subsequently HHS backed the new line as a face-saving way to preserve their credibility, get themselves out of the mess, and allay concerns in the White House.

Regarding my search for any public opinion surveys on the H5N1 controversy, I agree that public awareness of the controversy is probably low, and that any awareness peaked several months ago at the height of the controversy. I would still like to see some kind of survey and some ongoing tracking of public opinion on the controversy. I have learned that awareness and attitude studies are possible even when the awareness of a given topic is low – the cost just quadruples.

Of course you are right about OPV and influenza vaccines. However, most anti-vaccination sentiment focuses on ideas for which there is no basis in science and which have been thoroughly debunked. I am less concerned about the anti-vaccine movement than about the far larger group of people who have doubts and worry about vaccine safety. That’s where the issue of public trust comes in. As we all know, it only takes one major event to undo years of hard work building that trust.

I will keep you posted if I learn of any relevant studies on H5N1 underway or planned. I will also let you know if I decide to try to help launch a survey. If I do, I would welcome your input on the design of the survey and the methodology.

In the meantime, I have three questions for you:

number 1
How can interested parties involve the public in the debate over H5N1 research?

Every group and institution involved in the H5N1 research controversy has called for public participation in the “international discussion.” The organizers of the Asilomar conference have made extravagant claims about how they sought to involve civil society in their discussions – and about how successfully they did so. As far as I can tell, civil society participation in the Asilomar conference was limited to a few journalists who were invited to attend the meeting. If we believe the calls by the NSABB, WHO, and USG for greater civil society involvement are genuine and not just lip service/political correctness, how would one go about getting greater civil society participation? Who would one invite to represent civil society?

Unlike HIV or cancer, where there are very vocal and well-informed patients and patient advocates, there is no obvious constituency to invite representing the “H5N1 community” or concerned citizens. The NSABB has only one non-scientist on the panel (a judge), who could be said to represent civil society. Tellingly, she was one of the three dissenters [to the NSABB decision to approve publishing the Fouchier and Kawaoka papers]. The WHO influenza group, as far as we are aware, has no civil society advisory panel to which it could refer. All WHO meetings convened to date on the H5N1 controversy have been closed to the media and the general public. The NIH/NIAID influenza network meeting in New York on July 29 was also closed to the media and civil society participation. It was only “opened up” by the organizers 24 hours prior to the meeting, when a few hand-picked journalists thought to be loyal to the NIH were invited at the last minute to attend.

As further evidence that public participation is not really welcome despite all the lip service, the CDC’s recent call for public input on its recommendation that strains of H5N1 that are transmissible between mammals be designated as “Tier 1” select agents – which means future research would have to be conducted in a BSL-4 facility – was announced in the Federal Register. As we all know, few people read the Federal Register. This would seem to me to be a conscious and very cynical decision by USG to limit public input.

This leaves unanswered the question: “If one really wanted to get public participation in the international discussion, how would one go about it?” Ruling out a series of Town Hall-type meetings, focus groups, etc., what other ways are there to get public participation? At present, we can count the number of “consumer advocates” (that’s not the right term) who are interested in and knowledgeable about the H5N1 controversy on one hand, perhaps two hands. There is only one individual who has made herself extremely knowledgeable about the controversy, Pulitzer Prize-winning former journalist Laurie Garrett, who could be said to speak for civil society. And there is only one Peter Sandman, if one counted you as civil society. It’s an extremely small group unless you expand “civil society” to include the media. It’s hard to argue that science editors represent civil society. With the exception of Steve Connor at The Independent in London, the most knowledgeable journalists covering the story are Science and Nature reporters.

I would welcome your thoughts on this dilemma.

number 2
Why doesn’t WHO take the lead?

Along with other observers, I have noted what has been described as a weak WHO response. Instead of taking the lead and opening up the debate, WHO appears to have hunkered down. We all know the WHO influenza group wants to protect its virus-sharing network. However, where is the leadership from the top? The WHO Director-General has been almost silent on the controversy, even though she was in the middle of the bird flu scares in Hong Kong. The virtual silence of the WHO leadership on the H5N1 controversy contrasts with the strong lead it took on SARS.

Is the WHO feeling chastised after declaring H1N1 a pandemic? It was criticized for crying wolf on that issue, but that doesn’t seem enough to explain its reluctance to speak out strongly on the H5N1 research controversy and push – for example – for stronger biosafety standards.

number 3
Do you think the U.S. authorities are being sincere when they say they want public comment?

There is a growing feeling that those calling for greater public involvement don’t really want it – at least they don’t seem to want genuine (“genuine” being the key word) public participation in the “international discussion” that’s been called for.

How else to explain why the CDC/HHS would invite public comment using the Federal Register? Have you seen their announcement?

Marshall McLuhan would consider the use of the Federal Register for this purpose to be a prime example of a government agency (or a large corporation) using an obscure publication – the medium itself – to shape and control “the scale and form of human association and action.”

It’s hard to disagree with that assessment. The Federal Register announcement is 2,476 words of bureaucratic mumbo-jumbo that would seem designed to turn people off or exhaust them. Only the most determined individuals with a special interest in the controversy will be prepared to grit their teeth and wade through it – and groan at being forced to do so. I would postulate that using the Federal Register is part of a coordinated effort to suppress public comment. If they really wanted public comment, the leadership of the CDC and the HHS could call a news conference.

Peter responds:

For the record, I share your judgment that anti-vaccination activists are much, much further from the scientific evidence than vaccination proponents. But they can afford to be. The reassuring side in any risk debate is held to a far higher standard of scrupulousness than the alarming side. We are slightly inconvenienced if a smoke alarm goes off when there’s no fire; we are likely to die if there’s a fire and the smoke alarm doesn’t go off. So we calibrate smoke alarms to be conservative – to go off too much so we can be confident they won’t miss a fire. For the same conservative reason, we also “calibrate” activists to go off too much. Those who warn that X isn’t safe (whether X is a vaccine or an industrial pollutant) are held to a much looser evidentiary standard than those who assure us that X is safe.

As I have written at great length (I won’t burden you with further citations), vaccination proponents are at least 99% right about vaccine safety – but they choose to say misleading things about the other 1% – which gives anti-vaccination activists their strongest arguments. And of course vaccination proponents are a lot less than 99% right about vaccine efficacy, and very prone to playing fast and loose with the efficacy data, especially in the case of flu – which again gives anti-vax activists grist for their mill. Proponents endlessly intone that the flu vaccine is “safe and effective.” When people find out how ineffective it really is, can they be blamed for starting to wonder if it might also be unsafe?

At least that’s my contention – or perhaps I should say my hypothesis, which I often advance as a contention (though I think I try harder than CDC and WHO do to acknowledge when I’m making an assertion I think is probably true but for which I have little actual empirical evidence). I’d love to see it tested properly: What is the effect of vaccine hype on public skepticism?

But that’s my hobbyhorse. Let me get back to yours.

number 1
Regarding your first question, how to involve the public in the debate over H5N1 research:

I think it’s important to bear in mind the distinction between publics and stakeholders. (See my column “Stakeholders” for more than you want to read on that distinction.) Stakeholders see themselves as having a stake in the outcomes of decisions. Publics don’t see themselves as having a stake.

If you buy these definitions, it is literally impossible to involve publics in decisions. They will choose not to get involved.

That leaves three tasks that have something in common with “public involvement”:

  • The first task is to involve stakeholders meaningfully. That’s hard enough – but unlike public involvement, at least it’s not impossible by definition. It’s where I usually put most of my energies: identifying people (and kinds of people) who feel a stake in pending decisions – feel it already or feel it immediately when informed that those decisions are pending – and soliciting their involvement. The lowest-hanging fruit, of course, is involving people who are already clamoring to be involved.
  • The second task is to turn publics into stakeholders. The easiest way to do this – the way activists use – is to arouse people’s outrage (a term of art for me – see www.psandman.com/index-intro.htm) and increase their self-efficacy. People who feel outraged will typically come to see themselves as stakeholders; people who feel both outraged and efficacious are the most likely to get involved. Rational argument (“here’s how this affects you and why you should get involved”) is considerably less powerful as a motivator of involvement, but it’s not worthless.
  • The third task is to educate publics. Of course stakeholders are much more motivated to learn than publics (though their information-seeking tends to be biased by their outrage and by the direction of their stake). But even publics can be taught stuff – their curiosity can be piqued, for example; or their sense of civic obligation to know what’s going on can be aroused and then satisfied. As publics get more educated about an issue, they are likelier to see their stake and become stakeholders – but even when they don’t, decisions are more legitimate (and substantively better) if publics are watching than if they’re not.

All three of these tasks are pretty obviously relevant to decisions about H5N1 bioengineering research. None of the three, I think, has been seriously attempted by the major players in the game.

number 2
Regarding your second question, why WHO doesn’t take the lead:

I think you have answered this question pretty well yourself. WHO’s biggest stake in the H5N1 research controversy is its virus sharing protocols. When citizens of Indonesia and other developing countries get sick with H5N1, their governments routinely but reluctantly turn over blood samples for scientists in developed countries to study, enabling them to track how the virus is changing and develop vaccines to match. Controversy over potentially dangerous H5N1 bioengineering research could endanger this flow of samples.

And WHO judges (rightly) that its influenza-related credibility is very fragile right now. In the wake of its failure to acknowledge publicly that pH1N1 – that is, “swine flu” – was turning out mild, and the way that failure played out politically in Europe, it doesn’t want to issue any urgent-sounding warnings about a less-than-imminent H5N1 pandemic. It doesn’t want to warn that continuing H5N1 bioengineering research might lead to a devastating pandemic, and it doesn’t want to warn that failing to do such research might make it harder to prevent or respond to a devastating pandemic.

Perhaps equally important is the fact that WHO very seldom takes a public leadership role on controversies that divide its science/public health/research constituency from its constituency of politicians and the publics they represent. It can’t afford to alienate either group. Arguing that the research should be banned would alienate the former; arguing that the research should continue unabated would alienate the latter. And arguing that the research is both essential and extremely dangerous, and that we therefore need a substantial ratchet upward in the safety and security precautions under which the research is conducted, would alienate both.

So WHO has mostly confined itself to convening groups of scientists and other supporters of the research in forums where they could tell each other how essential it is and how safely it is done, and then genuflect vaguely in the direction of henceforth being even more careful and listening a bit harder to the concerns of critics.

number 3
Regarding your third question, whether I think U.S. government authorities are sincere when they say they want public comment:

No.

To make the point more carefully: I’m sure the U.S. government wants to punch its public consultation ticket. It wants to have solicited public comment as required by law … but it doesn’t want to have to deal with actual public comments.

To make the point more carefully still: Put yourself in the shoes of the people at CDC, NIH, and the rest of HHS who are responsible for shepherding the new H5N1 research regs through the approvals process. They genuinely welcome constructive comments from knowledgeable insiders, and have doubtless solicited and received many such comments already – not just blanket endorsements, but also comments that included some suggested improvements along with their overarching support. That all happened before the public comment period started, and presumably the regs improved some as a result.

Now the proposed regs need to run the gantlet of comments from outsiders – three kinds of outsiders:

  • Those who are dead-set against the main thrust of the new regs (whether knowledgeably or ignorantly). Taking their comments onboard would require going back to square one. Understandably, their comments are not welcome. Some will be received, of course; people who are dead-set against the new regs are among the few who will take the time to read the Federal Register. What they say will be rebutted, not truly considered.
  • Those who are knowledgeable and not dead-set against the new regs – but who have significant reservations to explain and significant changes to suggest. These are the people whose comments are potentially most valuable … and whose comments might actually have been influential if they were insiders, if they had been consulted earlier, and if the process weren’t so far along already. But the goal now is to get the regs adopted and the research resumed, and so even suggestions that are sensible and feasible are far from welcome. Nonetheless, when a formal public consultation process yields any changes at all, the changes come from this group. If the people masterminding the process are canny or honorable (or both), they will leaven the inevitable litany of rebuttals with an occasional “Good idea!” in response to something somebody in this group has proposed.
  • Those who are not very knowledgeable but have strong opinions nonetheless. This is what we usually mean by “the public” – people who saw a newspaper editorial or heard something on the radio or read a rant on somebody’s blog and were inspired to write. A robust public consultation process inspires a carload of passionate but not-very-knowledgeable submissions; hiding your public consultation process in the Federal Register lightens the load of such submissions. Everyone who has done much public consultation work knows two things about this third group: (a) If you read their comments carefully and empathically, they include plenty of common sense and more than a few profound nuggets of wisdom that could help knowledgeable insiders fashion a better, wiser set of regs; and (b) Because they also include a lot that’s nutty and useless, no insider trying to get a set of regs implemented on schedule wants to read a carload of passionate but not-very-knowledgeable submissions carefully and empathically.

Soliciting public comment in the Federal Register is SOP for a proposed new set of federal regulations. But of course you’re right: If you really want public comment, you don’t have to confine yourself to the Federal Register. There are plenty of better ways to ask people what they think.

But it’s rare for those in charge to really want public comment. When they actually do, they seek it informally before the legally obligatory formal public comment period. When that formal public comment period yields robust discussion and meaningful policy change, it’s almost always because opponents made it happen – not the people in charge.

As you know, an “international consultative workshop” on the H5N1 research controversy is scheduled for December 17–18 in Bethesda, Maryland, under HHS sponsorship. The preliminary agenda currently online lists an astonishing 73 speakers, moderators, and panelists over 16-1/2 hours (8:00 to 6:30 on the 17th and 9:00 to 3:00 on the 18th). Allowing no time for meals or bathroom breaks, that’s less than 14 minutes apiece. The agenda also calls for five opportunities for “moderated discussion” with the audience; I’ll leave it to the reader’s imagination how much audience comment will be allowed to cut into each speaker’s 14 minutes.

Judging from the people whose positions I know, the preliminary list of speakers includes a lot more supporters of H5N1 bioengineering research than critics of that research. But balance issues aside, there are simply too many people on the agenda for even the speakers to get much chance to develop an argument, critique an opposing argument, or cross-question each other. They’ll be pretty much confined to predigested set pieces. As for the audience, a “moderated discussion” that lets some of the people in the room sound off for a minute or two each and then “that’s it, you had your turn, let’s hear from somebody else” isn’t a discussion at all. It virtually guarantees that nobody’s contribution will be followed up, explored, responded to, or even listened to.

It’s possible to do better than that – a lot better. In 2005 and again in 2009, the CDC sponsored a series of public workshops around the country devoted to this thorny question: Suppose there were a severe flu pandemic (H5N1, for example) and not enough vaccine to go around. What groups should get top priority for the limited vaccine supply? Children? The sick and elderly? Cops, firefighters, healthcare workers, and others in key occupations? Their families too? Should there be priority groups at all, or should the vaccine be distributed by lottery or on a first-come first-served basis?

The 2005 public consultation process began with a national stakeholders meeting to help frame the issues. Then there were a couple of citizen dialogue sessions; then another stakeholders meeting; then more feedback sessions with citizens. The 2009 follow-up included ten public meetings, two web engagements, and one stakeholder meeting. (Both the 2005 report and the 2009 report are available online.)

I attended one of the 2009 workshops. There were only a handful of presentations; nearly all the time was spent in small-group discussions, followed by large-group debriefings. And as far as I could tell, the majority of the participants were neither experts nor representatives of organized advocacy groups; they were just interested citizens.

Of course even this admirable process can’t be said to have involved “the public.” Millions of people weren’t involved, only hundreds. And people who would choose to devote a weekend day to thinking through the practicalities and ethics of pandemic vaccine prioritization are a little weird, almost by definition. Still, hundreds of weird-but-ordinary folks all across the country spent a day publicly pondering these tough questions, not a minute or two. Their views were explored, questioned, clarified, and in some cases changed – and then they were recorded and fed back to the group itself and to the CDC policymakers in Atlanta.

That’s what a sincere effort at citizen engagement looks like.

A gradient of outrage susceptibility; outrage versus moral panic

name:Liam Gash
field:Communications manager, University of Tasmania
date:December 3, 2012
email:Liam.gash@utas.edu.au
location:Australia

comment:

Your risk communications methodology makes absolute sense to me.

I have read quite a lot about “moral panic” in sociology and there seems to be a direct connection to outrage. The moral panic literature asserts that social movements deliberately aim to stir up “moral panic” around certain issues, but the effort won’t be successful unless there is an underlying social anxiety that makes a large percentage of the community receptive to “that button being pushed.”

Perhaps people could be outraged about a particular risk, but not react in a group in a way to foster a “moral panic.”

You say in your book that “exaggeration is the natural tool of the ‘alarming’ side of the debate.” Exaggeration is a classic aspect of moral panic – the claim that the sky will fall in unless something is done about the issue.

I’m very interested in your “gradient” of susceptibility. I’m wondering whether you could apply a 10-point scale to each of your top 20 criteria (voluntary vs. coerced, natural vs. industrial, etc.) in order to use it as a survey. You could give people a one-paragraph scenario and then a 20-item questionnaire. I think this could be a useful way to measure the potential “outrageousness” of a situation. What do you think?

Also, I think it could be useful to have an ongoing dialogue between moral panic sociologists and risk communications people like yourself.

peter responds:

I mentioned a possible “gradient” of outrage susceptibility in an earlier email to you. Here’s how I put it then:

Level 1: People are already upset. They’re outraged, and they know it – but they’re relatively inactive because they feel alone and impotent. The organizing task is an easy one: to make them aware of each other and each other’s outrage, and thus to empower their outrage.

Level 2: People aren’t actively upset, and may not even know they’re upset at all – but all the ingredients are there. The situation has lots of the “outrage components” (coercion, unfamiliarity, low trust, etc.) already; the ingredients for active outrage are in the pot, and just need to be stirred and perhaps heated. This is a controversy waiting to happen. Activists are skilled at recognizing this state of affairs, since it is low-hanging fruit (to mix my metaphor). They can easily swoop in and start converting the latent outrage into manifest outrage.

Level 3: The situation has characteristics that make it possible to arouse outrage, but people aren’t aware of those characteristics. The company/industry/technology is still vulnerable, but the activist side has more work to do than in Level 2. I used to talk about this in terms of “salience manipulation.” Example: The community is already strongly opposed to pollution, but ignorant, apathetic, or even weakly positive about fluoridation. An anti-fluoridation activist can make some headway by reframing fluoridation as “fluoride pollution,” pointing out that if some industry were dumping that crap into the water, it would be illegal! (This is a real example from early in my career.) The anti-pollution value is already there to be used as a way of arousing outrage; it doesn’t need to be inculcated – but it isn’t yet salient in people’s assessment of fluoridation.

Level 4: The situation has few of the characteristics that are conducive to outrage. That could change over time, due to strenuous activist efforts or serious company missteps – but for now the prospects for a widespread controversy are poor. A smart company would be building alliances and monitoring the situation for possible changes, but not actively engaging in outrage management. A smart activist group would be deploying its assets elsewhere.

I call this a gradient of “susceptibility” to outrage because there isn’t already a widespread public controversy – just a possible future one. For a more complete scale you could add a “Level 0” where stakeholders are actively complaining and organizing already, and perhaps a “Level 00” where all hell is breaking loose.

The level that preoccupies me most is Level 2. That’s the “cusp” where latent outrage either does or doesn’t turn into manifest outrage, depending largely on what activists do to exacerbate the situation and on what the responsible company or government agency does to ameliorate it.

I hasten to add that “ameliorating” the situation doesn’t just mean reducing the outrage and thus calming people down. It also means reducing the hazard – doing something meaningful about the problems or abuses that underlie, motivate, and justify the outrage. But it is a fundamental principle of outrage management that once people are outraged they tend to stay outraged even if the objective situation improves, unless things are done to address the outrage itself. As I endlessly tell clients: If the hazard is serious, fix the hazard. If the outrage is serious, fix the outrage. If they’re both serious, fix them both. Don’t expect emissions reduction (good hazard management) to make people less upset any more than you’d expect an apology (good outrage management) to make people less endangered.

So my outrage assessment or outrage due diligence tools (my “OUTRAGE Prediction & Management” software, for example) are devoted to identifying conditions that are ripe for outrage exacerbation and thus desperately in need of outrage management (depending on which side you’re on).

Measuring outrage

This gradient of outrage susceptibility is a bit different from simply asking people how outraged they are. Instead, it tries to get at how outrage-prone the situation itself is.

You suggest a 10-point scale for each of the 20 outrage components I discussed in my “Responding to Community Outrage” book – the twelve principal ones I talk about all the time and a second list of eight also-rans covered in the book. Other scholars have sometimes used five-point or seven-point Likert-type scales to measure the outrage factors.

Asking people how “voluntary,” “natural,” “familiar,” etc. they consider a situation is an improvement over simply asking them how outraged they are about the situation. Because it’s usable before the outrage itself is manifest, it has more predictive value. And because it asks separately about each of the outrage factors, it has more diagnostic value.

It has two great weaknesses. The first weakness is its assumption that people who checked “2” on a seven-point (or five- or ten-point) Likert scale from “unfamiliar” to “familiar” are actually more outraged or more likely to become outraged about some situation’s unfamiliarity than people who checked “3.” In other words, the Likert scale method assumes that people’s reaction to a global concept like “familiarity,” “trust,” “control,” and the rest is systematically correlated with how outrage-prone the situation is. That may be true, but it’s not proven and it’s not obvious.

The second weakness of using Likert scales to measure outrage potential is the need to combine the separate factors into one numerical total. The usual way to do this is simply adding them up. But there’s no reason to assume the 20 outrage factors are equally powerful. After 40-odd years of outrage-related consulting, I know they’re not. “Trust,” “control,” and “responsiveness” are far likelier to be major sources of outrage than, say, “naturalness” or “dread.” There are exceptions; sometimes naturalness or dread is the 800-pound gorilla. But I don’t think that happens often. And I damn well know the 20 factors aren’t equal.
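To make the aggregation problem concrete, here is a minimal sketch in Python of the difference between an unweighted total and a weighted total. The factor names are real, but the ratings and weights are invented for illustration – they are emphatically not the algorithm in my software:

    # Hypothetical illustration: one rating per outrage factor,
    # from 1 (low outrage potential) to 7 (high).
    ratings = {
        "trust": 6,
        "control": 5,
        "responsiveness": 6,
        "naturalness": 2,
        "dread": 2,
    }

    # The usual approach: a simple sum, which treats every factor
    # as equally powerful.
    simple_total = sum(ratings.values())  # 21

    # Invented weights that let "trust," "control," and "responsiveness"
    # dominate, as consulting experience suggests they usually do.
    weights = {
        "trust": 3.0,
        "control": 2.5,
        "responsiveness": 2.5,
        "naturalness": 0.5,
        "dread": 0.5,
    }
    weighted_total = sum(weights[f] * r for f, r in ratings.items())  # 47.5

The point of the sketch is only that two situations with identical unweighted totals can look very different once the dominant factors are allowed to dominate – and that any particular set of weights is an empirical claim, not a given.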

I tried to get around these two problems with my “OUTRAGE Prediction & Management” software. I wrote literally hundreds of questions about the various outrage factors (just the twelve biggies), and I kept tinkering with the math until I had produced an algorithm that replicated my consulting experience. For actual situations I had worked on, in other words, the software consistently yielded a “total outrage” figure that matched what had actually happened. That made me willing to bet that for new situations the “total outrage” figure the software yielded would match what was going to happen.

Of course the software doesn’t just predict total outrage. It partitions it among the twelve factors and in some cases even subfactors. And it estimates the impact on outrage if the organization decided to change particular policies in order to change particular answers.

The software has its own weaknesses. You can’t get stakeholders to answer hundreds of questions for you, so the software needs to be completed instead by a group of people in the organization who trust their collective guesses about how the relevant stakeholders would answer. Even at that, it takes a bunch of hours for them to answer all the questions. In practice, even companies that purchased the software didn’t use it much; they typically found it quicker and easier to bring me in as a consultant instead. So in 2009 I made it freeware and posted it on my website.

The bottom line for measuring outrage, as I see it:

  1. To measure manifest outrage (Level 0 or 00), simply measure the outraged behavior. Count stakeholders’ angry letters or social media postings; count how many rocks are thrown at your car; etc.
  2. To measure Level 1 outrage, ask people how outraged they are – how upsetting the situation is for them.
  3. To measure Level 2 outrage, ask people separately about each of the outrage factors. That’s the Likert scale methodology you suggest. Or use something like my software. Or give up on quantification and go the qualitative route: read the local media; talk with people in bars and supermarkets and buses (or in focus groups); really get to know the community.

Moral panic versus outrage

I don’t know a great deal about moral panic theory, as developed by sociologist Stanley Cohen and others. From what little I know, I certainly do see some commonality with outrage.

But I have a strong impression that “moral panic” – as the term implies – is usually applied to situations where mainstream values are threatened by some new and offensive behavior, leading to a distinctive mix of disapproval (the “moral” part) and fear (the “panic” part). One of the classic examples seems to be when drug-using young people move into a middle-class neighborhood and start harassing their more conventional neighbors. The term also gets applied to overreactions to serious but uncommon crimes (child molesting in kindergartens, for example), and to outright bigotry aimed at nonconformist but perfectly law-abiding groups (minority races, idiosyncratic religions, etc.).

By contrast, I initially developed the outrage concept to describe the mix of disapproval and fear felt by neighbors of a polluting industrial facility, who are likely to believe the facility’s emissions are dangerous (whether they are or not) because the facility’s management is arrogant, secretive, and unresponsive.

So moral panic is most characteristically what majorities feel about offensive minorities (who are often referred to in the moral panic literature as “folk devils”), whereas outrage is most characteristically what ordinary folks feel about powerful plutocrats. Still, both moral panic and outrage are mixtures of disapproval and fear. And as you say, both are exacerbated by exaggeration.

The Wikipedia summary lists five characteristics of moral panic: concern, hostility, consensus, disproportionality, and volatility. Concern and hostility are certainly characteristic of outrage as well. Consensus and disproportionality are sometimes but not always characteristic of outrage; small groups of stakeholders can experience outrage their neighbors decidedly do not share, and justified outrage needn’t be disproportionate at all. As for volatility, I would never say of outrage what Wikipedia says of moral panics, that they “tend to disappear as quickly as they appeared due to a wane in public interest or news reports changing to another topic.”

But as I say, I don’t know much about moral panic. You wrote me (offline) that you plan to apply both the moral panic perspective and the outrage perspective in your master’s thesis on genetically modified food. By the time you finish you will know a lot more than I do about how the two perspectives relate to each other.

Revealing a problem that’s merely possible: good risk communication or self-destructive overkill?

name:Madam Wing-Ding
field:Manufacturing company owner
date:November 16, 2012
location:Somewhere, U.S.

comment:

I’m a longtime follower of your work, and I try to implement your outrage management principles in all parts of my life. The details of the following business problem have been changed to protect the confused.…

As the new president of a company that makes “wing-dings” – a company with a 30-year history of quality-and-precision wing-dings, and a very positive worldwide reputation for quality-and-precision wing-dings – I’m struggling with what may or may not be a quality problem. My risk communication problem is whether and how to tell my customers about my possible, unconfirmed quality problem.

Your column on “Misoversimplification: The Communicative Accuracy Standard Distinguishes Simplifying from Misleading” is a really good one. It’s making me stress, and making me think about whether or not I should be (am?) exposing Wing-Ding Company to the possibility of losing thirty years of solid reputation for quality. The current “framulators” in our wing-dings, which were chosen by my predecessor after a careful investigation determined that they were good enough to use, may not actually be as good as we’d hoped. Or, they may be good enough.

A wing-ding specialist came to see me, suggesting there might be a problem with my framulators. He doesn’t know if there is, but he wanted to share his concern, which is not shared by anyone else at his company. And his company is only one facet of our customer base. I take his concern seriously anyway, and have begun my own investigation. (An additional burden is that his company, and other customers of my company, can sometimes be involved in lawsuits, the defense of which rests partly on wing-ding accuracy.)

As the person responsible for the health of the Wing-Ding Company, I am concerned about whether there is an undercurrent “out there” where people (besides the wing-ding specialist who came to see me) could be saying: “Their wing-dings are now crap – their framulators aren’t correct.” I don’t know if that is happening, and I have no way of finding out with any validity. I can’t find anything anywhere on the web about our wing-dings – no comments, no posts, no discussions. Sales are down, but is it the horrid economy or rumors about the wing-dings?

It wouldn’t be beyond possibility that, a few years ago (before my time), my predecessor settled for these current framulators because they were the only framulators out there. Maybe he knew they were suboptimal, and since that was all he could find, he went with them. I can’t do that. (I won’t!) Or maybe he just didn’t know. He was very careful with the company; would he have accepted framulators that weren’t high-quality, if that’s all that was available? I just don’t know and he has died, so I can’t ask.

Here’s where I identify with your corporate guys not wanting to tell what may be (but also may not be) the truth, which might expose the company to destruction (or at least that’s how it feels). If the word on the street is that Wing-Ding framulators are crap, that Wing-Ding’s wing-dings are crap, then the company is lost anyway. As your column puts it, I’ve been “misleading toward the truth” (the truth?) about the framulators. I’ve posted that I’m trying to find higher-quality framulators. I have not said (and won’t say, because I don’t actually know): “I can’t guarantee that these framulators you have are accurate (enough)!” (All my own testing seems to show they are. Is my own testing sufficient? I don’t know.)

I’m pretty sure our wing-dings and our framulators are accurate (enough). The external framulator expert I consulted seemed to think they are. Even the wing-ding specialist who came to see me about them said that they were probably accurate enough, though he’s not sure. His company has been using our wing-dings for decades, very successfully.

I am concerned that if I publish anything more direct about a (possible but not probable) problem, it could shake the entire wing-ding-using world’s trust in the Wing-Ding Company. I am working as fast as I can to find a source for framulators I can trust. Then I’m planning to try to “recall” (without actually recalling!) the current framulators, offering customers a better one. I’m hoping the slightly more difficult operation required by the current framulators will be reason enough for a lot of customers to want to switch them, without my having to raise the accuracy issue at all.

But oh dear, oh dear. My immersion in “the Sandman way” is really pushed home by this “Misoversimplification” column. The hazard of inaccurate framulators – which, yes, I am trying to hide from the public while I frantically try to fix it, if it even exists! – could destroy the company. Am I being overcautious? Histrionic? I don’t know. I feel as if I am in a race:  Find a new framulator factory and get new framulators made, get a swap-out underway, and whew! I’ll have crossed the finish line ahead of the end of my world. But if it turns out that the wing-ding specialist who came to see me does have a real problem caused by the framulators, how do I announce to the world that our much-vaunted “quality and precision” aren’t and haven’t been for a couple years?

You discuss companies telling the truth, being transparent, and so on. But I don’t have a sense that you address the other side much: What if there is a possible hazard (bad framulators, thus bad wing-dings, thus bad quality) that could destroy the company? What if addressing the hazard out in the open could destroy the company before it repairs the hazard, if there is one – or could destroy the company even if it turns out not to be a real hazard at all? How does one weigh openness as against the (maybe undeserved) destruction it can wreak on a company?

peter responds:

If I’m reading your comment correctly, you have four options:

  1. The Wing-Ding Company will be in deep, deep trouble if word gets out from some other source that you have an accuracy problem you have been hiding. That will be true even if it turns out that you don’t actually have an accuracy problem at all. The fact that you thought you might and others told you you might, and you kept the (possible) problem secret, would devastate your reputation anyway.
  2. The Wing-Ding Company will be in pretty deep trouble – but less than in #1 – if the news comes from you, belatedly, after you nail down the problem and figure out how to solve it.
  3. The Wing-Ding Company will be in some trouble – but less than #1 or #2 – if you tell customers now that you have been advised of a possible problem; you don’t know if it’s a real problem or not; here’s what you’re doing to find out; here’s what you plan to do if it’s real (or what you hope you’ll be able to do – find a better framulator and organize a swap); here’s what you’d advise customers to do in the meantime; you’d welcome any information or suggestions they might have; you’ll let them know as soon as you know more one way or the other; you’re sorry to impose this uncertainty on them but you figure they’d rather know than get surprised later … the standard riskcomm messaging.
  4. The Wing-Ding Company will be in no trouble at all if the problem goes away – if you find out it’s a non-problem without ever having told customers about it; or if you never find out one way or the other; or if you find out it’s a real problem but keep it secret and customers never catch on; or if you find out it’s a real problem and swap out the old framulators for new, more accurate ones without ever mentioning the accuracy problem.

If this is accurate, then #3 is your best option unless you like #4. “Liking” #4 could mean you’re willing to gamble that you’ll find out your framulators are fine and that your customers will never have heard they might be inaccurate. Or it could mean you’re willing to try to keep the framulator problem secret forever if you can’t get a clear answer or if the answer you get is a bad answer. Or it could mean you’re willing to bet that all your customers will participate in a swap that’s sold as a minor improvement rather than as an essential quality fix – and that nobody (especially the swap non-participants) will find out later that the swap was much more important than you ever acknowledged.

But if you think there’s a more-than-trivial chance that your customers are going to find out eventually, then it makes sense to bite the bullet and tell them yourself. It’s conceivable that telling them yourself could destroy the Wing-Ding Company. I have no way to judge that, though my gut says otherwise, especially if there are no other providers or if other providers are in the same boat (and you make that clear when you come clean). What isn’t conceivable to me is that telling your customers yourself would destroy the Wing-Ding Company but having them find out some other way wouldn’t.

Wait till you have a solution?

What about #2? Is telling your customers about a possible problem now really better than waiting till you’re sure the problem is real and you have a solution? Assuming they don’t find out from someone else in the meantime, isn’t waiting a benefit? Lots of business consultants advise their clients not to acknowledge problems until they have a solution in hand. I disagree, for five reasons.

First, your customers may find out from someone else while you’re waiting to get your ducks in a row. Then #2 turns into #1 and you’re in big, big trouble.

Second, even if we assume your secret has sufficient shelf life and you can afford to wait till you have a solution, I think the solution will be more credible if it is preceded by problem acknowledgment (along with acknowledging your uncertainty, sharing the dilemma, etc.) than if the solution is offered while the problem is still a surprise and people haven’t gone through their adjustment reaction yet.

Third, the old canard that it’s pointless to tell people about a problem until you have a solution is simply false. Information about a possible problem is actionable information, even if you don’t know yet whether the problem is real and don’t have a solution ready to implement. Your customers can look for ways to rely less on their wing-dings. They can decide whether or not to warn their customers. They can consider doing some testing of their own to see if they have a problem. They can switch to a different wing-ding supplier with better framulators (if there is one). Some of what your customers may do if you tell them the truth about the framulators is bad for your business. But the fact that there are things they can do arguably means you have an obligation to tell them; it certainly means they’ll hold it against you if you don’t.

Fourth, the decision to wait to acknowledge a problem until you’ve figured out a solution is often self-deceptive. Before you have a solution, you tell yourself there’s no point in talking about the problem until you know what to do about it. After you have a solution, you tell yourself there’s no point in talking about the problem now that it’s well on its way to being solved and no longer a problem at all. (Just swap out the framulators without mentioning the quality problem.) So the decision to do #2 (solve the problem and then acknowledge it) turns into #4 (keep it secret) – which turns into #1 (a company-threatening controversy) when someone else reveals that you had a problem and kept it secret.

Fifth and probably most important, secrecy arouses outrage even when it doesn’t do actual damage. You’re obviously in trouble if your customers eventually find out that you had a quality problem, kept it secret, and let them go on unknowingly using wing-dings with inaccurate framulators. But you’re also (if less obviously) in trouble if your customers eventually find out that you thought you might have a quality problem and kept that secret – even if you turned out not to have a quality problem in the end, or even if you turned out to have a quality problem that you solved (without ever acknowledging) by giving your customers new framulators. When customers learn belatedly that their businesses were potentially endangered and you decided not to tell them so, the fact that the danger never materialized and their businesses were unharmed won’t keep them from feeling blindsided, mistrustful, and outraged.

Acknowledge problems before you solve them

This issue comes up pretty much every time clients realize that they need to change some behavior. “Okay,” they tell me, “we get it that we need to stop doing X and start doing Y instead. But can’t we just quietly implement the new policy? Do we have to acknowledge what was wrong with the old policy?”

The risk communication answer, sadly for my clients, is that stakeholders (certainly including customers!) give you very little credit for secretly solving problems. The fact that you kept the possible problem secret while you were investigating it, or kept the actual problem secret while you were solving it, arouses a lot of outrage – even if your investigation reveals it’s not a problem or your solution works before anything goes wrong.

This is so basic that “Acknowledge current problems” has earned a slot in my one-page handout entitled “Reducing Outrage: Six Principal Strategies.”

Here’s an example that’s uncannily similar to your situation. A mining company was told by a consulting engineer that a corner of its tailings impoundment (waste dam) was weaker than it ought to be. In the unlikely event of an earthquake, the engineer advised, the impoundment might collapse, and the tailings might inundate a nearby neighborhood, with substantial potential loss of life. The company took the warning – tentative though it was – very seriously. It spent millions of dollars reengineering the tailings impoundment to make sure it could withstand an earthquake, a process that took years.

During those years, the neighborhood in the potential path of the tailings was told nothing. The option of briefing the neighborhood was considered. In fact, a company lawyer wrote a memo arguing that the company should release the information, but the site manager at the time decided not to do so. The company mitigated the risk without ever acknowledging the risk. I don’t know whether the company planned to come clean after the solution had been implemented – but in fact the risk was never acknowledged, beforehand or afterwards. The upgrade in the tailings impoundment wasn’t secret, of course – only the engineer’s warning that the upgrade might be needed to prevent a catastrophe.

Decades later, long after the impoundment had been strengthened and the risk eliminated, a whistleblower turned over all the relevant documents to a local newspaper. The result was a huge controversy over the company’s earlier decision to keep the potentially endangered neighborhood in the dark. Pretty much everybody, even the company’s current management, condemned that decision. The lawyer’s memo recommending transparency was Exhibit A in the argument that the company had misbehaved horribly … analogous, perhaps, to this Guestbook entry.

The bottom line: The “transparency clock” starts ticking the moment you learn about a possible problem that might threaten your customers (or other stakeholders). Every day you spend investigating the problem or solving the problem without having acknowledged the problem is a day that is likely to be held against you if the truth ever comes out. Even if the tailings impoundment wasn’t weak after all (even if the framulators are accurate); even if no earthquake occurs and the tailings impoundment is successfully strengthened (even if the old framulators are swapped for new framulators) – even so, companies are held accountable for their failure to tell stakeholders promptly about a possible problem.

The risk communication lessons of worst case scenarios lead to the same conclusion. As I wrote in my 2004 “Worst Case Scenarios” column, “any risk that’s serious and plausible enough to justify internal planning is serious and plausible enough to justify public discussion.”

Reductio ad absurdum

I don’t want to turn this recommendation into a reductio ad absurdum. Suppose you’re collecting periodic samples from a factory’s stacks, for example, and one sample comes back with incredibly high readings for some toxin. You’re close to certain it’s got to be some kind of lab error. The sample from an hour earlier was normal, and you can’t imagine any way your factory could have suddenly released what the newest sample is telling you it released. It’ll only take you an hour to retest the sample, just to make sure. Nobody else has seen the weird sample, and nobody’s out there saying you might have a problem.

Under these conditions, even I wouldn’t advise sounding the alarm. When you’re almost certain it’s a false alarm, and you’ll know for sure very soon, and there aren’t any rumors that need to be addressed, then going public prematurely is foolish. (Even so, you might want to shut down the line that produced the scary reading till you get your retest back.)

Some of what you wrote in your comment sounds like you’re almost certain that your wing-dings are accurate, and almost certain that your customers are never going to start hearing rumors that they’re inaccurate. If that’s the case, what you’ve already posted about looking for higher-quality framulators is enough, maybe more than enough. You’d need to be a risk communication zealot to “come clean” about a possible problem you doubt is real and nobody would ever know about if you didn’t tell them. You are a risk communication zealot! But even I think that would constitute excessive zeal.

But some of what you wrote sounds like the rumors may be starting already, and you’re not at all sure the rumors are mistaken. If there’s a more-than-trivial chance that you have a real framulator problem, I think you should talk to your customers now about the possible problem. And if there’s a more-than-trivial chance that your customers are hearing rumors that you might have a framulator problem, I think you should talk to your customers now about the rumors. Don’t wait till you’re sure, and don’t wait till you have a solution.

Warning fatigue: when bushfire warnings backfire

name:Brenda
field:Ph.D. student
date:November 9, 2012
location:New Zealand

comment:

Can I be so bold as to presume you might be able to answer another question?

Here’s a bit of background: After Black Saturday (7 Feb 2009 – when 173 people died, over 400 were injured, and several communities were destroyed by the worst bushfire Victoria [Australia] has ever known), the authorities ramped up their warnings for people to leave on “Code Red” days. These are days which the authorities deem to be super-dangerous, and they come to that conclusion through a complicated formula that takes into account a combination of wind speed, direction, ground moisture, temperature, and fuel load.

After the Black Saturday fire the number of these warnings increased fourfold, partly because of legal imperatives (blame and liability, etc.). The public know this, and after the first couple of times of packing up everything and “going down off the mountain,” a lot of people (but not all) got sick of the “over-warnings” and stayed put.

They know that the hazard is real (and high) but the way the information is communicated to them makes them mad. Is this outrage? The credibility of the “experts” comes in for a fair bit of debate also.

Another way of looking at this is that people underestimate the risk because there is no outrage about bushfires. They are not particularly emotionally “attached” to the notion of bushfires – most of them have lived with this risk all their lives.

Thoughts?

peter responds:

Well, I feel just a teeny bit set up.

You mentioned in your previous Guestbook comment that you’re writing your Ph.D. dissertation on “Reconceptualising Disaster Warnings: Warning Fatigue and Long-lead-time Disasters” and that your dissertation research surveys Victorians about how they’re responding to bushfire warnings since Black Saturday. Now you’re asking me to speculate on exactly that question, the one you’ve been researching in detail.

So I will. I hope when your research is done you’ll tell me what I got right and where I went wrong.

I agree with you that bushfires are a fairly low-outrage risk for most people who live (or have second homes) in the bush. Of course people who lost relatives or homes on Black Saturday, or who nearly did so, are likely to feel strong bushfire outrage. But most rural Victorians didn’t suffer or come close on Black Saturday. As you point out, they “have lived with this risk all their lives” without anything awful happening to them.

Even so, I’d guess that Black Saturday increased the memorability of bushfire disasters in the minds of rural Victorians. But for many of them, this outrage-increasing effect of higher memorability isn’t enough to overcome the outrage-reducing effect of very high familiarity. Sometimes, in fact, the “lesson” people draw from a near-miss isn’t that something awful almost happened to me, but rather that nothing awful happened to me. Even a memorable near-miss can inculcate overconfidence instead of concern, a point I discuss in my recent column on “Managing Risk Familiarity.”

But the main question here is the effect of all those additional post-Black-Saturday warnings on people’s outrage. Are the new warnings making people more outraged (anxious and cautious) about bushfires, as they’re meant to do? Or are they just making people more outraged (irritated) at the warnings themselves, and at the demand embedded in the warnings that they must abandon their bush homes for some safer, unforested area every damn Code Red day?

This is not an unusual problem. Whenever precaution advocacy fails, the natural assumption is that the audience must be insufficiently outraged about the risk. But the alternative explanation is always a possibility: Perhaps the audience is excessively outraged about the precaution and the way it’s being advocated. When workers in a factory don’t wear their hardhats, for example, maybe they’re not worried enough about getting their skulls crushed – or maybe the hardhats are too uncomfortable and the safety officer is too bossy.

I’m sure you will answer this question empirically in your dissertation: Are the new bushfire warnings working or backfiring? But there are reasons for guessing that they may be backfiring, at least for some in their target audience.

First of all, you mention that the new warnings are motivated in part by the authorities’ reputational and liability concerns, and that the public knows this to be the case. A disastrous fire followed by a fourfold increase in the number of warnings reeks of official CYA. And that, of course, is itself a reason for people to find the warnings irritating and unconvincing, even hypocritical.

“Outrage” may sound like too strong a word to describe how people feel about all those new CYA warnings. But at least for some Victorian bush homeowners, I suspect it’s exactly the right word.

Communicating the policy change

But there’s a more fundamental issue here. As you know, the Victorian government’s bushfire policy changed significantly in the wake of Black Saturday. There used to be a fair amount of emphasis on helping bush homeowners prepare to stay and defend their property. Among the lessons of Black Saturday was the finding that too many people thought they were adequately prepared to defend their property when they actually weren’t. Another lesson was that too many people delayed the stay-or-go decision until the fire was too close, waiting for explicit instructions from the authorities that they ought to leave.

Let’s rank order the four choices a bush homeowner might make:

  1. Leaving early when there’s a serious bushfire threat (a Code Red day) is obviously the safest course in terms of preventing deaths and injuries.
  2. But preparing thoroughly and then staying to defend your home is the safest course in terms of preventing property damage.
  3. Staying to defend your home when you haven’t prepared properly is much, much riskier than either.
  4. And fleeing at the last minute when you belatedly realize you were unwise to stay is even riskier than that.

Now you tell me that on Code Red days people are unambiguously advised to leave … although there’s not even a bushfire yet, just dangerous fire-prone conditions. (All levels of the Fire Danger Rating scale apply before a fire actually starts. The scale estimates the likelihood that a severe fire will start.) And you tell me that there are a lot more Code Red days than there used to be before Black Saturday.

I’m guessing a bit here, but it sounds like the Victorian government has backed off its acceptance of #2 – not because #2 is unsound in principle but because in practice too many people do a half-assed job of #2 and end up doing #3 or #4 instead … and on Black Saturday some of them died as a result.

This is a very tough policy change to communicate effectively.

If the government doesn’t acknowledge that the policy has changed, the new warnings simply won’t compute for people who are thoroughly familiar with the old policy. Under those circumstances, it’s all too easy to imagine bush homeowners thinking to themselves, “That’s for my neighbors who haven’t prepared properly to defend their properties. But I’m prepared. This warning isn’t meant for me.” Many of the homeowners who think this way, of course, are actually inadequately prepared themselves.

This is a basic principle of risk communication:  Policy changes need to be acknowledged. People take forever to notice a new policy if you don’t explicitly point out that it’s different from the old policy.

But if the government does acknowledge that its bushfire policy has changed, it has to explain why. It could try to say that it has changed its mind and decided that #1 (leave) is simply smarter than #2 (prepare, then stay and defend). That’s probably true for disastrous mega-fires like Black Saturday, but according to what I’ve read it’s not true for most bushfires – and honesty aside, false claims are hard to defend convincingly.

The alternative is for the government to tell the truth: “The reason we have pretty much stopped saying it’s okay for people who are properly prepared to stay and defend their homes against bushfires is because so many people think they’re properly prepared when really they’re not.” That particular truth has two huge defects as a message.

First, it’s insulting – and government officials don’t prosper when they insult their constituents. A lot of bush homeowners (even second homeowners who are weekday urbanites) are proud of their autonomy, their self-sufficiency – which includes being proud of their ability to defend their homes against bushfires. That pride is a big part of why they tend to overestimate their own preparedness. So the government’s message here is a double insult: the insult that they haven’t done a good enough preparedness job, and the insult that they have misled themselves into thinking they have. Little wonder people are inclined to get outraged at this double insult.

The second problem is that this particular truthful message leaves a loophole. People can all too easily tell themselves, “Well, okay, I get it. The government is worried that people who haven’t prepared properly will try to stay and defend anyway. But I have prepared properly. So I don’t need to leave when they say leave.”

I don’t know what the Victorian government has actually been saying about its shift in policy. My advice would be to go with the truth – and then to “go meta” on the truth, addressing both the insult and the loophole:

We used to offer bush homeowners an explicit choice: Either you do all the work and buy all the equipment and really prepare yourself to defend your property in a bushfire, or you decide that that’s more than you can or want to do and resolve to leave whenever the fire threat gets serious.

But what we learned from Black Saturday is that a lot of people thought they were prepared to stay and defend their property when they really weren’t. That wasn’t mostly their fault. We were too encouraging about the stay-and-defend option. We let people think preparing was easier than it really is. For a lot of people – like families that include kids or the elderly, for example – stay-and-defend just isn’t feasible. It was a mistake for us to imply otherwise.

So now we advise everybody to get out on Code Red days, when the fire threat is the most severe.

Even as I’m saying this, I’m worried that some people hearing it or reading it are going to think I’m not talking about them. It’s not just a few people who think they’re prepared when they really aren’t. It’s most people. I’m talking about you here. If you think you’re an exception, go to this website for a self-test that will probably show you you’re wrong. Better yet, call this phone number and a bushfire expert will come give you a test that’ll scare the pants off you, and maybe save your life.

I think I’d also recommend some kind of “stay and defend certification” program – a really tough one. Homeowners who thought they were properly prepared could apply for certification. Most of those who applied would fail, and in the process would gain a better appreciation of why they should get out on Code Red days. More importantly, most rural Victorians wouldn’t even apply – and their decision not to apply would be a way of telling themselves that they should get out on Code Red days. In order to pass, applicants would have to demonstrate not only their ability to defend their property against an ordinary bushfire but also their awareness that sometimes an extraordinary bushfire can overcome even the most thorough preparedness – so even they might end up becoming likelier to get out on Code Red days. (Victoria may very well have such a certification program already. I am describing what I think I would recommend without knowing what’s currently in place.)

Warning fatigue

Judging from its title, your dissertation will apparently focus on warning fatigue with regard to bushfires in Victoria.

For readers new to the term, warning fatigue (sometimes called “the boy who cried wolf syndrome”) is the tendency of people to stop paying attention to warnings that are excessive in some way – excessively dramatic, excessively frequent, etc.

I wrote about “the dangers of excessive warnings” in a 2008 Guestbook response. Here’s part of what I said then:

What’s most noticeable about warning fatigue … is how weak it is. When weather forecasters warn that a hurricane is coming, most people in the predicted path prepare; when the hurricane changes course, most people are relieved; the next time there’s a hurricane warning, most people prepare again. Similarly, activists have long known that they’re pretty safe warning that a particular industrial facility is likely to explode or its emissions are likely to increase the cancer rate; if the explosion doesn’t happen and the cancer rate stays stable, the activist group simply moves on to the next issue, undeterred and undamaged.

There’s a good reason why warning fatigue is weak. People intuitively understand that a false alarm is a lot smaller problem than a disaster they weren’t warned about. We understand that it’s a minor irritation if a smoke alarm goes off when there’s no fire, but a catastrophe and a scandal if there’s a fire and no alarm. So we calibrate smoke alarms to be oversensitive; we tolerate their going off too much in order to be fairly confident that they won’t miss a fire. We “calibrate” activists and weather forecasters to be similarly conservative in their warnings – that is, to err on the alarming side.
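The smoke-alarm logic can be made explicit with a little expected-cost arithmetic. Here is a minimal sketch in Python; the probabilities and costs are invented solely to illustrate the asymmetry, not drawn from any real alarm data:

    # Hypothetical numbers: a false alarm is a minor irritation,
    # a missed fire is a catastrophe and a scandal.
    COST_FALSE_ALARM = 1
    COST_MISSED_FIRE = 100_000

    def expected_cost(p_false_alarm, p_miss, p_fire=0.001):
        """Expected cost of one monitoring period at a given sensitivity."""
        return ((1 - p_fire) * p_false_alarm * COST_FALSE_ALARM
                + p_fire * p_miss * COST_MISSED_FIRE)

    # Oversensitive alarm: frequent false alarms, almost no missed fires.
    oversensitive = expected_cost(p_false_alarm=0.10, p_miss=0.01)    # ~1.1

    # "Well-behaved" alarm: rare false alarms, but it misses 20% of fires.
    well_behaved = expected_cost(p_false_alarm=0.001, p_miss=0.20)    # ~20.0

Under these invented numbers the oversensitive alarm is roughly twenty times cheaper in expectation – which is the intuition behind tolerating alarms that go off over burnt toast, and behind “calibrating” forecasters and activists to err on the alarming side.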

It’s also worth bearing in mind that the test of warning fatigue isn’t whether people say the issue is getting hyped and they’re sick of all the warnings; it’s whether people stop taking the recommended precautions because they think the issue is hyped and they’re sick of the warnings. In early 2003, the U.S. government urged Americans to stockpile duct tape as part of their preparedness for a terrorist attack. In our column on the derisive public response this recommendation provoked, Jody Lanard and I noted that the derision was “quite frequently combined with cooperative, even diligent, behavior: ‘It sounds pretty silly/inadequate/scary/duplicitous, but I’ll stop anyhow for duct tape on my way to the supermarket, where I’ll pick up more batteries and a few gallons of water.’” The derision notwithstanding, duct tape sales soared.

The warnings that are most prone to warning fatigue are those that aren’t just warnings; they’re predictions. I’d love to see a test of the difference in impact of the two warnings described below – the difference in how much preparedness they inspire, and also the difference in how much warning fatigue they arouse if the warning turns out to be unnecessary or premature:

  a. “We can’t tell if X will happen or not, but it’s certainly possible, and it’s potentially so dire that we need to take precautions now even though they may turn out unnecessary.”
  b. “X is going to happen and is potentially dire, so we need to take precautions now.”

The difference between these two warnings is important. Option (a) dwells on the magnitude of the risk but doesn’t overstate its probability. It inspires preparedness without arousing warning fatigue. Option (b), on the other hand, overstates risk probability; it’s a prediction as well as a warning. It’s not only more vulnerable than (a) to warning fatigue; it’s also less credible and therefore less effective, even in the short term.

Thus an insurance salesperson who reminds you how awful it would be if your house burned down is likely to get more renewals than one who keeps insisting your house will probably burn down. A vaccination proponent who focuses on how sick the disease might make you is likelier to get you to roll up your sleeve than one who says you’ll probably get sick if you don’t get vaccinated.

Many warnings about uncertain but horrific risks have mistakenly opted for (b) when they should have picked (a). I’m thinking of George Bush proclaiming that Saddam Hussein had weapons of mass destruction … and also of environmentalists proclaiming that greenhouse gas emissions were having devastating effects on world climate. Bush turned out wrong while it looks like the environmentalists are turning out right – but that’s not the point. Both sounded far too confident far too soon.

Similarly, at the height of bird flu anxiety in 2007, I wrote a column entitled “A severe pandemic is not overdue – it’s not when but if.” Pandemic preparedness enthusiasts (of whom I am one) were making a big mistake, I wrote, when they implied that a disastrous pandemic was expected imminently. They were setting themselves up for warning fatigue if the pandemic didn’t materialize on schedule or if it didn’t turn out disastrous.

People are wise to lose faith in false predictions. But properly constructed warnings aren’t falsified when the risk doesn’t materialize … any more than the failure of your house to catch fire proves that you were foolish to buy fire insurance.

The Victorian government may very well be predicting that there will inevitably be more bushfires in Victoria in the coming years. That’s a safe prediction. But if it’s predicting another Black Saturday anytime soon – or even if its warnings sound like it’s predicting another Black Saturday anytime soon – then it’s making a profoundly serious risk communication mistake, and setting itself up for warning fatigue.

One final point: It is arguable, I think, that bushfire warning fatigue is justified in Victoria – not because of the way the warnings are framed, but simply because “over-warning” is part of the normal way we all respond to disasters like Black Saturday. It’s part of our adjustment reaction. After a vivid but rare event, everybody acts for a while as if there were going to be a lot more such events. Then, assuming there aren’t a lot more such events, we get the disaster back into context and relax some of our precautions.

As I write this, SuperStorm Sandy is fresh in my mind. It has been widely described as the most severe such storm in the modern history of my part of the world (the U.S. east coast). And yet there has been extensive criticism of various government agencies, transportation systems, and utilities for being insufficiently prepared to cope with the storm and its aftermath. Consider this: Any organization that was fully prepared to cope with the most severe storm in centuries (or even in decades) was over-prepared; almost by definition, it has been wasting resources year after year while waiting for this rare event to finally materialize and justify all that preparedness.

The same is true of individuals.

Black Saturday is thought to have killed 173 people. You say it was Victoria’s worst bushfire ever, but let’s assume for the sake of the argument that there’ll be one just as deadly every 20 years for the foreseeable future. 173 deaths every 20 years is an average of a little under nine deaths a year. Presumably some of the nine can be saved if rural Victorians get into the habit of fleeing their homes whenever conditions are favorable for a bad bushfire. But how much distortion in people’s lives is appropriate to save something less than nine lives per year? (Of course there are injuries to be prevented too. On the other hand, there would be less property damage if more people stayed and defended their homes.)

Even though Black Saturday may not be arousing much outrage in rural Victorian homeowners – less outrage than all those additional bushfire warnings arouse – Black Saturday is still a source of pretty high outrage for Victorians generally. In the minds of journalists, government officials, and millions of city dwellers, Black Saturday is still vivid. They lived through it vicariously in 2009, and they don’t want to do so again anytime soon. Disasters are catastrophic, memorable, dreaded, etc. They arouse a lot more outrage than chronic risks.

But when a recent disaster isn’t front-and-center in our minds, we tend to underestimate disaster risk. We round off low-probability risks to zero probability. It’s not so surprising that rural Victorian homeowners stopped being obsessed with Black Saturday more quickly than other Victorians. They’re more familiar with the risk. They get more of the benefits. (If I love my weekend house in the bush, I am motivated to shrug off the bushfire danger.) And they’re paying more of the price of precaution-taking; they’re the ones who keep getting told to drop everything and head for safe ground.

The disaster response pendulum inevitably swings too far. Warning fatigue is one way to get it to swing back to the middle … before it swings too far the other way.

Why doesn’t “Risk = Hazard + Outrage” get more attention in the academic risk communication literature?

name:Brenda
field:Ph.D. student
date:November 8, 2012
location:New Zealand

comment:

You may remember me – when we last emailed I was considering doing a masters thesis on risk communication and the avian flu. Well, I did that and now am a year away from finishing my Ph.D. dissertation, which is entitled: “Reconceptualising Disaster Warnings: Warning Fatigue and Long-lead-time Disasters.”

I’m at the end of my literature review and am writing about your “Risk = Hazard + Outrage” theory, which resonates with me on so many levels and is very useful for understanding how all my participants (people who live in bushfire-vulnerable areas in Victoria, Australia) have responded to bushfire warnings – especially after Black Saturday.

I’m curious as to why your theory has not had more uptake amongst the risk communications and disaster communities. Do you have any comments about that? It doesn’t bother me, nor does it dissuade me from using your theory, but I have found few academic papers referencing it. I would really appreciate your take on that.

peter responds:

Why doesn’t “Risk = Hazard + Outrage” get more attention in the academic risk communication literature? The short answer is I don’t know. And I’m probably the wrong person to ask. This is really more a question for risk communication scholars.

That said, I have three speculative explanations, for whatever they might be worth.

number 1

I don’t do research, or even use it much.

Most of the academic literature – in risk communication as in other social sciences – consists of empirical research. I don’t do empirical research anymore. I did when I was a university professor, but since leaving academia in 1994 I have written mostly essays, not research papers.

Much of what I have to say is supportable with research, I think … other people’s research. I could productively cite this research to add weight to my arguments, but I don’t usually do so. Sometimes I’m simply unaware of the relevant studies. But even when I’m aware of them, I tend to cite them sparingly, preferring to rely on examples from my consulting experience. I’m under no illusion that cherry-picked examples are stronger evidence than empirical studies. The opposite is true. (I do try to pick examples that are representative, but I don’t try to prove that they are.) I rely on examples because they are more interesting reading than statistical evidence – and for the practitioners who are my target audience, examples may also be more convincing than statistical evidence.

number 2

I don’t publish in academic journals.

As you know, the risk communication literature appears mostly in refereed academic journals, and to a lesser extent in books. Scholars read other scholars. Since leaving academia to become a full-time consultant, I have made very few contributions to that literature.

It’s partly that academic journals don’t usually take essays. But I have had occasional invitations from such journals to write review articles and opinion pieces – that is, essays. I usually respond that I’m willing on condition that the journal allow me to post my piece on my website as soon as I finish it. (If the editorial review process results in changes, I promise to post the final version as well.) Understandably, that condition is almost always unacceptable to journal publishers, whose already-shaky business model requires them to keep most of what they publish behind a subscribers-only firewall.

The norm against citing work that didn’t appear in refereed journals is quite strong. I remember a colleague who wanted to cite something from my website on flu risk communication in a top medical journal; he had to get special permission from the journal editors to do so. Periodically I get emails from risk communication scholars asking where they can find thus-and-such an idea of mine in a journal article, so they can cite it properly. I write back that they should cite the relevant website article. Sometimes they do; sometimes they end up not citing the idea at all.

number 3

I don’t have a theory.

You refer to “Risk = Hazard + Outrage” as a theory – which is flattering, but not really accurate. Theories should be supported by research – and as you say, the research addressing my work is scanty.

I like to call it a “model.” A model is a way of looking at certain phenomena (in this case, how people think and talk about risk) that helps focus attention on certain aspects of those phenomena; a different model would focus attention on different aspects. By contrast, a theory makes truth claims; a competing theory makes different truth claims – and a well-designed study ought to be able to determine which theory’s claims are borne out.

Most of the truth claims I make are grounded in the psychometric theory pioneered by Paul Slovic, Baruch Fischhoff, and others – a theory that has inspired lots and lots of empirical research. Where I go beyond the psychometric theory is mostly in my advice about what to do: “Do this if you’re trying to arouse more outrage in unduly apathetic people; do that if you’re trying to calm the outrage of excessively upset people.” These recommendations are truth claims of a sort: I’m claiming that my readers are more likely to accomplish their goals if they do things my way. But these claims are more prescriptive than descriptive; they’re about what works best, not about what is usually done. Prescriptive claims are less likely to get tested empirically than descriptive claims.

number 4

I don’t write like a scholar.

I know I said I had three speculative explanations, but I can’t resist the temptation to add another: that my writing is steadfastly non-scholarly. I don’t footnote. I use contractions. I aim for a conversational style.

I have no evidence that insufficient turgidity deters scholars from making use of my work. But I have my suspicions.

I know my conversational writing style makes practitioners likelier to make use of my work. And they’d make even more use of it if I’d discipline myself to write shorter.

In spite of these explanations (or rationalizations, perhaps), I do show up some in Google Scholar – not as much as you think I should, apparently, but more (I think) than most people who insist on writing virtually nothing except prescriptive, conversational, self-published website essays for practitioners.

Still, you’re basically right. There have been only a handful of published studies that tried to test hypotheses derived from my work. And I often read academic articles that address an issue I have written about at length, and include a thorough-looking literature review that doesn’t mention me. Even more frustrating to me is when a literature review mentions something tangentially relevant that I published in the academic literature decades ago, but doesn’t mention much more recent and much more relevant (and much more accessible!) articles on my website.

My work shows up a lot more often in course reading lists than in journal article literature reviews. I know because the links from the online reading lists to my website articles show up in my webstats – and because I routinely meet young practitioners who tell me they studied me in university. (They say this with wonder in their voices that I’m still alive.) I think there are more than a few scholars who think my writing is a good way to introduce their students to various risk communication concepts, but when they’re writing about those concepts they prefer to cite something more scholarly.

A tougher issue for me – tougher to bear, not tougher to understand – is why practitioners so often seem to be ignoring what I say. This is true even of practitioners who have attended my seminars or read some of my stuff. I love it when I meet people who tell me how much my work has influenced their approach to risk communication … until I look at their work product and find that it violates what I consider basic tenets of my teachings.

My take on that problem: Doing good risk communication – especially good outrage management – is hard. It’s not intellectually hard. It’s emotionally and organizationally hard. It conflicts with organizational culture; it conflicts with people’s comfort and self-esteem; it conflicts with their own outrage at their critics. When people are reading my website or watching a video or attending a seminar, they feel like they get it. And all too often they still feel like they get it when they’re implementing approaches that are quite distant from what I was urging.

That’s the failure that gives me the most heartache as I approach retirement – not the fact that scholars don’t use my work as much as they might, but the fact that practitioners think they’re using it when I think they’re not.

I must add that I have done more than a little training specifically on bushfire risk communication, both before and after Black Saturday, most of it sponsored or cosponsored by the Victoria Department of Sustainability & Environment. I’d love to think you saw some effect of this work in your on-the-ground research.

The L’Aquila case: Is criminalization a good way to discourage bad risk communication?

name:Cristina Serra
This guestbook entry
is categorized as:

      link to Outrage Management index       link to Crisis Communication index

field:Biologist; now science communicator/writer
date:October 25, 2012
location:Italy

comment:

I’d like to have your opinion/comments on the recent trial in Italy, where:

A court in Italy has convicted six scientists and one civil defense official of manslaughter in connection with their predictions about an earthquake in L’Aquila in 2009 that killed 309 people. But, contrary to the majority of the news coverage this decision is getting and the gnashing of teeth in the scientific community, the trial was not about science, not about seismology, not about the ability or inability of scientists to predict earthquakes. These convictions were about poor risk communication, and more broadly, about the responsibility scientists have as citizens to share their expertise in order to help people make informed and healthy choices.

Thank you very much for your comments.

peter responds:

Your quotation is the first paragraph of David Ropeik’s excellent October 22 blog post on the Scientific American website.

I agree fervently with David that the L’Aquila defendants were not found guilty because they failed to predict an earthquake. They were found guilty because they foisted overconfident, over-reassuring risk communication on a frightened community – either by saying overconfident, over-reassuring things themselves or by abandoning the communication task to others who were happy to do so.

Reporters who got this distinction wrong – bad science versus bad risk communication – were being careless.

Scientists who got it wrong, I suspect, mostly knew better. They were simply more comfortable complaining that seismologists don’t know how to predict earthquakes than acknowledging that seismologists also don’t know how to deal candidly with frightened people seeking answers to questions they can’t confidently answer.

I think David is right on target in his point that it was irresponsible for the national government to send a bunch of technical experts into a very anxious community without including a risk communication professional on the team. He writes: “That there was no one at that experts’ meeting trained in and responsible for communicating the results of the discussion to the public, is a gross failure in and of itself. At the very least the experts in the meeting should have been expressly told that … they had an obligation to communicate to the public they were there to serve.”

A few days ago, Jody Lanard and I posted a commentary on the L’Aquila case from a risk communication perspective. We made three principal points:

  • The experts on the L’Aquila panel failed to acknowledge sufficiently – far less proclaim vehemently – how uncertain they were about whether an earthquake might be imminent and what the swarm of tremors the area was experiencing might signify. They let the misimpression stand that the tremors were actually making the earthquake risk smaller. Those who spoke came across as overconfident and over-reassuring.
  • Experts who disagreed with this overconfident, over-reassuring assessment did not share their disagreement publicly. They adhered to a “speak with one voice” standard that allowed a false unanimity to prevail. Most of the expert panelists simply went home without collaborating on a public statement or a list of message points, leaving behind spokespeople determined to reassure the public by going well beyond the science.
  • The L’Aquila experts ended up issuing (or tolerating) exaggerated reassurances in part because they were outraged at a non-expert who had been arousing public concern with exaggerated warnings. This is common; the most extreme public statements of scientists are often efforts to rebut even more extreme statements from non-scientists. Outrage played another role in this story as well. Scientists around the world were so outraged at the prosecution of the L’Aquila experts that they willfully misinterpreted the basis for the prosecution, pretending it was about failing to predict the earthquake rather than about misleading the public.

While I think what the defendants did was culpable, I am not comfortable seeing them convicted of manslaughter. I’d very much like to persuade scientists to be more candid about risk (and especially about risk uncertainty). But I worry that scientists who are afraid ever to say anything reassuring lest they end up in prison if they turn out wrong may find themselves sounding unnecessarily alarmist instead. If the legally safe answer to every risk question becomes the alarmist answer, we will simply have replaced one misleading risk communication distortion with its equally misleading opposite.

I have another reason for feeling some disquiet about the manslaughter convictions. Experts who overconfidently over-reassure the public are more the rule than the exception. When the public is upset about an uncertain risk, experts routinely try to calm the waters. In the process they may omit or understate the reasons to be worried, exaggerate or overemphasize the reasons to be unworried, and give the distinct impression that their reassuring opinion is nearly unanimous and nearly certain. That’s what corporate experts often do when neighbors are worried about factory emissions. It’s what public health experts often do when parents are worried about vaccine side effects. It’s what economic experts often do when people are worried about their jobs.

There are alarmist experts too – but in most fields and most situations they are a minority (especially when the public is frightened already). And there are “agnostic” experts who realize how little they and their fellow experts really know – but as happened in L’Aquila they usually hesitate to take a stand on behalf of uncertainty, typically allowing their more confident colleagues to carry the communication ball.

These are serious flaws. But they’re everyday, systemic flaws. I want to correct them, but I’m not keen on criminalizing them.

Suppose the tremors in L’Aquila had subsided without a major earthquake. The experts who overconfidently predicted there would be no earthquake and the experts who kept their mouths shut and let the overconfident predictions prevail would have been just as culpable. But they’d have been lucky; they’d have turned out right. And so nobody today would be criticizing them, far less proposing to imprison them.

We really do need to find ways to discourage expert overconfidence, expert over-reassurance, and the false unanimity of “speak with one voice.” Maybe the L’Aquila manslaughter convictions will help accomplish that. But they’re far from the optimal tool for the purpose.

Additional comment:

name:Elenor Snow
field:My webmaster (entrepreneur and editor)
date:October 26, 2012
location:Georgia, U.S.

What if all those folks had been told to “follow the (folk?) history” and sleep outdoors for a couple of months? How upset would they have been if no big quake had shown up? And how long to wait? Months? Weeks? Days? What could anyone possibly advise them to choose? Should they just pick whatever folk history suggests?

The experts can’t win.

There are so many people who are completely uninterested in earthquake prep. A prime example: my younger sister who lives IN Los Angeles! When I asked where her family meet-up spot was in the event of the big earthquake (it’s L.A.: the big earthquake is coming!), she said: “What’s a meet-up spot?” (ARGH!!!!) She’s lived out there for 20-some years. Has she just missed the bits about earthquake preparation?!

Whether it’s Los Angeles or L’Aquila: No matter what the scientists said or didn’t say, you cannot “force” (or even lead) people to do what they should. And how can you even know what they should do?

I sympathize more than I can say with the folks who lost family, but taking it out on the scientists – no matter whether they did or didn’t say anything reassuring or frightening – really bothers me. I’m reminded of a woman who sued the 911 operators for not trying to teach her over the phone during the emergency (!!) how to do CPR on her husband – who had been ill for many years with COPD. If that woman hadn’t bothered to learn it in all the years she lived with the sick man, why should anyone else have any responsibility to try to make up for her dereliction?

If the residents of L’Aquila had lived all their lives in an earthquake zone – and actually knew the “folk warning” about tremors leading to a big one – how is it anyone else’s responsibility to tell them anything that is not known? It’s dismaying. It’s disturbing. (It’s stupid!)

peter responds:

You’re right that the experts can’t win. But as my wife and colleague Jody Lanard often says (and so do I, copying her), it’s not damned if you do and damned if you don’t. It’s darned if you do and damned if you don’t.

There are actually two “darned if you do and damned if you don’t” asymmetries here.

The first asymmetry is that mistaken warnings are much more acceptable to the public than mistaken reassurances. A smoke alarm that goes off when there’s no fire is irritating; a smoke alarm that fails to go off when there’s a fire is devastating. Similarly, when experts sound alarmist and nothing very bad happens, they get some blame – whether it’s about a hurricane or a pandemic or an earthquake. But when experts sound reassuring and awful things happen, there’s hell to pay. Erring on the alarming side is a basic risk communication and crisis communication precept.

As you point out, many people are pretty apathetic about earthquakes – in Los Angeles and no doubt even in L’Aquila. But after a disaster, even people who were apathetic beforehand have a legitimate grievance if the experts validated their apathy instead of sounding the alarm … at least a gentle, tentative alarm. And most people in L’Aquila were more than a little upset by all those tremors. The experts weren’t so much validating their apathy as urging them to get apathetic again.

The second asymmetry is about being candid about your own uncertainty. The public doesn’t like it when experts say they’re not sure, and the public likes it even less when experts say they’re so unsure they really can’t offer any usable advice on what to do. But the public’s irritation when experts express uncertainty is nothing compared to the public’s fury when experts sound certain and turn out wrong.

Let’s assume the “truth” is unknown and unknowable – a fair summary of the situation in L’Aquila. In a known earthquake zone, a swarm of tremors is a weak, unreliable sign of a possible impending quake. You simply can’t predict whether there’s going to be a quake here soon.

The worst expert response in this situation is overconfident over-reassurance, which is what was offered to the people of L’Aquila. The best expert response is to come across as very, very tentative and a little more alarmist than you actually feel. Something like this:

There’s just not enough science to say the folk wisdom is right, and not enough science to say the folk wisdom is wrong. Before a major earthquake there are often tremors, but after a swarm of tremors there is often no earthquake. Tremors certainly aren’t a good sign – but they’re a very weak, unreliable bad sign.

Don’t let anybody tell you you’re an idiot if you decide to sleep in your car or a nearby field or anyplace outdoors during periods of tremors, which is what many people around here have been doing for centuries. And don’t let anybody tell you you’re an idiot if you shrug off that particular piece of folk wisdom and decide to sleep in your bed as usual. It all depends how worried you are about the possibility of a quake and how inconvenient or unpleasant you think it would be to wait out the tremors someplace safer.

We know everyone wants more definite advice, and we wish we could give you some. But the science just isn’t good enough. You’re stuck making your own best judgment about what’s likely/unlikely and what’s feasible/unfeasible – with damn little help from us scientists, sadly. We know for sure that L’Aquila is in an earthquake zone, and in the long run more earthquake-resistant buildings are an important, much-needed improvement. But that won’t help you decide what to do right now.

And then expect people to be a bit irritated at your unwillingness to give them unequivocal advice about what to do right now.

Labeling foods with genetically modified ingredients: California’s Prop. 37

name:Lisa Pogoff
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Risk communication professional
date:October 2, 2012
location:Minnesota, U.S.

comment:

Do you think the food giants are taking the wrong approach by trying to defeat the California law requiring GM foods to be labeled?

How can they go about getting people to believe these foods aren’t inherently dangerous?

peter responds:

I’m of two minds on the issue of mandatory labeling of foods with genetically modified ingredients (“GM foods” for short).

On the one hand, I believe in giving people as much true information as possible, especially when they’re demanding it. My starting position is always that people should be told what they want to know, unless there are very good reasons not to tell them. And when companies resist telling people what they want to know, required disclosure is usually fine by me. For more praise of making companies reveal information they’d rather suppress, see my 2008 column on “Community Right-to-Know.”

On the other hand, the food industry does have some pretty good reasons not to be forced to label GM foods.

With a few exceptions (most notably irradiation), we normally reserve food labeling for issues of safety and nutrition. If most experts – or even many experts – believed that GM foods could be dangerous to eat, mandatory labeling would be a no-brainer. But since most experts think GM foods are safe to eat, it arguably makes more sense to permit labeling of GM-free foods than to require labeling of GM foods. That’s what we do with kosher foods, halal foods, dolphin-free tuna, etc. And since most processed foods today contain GM ingredients, labeling the ones that don’t would give people the information they want with a lot fewer labels.

Also, the food industry isn’t wrong to worry that mandatory labels might imply to consumers that there must be something dangerous about genetically modified foods. In 2003, I examined this question at length in the context of whether there should be mandatory cell phone radiation labels, and concluded that mandatory labels are usually interpreted as warnings, and are therefore likelier to exacerbate consumer concern than to reduce it. That’s a good effect if you think the risk is significant. But if you think the risk is tiny, then the labels inform people who want to know at the cost of unduly frightening people who didn’t especially want to know. In the case of mandatory labeling of all GM foods, that could lead to some disruption to the food supply and some increase in the cost of food.

I don’t want to overstate this second point. Sure, a lot of people will be shocked and alarmed when they learn how much of their food has GM ingredients. But some people were similarly shocked and alarmed by mandatory food labeling about salt and calories – for a while. (Others didn’t care or managed not to notice.) Then they got through their adjustment reaction. Today, for better or for worse, sales of high-salt, high-calorie foods are doing okay.

Of course too much salt and too many calories are demonstrably high-hazard, something we can’t say about GM ingredients. On the other hand, genetically modified foods are demonstrably high-outrage, something we can’t say about salt and calories. In both cases, I would expect the dynamic of the adjustment reaction to function as it usually does: People get accustomed to risk – both to hazard and to outrage – and it loses much of its capacity to shock and alarm.

The European Union has required GM foods to be labeled since 2003; earlier it had banned GM foods entirely. This regulatory history has certainly reduced the amount of GM food in circulation in Europe. While that has had some negative consequences (it has done considerable damage to African food exports to Europe, for example), the European food industry adjusted without much difficulty. And so did European consumers, whose food-buying patterns are much less anti-GM than their survey responses.

There are also valid questions being raised about the specific terms of California’s Proposition 37, the mandatory labeling initiative on the California ballot in November. The San Francisco Chronicle, for example, editorialized that “Prop. 37 is fraught with vague and problematic provisions that could make it costly for consumers and a legal nightmare for those who grow, process or sell food.” It said it supported “the concept of letting Californians know whether the food they eat has been genetically modified,” but argued that “[a]n issue of this consequence should be considered in the Legislature, where the language would be subject to hearings and input from myriad stakeholders.” Furthermore, legislatures can amend legislation if necessary, whereas they can’t do anything to cure the deficiencies of initiatives and referenda.

If I were a California voter, I’d have some trouble voting for Prop. 37. In the end I’d probably vote yes, but not enthusiastically.

That said, I have no sympathy whatever for the GM food industry (which is now nearly the entire processed food industry). If Prop. 37 passes, these companies will be reaping what they sowed.

While the case for California’s mandatory labeling initiative may be weak, there has always been a strong case for voluntary labeling – and, more broadly, for the food industry to talk candidly with the public (and the opposition) about the benefits and risks of genetically modified food. With rare exceptions, the industry has preferred to fly under the radar.

I started writing, speaking, and consulting about biotechnology risk communication in 1987. In 1989 I gave a presentation on that topic to Monsanto, then as now the 800-pound gorilla of GM food companies. In ’91 I talked about it to the International Society for Plant Molecular Biology; in ’92 to the Industrial Biotechnology Association; in ’99 to a meeting of food and ag biotech CEOs convened by Burrill & Company.

At least since 1987, GM foods have been my favorite example of a very-high-outrage risk. When taking seminar audiences through the twelve principal outrage components, I typically illustrate my points with GM foods, which are on the wrong side of all twelve. I particularly stress that labeling is the best way to address three of the components: voluntariness, control, and trust.

My seminar handout on “Biotechnology: A Risk Communication Perspective” carries a 1999 copyright. It includes this recommendation on labeling:

The most crucial source of individual choice for biotechnology is labeling. Public acceptance will be faster and surer if people feel they will not be unknowingly exposed to GMOs. Where labeling is feasible, industry opposition is self-defeating. Where labeling is unfeasible, the battle for acceptance will be much tougher. Product development choices should be made with this distinction in mind.

In other words: Label GM foods where you can. And steer away from GM product lines you can’t easily label.

Needless to say, my advocacy has fallen on deaf ears. Instead of trying to sell the virtues of GM foods on the merits, the food industry has tried to sneak genetically modified ingredients into the U.S. food supply. Not literally: The introduction of GM foods was never secret. But it was as low-profile as the industry could make it.

That effort has been surprisingly successful. Despite occasional flurries of public concern about GM foods, in the U.S. the GM issue has been quiescent for most of the past two decades. Given that a much more robust debate in Europe ended in stringent regulation, mandatory labeling, and some public resistance to GM ingredients, you can’t entirely blame the industry for choosing a different approach in the U.S. But I think it was a bad decision. (There are too many differences to go into here, including the fact that most U.S. farmers adopted GM seeds and most European farmers didn’t. In Europe, GM foods were usually imports; anti-GM sentiment and anti-GM regulation were largely protectionist.)

Now the bill for 25 years of keeping a low profile in the U.S. may finally be coming due.

The food industry didn’t just pass up the opportunity to label voluntarily. It also lobbied hard in state legislatures against GM labeling legislation – and again and again it won.

So now we have a “California Spring.” The San Francisco Chronicle is probably right that a legislature, even California’s legislature, could pass a better, more nuanced law than Prop. 37. But the industry didn’t want a better, more nuanced law. It wanted no law. Now it may need to learn to live with the law it gets. That’s what the initiative and referendum process is for: rough justice when other processes fail to provide any justice at all. People Power is often a sledgehammer, when The Powers That Be refuse to use their scalpels.

If Prop. 37 passes, Californians who have long been worried about GM foods will have an easier time avoiding them. That’s a good effect.

And Californians who have never been especially worried about GM foods, including many who will vote yes simply because “right to know” makes sense, may find themselves avoiding GM foods too, at least for a while – partly because they’ll have an adjustment reaction when they discover how widespread GM ingredients are, partly because mandatory labeling will be seen as signaling significant risk, and partly because the debate over Prop. 37 will have made GM risk issues more salient.

So sales of non-GM alternatives will go up, and mainstream food companies will think hard about whether to go GM-free for at least some of their products. Organic food companies that are supporting Prop. 37 may thus see both increased sales and increased competition. But if the European precedent is predictive, GM foods will soon fare better in supermarkets than in surveys (or polling booths).

Since California is such a huge market, the sixth largest in the world, much of what Prop. 37 requires or causes in California will end up happening elsewhere as well.

On balance, these effects are bad news for Monsanto (the biggest contributor to the anti-37 campaign) and the rest of the GM industry. But Monsanto and the rest of the GM industry deserve it. And the effect on the public doesn’t look like it’ll be horrific: a shakeup in the food market for a while, followed by a more honest “new normal” in which people can easily tell which foods have GM ingredients and which foods don’t.

Will Prop. 37 pass? A September 27 Los Angeles Times story reports a statewide poll showing 61% of registered voters in favor versus 25% opposed. But the story adds that the survey was finished before the start of “a major television advertising blitz by opponents aimed at changing voters’ minds on the issue.” Opponents have already raised more than $32.5 million, it notes, versus $3.5 million raised so far by supporters.

I’m not keen on all those ads and sound bites that will pit denunciations of “frankenfood” against predictions of “shakedown lawsuits.” I’d have preferred to get there via proactive, substantive dialogue and voluntary labeling. But the industry had 25 years to get there that way and chose not to. So maybe we’ll get there this way instead.

Your comment asks whether I think the industry is making a mistake to battle Prop. 37.

I think the industry made a mistake when it decided to introduce GM foods via stealth instead of via dialogue and voluntary labeling. I also think that sooner or later the industry will have to enter into that long-postponed dialogue. And sooner or later it will have to label foods with GM ingredients, in the U.S. as in Europe.

The question is whether it should try to defeat Prop. 37 first. I’d be happy to see the industry win on Prop. 37 and then move planfully toward dialogue and labeling. But I’m not convinced that it can win, and I’m pretty sure that if it does win it’ll try to wrap its invisibility cloak around itself once again, aiming for another decade or two of flying under the radar.

So if I were advising the industry, I think I’d recommend instead that it seize the opportunity for high-visibility dialogue that Prop. 37 provides. Instead of trying to defeat Prop. 37, I’d like to see the industry spend its $32.5 million ameliorating the Prop. 37 adjustment reaction by helping the public prepare for a world full of GM labels.

Among the key message points for such a campaign:

  • We should have labeled voluntarily a long time ago, or we should have gone along with labeling legislation. If Prop. 37 passes, as we think it will, we’ll have no one to blame but ourselves.
  • You’re going to be surprised at how much of your food supply has GM ingredients. Get ready for that. Brace yourself.
  • Obviously we think GM ingredients are just as safe as other ingredients, and most scientists agree with us. But instead of arguing the case on the merits, we have avoided the dialogue we should have welcomed. That was stupid and wrong, and we apologize.
  • Now people who are worried about the possible risks are going to find it easier to avoid GM foods if that’s what they want to do. We can live with that. We will have to live with that.
  • But we hope not too many people will misinterpret the labels as meaning that most experts think GM foods are dangerous. Most experts don’t. The labels are about people’s right to know, so they can make their own decisions.
  • Here’s our summary of the benefits of GM foods – the modest but real current benefits and the incredible possible future benefits….
  • And here’s our view of the risks – which ones we think are real, which ones we think are unknown, and which ones we think have been pretty thoroughly disproved….
  • You’re also going to be hearing from opponents, of course. They are right and we were wrong about the public’s right to know. That doesn’t mean they’re right about the risks of GM. We hope you will make up your own mind about whether GM ingredients are safe – even though we deprived you of that opportunity for decades. We hope you’ll listen to both sides … and listen to food safety experts who aren’t on any side.

I don’t hold out much hope that the food industry will do that. I think it will oppose Prop. 37 to the bitter end. Worse, I think it will continue to fight against labeling GM foods, everywhere it can. That’s why, reluctantly and sadly, I hope Prop. 37 passes.

Your comment also asks what else I think the industry should do to encourage public acceptance of GM foods. That’s a book, obviously. But as a start, here are the topic headings of my 1999 handout (to put a little meat on these bones, see the handout itself):

  1. Accept that biotechnology is an extremely high-outrage risk.
  2. Notice that biotechnology affronts both left and right.
  3. Take biotechnology hazard seriously.
  4. Recognize public acceptance as a bet-the-company issue.
  5. Understand individual concerns as a stand-in for global concerns.
  6. Support individual choice.
  7. Support labeling.
  8. Acknowledge huge fears and real risks.
  9. Accept that regulators and critics must have a major role.
  10. Lean on size and traditionalism.
  11. Don’t let bad actors hold you back.
  12. Acknowledge uncertainty.

So far food biotech has offered modest benefits and posed modest risks. So it hasn’t mattered all that much that the industry tried to fly under the radar instead of fighting the battle for public acceptance on the merits. But the potential benefits of food biotech are huge, I think – and the potential risks (especially environmental risks) look substantial as well. Over the long haul, reaping the benefits and minimizing the risks will require an industry that deals candidly with the public. Prop. 37 is far from the ideal first step. But at least it’s a step in the right direction.

Postscript: Contrary to my prediction, on Election Day California’s Prop. 37 failed by a 53% to 47% margin. Strong public support early in the campaign was eroded by editorial opposition on the part of most California newspapers, by real defects in the proposition itself, and of course by an advertising blitz on the part of opponents (who outspent supporters 5-to-1).

What should Penn State say in alumni fundraising appeals about its child molesting scandal?

name:Lisa Pogoff
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Risk communication professional
date:September 18, 2012
location:Minnesota, U.S.

comment:

My husband keeps getting letters from Penn State, where he went to college, asking for money. They never say anything in their letters about the incidents that happened on campus over the past decades, and they’ve never issued an apology. When he received their most recent letter, he put his own letter in their envelope (and paid for the postage!) saying, “Shame on you!” and sent it back.

What do you think Penn State should be saying now? After all this time you’d think these big organizations would know that they’re supposed to apologize.

peter responds:

Like you and your husband, I am aghast that Penn State would send out alumni solicitations without even mentioning its child molesting crisis. I am similarly aghast that Susan G. Komen for the Cure is sending out donor solicitations – I got one – that don’t mention its battle with Planned Parenthood. (Readers unfamiliar with these two controversies can check out this long Wikipedia article on the Penn State scandal or this much shorter summary of Komen’s missteps vis-à-vis Planned Parenthood.)

Both organizations have apologized publicly. There are lots of grounds for criticizing their apologies. But they did apologize. Often, even.

To its credit, the Penn State website contains lots of information and opinion about the scandal. It’s not linked from the home page or listed in the website index, but it’s readily searchable. If you’re just browsing the Penn State website, you’re unlikely to stumble on anything about Jerry Sandusky and child molesting. But if that’s what you’re looking for, you’ll find it without difficulty.

As far as I can tell, the much smaller Komen website is silent on its recent controversy – which is, of course, much less heinous than what happened at Penn State.

Basically, both organizations seem to have adopted a strategy of public contrition in venues where they have no choice because the controversy is already front-and-center. But neither organization chooses to raise the issue proactively … in its fundraising appeals, for example.

Is it possible that Penn State and Komen have actually done research to show that this makes sense? Their target fundraising audience presumably consists of four groups:

  • Those who don’t know about the crisis
  • Those who know and are on Penn State’s/Komen’s “side”
  • Those who know, believe Penn State/Komen misbehaved, and would therefore never give them money no matter what they say now
  • Those who know, believe Penn State/Komen misbehaved, and might just forgive them and give them money if they address the issue forthrightly and contritely

Their chosen approach – silence – is better for the first two groups. Neither approach will do any good with the third group. Our recommended approach – contrition – is better for the fourth group.

Is it possible that the first two groups outnumber the fourth group? And if they do, would that be enough reason to choose silence over contrition? Does it make sense to apologize in other forums where the controversy is under discussion, but not to mention it in fundraising?

I don’t think so. But I imagine they think so, and they might even have statistics that make them think so.
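To see the shape of the bet they may be making, here is a toy expected-value comparison of the two strategies across those four groups. Every number below is invented purely for illustration – neither organization has published anything of the kind:

```python
# Toy expected-value comparison of "silence" vs. "contrition" fundraising
# appeals across the four audience groups above. All figures are invented
# for illustration only -- they are not real data from either organization.

groups = {
    # name: (share of mailing list, P(gives | silence), P(gives | contrition))
    "unaware of the crisis":             (0.30, 0.10, 0.08),
    "aware, on the organization's side": (0.20, 0.12, 0.10),
    "aware and unforgiving":             (0.25, 0.00, 0.00),
    "aware but open to forgiving":       (0.25, 0.01, 0.06),
}

def expected_donor_rate(strategy: int) -> float:
    """Expected fraction of the whole list that donates.

    strategy: 0 = silence, 1 = contrition.
    """
    return sum(share * probs[strategy] for share, *probs in groups.values())

print(f"silence:    {expected_donor_rate(0):.4f}")   # 0.0565
print(f"contrition: {expected_donor_rate(1):.4f}")   # 0.0590
```

With these made-up numbers contrition narrowly wins; tilt the group sizes or probabilities the other way and silence wins. The point is only that choosing silence implies some such arithmetic – and, as the reasons below argue, short-term fundraising returns aren’t the only term that belongs in it.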

For what it’s worth, Komen fundraising is apparently down as a result of the Planned Parenthood controversy, while Penn State fundraising did very well in 2011–2012. Of course it’s not easy to deduce whether they’d have done better or worse if their appeals to prospective funders had been more apologetic.

Why do I think (without any statistics at all) that proactive contrition would be a wiser approach? For at least three reasons:

number 1

Fundraising appeals aren’t just about raising funds.

Like most organizations, both Penn State and Komen cast a wide net in their fundraising appeals. (Penn State included your husband; Komen included me.) Raising money is obviously an important goal of these appeals, but leaving a good impression even on those who decide not to give – this time – is also an important goal.

Perhaps the people likeliest to give right now really are in the first two groups of my audience segmentation exercise – either they don’t know about the crisis or they’re inclined to defend the organization’s behavior. So an unapologetic fundraising letter (and there’s nothing as unapologetic as pretending the crisis never happened) might actually raise more money than an apologetic one. I doubt it, but I’ll concede it’s possible.

But I’m willing to bet that most of the people on the two organizations’ recipient lists are in the third and fourth groups – they’re critical of the organization’s behavior whether they’re prepared to forgive it or not. Let’s assume these people weren’t about to give money right now no matter what the fundraising letter said. Even so, a letter that ignores the issue is bound to exacerbate the outrage of both of these groups – to strengthen their opposition if opposition is inevitable and to delay their forgiveness if forgiveness is feasible.

That’s what happened to me when I read Komen’s appeal. It’s what happened to your husband when he read Penn State’s appeal. It’s a long-term cost that Penn State and Komen don’t seem to have considered.

number 2

Fundraising appeals aren’t airtight. They get into the hands of people other than the intended recipients.

Out in the larger world are millions of people whose opinions of Penn State and Komen matter to those organizations’ futures, even though they are not prospective donors. Some of them, maybe even a lot of them, are going to get wind of Penn State’s and Komen’s unapologetic fundraising campaigns.

Most of those people, I think, want to see Penn State and Komen apologize. More to the point, they want to see Penn State and Komen apologize incessantly. At least for the next couple of years, they don’t want to see Penn State or Komen blowing their own horns without interrupting the self-praise at some point to acknowledge the elephant in the room.

This has been my advice to clients for decades: When you have messed up, apologize proactively and incessantly, until your stakeholders are sick of hearing it (not until you’re sick of saying it, which happens much sooner). As the risk communication seesaw dictates, the more often you apologize the more quickly your stakeholders will get sick of hearing it.

Two ancient examples may drive the point home.

In the years after the disastrous 1984 methyl isocyanate gas leak in Bhopal, India, Union Carbide – the company responsible – used to publish an annual ad on the op-ed page of The New York Times on the anniversary of the tragedy. Others may have forgotten, the ads suggested, but the company must never forget. There is much to criticize in Carbide’s handling of its responsibility for Bhopal, but these ads were a bright spot. (Conflict-of-interest acknowledgment: I recommended the ads, and sometimes helped write them.)

A year or two after the 1989 Exxon Valdez oil spill in Alaska, by contrast, my family and I visited Exxon’s “Universe of Energy” pavilion at Epcot, part of Walt Disney World in Orlando, Florida. There were endless displays about the environment, but nothing about Valdez! It was an extraordinary icebreaker. Total strangers were commenting to each other, “Can you believe those bastards didn’t even mention Valdez?”

Union Carbide was proactively apologetic. Exxon, like Penn State and Komen, was not.

number 3

What organizations say to their stakeholders is also what they are saying to themselves.

Sometimes the most important audience for an apology is the person or organization that’s apologizing. Apologies foster contrition.

It’s not foolproof. Some people and organizations can say they’re sorry, as often as they think the situation calls for, without ever once meaning it. But most of us, even if our decision to apologize was initially more strategic than heartfelt, learn from our apologies; they help us feel our way into what we did wrong and why it was wrong.

This is most obviously true for large, complex organizations. Penn State has acknowledged the need to change its organizational culture. (I haven’t seen such an acknowledgment from Komen, but I may have missed it.) Every time Penn State apologizes externally, it is sending an internal message of cultural change. And every time it passes up an opportunity to apologize, it sends an internal message that those external apologies really are just strategic rather than heartfelt, to be deployed only when needed to defuse opposition.

An organization that was truly determined to change wouldn’t be sending out fundraising appeals that didn’t say so.

How should Mexican tourism officials address fears of drug violence?

name:Arun Sudhaman
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Managing Editor, The Holmes Report
date:July 24, 2012
location:United Kingdom

comment:

I’m visiting Mexico City later this week to take a look at how the Government is persuading foreign tourists that the country is safe, despite the drug-fueled violence.

I’d be really interested in your views on this topic, particularly in terms of what you think the major challenges are. It strikes me that this is probably a low-hazard, high-outrage situation as far as tourists are concerned.

But the level of outrage, given the gruesome media reports of the violence, must be rather unique. I wonder how the country can seriously attempt to communicate its appeal as a tourist destination when dismembered corpses are being found on a regular basis, and large swathes of the country are in the hands of the cartels.

peter responds:

I can imagine that Mexican tourism officials must get incredibly frustrated when tourists, especially American tourists, avoid Mexico because they fear getting killed in the Mexican drug war.

After all, it’s mostly U.S. citizens who keep Mexico’s drug war alive, some of us by being customers for illegal drugs brought in from Mexico and some of us by opposing legalization of those drugs (or at least not fighting for legalization). Not to mention that a lot of the guns Mexican drug cartel members are shooting come from the U.S.

Of course it’s not useful for Mexican tourism officials to say that. Even those Americans who feel guilty about fueling Mexico’s drug war are hardly going to make up for it by vacationing there.

Should they talk about this issue at all?

The first thing Mexican tourism officials need to decide is whether they really want to address this issue at all. There are two key audiences here: prospective tourists who are concerned and prospective tourists who aren’t. It will be difficult to reassure the former without simultaneously alerting and perhaps alarming the latter.

Plenty of tourists from the U.S. and elsewhere are flocking to Mexico despite the carnage that is periodically reported in the media. A February 2012 Los Angeles Times story on the latest statistics was headlined “Mexico sets tourism record despite drug violence.” Some tourists have managed to stay unaware of what’s going on; some simply haven’t connected it to their vacation plans; some have thought about the situation and decided it’s still safe to go.

I agree with you that this is a low-hazard, high-outrage situation for some tourists. But for most tourists, apparently, it is a low-hazard, low-outrage situation. And for tourists who stray too far off the beaten track, it may be a high-hazard, low-outrage situation – but I will focus my comment on tourists who are content to do conventional tourist things.

Given that Mexican tourism has had a record year, reassuring the minority (tourists deterred by the drug violence) may not be worth the collateral damage of informing or reminding the majority (tourists who are still comfortable traveling to Mexico).

This is an extremely common problem, grounded in the crucial difference between outrage management and public relations. Most of the people at a public meeting to talk about emissions from a local factory, for example, are bound to be concerned neighbors. Unconcerned neighbors will stay home. But there will be journalists at the meeting, and plenty of unconcerned neighbors watch TV news or read newspapers.

If the factory’s management wants to address and ultimately reduce the outrage of its concerned neighbors, it should acknowledge that the emissions are excessive, upsetting, and potentially dangerous; apologize for not having been fully honest about them until recently; set up an advisory group to supervise the company’s efforts to control them; etc. These outrage-reducing concessions to the concerned people at the meeting will inevitably be news – maybe upsetting news – to a lot of much less concerned people who aren’t at the meeting.

So the company has to choose, or try to compromise, between two goals: managing the outrage of people who are already upset versus sustaining the apathy of people who aren’t. (In this hypothetical example I am assuming that the company has good data showing that the emissions in question are not a significant hazard to the neighborhood.) It’s a choice between outrage management and conventional public relations.

In such situations I usually advise my clients to choose outrage management. The concerned neighbors who come to tonight’s meeting are much more engaged in the issue than the folks who might or might not find out about it from tomorrow’s news. The company usually has more to gain from ameliorating the outrage of its active critics than it has to lose from providing some potentially upsetting information to neighbors who are and will probably remain mostly passive bystanders, more “audience” than “stakeholders.”

Sooner or later most of those passive neighbors will probably find out about the emissions anyway – and the longer it takes till they’re told, the more upset they may be when they finally learn the truth. Their ignorance now is a deferred liability for the factory’s management, not really an asset. So cuing them in now isn’t really collateral damage after all.

That’s my usual analysis for situations that are low-hazard, high-outrage for a small group and low-hazard, low-outrage for a larger group. But my usual analysis doesn’t hold true for Mexican tourism.

Tourists who decide to vacation someplace else this year won’t be stirring up trouble for Mexico the way neighbors outraged by a factory’s emissions try to stir up trouble for the polluting company. All Mexico will lose is their individual tourism dollars, pounds, and euros. Furthermore, there are very few activists (though there are some) busily trying to persuade unconcerned tourists to get concerned and boycott Mexico. And only a small portion of the media coverage of Mexico’s drug war even mentions risk to tourists, and those infrequent references are often reassuring.

Thus it is quite possible that any outrage management campaign implemented by Mexican tourism officials could do more harm than good. The benefit of reassuring concerned tourists might be outweighed by the collateral damage done by informing, reminding, and alarming unconcerned tourists. Except for venues where it’s feasible to talk to the concerned audience without a lot of spillover into the unconcerned audience – online discussion boards about tourism safety issues, for example – maybe the Mexican authorities should leave well enough alone.

Of course that could change if the ratio of concerned to unconcerned tourists starts to change. Certainly Mexican tourism officials should monitor tourism rates and tourists’ (and prospective tourists’) feelings and opinions about whether Mexican drug violence endangers tourists. But for now, that might be all they should do.

What should they say?

But let’s assume for the sake of the argument that the authorities have decided it’s time to talk to the concerned audience, at least in some venues where the unconcerned audience isn’t likely to be listening. What should they say? Here’s a quick list of outrage management bullet points.

number 1
Acknowledge the drug violence. And do so graphically, vividly – with scary numbers and scary stories and even scary photos. Remember that you’re talking to people who are already upset enough to want to avoid traveling to your country. They’ve seen the scary numbers, stories, and photos, and they have taken them to heart. If you’re going to address their fears, you can’t be mealy-mouthed about acknowledging them.
number 2
Acknowledge that tourist concern about Mexican drug violence is widespread, that it is natural, and that it is legitimate. If you want to point out that tourism is up despite the violence, do it in a subordinate clause: “Even though we set a tourism record here last year, many regular visitors to Mexico are beginning to worry that….” Quote some tourists who have told you why they’re not traveling to Mexico this year, or why they’re thinking about not traveling to Mexico this year. Make your concerned audience feel known, understood, accepted, and validated.

(By now it should be getting really, really clear why a good outrage management strategy for tourists who are worried about Mexican drug violence is a horrible public relations strategy for tourists who aren’t.)

number 3
Point out that the violence is almost entirely confined to bad guys and cops. (Extra credit for acknowledging that widespread corruption can make it hard to tell the difference.) Consider language like this: “Bystanders do sometimes get caught in the crossfire, so it’s important not to go to places where gunfire is likely. But for the most part those aren’t tourist sites anyway. There has been very little anti-tourist violence in Mexico. Mexico isn’t a country where terrorists kill tourists to make a political point; it isn’t a country where bandits take tourists hostage to collect a ransom. Mexico is a country where drug cartels and law enforcement officials shoot it out a lot. That’s awful for Mexicans – but it’s a lot safer for tourists in tourist locations.”
number 4
Don’t overstate #3. I may have overstated it some myself, especially in my point about bandits. Though the Mexican “bandito” survives more in Hollywood than in Mexico, nonviolent shakedowns aren’t rare in Mexico today. Neither is violence against tourists who resist the shakedowns or who venture off the beaten track. Frankly acknowledging the real risks of Mexican tourism is probably the single most effective way to give credibility to your reassurances about getting caught in a gun battle over drugs. Beyond that, acknowledge that the risk to tourists from Mexico’s drug war isn’t zero. (And resist the temptation to say “nothing is zero risk.” That’s true, but it always sounds defensive.) Acknowledge that this risk is probably higher than it was a few years ago. Acknowledge that it could get higher still. All this is consistent with your fundamental point that Mexico’s drug cartels are making war on each other and on the cops, not on tourists.
number 5
Since you obviously have a vested interest, rely on neutral third-party sources to validate your claims about the risk to tourists. The U.S. State Department “Travel Warning” for Mexico, for example, points out that “Resort areas and tourist destinations in Mexico generally do not see the levels of drug-related violence and crime reported in the border region and in areas along major trafficking routes.” The travel warning goes on to specify where tourists shouldn’t worry, where they should be careful, and where they simply shouldn’t go. If any country’s government is issuing more alarmist warnings about Mexican tourism than the U.S. warnings, quote them too. Give your audience an accurate understanding of what neutral third parties think about how worried tourists should be about traveling to Mexico.
number 6
Offer concrete advice about ways to stay safe. It’s possible to imagine advice so off-putting it would do more harm than good. Nobody would want to vacation in a country whose government recommended staying in your hotel room, for example. But lots of countries, including Mexico, have advised tourists for years about things they can do to minimize the risk of getting pickpocketed, mugged, or shaken down. These safety tips help readers feel like they have more control over their risk. That usually makes them feel safer, not more endangered. So write something comparable about things we can do to minimize the risk of wandering into a drug war free-fire zone – and include this content in information prospective visitors read while they’re still deciding where to vacation.
number 7
Finally, acknowledge that even if you convince everyone that visiting Mexico isn’t prohibitively dangerous, some people will still choose not to vacation in a country that’s in the middle of a drug war. Part of what we want from a vacation is a break from unhappy or worrisome thoughts. If simply knowing what Mexico is going through would undermine some people’s vacations, they’re right to vacation elsewhere. Give them “permission” to do so. Express your country’s determination to get the drug cartels under control – for a lot of reasons more fundamental than protecting tourism and tourists! – and tell us you’ll be proud to welcome us back after the dust has settled.

Warning the world about yet another possible catastrophe: solar flares

name:Craig Daniels
This guestbook entry
is categorized as:

      link to Precaution Advocacy index       link to Crisis Communication index

field:Electronics technician (mostly retired and
pursuing other interests)
date:July 21, 2012
location:Oregon, U.S.

comment:

Since it seems to be exactly your line of work, I’m commenting in order to recommend the thankless, probably futile task of motivating our nation to mount an adequate preparedness effort (with drills) against the threat of a solar flare resulting in a dozen Fukushima Daiichi-style meltdowns.

See for example this site, which says:

The approaching 2012-2013 solar maximum is expected to include a series of potent “X-Class” flares and coronal mass ejections (“CMEs”). Whether one of them smacks the Earth and shuts down some of our power grids is a game of roulette….

Nuclear power plants, despite that they might end up literally bursting with thermal energy, are designed such that they’re unable to power their own cooling pumps – in the event that there's a local failure of the power grid…. Newer designs use steam power to run turbine driven water pumps, but they still require electricity to open valves and turn those pumps on…. The U.S. Nuclear Regulatory Commission only requires that nuclear power plants be independently capable of supplying diesel-electric backup power for 72 hours, plus 4 hours worth of battery backup power….

Or this one:

Earth and space are about to come into contact in a way that’s new to human history…. Richard Fisher, head of NASA’s Heliophysics Division, explains what it’s all about:

“The sun is waking up from a deep slumber, and in the next few years we expect to see much higher levels of solar activity. At the same time, our technological society has developed an unprecedented sensitivity to solar storms….”

The National Academy of Sciences framed the problem two years ago in a landmark report entitled “Severe Space Weather Events – Societal and Economic Impacts.” It noted how people of the 21st-century rely on high-tech systems for the basics of daily life. Smart power grids, GPS navigation, air travel, financial services and emergency radio communications can all be knocked out by intense solar activity. A century-class solar storm, the Academy warned, could cause twenty times more economic damage than Hurricane Katrina.

Much of the damage can be mitigated if managers know a storm is coming. Putting satellites in “safe mode” and disconnecting transformers can protect these assets from damaging electrical surges. Preventative action, however, requires accurate forecasting….

Or this one:

A report by the Oak Ridge National Laboratory said that over the standard 40-year license term of nuclear power plants, solar flare activity enables a 33 percent chance of long-term power loss, a risk that significantly outweighs that of major earthquakes and tsunamis.

Or this one:

Almost two years ago I asked the question, “Do Solar Storms Threaten Life as We Know it?” The answer then and even more so now could very well be a scary “yes” – even within the next few years – as an increase in solar activity coincides with the increasing vulnerability of technology-dependent societies to powerful solar storms.

Moreover, while one is not inevitable, should there be a solar strike capable of causing widespread blackouts and crippling disruptions of satellite and radio communications, it’s likely there would be little advance notice, and currently there is virtually no capability to shield much of the planet and virtually no planning on the books to recover from the potentially disastrous consequences.

peter responds:

There’s a large and growing literature on the possibility of a solar flare disaster – not all of it (of course) as alarming as the sites you cited. Here for example is the Northeast Power Coordinating Council, with a typical-sounding bureaucratic reassurance that they’ve got it covered (link is to a PDF file):

Those utilities most affected by solar activity since 1989 have developed procedures which establish a safe operating posture and which are initiated by criteria for their respective systems.

That NPCC statement does leave you wondering whether utilities not “most affected by solar activity since 1989” are just sitting around hoping they stay lucky. And the statement doesn’t have quite the pizzazz of the title given to a February 2011 meeting on the problem, put together by the American Association for the Advancement of Science: “Space Weather: The Next Big Solar Storm Could Be a Global Katrina.”

Judging from my quick perusal of some of what’s online, there appears to be a consensus or near-consensus on these points:

  • Solar flares follow a cyclical pattern, and we’re entering a period of increased solar activity.
  • Our technology is a lot more vulnerable to solar flares than it was the last time we hit the top of the cycle. Our telecommunications and power grids are the ones most at risk.
  • Noticeable problems are almost inevitable. Serious problems are likely enough to be worth preparing for. Catastrophic problems aren’t impossible.
  • Most preparation so far has addressed lower, likelier, less-than-catastrophic levels of solar activity.
  • Relatively little has been done in the U.S. or anywhere in the world to get ready to cope with the possibility of catastrophic, long-lasting damage to the world’s telecommunications and power grids – not to mention a wide range of conceivable (but not guaranteed) knock-on effects like nuclear power plant meltdowns.
  • Most strategies to prepare for a possible solar flare catastrophe and mitigate its possible impacts require long-term planning by governments and other powerful institutions. As far as this cycle is concerned, it’s too late already for a lot of them. But short-term and even individual-level steps can help at least a little.

Here’s the core risk communication problem.

There are lots of potential catastrophes facing the world. I’m not talking about nutty scenarios here – there are lots of those too, and some of them may ultimately turn out less nutty than we think. But I’m talking about the potential catastrophes that at least some serious experts take seriously. A solar flare catastrophe is in that category.

Most of these potential catastrophes won’t happen. Something much milder will happen instead, or nothing at all. But sooner or later one of the potential catastrophes on the list will actually happen in all its catastrophic glory. When it does, people who have been sounding the alarm about that particular potential catastrophe will feel vindicated. The rest of us, who shrugged off their warnings or never even heard them, will get really, really angry that the experts didn’t take action (and didn’t push us to take action too).

There’s no reliable way to tell which potential catastrophe is the one that’s going to happen.

So for each potential catastrophe on the list, our choice as a society and as individuals is how much to do now in the way of preventive or preparatory action. We must make that choice knowing that action to prevent or prepare for a catastrophe that doesn’t happen fairly soon constitutes a waste of resources that might otherwise have been devoted to problems we actually face rather than hypothetical ones we might eventually face – health, education, infrastructure, etc.

Actually, we have two choices to make: how much of our resources to budget for prevention and preparedness with regard to all the potential catastrophes on the list; and how to allocate our catastrophe budget among the contenders.

“Resources” here means not just money, but also time, energy, and even worry. (In my vocabulary, worry is of course a species of outrage.) Very few people are experiencing a shortage of things to worry about; our worry agendas are fully engaged. So getting us to worry more about solar flares requires demoting some of the other concerns already on our worry agendas. And worry is the gateway resource. Getting us to worry more about solar flares is virtually a precondition to getting us or our institutions (governments, corporations, etc.) to take significant action.

Mobilizing worry about solar flares

Not that every potential catastrophe on the list is equally deserving of attention. They differ in their likelihood of happening, in their magnitude if they happen, in the feasibility and cost of efforts to prevent them or prepare for them, etc. I’m not qualified to assess these variables and then use them to rank potential catastrophes.

Neither is society at large. When it comes to potential catastrophes, society responds chiefly to the skill (and luck) of those who are working to mobilize worry. As I never tire of telling clients, outrage is a competition. In an article entitled “Watch Out! – How to Warn Apathetic People” (originally written for industrial hygienists), I put it this way:

[W]hen you try to frighten people about a particular risk, think of yourself as competing for your slice of the fearfulness pie. Greenpeace wants your company’s employees to be afraid of genetically modified foods. Moral Majority wants them to be afraid of gay marriage. Their doctors want them to be afraid of eating too much and exercising too little. And you want them to be afraid of workplace hazards. Overall, they’re going to be as fearful as they’re going to be. The only question is whether workplace hazards will get their rightful share of the fearfulness.

In the last few years of the twentieth century, some experts were worried that the transition from 1999 to 2000 could devastate the world’s computers in much the same way you think solar flares could devastate the world’s power and telecommunications grids. (The concern was that computers had been programmed to pay attention to only the last two digits of a year, and wouldn’t know that “00” comes after “99.”) Those who sounded the alarm about the Y2K “Millennium bug” were skillful (or lucky); they captured a lot of resources. Whether Y2K preparedness worked or whether the concern was overblown in the first place is still a hotly debated question. Either way, you could do worse than to study the risk communication strategies of Y2K alarmists and see what you can adapt for use with regard to solar flares.
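A minimal illustration of the Y2K failure mode, assuming a program that stored years as two digits (as many did):

    # Two-digit year storage: 1999 is stored as 99, 2000 as 0.
    def years_elapsed(start_yy: int, end_yy: int) -> int:
        return end_yy - start_yy

    print(years_elapsed(99, 0))   # -99: a one-year interval computed as negative 99 years

Any calculation built on such intervals – interest accrual, expiration checks, scheduling – would silently go wrong at the rollover.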

In the twenty-first century, two potential catastrophes have so far managed to arouse enough public and political concern around the world to actually garner sizable amounts of action. Despite the extraordinary success of these two compared to other potential catastrophes – solar flares, for example – their advocates feel like failures. They haven’t changed the world nearly as much as they think they must to stave off or even mitigate the disaster they think is coming.

The two potential catastrophes I’m talking about are global warming and an influenza pandemic (especially an H5N1 “bird flu” pandemic). You could profitably study their risk communication strategies as well.

I have written only one substantial article about “Climate Change Risk Communication,” but others have written a great deal about it. If you have library access or don’t mind paying, you might start with the June 2012 issue of Risk Analysis, which includes a special section on “Climate Change Risk Perception and Communication.”

As for pandemic risk communication, I have written about it ad nauseam – so much that it has its own index on my website. Skip the stuff about the swine flu pandemic of 2009–2010, which focuses mostly on the slow-motion discovery that swine flu was a very mild pandemic. You’re not interested (yet, anyway) in how to acknowledge that what happened wasn’t a catastrophe after all; you’re interested in how to warn that what might happen could be a catastrophe. So start with my writing about bird flu.

Specific examples aside, warning people about potential catastrophes is “pre-crisis communication” – a hybrid of precaution advocacy and crisis communication. My best generic writing about pre-crisis communication is a piece I published in 2003 entitled “Obvious or Suspected, Here or Elsewhere, Now or Then: Paradigms of Emergency Events.” link is to a PDF file  Even though this article was written for emergency planners and emergency managers, not citizen activists, the obvious-here-future and suspected-here-future sections should help.

See also the last section of my long website column on “Worst Case Scenarios.” Most of the column is about why and how companies and governments should acknowledge worst case scenarios they’d rather ignore or minimize. But the last section turns the tables and addresses the question that preoccupies you: how to use worst case scenarios to sound the alarm about a risk you want people to stop ignoring or minimizing.

I won’t try to summarize everything in these and other articles that bears on how to arouse public concern about solar flares. But here are a few quick bullet points to get you started:

  • Look for teachable moments – moments when people are paying more attention than usual. Concentrate your limited resources on those moments. This may mean saving your ammunition for another newsworthy solar flare like the one on Valentine’s Day 2011.
  • Don’t spend a lot of time complaining to the public about the public’s unresponsiveness to solar flare warnings. Telling people what jerks they are not to be worried isn’t an effective way to get them worried.
  • Like a smart insurance salesman, avoid overstating how likely your solar flare worst case scenario is. Instead of implying that a catastrophe is virtually inevitable, ground your case for action on how horrific it might be.
  • Find stories and symbols that are memorable, and put them at the center of your campaign. You can’t arouse worry/fear/outrage with statistics. For the same reason, find spokespeople who are personable and Oprah-worthy, not just technically credible.
  • By definition, talking about potential catastrophes is speculating. Go ahead and speculate. But speculate responsibly: Make it clear that you know you’re speculating. Don’t sound overconfident.

Above all, don’t expect instant or total success. Squeezing a new risk onto society’s worry agenda is tough. It typically takes a generation or more – as it did for the fight against cigarettes, as it did for the battles on behalf of hard hats and smoke alarms and seat belts, as it is doing for the concern about obesity. Let that sink in: Even if you’re skillful and lucky, it still typically takes a generation or more to make the partial-but-perceptible progress that those issues have made.

And even significant progress is no guarantee against backsliding. My two “good” examples of pre-crisis communication, global warming and influenza pandemics, both actually declined on the U.S. public’s worry agenda in recent years. Prevention and preparedness proponents are now casting about for ways to turn things around and resume their prior progress.

Of course if the catastrophe you anticipate occurs in the meantime, that will solve your problem. We’re all very good at preparing to fight the last war.

One final discouraging point in this discouraging response: In thinking about catastrophe prevention and preparedness, it’s worth considering the views of the late Aaron Wildavsky, a political scientist who wrote very thoughtfully about risk management issues. Wildavsky argued that modern societies tend to invest too heavily in trying to prevent and/or prepare for specific potential catastrophes. Given the impossibility of predicting which potential catastrophes will actually happen, he wrote, most of that investment ends up wasted, and societies end up less prepared than they might have been for the potential catastrophes they didn’t predict. According to Wildavsky, investing in resilience is usually a better use of resources, because it enables societies to cope with a catastrophe that nobody predicted.

If Wildavsky is right, then the most valuable precautions work against a wide range of hazards, many of which weren’t even thought of when the precautions were implemented. And the most valuable preparedness is all-hazards preparedness.

Think in these terms about the things you want people, companies, and governments to do about solar flares. Which ones will help protect us only against solar flares? And which ones are investments in resilience?

Persuading the Boy Scouts of America to accept atheists

name:Jeff
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Volunteer activist
date:July 10, 2012
location:Colorado, U.S.

comment:

I’m an atheist, an Eagle Scout, and the father of a young boy who wants to join the Cub Scouts, but he cannot join the Cub Scouts without professing a belief in God. If we frame “atheist civil rights” as a risk communication problem, what would be the best way to advocate for the Boy Scouts of America (BSA) to voluntarily change its policies?

The BSA’s stated position is that belief in God is required in order to be the best kind of citizen. I have tried taking the BSA’s position seriously, by empathizing. Here is my argument:

I want to begin by clarifying that I am not raising a constitutional or legal question at all. Let’s assume that the Boy Scouts of America (BSA) has the constitutional right to associate or not associate with whomever it wishes.

I’m coming at this from a very different angle. Again, I’m stipulating that the BSA has the legal right to exclude atheists and agnostics from membership. But what if its reason for choosing to exercise that right is flawed?

Let’s conduct a thought experiment. Imagine a bowling alley that had a policy of excluding anyone with red hair. (So far as I know, redheads are not a “protected class.” If I’m wrong, then pick some other group.) The bowling alley has the constitutional right to exclude redheads. But why is it doing that? Imagine it released an official position statement: “The bowling alley maintains that redheads are immoral people and therefore requires that members be either bald or not have red hair.” Everyone, I think, can agree that this would be a stupid reason for excluding redheads from membership. Not only is that a stupid reason, it would also be bigotry. Redheads would rightfully be offended and non-redheads would condemn that sort of bigotry.

How is this any different from the BSA’s policy of discrimination against atheists and agnostics? The BSA’s stated reason for discriminating is that belief in God is required to become the best kind of citizen. In the BSA’s own words:

The Boy Scouts of America maintains that no member can grow into the best kind of citizen without recognizing an obligation to God. In the first part of the Scout Oath or Promise the member declares, “On my honor I will do my best to do my duty to God and my country and to obey the Scout Law.” The recognition of God as the ruling and leading power in the universe and the grateful acknowledgment of His favors and blessings are necessary to the best type of citizenship and are wholesome precepts in the education of the growing members.

From that perspective, there is even more reason to allow non-theists to join the BSA. If non-theists are so “morally defective,” then what better course of action than to allow them to join an organization that, other than its policy of discrimination against homosexuals and non-theists, promotes good moral values? If non-theists are morally handicapped, why not give them as much “moral support” as possible to ensure they turn into adults with the best kind of moral character?

Can you imagine a church or Sunday school group banning non-Christians or even just non-theists? Of course not! They welcome them. They view it as an opportunity for evangelism. By the same logic, then, why can’t the BSA see the presence of non-theists as an opportunity for moral evangelism, i.e., trying to get boys to develop the best kind of moral character?

How can non-theists break out of what seems like a vicious catch-22? Many people believe non-theists are selfish and untrustworthy, yet non-theists are blocked from joining the Boy Scouts of America – a respected organization that promotes community service, and exactly the kind of place where they could demonstrate otherwise.

Regarding the “untrustworthiness” of non-theists, consider this article link is to a PDF file: Gervais, Will M., Azim F. Shariff, and Ara Norenzayan, “Do You Believe in Atheists? Distrust Is Central to Anti-Atheist Prejudice.” Journal of Personality and Social Psychology 2011, Vol. 101, No. 6, 1189–1206.

Here is the abstract:

Recent polls indicate that atheists are among the least liked people in areas with religious majorities (i.e., in most of the world). The sociofunctional approach to prejudice, combined with a cultural evolutionary theory of religion’s effects on cooperation, suggest that anti-atheist prejudice is particularly motivated by distrust. Consistent with this theoretical framework, a broad sample of American adults revealed that distrust characterized anti-atheist prejudice but not anti-gay prejudice (Study 1). In subsequent studies, distrust of atheists generalized even to participants from more liberal, secular populations. A description of a criminally untrustworthy individual was seen as comparably representative of atheists and rapists but not representative of Christians, Muslims, Jewish people, feminists, or homosexuals (Studies 2-4). In addition, results were consistent with the hypothesis that the relationship between belief in God and atheist distrust was fully mediated by the belief that people behave better if they feel that God is watching them (Study 4). In implicit measures, participants strongly associated atheists with distrust, and belief in God was more strongly associated with implicit distrust of atheists than with implicit dislike of atheists (Study 5). Finally, atheists were systematically socially excluded only in high-trust domains; belief in God, but not authoritarianism, predicted this discriminatory decision-making against atheists in high trust domains (Study 6). These 6 studies are the first to systematically explore the social psychological underpinnings of anti-atheist prejudice, and converge to indicate the centrality of distrust in this phenomenon.

How would you recommend I communicate on this issue?

peter responds:

The article you reference at the end of your comment pretty persuasively documents that many Americans consider atheists less trustworthy than believers because, as the abstract succinctly puts it, “people behave better if they feel God is watching them.” This reminds me of a Talmudic passage I ran across (and admired) many decades ago, which suggested that among the righteous God especially loves atheists, because it’s harder to act righteously without the help of religion.

This belief – that religion is a crutch and that atheists are less ethical and less trustworthy because they lack the crutch – is psychologically complicated. If I feel that I need my belief in God to help me behave honorably, then I will naturally feel diminished by the idea that somebody else is able to behave just as well as I do without any religious beliefs at all. Of course I don’t like feeling diminished. So I project onto others my belief that I would misbehave if I didn’t know God was watching me … and thus I distrust them.

I haven’t a clue whether or not it’s true that believers are more ethical/trustworthy, on average, than nonbelievers. We certainly find the full range of good and evil in both believers and nonbelievers. Depending on what religious values a believer espouses, it seems reasonable to me that those values might help in some ways and hurt in others – perhaps (I’m guessing here) facilitating adherence to formal codes of conduct but inhibiting more flexible virtues like tolerance.

In addition to lacking God’s guidance and oversight, there is another sense in which atheists could understandably be seen as less trustworthy than believers, at least in a society in which most people are believers. The decision to dissent from the mainstream marks someone who thinks for himself/herself … and who therefore can’t be trusted to think and act as everyone else does. Many atheists are proud that they can’t be “trusted” to conform. (In subcultures where atheism is de rigueur, the positions are reversed. I have served on university faculties where religious fervor seemed pretty nonconformist and atheism felt conventional.)

Whatever the overall impact of religion on people’s behavior, there’s obviously a lot of variation. It strikes me as reasonable to think that having a religion can help children grow up to be good citizens, if the religion has “good values” with regard to its particular society’s norms. It seems crazy to imagine that children without a religion can’t grow up to be good citizens. The official position of the Boy Scouts of America (BSA) is in the middle: that religion is necessary to help boys grow up to be “the best kind of citizen.”

The Scout Law specifies that “A Scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent.” The BSA seems to be saying that all twelve virtues on this list, not just reverence, are necessary to be the best kind of citizen. The BSA wants Scouts to learn to be reverent for the same reason it wants them to learn to cooperate with other scouts, help old ladies across the street, and build campfires: because it believes learning those sorts of things will help the kids grow up right.

You agree with the BSA about cooperation, helpfulness, and campfires, but disagree about reverence. Actually, you might even agree with the BSA that having a religion is good for kids. But you don’t happen to have one, your son doesn’t have one either, and you’re unhappy that the family’s atheism is a barrier to your son’s access to the rest of the Scouting package.

Addressing anti-atheist outrage

So your question is how to persuade the BSA to let your son be a Cub Scout, despite his/your atheism. And by extension, your question is how to persuade the BSA to get religious reverence out of its list of core values to which all Scouts must pledge allegiance.

I’m not sure this is a risk communication problem at all. It may be just a policy debate. The BSA’s inclination to offer the Scouting experience only to children who are already reverent can be seen as a kind of cherry-picking; why not take on the harder cases? But the BSA’s insistence that reverence is one of its core values and it therefore has to exclude children who declare themselves to be unalterably irreverent seems unassailable. (Would the BSA also exclude a child who refused as a matter of principle to be, or even try to be, trustworthy, loyal, helpful, or friendly? I’m not sure. But the logic is the same. Those are core values too.)

If you want the BSA to let atheists in without abandoning its religious tenets, you need to have some vision of how it might do so without hypocrisy. If you want it to abandon its religious tenets, you’re asking it to become a different organization. That’s not impossible – organizations change – but it’s a policy debate that calls for a lot more than risk communication!

But insofar as risk communication can help, the essence of the problem is outrage management. You’re looking for ways to ameliorate the outrage that some BSA administrators and others (Old Guard alums, parents, churches that sponsor troops or packs, etc.) feel when they contemplate letting atheists in … or, rather, when they contemplate letting avowed atheists in so they can dissent from, undermine, or even mock one of the fundamental values of Scouting.

The “argument” you lay out in your comment doesn’t do the job, I think. You describe it as “taking the BSA’s position seriously, by empathizing.” But it doesn’t feel empathic to me. It feels more like a reductio ad absurdum. Analogizing the BSA excluding atheists to a bowling alley excluding redheads strikes me as particularly unempathic (and unfair). Even the comparison to churches seems off-target to me, both logically and empathically. Churches may open their doors to all people; many actively seek out nonbelievers in hopes of converting them. But the Scouts are analogous not to a church but to a religious fraternity, which might sensibly restrict itself to members of the religion. Similarly, an organization of blonds might sensibly exclude redheads, and an organization of atheists might sensibly exclude theists.

But I think your comparison of being an atheist to being handicapped has real promise. The BSA view that nonbelievers can’t become the best kind of citizen does seem to frame atheism as a handicap. Here’s my effort to craft the handicap comparison into an empathic appeal:

I understand and accept that the BSA believes religious faith helps children become good citizens, and even that the BSA believes children cannot become the best citizens they can be without religious faith. So of course you want to help Scouts develop their religious values – their reverence – as a way of helping them grow into good citizens. From your point of view, an atheist child is handicapped. Even though I don’t agree, I can accept that that’s what you believe.

But the Scouts accept children with other handicaps. You would accept a wheelchair-bound child who was unable to participate in some Scouting activities, even though you believe those activities (if the child were able to do them) would be valuable in the child’s development.

I wish you could find it in your hearts to do the same thing for an atheist child.

Maybe my child will find God in the Scouts. That might make me uncomfortable, but I so highly revere most of the values Scouting stands for that I am prepared to take that risk. Or maybe my child will find other ways to be spiritually reverent, even though he lacks the belief in a personal God that most Scouts have. Or maybe my child will acquire greater respect for other Scouts’ religious beliefs; maybe he will learn tolerance for traditional religion from his fellow Scouts while teaching them tolerance for atheism. Or maybe my child will simply benefit from other aspects of Scouting to help him develop into the best citizen he can be, despite what the BSA sees as his atheism handicap.

I understand that official openness to atheist children will require some rethinking of long-held Scouting traditions. I respect that change is difficult and can be slow – especially changes that may feel threatening to some … to Scouting traditionalists in the organization’s hierarchy, to religious parents of current Scouts, to churches that sponsor so many Scout troops. But Scouting has achieved comparable changes in its past, becoming ever more open to diversity while sustaining the values that give Scouting its unique value.

Can Scouting become open even to non-believing children, while sustaining the values that give Scouting its unique value? For the sake of my child, and other children who are now excluded on grounds of atheism, I hope so.

Addressing your own outrage

I suspect you have an outrage management problem in another sense as well: You may need to manage your own outrage. You may be outraged that the BSA won’t let your son be a Cub Scout, or that the BSA has such explicit, unapologetically anti-atheist policies. You may be outraged at a whole range of ways you feel American society discriminates against atheists; your quarrel with the Scouts may be carrying some of the animus that originated with these other slights.

It isn’t surprising that you are outraged (if you are). Most controversies are symmetrical; if one side is outraged, the other side usually is too. But your own outrage can get in the way of empathic outrage management.

Your own outrage can also get in the way of accurate situational analysis. You provided a link to an anti-scouting website (www.bsa-discrimination.org) that’s full of evidence about the BSA’s bias against atheism. Some Scouting documents quoted on that site come close to claiming that atheists are bad people – which is a far cry from claiming that reverence is a core Scouting value that facilitates good citizenship. But a lot of the most offensive documents date from the 1980s and 1990s.

I found other sites more balanced. I especially benefited from an unofficial website entitled “Scouter,” which had a long 2010 discussion thread on atheism and Scouting. Searching the official BSA website for “atheism” got me nothing, but searches for “religion” and “God” produced an assortment of materials reflecting current BSA thinking. Another BSA site, www.bsalegal.org/, focuses exclusively on the organization’s responses to various legal challenges, most of them about membership (though more about excluding gays than about excluding atheists); it’s certainly not balanced, but it is full of good documentary evidence, and good hints at what provokes BSA outrage and what might ameliorate it. Wikipedia articles on “Religion in Scouting” and “Boy Scouts of America Membership Controversies” are useful introductions with lots of reference links.

I spent only a couple of hours total on these various sites, and I’m far from an expert, but here are some of my impressions:

  • The Scouting world is divided on the wisdom of continuing to exclude atheists. Plenty of Scouting enthusiasts and even Scout officials are on your side in wanting the policy changed. Some of these internal allies seem to think external pressure can help their cause; some think it hurts.
  • In many Boy Scout troops and Cub Scout packs – especially those sponsored by public schools in diverse communities, as opposed to those sponsored by churches in homogeneous communities – the exclusion of atheists is unenforced. Scouts do have to take the Scout Oath (promising to “do my best to do my duty to God and my country”) and pledge allegiance to the Scout Law (which includes “reverence”). But these are treated as aspirational goals, not actionable commitments. Scouts don’t have to achieve them, though they do have to promise to try.
  • Insofar as the exclusion of atheists is enforced, it is usually interpreted very loosely. Various documents on the BSA official site point out that questioning one’s religious values is consistent with reverence, that not every religion believes in a personal “God,” that it’s best to let each individual Scout (and the Scout’s parents and religious leaders) determine what reverence means to that particular Scout, that the key is for the Scout to be open to the spiritual dimension of life and respectful of other people’s religious views.
  • Many other Scouting organizations around the world have gone further than the BSA in explicitly accommodating nonbelievers. Despite considerable internal debate, there isn’t much evidence that the BSA plans to do likewise.

In the U.S., a Scout who insisted that religion was a lot of bunk and other Scouts were idiots for believing in God might very well run afoul of the reverence standard (as well as other Scouting standards, such as courtesy and kindness). But a Scout who quietly and respectfully affirmed (when asked) that he didn’t share other Scouts’ beliefs in a personal God would probably pass muster in most troops. As long as the Scout could come up with an interpretation of the Scout Oath and the Scout Law that he could live with, nobody would be likely to cross-examine him on exactly what interpretation he had in mind.

That may not be good enough. I can imagine a close-knit homogeneous community in which atheists were widely condemned, in which Scouting was a key to social acceptance, and in which a quietly, respectfully atheist child suffered yet another arrow of intolerance when refused entrance into the Cub Scouts.

That bothers me. The fact that an aggressively outspoken atheist intent on proselytizing on behalf of atheism would find the world of Scouting closed to him bothers me much less.

Perhaps mistakenly, I have the impression from your comment that what bothers me least – whether the Scouts are willing to make room for an aggressively outspoken atheist – is what bothers you most. Your outrage that the BSA has an official anti-atheist policy sounds more heartfelt to me than your desire to get your son into the Cub Scouts. That may explain why your effort to craft an empathic argument strikes me as more argument than empathic.

Jeff responds:

I think you are correct that I did not give an empathic argument and instead gave a reductio ad absurdum argument.

One quick comment: At the end of your message, you refer to “an aggressively outspoken atheist intent on proselytizing on behalf of atheism.” For the record, I did not intend to suggest that the BSA should welcome, with open arms, an atheist who wants to proselytize other Scouts on behalf of atheism. In fact, I would consider proselytizing of any kind within Scouting to be both inappropriate and weird.

Along the same lines, I do not think it would be appropriate for Scouts, atheist or otherwise, to mock or ridicule the religious beliefs (or lack of beliefs) of others. Reading your very interesting response has convinced me that, if atheists want to gain acceptance into the Boy Scouts, they will need to make it very clear they consider proselytizing on behalf of atheism and mockery of religious beliefs to be inappropriate and incompatible with the other points of the Scout Law.

Using the Precaution Adoption Process Model to figure out how to persuade people to wear masks against sandstorms in Iran

name:Khalil Jassempour
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

field:MPH student
date:July 9, 2012
email:kjassempour@yahoo.com
location:Iran

comment:

I live in Ahwaz in the south of Iran. In my city sandstorms became common about ten years ago. Many people breathe this polluted air.

I have observed that people don’t use masks to keep the dusty air out of their lungs.

Can we use the PAPM to survey the attitudes of people in Ahwaz toward using masks in dusty air, to help plan an intervention to persuade people to use masks?

peter responds:

The Precaution Adoption Process Model (PAPM) divides the process of deciding to take a precaution into seven distinct stages between ignorance and completed preventive action (or an established habit):

  • Unaware of the issue;
  • Aware of the issue but not personally engaged;
  • Engaged and deciding what to do;
  • Planning to act but not yet having acted;
  • Having decided not to act;
  • Acting; and
  • Maintenance (continuing to act as needed).

The PAPM is one of several contending “stage theories” of precaution adoption. Though they define and measure the stages a little differently, all stage theories assert that deciding to take a precaution isn’t a single continuum reflecting how likely someone is to take action. Rather, the precautionary decision is a series of qualitatively different transitions, with each transition influenced by a different set of factors. Categorizing people according to which stage they’re currently in can thus lead to more focused interventions aimed at helping them make the transition to the next stage.
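Since the stages are categories rather than points on a single scale, a survey can sort respondents with a handful of branching questions. Here is a minimal sketch in Python, assuming hypothetical questionnaire fields (heard_of_risk, thought_about_precaution, and so on) that a real survey would have to operationalize:

    from enum import Enum, auto

    class Stage(Enum):
        UNAWARE = auto()
        UNENGAGED = auto()           # aware but not personally engaged
        DECIDING = auto()            # engaged and deciding what to do
        DECIDED_NOT_TO_ACT = auto()
        DECIDED_TO_ACT = auto()      # planning to act but not yet acting
        ACTING = auto()
        MAINTENANCE = auto()         # continuing to act as needed

    def classify(r: dict) -> Stage:
        # Each key below stands in for one hypothetical survey question.
        if not r["heard_of_risk"]:
            return Stage.UNAWARE
        if not r["thought_about_precaution"]:
            return Stage.UNENGAGED
        if r["taking_precaution_now"]:
            return Stage.MAINTENANCE if r["habitual"] else Stage.ACTING
        if r["decision"] == "will_act":
            return Stage.DECIDED_TO_ACT
        if r["decision"] == "wont_act":
            return Stage.DECIDED_NOT_TO_ACT
        return Stage.DECIDING

Tallying respondents by stage then shows which of the transitions listed next deserves the most attention.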

Each transition has its own communication strategies:

  • Getting people from unaware to aware is mostly an agenda-setting task, for which media information is well-suited.
  • Getting people who are aware to become engaged is mostly about telling them how serious the risk is.
  • Getting people who are engaged to decide to do something is more about telling them how relevant the risk is to them personally and what precautions make sense for them.
  • Getting people who have decided to act to take action is partly about reminding them (again and again, sometimes) and partly about the foot in the door – convincing them that the recommended first precaution is both easy and effective.
  • Getting people who have acted to keep on acting depends mostly on positive reinforcement.

My friend and colleague Neil Weinstein was the principal developer of the PAPM. At the time (the late 1980s and early 1990s), Neil and I were working together on research about how to get people to test their homes for radon and fix the problem if they found one. (Radon is a decay product of uranium in the soil that can accumulate in people’s homes and cause lung cancer.) Our radon studies were among the first to use the PAPM, and in the process they tested the PAPM and helped validate its usefulness. Two of those studies are on my website: “A Model of the Precaution Adoption Process: Evidence from Home Radon Testing” and “Experimental Evidence for Stages of Health Behavior Change: The Precaution Adoption Process Model Applied to Home Radon Testing.”

In the 20-odd years since then a substantial PAPM literature has developed, mostly without my further involvement. (I did collaborate on a PAPM literature review published as a book chapter in 2008 – but it’s not online.)

The question is whether the PAPM is a good tool for you to use in figuring out why people in Ahwaz don’t routinely wear masks when the sandstorms blow, so you can focus your interventions on the stages people really are in.

To the extent that you don’t already know (and can’t guess) which stages most people are in, the answer is definitely yes. It’s hard for me to imagine that anyone in Ahwaz is unaware of the sandstorm issue, but the other stages may be relevant:

  • Some people may be aware but not engaged – perhaps seeing the sand as a minor unpleasantness but not actually realizing that it threatens their health and the health of their loved ones.
  • Some people may be engaged but not sure what to do. At this stage, credible information is crucial, especially information about what kinds of masks are available and how well they work.
  • Some people may have decided to wear masks but never gotten around to it; they need reminding.
  • Some people may have worn masks from time to time but not really established the habit; they need positive reinforcement.

Your guesses will be better than mine about which stages most people are in. But for what it’s worth, I’m inclined to guess that a lot of people are in the fifth stage: They have considered wearing masks during sandstorms and pretty much decided against it.

Surgical masks are uncomfortable. (Even hospitals have a tough time persuading health care workers to keep them on.) And they can be a considerable expense. And they look geeky. More informal “masks” like the Saudi ghotra – garments that evolved in places with frequent sandstorms – don’t work quite as well but may make more sense in other ways. But clothing has personal and cultural significance, and convincing people to adopt a garment borrowed from a different culture might be every bit as difficult as convincing them to wear surgical masks! Again, you will have a much better intuition than I could possibly have on what mask-like garments would be easiest to convince the men and women of Ahwaz to adopt.

In terms of my vocabulary, I’m suggesting that your problem may not be chiefly precaution advocacy about sandstorms; it may be outrage management about masks. If so, outrage management principles may turn out more useful to you than the PAPM. For example, empathically acknowledging people’s reasons for resisting masks may matter every bit as much as finding a suitable mask-like garment to recommend. For more on this distinction – doing “precaution advocacy” via outrage management with regard to a disliked precaution – see this article on “Getting Workers to Wear PPE.” link is to a PDF file

When do safety communications backfire?

name:Sean G. Kaufman
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

field:Director of Training Programs, Center for Public
Health Preparedness and Research, Rollins School
of Public Health, Emory University
date:June 8, 2012
location:Georgia, U.S.

comment:

I am attempting to find examples of safety policies that, once put into place, actually increased risk rather than decreasing it. For example, programs that are punitive toward employees who report risks may actually reduce reporting rates – thereby increasing overall risk at work (assuming the organization actually learns and changes as a result of its incident surveillance programs).

I cannot find many examples – after searching the literature and Google – so I thought I would reach out in hopes you could direct me somewhere, or point to a couple of examples.

peter responds:

I think you’re right that safety policies can backfire. Some policies may actually do more harm than good – but that’s probably rare. More often, safety policies that do more good than harm still do some unacknowledged harm. Noticing that harm can enable organizations to take steps to mitigate it.

I’ll focus my response on communication-related examples … except for #1 below, which is too important to leave out.

number 1
In his 1988 book Searching for Safety, and elsewhere as well, the late Aaron Wildavsky wrote about the ways in which prevention interferes with preparedness and resilience. Trying to figure out what might happen and take steps to prevent it, Wildavsky thought, inevitably left organizations less able to cope with the unexpected. Wildavsky’s argument that risk resilience is a more valuable asset than risk anticipation – and that the two are in direct competition for resources – has obvious implications for governments’ safety regulation and organizations’ safety protocols.
number 2
Telling employees that safety is your “top priority” – as companies often do – can easily backfire and undermine safety. Employees know that productivity and profit are the company’s key priorities. Pretending that safety preempts productivity and profit (rather than making the case that safety helps achieve them) lacks credibility. Employees may then draw the mistaken conclusion that since you’re lying about safety being your top priority, you must not care about safety at all – and may “loyally” disregard safety procedures. I make this argument at length in my 2005 article, “Selling Safety: Business Case or Values Case.” link is to a PDF file I think the business case for safety is both more honest and more sustainable. See also this 2001 Guestbook entry, which talks about some ways organizational safety messages get undermined when employees hear a different message from middle management.
number 3
Companies often maintain signs (at the plant gate, for example) specifying how long it has been since the last recordable accident. The goal of such a sign is presumably to motivate employees to be careful in order to sustain the record. It can backfire in either or both of two ways: by motivating employees not to report accidents; and by persuading employees that accidents are rare and precautions are therefore low-priority. I urge clients to balance information about the long time since the last accident with information about the short time since the last near-miss.
number 4
The risk homeostasis hypothesis, developed by Gerald J.S. Wilde, suggests that people act so as to maintain the level of subjective risk they’re comfortable with – neither frighteningly high nor boringly and/or inefficiently low. So precautions that visibly reduce risk may lead people to change their behavior in ways that make the risk higher, back in their comfort zone. Road engineering improvements, for example, often enable people to drive faster, yielding a quicker trip rather than a safer one. But there are plenty of counterexamples where risk homeostasis doesn’t seem to be operating. Seat belts, for example, have dramatically reduced the traffic fatality rate; people didn’t simply drive faster once they were belted. I suspect homeostasis is most powerful when we’re talking about risk appetite rather than risk tolerance. Employees in dangerous occupations often like risk; that’s why they wanted to become fighter pilots (for instance) in the first place. Safety precautions may therefore be resisted because they take all the fun out of the job. With this in mind, I once advised a mining industry client to take employees bungee jumping on weekends. For similar reasons, safety improvements may work best when they’re invisible, or even when they make the situation look/feel more dangerous than it actually is.
number 5
Many companies have too many safety SOPs, some of which are trivial or silly, or even unwise under some circumstances. This can lead employees to decide that safety SOPs are (generically) trivial, silly, or unwise. I have sometimes advised clients to distinguish different levels of coerciveness/importance among safety SOPs: Batch A is obligatory, no exceptions, you’re in trouble if you’re caught violating one; Batch B is obligatory but we know there are exceptions, and here’s the procedure for getting your situation exempted; Batch C is recommended and still SOP, but you can use your own judgment and grant yourself an exemption if you think there’s a good reason to do so. The existence of Batches B and C, I think, increases compliance with Batch A because it clarifies that management realizes not all SOPs are created equal.

Risk communication aspects of the debate over H5N1 transmission studies

name:Joe
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

field:Biotech company EHS manager
date:May 17, 2012
location:California, U.S.

comment:

Peter, while I have the greatest respect for you and have been to trainings by yourself and Vincent Covello in years past, I have to tell you that this article is very flawed both in the “facts” used and in the conclusions.

About five years ago I did the work to prepare my global organization for responding to a potential pandemic based on my concerns over H5N1, SARS and a few other pathogens. At that time I tried to get as complete an understanding of the state of the science that I could achieve with my background in chemical engineering and environmental science. Within a few years the public discussion caught up with the risk management trade’s discussion on pandemics, and we saw serious resources put into preparedness planning at the international, national and local levels. We have relatively robust systems in place and are working to improve the major deficiencies in prediction, vaccination, and control.

The research and the researchers you are unhappy with have played an integral part in getting the answers risk managers like me need to make good decisions about these preparations. Their emotional response to criticism from a faceless, unknown committee is very understandable to me. Their efforts to remain professional are to be praised, and their lack of sophistication regarding risk communication seems to be something they are working to rectify.

I would argue that the Decide – Announce – Defend strategy you refer to better describes the NSABB’s approach than the scientists’ approach. And the target of the NSABB decision was not a few papers but the fundamental process of scientific communication among peers: publication for peer review.

Scientists value this in a way equivalent to the First Amendment of the U.S. Constitution. To me their outrage is appropriate and reasonably targeted.

With regard to the facts:

number 1
We know the 60% number is not accurate in the way you used it. It is the percentage of those who sought hospital care that died. The number of people infected is certainly much larger, and there are good studies indicating that this flu [H5N1] is currently in or near the typical flu case fatality range of 0.1%–2% (see the illustrative calculation after this list).
number 2
This flu strain is not exceptional in its lethality, as you characterized it. Ebola, Nipah virus, HIV, Hepatitis B, SARS, and many other pathogens are very similar to it or worse than it in virulence.
number 3
There is little mystery in how the four BioSafety Levels of protection are determined for a given pathogen:

1 = not a human pathogen
2 = human pathogen that is not airborne-transmissible (e.g. HIV, Hep B)
3 = human pathogen with life-threatening disease that is airborne and for which we have treatment or vaccine (e.g. flu, yellow fever)
4 = human fatal pathogen with possible airborne transmission and no known treatment

The BSL system has been incredibly successful at stopping lab-acquired infections and loss of containment, notwithstanding unproven speculation about the source of H1N1 (note that it has an excellent reservoir in pigs, where it experiences little evolutionary pressure).

number 4
These experiments are not as unusual as you suggest. The value in publishing them is in accelerating our ability to develop an effective treatment or vaccine, thus building our defenses against a natural or unnatural outbreak. Their work is totally reproducible by any competent biologist with a supply of ferrets and virus (both readily available).
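To make the denominator point in #1 concrete, here is a toy calculation – the numbers are made up purely for illustration, not real H5N1 surveillance data:

    hospitalized_cases = 600    # lab-confirmed, hospital-ascertained cases
    deaths = 360                # deaths among those cases

    cfr = deaths / hospitalized_cases   # 0.60 -- the widely quoted "60%"
    print(f"Fatality rate among hospitalized cases: {cfr:.0%}")

    # If serosurveys found 100 mild or unnoticed infections for every
    # hospitalized case, the denominator grows and the rate collapses:
    total_infections = hospitalized_cases * 100
    ifr = deaths / total_infections     # 0.006
    print(f"Fatality rate among all the infected: {ifr:.1%}")

The same number of deaths divided by a hundredfold larger denominator lands at 0.6%, inside the 0.1%–2% range cited above.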

My view is the NSABB did not understand the history of research in this area, made a kneejerk decision to ban the details, and was caught by surprise when knowledgeable people protested the impacts of its decision on all scientists.

Warmest regards and please keep on with the business of improving our risk dialogues.

peter responds:

I have posted three commentaries on this issue so far. (This Guestbook entry will be the fourth.) Here they are in chronological order:

Summary of the controversy

These three commentaries were written a month apart and cover somewhat different ground. But all three are responses to the controversy that arose after the NSABB recommended to the U.S. government that it should recommend to the editors of two scientific journals (Science and Nature) that they edit out some methodological details before publishing two research papers they had already accepted for publication. The two papers described the first successful efforts to bioengineer the H5N1 flu virus to transmit through the air (via aerosol or droplets) between mammals. (The mammal used in both studies was the ferret.) The findings of these two studies are thought to shed light on how an H5N1 pandemic might someday be launched among humans – whether by nature, by accident, or by intention.

The NSABB was worried that unless the papers were redacted their value to terrorists and other bad guys might outweigh their value to influenza scientists and pandemic preparedness professionals. This balancing with regard to “dual-use research of concern” (DURC) was very much the sort of task for which the NSABB had been created.

Importantly, the NSABB was never authorized to censor the two papers, nor was the U.S. government. The First Amendment was never under threat here. The journal editors have always been legally free to publish what they want, regardless of what the NSABB and the U.S. government recommended. And the authors of the two papers have always been legally free to self-publish on the Web, regardless of what the journal editors decided. (I’m leaving aside a brief dispute over whether the senior author of one of the papers, who works at a lab in the Netherlands, might need a Dutch export license before publishing or even presenting his research. He objected to the requirement that he apply for the license, but applied anyway and was granted it.)

The controversy is pretty much over for now. The authors of the two papers made some revisions; the NSABB met again and decided (under considerable pressure) that the two revised papers should be published after all; one is already out and the other is expected shortly. But of course the underlying issues aren’t settled:

  • What sorts of research, if any, shouldn’t be done at all because it’s too dangerous?
  • When very dangerous research is done, what precautions should be taken to reduce the risk of accidental or intentional releases?
  • After the results are in, should the papers be freely published so everyone, even bad guys, can absorb the lessons to be learned?
  • Above all, who should make these decisions? Just scientists? Is there any appropriate role for governments? For the public?

My comments focused mostly on three complaints:

number 1
I complained that both sides in the H5N1 publication debate too often sounded more like advocates than scientists, appearing to use evidence as ammunition on behalf of predetermined positions rather than assessing evidence to see how it should affect those positions.
number 2
I complained that both sides (especially the pro-publication side) too often sounded contemptuous of public concerns (especially concerns about doing and publishing this sort of research) – pursuing one-sided efforts to “educate” the public rather than two-sided dialogue in which public concerns were genuinely and respectfully attended to. And I worried that contemptuous-sounding arguments for unfettered science displayed an arrogance that might itself lead to stricter regulation of science.
number 3
I complained most passionately that the debate over whether the two papers should be published had “hijacked” what could have been a teachable moment about the risk of an H5N1 pandemic and the need for more attention to pandemic preparedness. For months, neither side in the publication debate seemed to have much to say about what should be done to better address the risk of an H5N1 pandemic other than publish or suppress the two papers.

The outrage of scientists

Your comment makes a number of technical points that aren’t really germane to the risk communication focus of this website. But you make one very strong, very important risk communication point that bears underlining. Any restraint on the free flow of scientific publication strikes at the heart of the scientific enterprise. So when a U.S. government agency recommended that two research papers on transmission of a bioengineered H5N1 influenza strain shouldn’t be published as written, many scientists saw that recommendation as very, very threatening to their freedom.

“To me,” you write, “their outrage is appropriate and reasonably targeted.”

As someone who has earned a living for 40 years helping clients understand other people’s outrage as valid – even when it’s directed at them – I should have paid more attention all along to the outrage of the authors of the two papers, their colleagues, many virologists whether they work on flu or not, and even scientists in other disciplines. Funding and publication are the two linchpins of scientific research. Everybody gets it that governments have a right to withdraw government funding. When publication, too, is threatened by government, even in the form of a nonbinding recommendation, outrage is an understandable and perhaps even inevitable response.

And “non-binding” government recommendations can feel awfully binding. I have made a major point, here and elsewhere, that journals are legally free to publish what they want and scientists are legally free to self-publish what the journals turn down. But the career implications of publishing a paper after the U.S. government has successfully persuaded a major journal to withdraw its acceptance of that paper would be, to say the least, daunting. Whether or not that qualifies technically as “censorship,” it can certainly feel like more pressure than is consistent with scientific autonomy.

Is the entire edifice of scientific publication threatened by a single instance? The problem with slippery slope arguments is that it’s very hard to judge in advance which slopes are actually slippery and which are not. To me, as a non-scientist who has worked with government agencies on influenza communication for more than a decade, H5N1 looks uniquely threatening. So when a government advisory body that has never before recommended against publishing a scientific paper decides that two H5N1 papers should be redacted before they’re published, that looks to me like a one-off. But it’s obviously not a one-off for a scientist who writes many papers about H5N1! And anyone who sees H5N1 as just one among many dangerous pathogens, as you do, would have good reason to see the NSABB’s recommendations (the first set) as a precedent and a foot-in-the-door, not as a unique response to a unique threat.

There are other scientific venues in which the U.S. government has opposed or even prevented publication – computer encryption and decryption, for example. The H5N1 publication controversy isn’t the first such controversy, though it may be the first for influenza virologists. Interestingly, these precedents from other scientific fields have seldom been mentioned in the H5N1 controversy. Does that mean that we’re already on the slippery slope? Or does it mean that the slope isn’t so slippery after all, that periodic infringements on scientific autonomy don’t necessarily set the stage for more infringements elsewhere? The case is arguable either way.

In my own language: It’s debatable whether this controversy has posed a big or small hazard to the autonomy of science. I think it was small; you think it was big. Either way, the outrage among many scientists was substantial. Even though I recognized scientists’ outrage from the outset, I have not taken that outrage sufficiently into consideration in my writing about the controversy.

I stand by my three complaints. But I should have voiced my complaints with a lot more empathy toward the outraged scientists.

It was nonetheless unwise for pro-publication scientists to express their outrage in ways that came across as arrogant and unresponsive. Many of those who opposed publication were also outraged. Two hazards have preoccupied the critics of this research and its publication: the possibility that bad guys might be inspired and guided by the two papers to create or steal a bioengineered H5N1 virus and launch their own pandemic; and the possibility that expanded research in H5N1 biotechnology might lead to a laboratory accident that launched a pandemic. I am not qualified to assess the size of these two hazards. Maybe they’re small. But people who are outraged about them obviously think they’re substantial.

When scientists who support publication come across as cavalier, patronizing, contemptuous, or disingenuous about these two hazards, that increases the outrage of those (scientists or laypeople) who are worried about them. And the increased outrage leads to increased hazard perception. Sooner or later, the increased outrage might also lead to increased interference with scientific autonomy. That’s why I found myself worrying that scientific arrogance could threaten science more than the NSABB did.

But exactly the same dynamic works in the other direction. Scientists whose concerns about publication censorship are dismissed or ridiculed naturally get more outraged as a result, and their increased outrage increases their conviction that scientific integrity is under siege. Similarly, casual references to “arrogance” and even to “mad scientists” and undocumented assertions about biosafety lapses exacerbate the outrage of publication proponents.

Both sides in the publication debate have communicated in ways that exacerbated the other side’s outrage. And both sides have found it difficult to recognize and change how they were communicating because of their own outrage.

My critiques of each side’s communication haven’t been nearly empathic enough. In particular, I have been criticizing pro-publication scientists for outraging their critics without focusing enough on the possibility that many of them were too outraged to take advice (especially unsolicited and unempathic advice) on how to respond to their critics more respectfully.

Technical disagreements and technical ammunition

Your comment details several technical disagreements with claims or implications in the article you read. I don’t want to spend much time debating these points, since I’m not a flu expert and this isn’t a virology website.

For the record, though, let me note quickly:

  • I don’t share your confidence that “we have relatively robust systems in place” to cope with a serious flu pandemic. Of course “relatively” is relative; I grant you that in some ways our systems are better than they were. But whether you look at vaccine and antiviral manufacturing capacity or hospital surge capacity or infrastructure and supply chain resilience, I find “half-empty” a lot more compelling than “half-full.” And that’s just in the developed world.
  • I haven’t a clue what the actual case fatality rate of H5N1 is today – that is, what it would be if we could figure out how many mild, undiagnosed cases there have been and add them to the denominator. Almost all experts agree with you that the real number is lower than the current 59% figure, though not many seem to think it’s as low as you think it is. What really matters, of course, is what the CFR will turn out to be if H5N1 ever acquires efficient human-to-human transmission capability – and nobody can guesstimate that with any confidence.
  • As you say, there are other human diseases as lethal as H5N1 or worse, such as Nipah and Ebola. But flu is much more contagious than those very lethal diseases. What’s scary – almost uniquely scary – is the specter of a flu strain that is simultaneously as lethal as the World Health Organization says H5N1 is now (that debatable 59% figure again) and as contagious as flu is usually (striking more than 10% of the globe’s population each year). This is the H5N1 disaster scenario: A typically contagious but unprecedentedly lethal strain of influenza attacks a worldwide population with virtually no natural immunity and no vaccine stockpile. Nobody can say how likely that scenario is.
  • Regarding laboratory safety, two things are obviously both true: (a) There hasn’t been a major human disease outbreak that provably started in a lab within the BSL system; and (b) There have been a number of infections that started in labs within the BSL system and had limited spread outside the lab. Assessing lab safety is difficult in part because so much information about lab accidents is suppressed. But even if we had good data, we’d have trouble deciding whether frequent safety infractions that do little or no harm should be seen as evidence of resilience and defense-in-depth or warning signs of a disaster that hasn’t quite happened yet.
  • I’m completely unqualified to judge whether the value of the two studies to those trying to prevent or prepare for a pandemic is greater or less than their value to someone who might want to launch one. This is obviously an important question in deciding whether the papers should be published (or should have been published). Expert opinions differ. I have none of my own. And the information available for me to consider is unbalanced: Like other laypeople, I have much greater access to the data and opinions of public health experts than to the data and opinions of bio-warfare experts.

Two clusters of technical questions underlie this controversy.

The first cluster focuses on the danger posed by H5N1. How likely is it to acquire the ability to transmit easily from human to human? If it did acquire that ability, how deadly would it be likely to be to the humans who caught it? How contagious would it be? Would it probably burn itself out or would it probably go pandemic? How long would it take us to mass-produce a vaccine against it? How effective would the vaccine be in various subgroups? How effective would antivirals be in the meantime? How well would supply chains and social institutions stand up to the stress?

The second cluster focuses on the impact of the two papers. Would they make a devastating H5N1 pandemic less likely (by helping guide surveillance, for example)? Or would they make a devastating H5N1 pandemic more likely (presumably by cluing in bad guys or worsening the odds of a lab accident)?

Of course the second cluster doesn’t matter much unless you lean toward a worrisome answer to the first cluster. The impact of the papers is an important question only if H5N1 is a scary virus. That didn’t keep some proponents of publication from asserting two incompatible propositions: that H5N1 isn’t an especially dangerous virus so we don’t have to worry much about lab accidents or bad guys in connection with the two papers; and that it’s crucial to get the two papers published because they can help avert a potentially horrific H5N1 pandemic.

One of my main critiques of both sides in the controversy, in fact, was their tendency to use any argument they could find to support their position, seemingly regardless of whether that argument was compatible with their other arguments – and at least sometimes regardless of whether they actually believed it.

Consider for example these three narrow technical questions:

  • How useful are ferrets as an animal model for predicting human flu transmission?
  • How many mild or asymptomatic cases of H5N1 have there been that never got diagnosed?
  • How likely is it that the subtype of H1N1 that disappeared in the 1950s and then reappeared in 1977 (and circulated until 2009) was released from a laboratory that had samples from the 1950s?

These three questions have literally nothing to do with each other except this: They were all usable as ammunition for or against publication of the two papers.

If you were pro-publication, it would help your case to assert:

  • that ferrets are an unreliable model (so efficient ferret-to-ferret transmission in the lab doesn’t necessarily mean the strain could launch a human pandemic);
  • that there have been lots of undetected cases of H5N1 (so the disease is far less lethal than the World Health Organization’s 59% figure implies); and
  • that the 1977 strain didn’t result from a lab accident (so there’s no precedent of a lab-related global influenza outbreak).

If you were anti-publication, on the other hand, it would help your case to assert:

  • that ferrets are a good model (so we have probably created a pandemic flu virus in the lab);
  • that there haven’t been very many undetected human H5N1 cases (so the disease is unprecedentedly lethal); and
  • that the 1977 strain probably came from a Russian or Chinese lab (so we’ve seen at least one lab accident before that led to a global flu outbreak).

What bothers me is how seldom I have run across an expert with mixed opinions, an expert whose position on all three technical questions (and plenty of others) wasn’t predictable based on his or her position in the publication debate.

I can’t find many experts who said “Even though I agree that ferrets are a good animal model, here’s why I still support publication…” or “Even though I agree that ferrets are an unreliable animal model, here’s why I still oppose publication….” I can’t find many experts who said “You’re right that there haven’t been many mild cases, but I’m still pro-publication…” or “You’re right that 59% is way too high, but I’m still anti-publication….” I can’t find many experts who said “Our lab safety record is terrific, but I have to admit H1N1 in 1977 was probably a lab accident” or “We have a serious lab safety problem, but the 1977 H1N1 outbreak probably didn’t come from a lab accident….”

On issue after issue, I saw scientists choosing up sides and then marshaling their evidence. That’s how lawyers assess evidence: as ammunition they embrace or disdain depending on which side they’re on. It’s not supposed to be how scientists assess evidence. Scientists who use evidence to prove their hypotheses rather than to test them are being deceptive. If they don’t know it, then they’re being self-deceptive as well.

And when scientists communicate, they’re expected to bend over backwards to be fair. Even if all the facts deployed to advance a case are accurate, scientists aren’t supposed to leave out equally accurate facts that might lead the audience to question their conclusion. Instead of cherry-picking facts, scientists pride themselves on acknowledging the flaws in their case and the sound arguments of their adversaries.

Cherry-picking facts isn’t just bad science. It is also bad risk communication. It exacerbates mistrust and increases the outrage of opponents.

You haven’t persuaded me to stop thinking this cherry-picking strategy was unwise. But you have helped me remember that it wasn’t rooted in simple dishonesty. It wasn’t even rooted in poor strategic thinking (though it certainly was poor strategic thinking). It was rooted largely in the outrage of its perpetrators.

Ron Fouchier as a risk communicator

A good way to illustrate this evidence-as-ammunition misuse of science is to examine the public communications of Ron Fouchier, a scientist at Erasmus Medical Center in Rotterdam and the senior author of one of the two papers.

For several months in late 2011 and early 2012, Fouchier appeared to be trying to arouse interest in his study. His messaging was all about how dangerous he considered the H5N1 virus and how terrifying (but incredibly useful) he considered his own soon-to-be-published study. But as the controversy over publication grew, Fouchier became less focused on arousing interest and more focused on allaying concern. And his messaging altered to match his new goal.

I’m inferring the goals, of course. But the messaging is on the record.

The change in Fouchier’s public messaging can be dated precisely. It came on February 29, 2012, when he participated in a panel discussion sponsored by the American Society for Microbiology. Actually, the change probably dates back a little earlier, when Fouchier spoke at a February 16–17 World Health Organization meeting in Geneva. But the WHO meeting was confidential, whereas the ASM panel was (and is) on the Web.

Let’s track Fouchier’s risk communication about his own study before, during, and after his ASM presentation.

September 2011

We’ll start with September 12, 2011, when Fouchier presented his research at the Malta meeting of the European Scientific Working Group on Influenza (ESWI). There is no transcript or video of the presentation, but three science journalists covered it.

Here’s how The Influenza Times, the conference newspaper, reported Fouchier’s key result the next day:

“This virus is airborne and as efficiently transmitted as the seasonal virus,” said Fouchier. His research team found that only 5 mutations, 3 by reverse genetics and 2 by repeated transmission, were enough to produce this result. “This is very bad news, indeed,” said Fouchier.

Katherine Harmon’s story in the September 19 Scientific American paraphrased Fouchier on the key result:

It wasn't until “someone finally convinced me to do something really, really stupid,” Fouchier said, that they observed the deadly H5N1 become a viable aerosol virus. In the derided experiment, they let the virus itself evolve to gain that killer capacity. To do that, they put the mutated virus in the nose of one ferret; after that ferret got sick, they put infected material from the first ferret into the nose of a second. After repeating this 10 times, H5N1 became as easily transmissible as the seasonal flu.

The third account of Fouchier’s Malta presentation was Debora MacKenzie’s article in the September 26 issue of New Scientist. MacKenzie also reported that Fouchier said the new H5N1 strain transmitted easily in ferrets. She quoted him directly: “‘The virus is transmitted as efficiently as seasonal flu,’ says Ron Fouchier.” But MacKenzie also reported something that wasn’t in the other two stories: that Fouchier had said the virus was deadly to ferrets when transmitted through the air. Here’s what she wrote:

Then the researchers gave the virus from the sick ferrets to more ferrets – a standard technique for making pathogens adapt to an animal. They repeated this 10 times, using stringent containment. The tenth round of ferrets shed an H5N1 strain that spread to ferrets in separate cages – and killed them.

These two claims – that the new strain transmitted through the air as easily as seasonal flu and that it killed ferrets when thus transmitted – were undisputed for several months, until Fouchier himself disputed them in February 2012. Back in September 2011, neither Fouchier nor anyone who heard him speak at the Malta conference challenged the accounts of his presentation in The Influenza Times, Scientific American, and New Scientist.

Nobody affirmed the three accounts either. But that’s hardly surprising. Reading an erroneous news story about a presentation you heard might prompt you to post a comment. You’re not likely to post one pointing out, “Yes, that’s what I heard too.”

November 2011

In October, the U.S. government asked the NSABB to consider whether Fouchier’s paper and one other should be suppressed or redacted. Although the NSABB’s recommendation to redact wasn’t announced until December, the issue was heating up well before then.

On November 22, Science sent reporter Martin Enserink to Rotterdam to interview Fouchier for a story on its news site, “ScienceInsider.” Enserink’s November 23 story called Fouchier’s bioengineered strain “a man-made flu virus that could change world history if it were ever set free.” It went on:

In a 17th floor office in the same building, virologist Ron Fouchier of Erasmus Medical Center calmly explains why his team created what he says is “probably one of the most dangerous viruses you can make” – and why he wants to publish a paper describing how they did it. Fouchier is also bracing for a media storm. After he talked to ScienceInsider yesterday, he had an appointment with an institutional press officer to chart a communication strategy.

In Enserink’s story, Fouchier was eloquent about the importance of his study in revealing that a catastrophic H5N1 pandemic is possible. He said nothing that suggested a desire to back off the earlier reports that his virus was easily transmissible and deadly, that it could kill a ferret when a nearby ferret coughed.

Could all four science journalists have misheard and/or misquoted Fouchier? I suppose it’s conceivable. Then what about his own institution? Between November 27 and November 29, four separate documents were posted on the Erasmus Medical Center website: a news release and a Q&A, each of them in both Dutch and English. The timing suggests these documents were probably outputs of Fouchier’s November 22 meeting with his press person.

The news release began (after a boldface summary) with this sentence: “Of the 600 people who have to date been infected with the H5N1 virus worldwide, 60 per cent have died.” Nothing later in the release pointed out (as you do in your comment) that this figure is based on confirmed cases only, and omits from its denominator mild or asymptomatic cases that never got diagnosed. Fouchier is one of many flu scientists who sometimes criticize others for using the 59% (or 60%) figure without qualifiers and sometimes use it without qualifiers themselves … depending on whether they’re trying to tamp down H5N1 concern or trying to arouse it.
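
To make the denominator point concrete, here is a minimal arithmetic sketch (in Python, purely illustrative – not anyone’s actual analysis). The 600 confirmed cases and the 60% figure come from the Erasmus release quoted above; the undetected-case counts are hypothetical, chosen only to show how fast the apparent case fatality rate falls as undiagnosed mild cases are added to the denominator.

    # Minimal sketch of the CFR denominator problem.
    # Confirmed figures are from the Erasmus MC release quoted above;
    # the undetected-case counts below are hypothetical illustrations.
    confirmed_cases = 600
    deaths = 360  # 60% of confirmed cases

    for undetected in (0, 1000, 5000, 20000):
        total_cases = confirmed_cases + undetected
        cfr = deaths / total_cases
        print(f"undetected mild cases: {undetected:6d} -> apparent CFR: {cfr:.1%}")

With zero undetected cases the rate is the familiar 60%; with 5,000 hypothetical undetected cases it drops below 7%. The point is not any particular number, but how sensitive the headline figure is to an unknown denominator.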

The Erasmus Medical Center release also reiterated Fouchier’s claim that his virus transmits easily in ferrets and implied – without quite saying – that it transmits easily in humans as well:

Scientists worldwide have been concerned with the question whether the [H5N1] virus could change into a virus that can spread among humans. “We have discovered that this is indeed possible, and more easily than previously thought,” says Ron Fouchier, researcher at Erasmus MC. “In the laboratory, it was possible to change H5N1 into an aerosol transmissible virus that can easily be rapidly spread through the air. This process could also take place in a natural setting.”

[Screen shot: Erasmus Medical Center press release about the Fouchier study, November 28, 2011 (English version), last accessed May 16, 2012 – http://www.erasmusmc.nl/perskamer/archief/2011/3502352/?lang=en]

The accompanying Q&A offered an even clearer version of the news release’s extraordinary implication (untested, thankfully) that Fouchier’s virus can spread among humans. The very first sentence read: “Erasmus MC researchers have discovered that the avian influenza virus spreads more easily among humans than previously thought.”

[Screen shot: Erasmus Medical Center FAQ about the Fouchier study (English version), last accessed May 16, 2012 – http://www.erasmusmc.nl/perskamer/faq-vogelgriep/?lang=en]

My wife and colleague Jody Lanard and I originally thought that this might be an error – that Fouchier’s press person might have carelessly slid from the fact that transmission among ferrets raises concern about possible human transmission to the implication that transmission among humans has been proved. So in January Jody emailed the Erasmus website, the Erasmus Medical Center press office, and finally Fouchier himself, suggesting that this sentence should be changed. She didn’t get an answer, and it hasn’t been changed.

We also wondered if “spreads more easily among humans than previously thought” might have been a Dutch-to-English translation error. So Jody looked at the Dutch version of the FAQ, which reads: “Onderzoekers van het Erasmus MC hebben ontdekt dat het vogelgriepvirus zich gemakkelijker onder mensen kan gaan verspreiden dan tot nu toe gedacht.”

“Onder mensen kan gaan verspreiden” means “can be spread among humans.” It was not a translation error.

Neither the November 28 news release nor the accompanying Q&A said anything about whether the ferrets on the receiving end of Fouchier’s aerosol transmission experiment died. Two months earlier, Debora MacKenzie had written in New Scientist that the ferrets died. That had become a widespread impression about Fouchier’s research. Here was a perfect opportunity to say clearly whether or not aerosol transmission of Fouchier’s mutated H5N1 was deadly to ferrets. Fouchier and the Erasmus Medical Center press office did not take that opportunity.

January 2012

Flash forward two months to January 20, 2012, when Jeffrey Kofman’s story on the controversy was posted on the ABC News website. (The story was scheduled to air on “World News with Diane Sawyer” on January 20, but it got preempted by other news and didn’t actually air until February 20.) Datelined Rotterdam and headlined, “Researchers Pause Work on Bird Flu That Could Kill Hundreds of Millions,” the story contained these two paragraphs:

ABC News was given an exclusive inside look at some of the testing facilities the Rotterdam researchers used. With Fouchier as our guide, we donned protective clothing and face masks and passed through three levels of security to see the ferrets he uses for testing.

Fouchier explained how his lab assistants exposed the ferrets to the altered virus and placed unexposed ferrets in cages nearby. All 40 ferrets died.

As in the case of Debora MacKenzie’s September 2011 story, neither Fouchier nor anyone else from Erasmus Medical Center disputed the ABC News report that Fouchier killed ferrets via aerosol transmission. As of May 16, 2012, the story is still on the ABC News website. It has 23 comments, all from January 20-23. Most of them focus on the pros and cons of the research; none of them challenges Kofman’s reporting.

Also on January 20, Science reporter Martin Enserink interviewed Fouchier about a 60-day research moratorium declared that day by Fouchier and 38 other flu researchers. Fouchier told Enserink that he was in touch “on a daily basis” with Yoshihiro Kawaoka of the University of Wisconsin, the senior author of the other article the NSABB had recommended redacting.

Perhaps those daily conversations had something to do with Kawaoka’s decision to speak publicly about his own research for the first time. In a January 25 article in Nature, Kawaoka emphasized that his virus was not deadly to ferrets:

Our results also show that not all transmissible H5 HA-possessing viruses are lethal. In ferrets, our mutant H5 HA/2009 virus was no more pathogenic than the pandemic 2009 virus – it did not kill any of the infected animals. And, importantly, current vaccines and antiviral compounds are effective against it.

In its coverage of Kawaoka’s article, Science reviewed the history of the controversy, contrasting Kawaoka’s research to Fouchier’s as follows:

Fouchier, who has discussed his work at scientific meetings and with the media, concocted a transmissible H5N1 in ferrets by both manipulating viral genes and repeatedly passaging the virus through the animals to help it adapt to that host. This virus was highly lethal.

That paragraph didn’t directly state that Fouchier’s virus was lethal to ferrets via ferret-to-ferret respiratory transmission. But the reporter, Jon Cohen, had that impression – like virtually all coverage of Fouchier’s studies until a month later – and repeatedly conveyed it during his months of covering the controversy.

February 2012

On February 6, New Scientist ran Debora MacKenzie’s summary of the controversy, in which she repeated the no-longer-new “facts” that Fouchier had created a virus that could “spread through the air like ordinary flu, while staying just as lethal.” Again, neither Fouchier nor anyone else disputed these claims.

Only people who were present know exactly what Fouchier said at the February 16-17 World Health Organization Geneva meeting. But in the days that followed, rumors started circulating among flu cognoscenti that Fouchier had given a presentation that significantly altered participants’ understanding of his work. Two weeks later, one person who attended the WHO meeting, the NIH’s Tony Fauci, said that Fouchier’s and Kawaoka’s presentations had provided the WHO participants with “new data on two manuscripts” and “substantially clarified … original data in one manuscript.”

“Substantial clarification” is one way to interpret what Fouchier did when he participated in the February 29 ASM panel. He said his bioengineered H5N1 strain spread among ferrets but not easily. And he said most of the ferrets that caught the virus via aerosol transmission barely got sick, and none of them died.

In his presentation, Fouchier showed charts indicating that the 2009 swine flu (H1N1) virus spreads much more efficiently in ferrets than his mutant H5N1 strain. He referred to “misperceptions” in the media that the mutated virus “would spread like wildfire,” stating that in fact the efficiency of spread “cannot be deduced from our experiments.”

“To then extrapolate that this virus would spread like wildfire in humans,” he said, “is really, really far-fetched at this stage.”

As for lethality, he said:

The second misconception is that the virus would be highly lethal if this would ever come out [of the lab]. But also here there is some facts to explain…. Now the [lab-mutated] virus that we have used does cause disease when we put it in the nose at very high titers…. But if we now look after aerosol transmission, we actually see no disease, no severe disease at all, in any of the seven animals that received virus by aerosol.

Fouchier summarized both points unambiguously in the Q&A:

These [lab-mutated] viruses do not kill ferrets if they are sneezed upon…. If anything, our data suggest that this virus spreads poorly.

At no point in the ASM panel (and at no point since then) did Fouchier indicate that what he was saying now represented any kind of change from what he had been saying all along – at Malta, in media interviews, or in his original paper.

A few days later, Mike Coston summarized Fouchier’s about-face spectacularly on his “Avian Flu Diary” blog. “If after watching the ASM video,” he wrote, “and reading this report, you aren’t thoroughly confused, you obviously aren’t paying close enough attention.”

Fouchier also went out of his way at the ASM meeting to counter his previous alarmist take on the risk of an H5N1 pandemic. No more off-the-cuff remarks about H5N1 being “probably one of the most dangerous viruses you can make,” as he had told Martin Enserink in November. Here’s what he told his ASM audience:

It’s also important to note … that when ferrets are pre-exposed to seasonal flu, they are fully protected from developing severe disease [after exposure to his H5N1 strain]. So if we compare that to humans and you all have been infected previously with seasonal flu, it would be unlikely that you would have no cross-protection against a virus like H5N1. And so very few individuals would actually develop severe disease but most of them would be protected by cross-protective immunity.

Tony Fauci of NIH was on the ASM panel with Fouchier, and used the occasion to announce that NIH had asked the NSABB to reconvene and reconsider its recommendations. Pushed to explain why, he referred to “old data that’s clarified and new data that’s juxtaposed with the old data” – which sounds to me like code for Fouchier’s new messaging.

Fauci and other NIH officials have denied that they asked the NSABB to reconsider, emphasizing that they wanted it to look at revised manuscripts, not to reassess the original papers. Although this comes across to most outsiders as a disingenuous distinction, there is no way to actually judge the matter, since outsiders will never be permitted to compare the original and the revised papers.

March–April 2012

On March 2, “ScienceInsider” ran a story by Jon Cohen and David Malakoff entitled “NSABB Members React to Request for Second Look at H5N1 Flu Studies.” It reported reactions to the ASM panel from seven (out of 22) NSABB members.

Nearly all the commenters stressed that their concern about publishing Fouchier’s paper was unabated by Fouchier’s ASM presentation. Irrespective of new or clarified data about lethality, Fouchier’s study extended the host range (to ferrets) and mode of transmission (via aerosol) of a dangerous pathogen – reason enough to think hard before publishing his methods.

But one NSABB member, Michael Imperiale of the Department of Microbiology and Immunology at the University of Michigan Medical School, went further, commenting: “What Ron [Fouchier] is saying now is not what was in the paper. We were led to believe by the paper that aerosol transmission is also lethal.”

This is the only testimony we have from someone who read the paper Science had initially accepted and got the same impression from that paper that MacKenzie got from Fouchier in Malta and Kofman got from Fouchier in Rotterdam: that aerosol transmission of Fouchier’s virus killed ferrets. We have no testimony from anyone who read the paper and got the opposite impression, the one Fouchier offered at the February 29 ASM panel. Reading the original paper could settle the matter. But outsiders will not be permitted to read the original paper.

From February 29 on, Fouchier took many opportunities to tell the world that his experiment wasn’t especially scary, that wild-type H5N1 might not be so scary either, and that any misunderstanding of his work was attributable to media misreporting.

Here, for example, is Fouchier in a March 26 “This Week in Virology” (TWiV) interview with Vincent Racaniello:

Racaniello: So Ron, the literature has a record of this and it was originally written that the virus that you ended up with after this passage in ferrets was transmissible and virulent. And a couple of weeks ago at an ASM biodefense meeting you reported that it was transmissible but not virulent. So I’m wondering if you can clarify.
Fouchier: Well, there’s a lot of quotes in the press that are simply wrong…. But what was also in the original manuscript and what I also presented in Malta is that if the ferrets receive virus by aerosol they only get sick, they don’t drop dead at all.

And here is Fouchier at an April 3–4 flu meeting of the Royal Society in London:

There’s no doubt in my mind that H5N1 does not have the supposed case fatality rate of 60%.

Fouchier’s reversals and the flu world’s response

It is important to add that Fouchier has also said many thoughtful and wise things about H5N1: about the importance of better surveillance so mutations in the direction of human-to-human transmissibility are likelier to be identified; about the need for influenza pandemic plans that address the possibility of case fatality rates higher than one or two percent; etc. The four key reversals I keep pointing to are these:

  • H5N1 in the wild has a human case fatality rate of 59% or 60% versus the real rate is much lower.
  • The scenario of a catastrophic H5N1 pandemic is credible versus that scenario is extremely low-probability.
  • Fouchier’s mutated virus transmitted easily via aerosol in ferrets versus it transmitted only with difficulty.
  • The ferrets died versus they barely got sick.

The last of these four reversals – lethality – is the most stunning. It is the hardest to understand as a misunderstanding, or even as a mere difference in spin.

There is no recording or transcript of Fouchier’s Malta presentation. And although copies presumably still exist of Fouchier’s original Science paper, the public will probably never get to read it. So we may never know for sure whether the Malta presentation and the original paper said or implied that the mutated virus was lethal via aerosol. We do know that at least one science writer (MacKenzie) and at least one NSABB member (Imperiale) got that impression – and no one has come forward to say they heard the Malta presentation or read the original paper and did not get that impression.

Similarly, we don’t know exactly what Fouchier said about lethality in his one-on-one interviews with Kofman and other journalists. We do know what Kofman reported he said, and we know that there were no interviews before ASM that led to stories reporting that the ferrets didn’t die. And we know that nobody, not even Fouchier, wrote to correct the record of published reports that Fouchier’s team had found a way to kill ferrets via aerosol transmission of H5N1. It has never been easier to add a comment to a website: “That’s not what I heard” or “That’s not what I said” or “That’s not what I meant.” There have been no such comments.

But at ASM and since ASM, Fouchier has said that most of the ferrets that caught the virus from other ferrets in his lab barely got sick, and none of them died. The only ferrets that died, he now says, had H5N1 inserted manually way down in their tracheas, virtually at the entrance to their lungs.

For a few weeks after Fouchier’s February 29 panel presentation, the tiny world of flu researchers and flubies was abuzz with rumors. Had Fouchier reconsidered his own data? Did he have new data? Had his original paper been unclear? If so, how could a paper that unclear have survived peer review? Might the paper have been “clear” but misleading, perhaps even dishonest? Was Fouchier communicating inconsistently or even irrationally, perhaps because of the pressure of controversy? Or had Fouchier simply been hyping his findings because he wanted to arouse attention, and then decided he’d better downplay his findings instead when all the attention looked like it might threaten publication?

What fascinated me even more than these questions about Fouchier’s apparent about-face was the public reaction of the flu research community. Long-term supporters of publication wanted the NSABB to reconsider in light of Fouchier’s new messages that his virus was not lethal via aerosol, and only weakly transmissible. Opponents of publication said lethality and even efficiency of transmission had never been the issue; flu viruses often become more or less lethal and more or less transmissible after adapting to a new host, they said, so what really mattered was that the two studies had expanded the range of species in which H5N1 could transmit.

Neither side said in the mainstream media that they smelled a rat – though I certainly did, and I was convinced they did too. If it wasn’t a cover-up (how do you “cover up” questions?), it was at least an airbrushing of the sequence of events and the questions they raised.

The dominant meme that arose wasn’t that Fouchier had misled everyone about his work. It was that the media and the public had misunderstood his work. To their discredit, scientists who had been equally misled mostly went along with that meme. At worst, some pointed out publicly that the original paper had been “unclear” or “confusing” and needed to be “clarified.” But few if any scientists publicly used the word “misleading,” and none came anywhere near the possibility of dishonesty.

I find it outrageous – though not really that surprising – that the flu science guild has united in defense of the reputation of one of its own. This protective response may well have been augmented by the fact that Fouchier had become the poster child for unfettered scientific publication. Scientists who wanted to advocate on behalf of publishing Fouchier’s paper would have found it awkward to criticize discrepancies in how he had described the work. Scapegoating the media for misreporting and the public for misunderstanding is an easy cheap shot.

Several virologists (and two NSABB members) have told me privately that they and many of their peers are outraged at Fouchier. But unlike the freely expressed outrage of scientists at the threat of publication censorship, the outrage of scientists at Fouchier’s miscommunications has been almost entirely suppressed.

Note: My wife and colleague Jody Lanard did much of the research for this Guestbook response.

Arousing “counter-outrage” about where your activist opponents get their funding

name:Ron
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Communications executive
date:May 9, 2012
location:Canada

comment:

Pipeline companies used to be boring, regulated utilities that diligently proposed projects, walked them through the regulatory process, and got them approved without much fanfare. But these days Canadian pipeline projects such as the Northern Gateway pipeline to Kitimat, an expansion of the Trans Mountain pipeline from Alberta to Vancouver, and the Keystone XL pipeline from Canada into the U.S. have been the center of major activist campaigns because of their connection to the “tar sands” industry, the source of heavy oil that is sometimes called “the most toxic substance on earth.”

There has been much talk about how foreign foundations such as the Tides Foundation and the Pew Charitable Trusts have been funneling hundreds of millions of dollars into Canadian ENGOs to fund major anti-oilsands and anti-pipeline campaigns without having to disclose the sources of their funding. Canadian politicians and right-wing journalists have claimed that this foreign money is being used to unfairly influence our country’s business, and that what looks like a huge grassroots movement is really a carefully orchestrated obstructionist campaign.

Some commentators even claim that U.S. vested interests, which stand to gain from slowing the development of Canada’s oilsands industry, are using the invisibility cloak available to anonymous donors to big foundations to exert huge influence on our political and regulatory systems. The most radical critics have said this amounts to a long-term, covert propaganda campaign that has used misinformation to turn the public against the Canadian energy industry.

In other words, in the name of conservationism, U.S. big business is spending big money to protect its interests without having to disclose its intent. Some hint that U.S. East Coast refining oligarchies are doing everything they can to prevent a change in the locus of the North American refining and distribution system; others suggest those who have invested big in alternative energy are successfully demonizing their competition.

Peter, here’s my question: If you were a pipeline or oilsands company, what would you do to respond to this? Is there a special category of outrage that is created by vested, undeclared business interests and a specific way to deal with it?

peter responds:

Before responding to your question, I need to acknowledge my prior connection to Vivian Krause, the person who has done the most to reveal U.S. funding of Canadian ENGOs’ anti-oilsands and anti-pipeline campaigns.

In 2006, I received an email from Vivian, then as now a self-employed writer/researcher living in Vancouver. She had run across my website the year before, and had decided that outrage management was crucial to several issues she cared about, especially salmon farming. (She had previously worked in that industry.) Vivian had drafted materials on how the Canadian aquaculture industry and the British Columbia government could better address environmental outrage about farmed salmon, and wondered if I would send her some comments.

I thought her work was wonderful, and a few months later I posted a revised version of her PowerPoint presentation on this site, where it remains.

Vivian, however, moved on. While she stayed interested in outrage management (a 2010 generic PowerPoint presentation is on her website), she became more interested in investigative reporting, digging up a great deal of information on how U.S. foundations were funding ENGO campaigns to “demarket” Canadian farmed salmon, thereby greatly benefiting the Alaskan salmon fishing industry. She was also critical of some of the technical content of those campaigns, but what most interested third parties was her evidence about funding.

More recently, she has applied the same focus to U.S. funding of Canadian opposition to oilsands and oilsands-related pipelines. And she’s gotten a lot of attention. The issue has taken off in Canada’s mainstream media, and has been pushed hard by some supporters of oilsands and pipeline development. Vivian is still in the middle of the maelstrom, but these days she has lots of allies and lots of critics.

Readers interested in knowing more about Vivian’s work should check out her website.

Sauce for the goose

I see nothing wrong with giving money to organizations and issues you support. And I see nothing wrong with publicizing who gave how much money to which organizations and issues. Of course if the names of contributors aren’t publicly available, they can’t be publicized; in that case, I see nothing wrong with pointing out who stands to benefit from a particular campaign and speculating about whether those who stand to benefit might be funding the campaign.

Both the U.S. and Canada have laws governing who’s allowed to give how much money to whom for what purposes, and whether/when the names of contributors have to be reported or can be kept secret. The pros and cons of restricting contributions are hotly debated, as are the pros and cons of permitting those contributions to be anonymous.

At least one principle should be clear: What’s sauce for the goose is sauce for the gander.

If it’s legitimate for a corporation to contribute to an industry advocacy group, and legitimate for an environmentally concerned citizen to contribute to an ENGO, then it’s legitimate for a corporation to contribute to an ENGO – whether or not the corporation has a business interest in that ENGO’s cause.

And if it’s legitimate to comment on how much money an industry advocacy group spent on a campaign, and who benefited, and where the money came from (or might have come from), then it’s legitimate to comment on the same questions with regard to an ENGO.

As far as I know, there aren’t a lot of legal objections being raised to U.S. foundation support for Canadian ENGO campaigns against oilsands and pipelines (and salmon farming). For the most part, at least, it looks like the U.S. foundations got the money legally and gave the money away legally, and then the Canadian ENGOs spent the money legally.

One exception: In Canada, according to Krause, “the only type of political activity that charities are allowed to engage in are those that further a charitable purpose.” Her investigations have aroused new government interest in whether any Canadian ENGOs that are organized as charities might be crossing the line into partisan political activity that doesn’t qualify as charitable.

Some Canadian ENGOs have complained about the chilling effect of hostile publicity, and implied that since their contributions from the U.S. are legal, criticizing them publicly is somehow wrong. I don’t deny that there is a chilling effect. In fact, the chilling effect is the point: Publicly criticizing legal behavior you object to is probably the best way to deter that behavior. It’s also a good way to raise questions about whether the behavior should be made illegal, or whether it might turn out to be illegal already if subjected to tougher regulatory scrutiny. Environmentalists use publicity for these purposes endlessly, effectively, and honorably. The principles are no different when they’re on the receiving end.

In fact, I think there’s a pretty good case to be made that the world especially needs that kind of public criticism of environmentalists, and of public interest causes generally. The decline of the mainstream media has decimated the ranks of investigative reporters; there are plenty of angry rants on the Web, but not so much muckraking journalism. What little investigative reporting still gets done is overwhelmingly focused on governments and corporations. Nonprofits, for the most part, get a free pass.

I have argued elsewhere that insufficient public scrutiny tends to make do-gooders simultaneously dishonest and self-righteous, whether they’re NGOs or NGO-like arms of government. See for example my 2009 speech on the need for nonprofits and public health agencies to “Trust the Public with More of the Truth” and my 2010 Guestbook entry on CDC misrepresentation of swine flu mortality data.

So I’m okay with lots of public discussion in Canada over issues like these:

  • How much money U.S. ENGOs have given to Canadian ENGOs, First Nation tribes, and others to support their opposition to oilsands and pipelines.
  • Whether all that money distorts the priorities of the Canadian recipients – that is, whether they’re fighting so hard on these issues in large measure because there’s money to subsidize the fight.
  • Whether U.S. money should play such a hefty role in the resolution of Canadian controversies.
  • Where the U.S. foundations got the money, and whether they’re passing through contributions from U.S. corporations that stand to make or lose huge sums depending on how these Canadian controversies are resolved.

The last question is the most inflammatory and the most speculative. The Machiavellian scenarios are endless. If the Keystone XL pipeline to the U.S. goes through but the Northern Gateway pipeline to British Columbia is defeated, for example, Alberta oil will have a clear path to the U.S. but not to Asia. Without Asian price competition, U.S. purchasers will be able to drive a harder bargain. So if a U.S. ENGO seems more deeply committed to defeating Northern Gateway than Keystone, commentators are entitled to wonder if it might be getting money from U.S. companies that want to buy Canadian oil cheap.

We’re all free to spend our money however we wish, as long as it’s legal. Contributions to NGOs (Canadian or American, pro-industry or anti-industry) may be motivated by intellectual conviction, or by ideology, or by outrage, or by self-interest. All these motives seem respectable to me.

If funding information is publicly available, discussing who funded what and speculating about why is legitimate. If funding information isn’t available, speculating about who might have funded what is also legitimate. If such speculation leads to greater transparency about funding sources, that’s fine by me. And if the speculation and the transparency deter some contributions altogether – if some contributors would rather not give than be suspected or known to have given – I’m okay with that too.

The strategy of arousing “counter-outrage”

So: Vivian Krause and others dig for whatever dirt they can find about who is funding Canadian anti-oilsands and anti-pipeline activist campaigns. And “politicians and right-wing journalists” (as you put it) and other supporters of the oilsands and the pipelines use the dirt to mobilize outrage against the activists.

The battle is symmetrical. It’s legitimate for activists to try to arouse outrage against oilsands and pipelines. It’s obviously just as legitimate for the targeted industries and their supporters to try to arouse outrage – let’s call it “counter-outrage” – against the activists.

It’s legitimate. But is it smart? I don’t think so.

Perhaps I’m underestimating the power of Canadian resentment of the behemoth to the south. Perhaps Canadian citizens with green leanings and grave reservations about Canada’s fossil fuel industry would feel compelled to change sides if only they realized that a lot of the funding for Canada’s ENGOs comes from the U.S. But I doubt it. I haven’t seen many letters-to-the-editor in Canadian newspapers asserting that “I hate oilsands and oil pipelines but I hate meddling American activists more.”

Nearly all the arguments I can find that U.S. funding undermines the legitimacy of Canadian ENGOs come from people who pretty obviously would be against the ENGOs’ position whether they had U.S. funding or not.

In other words, the funding argument might be a good way for oilsands and pipeline proponents to rally supporters. But it’s not a good way to win over those who are undecided or ambivalent. And it’s certainly not a good way to reach out to those who are leaning the other way but potentially reachable.

There are moments when building solidarity really is the top-priority goal, when rallying supporters matters so much that alienating neutrals is a price worth paying. At those moments, working to arouse counter-outrage makes sense.

But not usually. In general, I think, what I am calling “counter-outrage” – trying to mobilize outrage against the outrage-purveyors – is not sound strategy. I think public health professionals are making a mistake when they try to mobilize outrage against anti-vaccination activists, for example. And I think oilsands and pipeline companies would be making a mistake to try to mobilize outrage against environmental activists.

It’s fighting on the activists’ turf. In most controversies between activists and industry, the activists own the moral high ground. They’re less wealthy and less profit-driven. They’re more altruistic and more ideological. It’s certainly debatable which value system ends up doing more good – but it’s not really debatable which is the do-gooder value system. When capitalists try voicing moral indignation against do-gooders, they’re fighting an uphill battle.

Similarly, I doubt that a large corporation can sustain a claim that it is the powerless oppressed victim of giant activist groups. I have worked with oil company executives who really feel that way, but I have never seen an oil company successfully convince the public that it’s David and the activists are Goliath.

Questioning the validity of economic ties to the U.S. is also a tough thing for most Canadian corporate executives to do with a straight face. Not all Canadian oil companies are the Canadian arms of U.S.-based multinationals. But nearly all have financial ties to the U.S. that run at least as deep as those of Canadian ENGOs. Do they really want to start a fight over who’s authentically Canadian and who’s a marionette dancing on strings made in the U.S.A.?

Nor are most of my industry clients in a position to criticize anyone else for making or receiving anonymous contributions to advocacy causes.

I might consider recommending a counter-outrage strategy to an industry client that had unequivocal evidence of inarguably dishonorable behavior on the part of an NGO – behavior so offensive it made the worst things the company had ever done, the worst things it was even accused of doing, look benign by comparison. And even then I’d hesitate. The public is quite capable of punishing the NGO for its misbehavior without thinking any better of the company that ratted it out.

If third parties want to raise hell about foreign funding of Canadian environmental groups, Canadian industry shouldn’t try to stop them from raising hell. But it shouldn’t join them in raising hell either.

If asked about the funding issue, I’d pivot on it: “We’ve read the evidence that much of the funding for the environmental groups that oppose us has come from foundations in the U.S. While some of our supporters may be outraged at this foreign funding, we think it’s really a side issue. The main question isn’t the source of their money; it’s the validity of their arguments. Let’s keep the focus on whether what we do is good or bad for Canada and the world.”

Counter-outrage is good for morale. Its internal appeal makes it potentially useful for rousing the troops. But its internal appeal is also dangerously seductive. Playing offense is more fun than playing defense. It’s a relief to be the outraged victim for a change instead of the accused perp. Since companies often feel genuine outrage at their critics, any excuse to voice their outrage is very tempting.

But counter-outrage rarely rings true to the public and rarely wins the day.

Good for Vivian Krause for raising the issue of U.S. foundation funding of Canadian ENGOs. Following the money is always a good idea, whether it’s ENGO money or corporate money. But while the funding issue is legitimate, I believe it is the wrong issue for Canadian oil companies and pipeline companies to emphasize.

Whenever a big company is under attack, it has four strategic options. In order from most to least fruitful, they are:

  • Acknowledge and improve
  • Defend
  • Hide
  • Counterattack

Internal morale aside, the most productive approach is almost always to address stakeholder concerns and try to reduce stakeholder outrage: to acknowledge your organization’s prior misbehaviors and current problems, find ways to improve, give your critics credit for the improvements, etc. Defending against the attack is less productive than this acknowledge-and-improve approach. Hiding and hoping the controversy will go away is less productive than defending. And counterattack is least productive of all.

I can’t blame the oilsands and pipeline companies for getting some pleasure out of seeing the ENGOs in hot water for a change. But they would be unwise to pile on.

How should Yahoo CEO Scott Thompson have apologized for padding his résumé?

name:Frank
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Business school professor
date:May 9, 2012
location:Virginia, U.S.

comment:

I’ve been looking for examples of ethical problems I can use in class that I think students will understand and relate to. The controversy over Scott Thompson, CEO of Yahoo, caught my attention.

Thompson was accused by activist shareholder group Third Point of padding his résumé, claiming a bachelor’s degree in computer science that he didn’t actually have, in addition to the accounting degree he did have. The accusation turned out to be true, and Thompson apologized.

I would be interested in your take on his apology.

For what it’s worth, here’s what I think. Within hours of the Third Point charge he should have issued a statement like this:

First, I want to thank Third Point for identifying an error in our records and calling this problem to my attention. This is an error and it was my fault. I don’t know when this got into my bio, but at some point a number of years ago I suspect I wanted my bio to look better than it did. That was foolish of me. I firmly believe today that what counts is the work you do, not the major on your college diploma.

But it was a mistake and I was responsible. Regardless of my contract, I believe a CEO should serve only with the full support of his board of directors, and if this error is of material concern to the board, then I am prepared to resign. However, I believe the board hired me because of what I have accomplished after college and not because of courses I took in college.

Finally, if Third Point is not happy with the way Yahoo is being managed, then I suggest this may not be the right company for it to invest in.

What would you have advised Thompson to say?

peter responds:

Usually when my clients need to apologize, they have done something that did actual harm to actual victims. Under those circumstances, the basic steps toward forgiveness are as follows:

  1. Admit you did it.
  2. Give people a chance to berate you.
  3. Say you’re sorry – making sure to express regret, show sympathy for those who were harmed, and above all take responsibility.
  4. Explain why it happened – were you stupid, evil, or what?
  5. Make it right if you can – compensate your victims and take steps so it won’t happen again.
  6. Do a penance, a public humiliation that rubs your nose in your misdeed.

Order matters. People don’t pay much attention to an apology that preempts their need to berate you first. And they don’t respond well to offers of compensation before you have apologized.

The situation here is a little different. Scott Thompson didn’t exactly hurt anybody by padding his résumé, so there’s not much need to make it right (#5), other than correcting his résumé. But he still needs to admit what he did, sit still for the criticism, say he’s sorry, explain why it happened, and do a penance.

Thompson is stuck at step #1. He hasn’t really admitted he did it (intentionally padded his résumé) yet.

The article you cite quotes a memo from Thompson to Yahoo employees:

I want you to know how deeply I regret how this issue has affected the company and all of you…. We have all been working very hard to move the company forward, and this has had the opposite effect. For that, I take full responsibility, and I want to apologize to you.

As the article points out, “Thompson’s memo to Yahoo’s staff included no explanation for how the mistake happened. His apology was solely for the impact the scandal has had on the company, not for the act itself.”

Worse still, Yahoo’s statement, as quoted in the article, claimed it was all an “inadvertent error.” I think any apology that calls Thompson’s résumé-padding an error is a non-apology. I find it impossible to believe that he didn’t know what his degrees were in or that he didn’t know what his résumé said.

Your proposed apology also has this fatal defect.

I have two other problems with your draft:

  • It focuses disingenuously on whether Thompson’s lack of a computer science degree affects his job performance. That’s not the issue. The issue is his lack of integrity.
  • It starts with a saccharine, insincere thank you to Third Point and ends with a gratuitous swipe at Third Point.

What Thompson should have said depends a good deal on what the truth is. I’d have proposed something like the following (I’m making up some facts as I go along):

Many years ago, trying to look more qualified on paper for jobs I thought I could handle, I added a non-existent computer science degree to my résumé. This wasn’t a mistake. It was a lie.

Often since then I have tried to figure out a way to correct the record. I couldn’t just stop saying I had that degree. I had claimed it too many times already. People would naturally have asked why I wasn’t mentioning it anymore. I would have had to admit the lie, not just stop lying. That’s what I should have done, of course, but I just wasn’t brave enough to do it.

I kept telling myself that it no longer mattered what I studied in college back in the 1980s. My job record was what mattered now. That’s true, of course. But it’s also irrelevant. Nobody is questioning whether Yahoo should have a CEO who doesn’t have a computer science degree. Most people aren’t even questioning whether Yahoo should have a CEO who told a lie when he was a young man. But many are questioning whether Yahoo should have a CEO who lacked the courage and integrity to come clean right up until yesterday when Third Point did it for me.

That’s a fair question, and I’m frankly not sure of the answer. If I decide this means I can’t do my job properly, I will resign. If the Yahoo Board decides this means I can’t do my job properly, it will fire me. Much depends on whether our employees and our customers are prepared to forgive me. Even more depends on whether they are prepared to trust me. If they’re not, I will have no one to blame but myself.

Reactions?

Frank responds:

Frankly, I like your version better, but I suspect it requires a degree of candor that a $26 million CEO is unlikely to muster. I guess I was trying to move the marker towards truth and you’re pointing out (fairly) that once you start towards truth you can’t go halfway.

I particularly like your point that the problem was not so much in making the error initially but in figuring out how to get out of it as time passed.

Why are people so politically inactive? Is it denial? Are they sheeple? What can be done?

name:“Avalanche”
field:Disillusioned change agent
date:May 4, 2012
location:Georgia, U.S.

comment:

I’ve been thinking a lot about you these past few days, while listening to some reactionary podcasts, and some economics podcasts, and some “how can we save the country” podcasts.

There’s lots of whining about how soooo many people aren’t voting, have given up on voting, aren’t interested in voting, and how can we “make” people want to vote; and then there’s a whole slew urging people to quit voting because the game is so completely rigged, and it’s merely a shiny distraction for the “children” (i.e., childish Americans) to keep them from paying attention to what’s really going on.

But what keeps coming to mind is your seesaw, and the whole idea of people being frightened into denial. I think many folks (and maybe I’m too generous) have given up; they don't see any way to have any effect, to make any changes, to do anything to save themselves from the fast-approaching cliff edge. The “powers that be” with their loyalty to those who anointed/appointed them; our “foreign overlords” with their desires and plans (and actions!) for something other than the health and safety of America/Americans; our so-called “representatives” with their allegiance to their money-suppliers and their own careers (and not to America/Americans) – I think more and more people are realizing they really don’t have any effect on the “guardians of our future.”

I think people are in denial about our future because they do not see a way to have a preventative effect; they have a sense of hopelessness and helplessness (no action steps within their reach). I am certainly one such.

But I have friends and associates who are still trying valiantly to “save America,” to “awaken the sheep” (which, to me, only gets you awakened sheep, not the necessary wolves (or even sheepdogs) that might have a salutary effect). They hope that by “educating” the sheep they can somehow “turn the flock” away from the disastrous cliff edge. (To change the metaphor, my late husband used to say: “no cowboys without cattle.” You can’t stampede wolves (or sheepdogs!), and if you have “herd” animals you’ll get cowboys!)

You don’t write much about recovery from denial. You caution against raising the fear of your audience so high that they tip over into denial. (I just finished reading Gladwell’s The Tipping Point.) But you don’t (that I’ve found) talk about trying to pull people back from denial. What advice would you give to those windmill-tilter friends of mine? How do you coax people lost in denial back into the possibility of action? How much of that recovery would be risk communication and how much risk (“hazard”) management?

Does one just write off the folks who’ve gone past appropriate fear into denial?

peter responds:

You don’t have to be a political extremist who talks readily about “foreign overlords” to worry about the issues you’re raising. These issues are so big and so amorphous that there’s a risk our dialogue may end up sounding like the sorts of things college sophomores say to each other late at night in the dorm. Even in my most vainglorious moments I don’t actually think risk communication can save the world. But I’ll try to suggest ways that might work to coax at least some people back into political involvement.

It seems to me that you are raising – and sometimes conflating – three issues:

  • How to woo people back from denial.
  • Why Americans are politically uninvolved – is it denial or something else? – and what to do about it.
  • Why people act like sheep and what to do about that.

I’ll take them in that order.

How to woo people back from denial

Imagine a scale that measured the intensity of people’s interest and involvement in an issue. The various levels of interest are all arrayed on this scale, from mild to passionate. The part of the scale down near the bottom, below mild interest, is “apathy” territory, going down to a theoretical zero point of no interest at all. The area up near the top of the scale is more complicated. When someone’s interest in the issue gets very, very intense, a psychological circuit-breaker may be tripped. Now, instead of seeming very interested, he or she seems apathetic – and may actually feel apathetic on the conscious level. But unlike apathy, denial is unconsciously motivated. If I’m really “just not interested” in your issue, that’s apathy; your issue didn’t make my list. If I can’t bear to get interested (or remain interested), if the issue threatens my sense of how the world works or arouses emotions I cannot tolerate, that’s denial.

I like the circuit-breaker metaphor because it captures denial’s psychological role: to protect us from cognitive and emotional content we can’t handle. But the metaphor is misleading insofar as it implies that denial is dichotomous, that people are either “in” denial or not. People don’t usually click suddenly from intense interest to avoidance. Instead, we start to feel threatened so we get less interested.

That’s why the strategies for helping people recover from denial are pretty much the same as the strategies for keeping people out of denial. Please note: I’m not talking here about catatonia or other severe states of mind into which people sometimes retreat when reality is unbearable. Helping patients recover from these conditions is a subspecialty of psychiatry; it isn’t risk communication. But wooing people back from garden-variety issue-denial is a lot like preventing people from going into denial in the first place. It’s all about delicate adjustments in the fuzzy area where the issues that interest us start to threaten us as well.

If thinking about an issue makes people feel threatened – that is, if they’re in/near denial with regard to that issue – harping directly on the dire threat obviously isn’t a good strategy. You don’t want to say upsetting things to people who are already having trouble bearing how upset they are.

That doesn’t mean false reassurance is the answer. False reassurance should virtually never be on the communications menu between adults – and with people in denial, it would probably backfire anyway. Denial is defensive. People in denial tend toward paranoia. They’re hiding from themselves how they feel, trying to convince themselves that they’re just not interested or it really isn’t a big problem. They often project their self-deception into a generalized suspiciousness that others are lying to them. Their nose for hypocrisy is keen. So false reassurance, like dramatic warnings, can easily push them deeper into denial.

Here are some approaches that can help with people in or near denial:

  • Legitimize the threat. Don’t try to make people feel more threatened, and don’t try to pretend they’re not feeling threatened. Instead, validate how threatened they feel. You usually can’t do that directly. “You look really upset” is unacceptably intrusive. So you deflect it: “A lot of people find this hard to think about.”
  • Be a role model. There’s added value if you deflect the threat onto yourself instead of a third party: “I find this hard to think about.” Show that you are bearing the threat, even though you’re not finding it easy. The best crisis leaders don’t come across as fearless or happy; instead, they’re visibly bearing their fear and misery without going into denial. That can help the rest of us bear these feelings better ourselves.
  • Provide action opportunities. People get less stuck in denial when they have things to do. Better yet, provide choices of things to do, so you’re not just keeping people busy but keeping their minds busy too. Both things to do and things to decide help give us a sense of control; if we can’t control the outcome, at least we can control how we respond to the problem.
  • Focus on victims who need to be helped and potential victims who need to be protected. Most people are willing to tolerate more sense of threat while still staying “on task” if we’re working on behalf of those we love than if we’re working on behalf of ourselves alone.
  • If appropriate, focus also on malefactors who need to be caught and punished. Unless it escalates into out-of-control rage or is itself denied, anger and even hate work a lot like love. These intense negative emotions can help us bear more psychological threat with less resultant denial.
  • Stress determination but not necessarily optimism. Be candid – but gently candid – not just about how upsetting it is to think about the issue and keep working on it, but also about the barriers to actual progress. Instead of false optimism, set a tone of determination, rather like Winston Churchill in the dark days of early World War II: “We shall fight on the beaches….”

These are generic recommendations. But it always pays to diagnose the specific sources of denial in the specific case at hand. What exactly is psychologically threatening about this issue for these people? Then you can fine-tune the listed approaches based on your diagnosis. The bottom line: When people are feeling too threatened to keep working on the issue you want them to work on, find ways to help make the threat more bearable for them.

Why Americans are politically uninvolved

Although your comment focuses on denial, the situation that provoked the comment – the low political involvement of so many Americans – doesn’t strike me as mostly a problem of denial.

You say that many people, yourself among them, “have given up” on political change and are in denial because “they have a sense of hopelessness and helplessness.”

Hopelessness is thinking the problem probably can’t be solved; helplessness is thinking you probably can’t do much to solve it. When the problem is fixing what’s wrong with the world, or even the U.S., or even the U.S. political system or the U.S. economy, these strike me as pretty rational judgments. We need to find ways to keep trying despite our pessimism. But pessimism isn’t denial. If anybody’s in denial here, isn’t it the folks who are optimistic about hope and change?

It’s true that hopeless and helpless feelings can lead to denial. If you think you probably have cancer, for example, and feel hopeless/helpless about the prospect of remission, you may convince yourself that it’s not cancer. Or if you feel like your desire to become a lawyer or your affection for an acquaintance is likely to be thwarted, you may convince yourself that you never wanted to go to law school anyway or that you really don’t like him or her very much after all. The denial is a way of deep-sixing those hopeless/helpless feelings.

Similarly, people who have always been interested in politics sometimes suddenly announce to their friends that they’re not interested anymore. That could be denial. (It could also be that they’re not interested anymore.) Politics had come to seem hopeless; they had come to feel politically helpless; those feelings were uncomfortable. So they made an unconscious decision to “lose interest” to protect themselves from the discomfort.

But most people who don’t bother to get politically involved (or even vote) aren’t trying to convince themselves of anything. They’re not deep-sixing their hopeless/helpless feelings. Far from being unconscious, those feelings are front-and-center. When they say it’s a waste of effort to try to change the system, they mean what they say. I hope they’re wrong. But they might be right.

So let’s distinguish denial (“I can’t bear what I’m feeling”) from hopelessness/helplessness (“I believe there’s no point in trying”). And then let’s distinguish both from garden-variety apathy (“I’m really not interested anymore; I’ve got my own life to worry about”). I suspect that hopelessness, helplessness, and apathy all play crucial roles in people’s reluctance to get politically involved. I suspect that denial plays a comparatively minor role.

Low political involvement isn’t a new problem in the U.S. (or elsewhere). And it hasn’t gotten bigger, at least not in recent years. I accept that there is some truth to the “bowling alone” hypothesis that we’re all busy on our computers and smart phones instead of out participating in our communities and churches and local athletic groups; by many measures American civic involvement is down. But whether you agree with Occupy Wall Street or the Tea Party or neither, you should at least notice that both are upwellings of grassroots political involvement. It has never been easier to recruit people to become involved than now, in this age of social media.

I’m not going to say anything here about how to address political apathy. For my take on dealing with apathy, see this website’s Precaution Advocacy Index.

How do you address political hopelessness/helplessness – especially when it feels pretty justified? The obvious answer is to try to bolster your audience’s sense of self-efficacy. But I think that may be – in part, anyway – the wrong answer.

“Low self-efficacy” is a psychological term for what we’re calling hopelessness/helplessness. The psychology literature tends to see high self-efficacy as almost always a good thing, paying surprisingly little attention to the problem of over-confidence – that is, undue self-efficacy, a belief in oneself that exceeds one’s actual ability to get the job done. The literature focuses mostly on how to build people’s self-efficacy, especially in domain-specific ways – how to convince people they can quit smoking, learn math, etc.

According to the literature, the best way to increase people’s self-efficacy is to arrange for them to have mastery experiences; success breeds confidence. But there are other ways, most notably modeling (“if she can do it so can I”) and social support (“if he says I can do it, maybe I can”).

I don’t question any of that – but I think it’s probably close-to-irrelevant when someone’s low self-efficacy is mostly a rational response to reality. “It’s not that I lack self-confidence. I’m a pretty efficacious person in my own spheres. But the country is really screwed up and I’m just one person!”

Instead of cheerleading for efficacy, my hunch is that it will help more to get on the other side of the risk communication seesaw:

  • “This feels almost futile, doesn’t it?”
  • “At most we can only help a little – and maybe not even that!”
  • “Even if we win a battle or two, we’re probably not going to change the world.”

The myth of Sisyphus – the Greek king compelled to roll a boulder uphill forever – is powerful precisely because so much effort really is Sisyphean. It can’t be good risk communication strategy to try to convince people that it isn’t … or, worse, that they should feel like it isn’t whether it is or not.

And yet things do change. Human nature and the human condition don’t change, but the specifics of life do. You can’t reach my age of 67 without noticing that lots of things you thought would never happen have happened, from the fall of the Soviet Union to the widespread acceptance of homosexuality to the breathtaking reduction in childhood infectious diseases. And you can’t reach 67 without noticing that people and nations keep falling into the same traps they fell into centuries ago.

I think it is good strategy to acknowledge and even proclaim that change and unchangeability are both characteristics of life; that it’s hard to tell which of our efforts might make a (probably small) difference and which efforts are doomed from the outset; and above all that it’s more fun (“fun” in the most serious sense) to take your best shot than to stand around watching.

Just as we can’t reliably tell when our efforts might make a difference, we also can’t reliably tell how horrific our problems are. Your comment refers to the “fast-approaching cliff edge.” I have that feeling too – but I don’t trust it. The fact that most people feel hopeless/helpless isn’t itself proof that disaster looms. Most people have always felt unable to change the course of events, and most people have always spotted a fast-approaching cliff edge or two. More often than not the cliff edge they saw (or looked away from) wasn’t as cataclysmic as they imagined.

Is the economy really in worse shape than ever before? Is the country really less governable or less well-governed than it used to be? Are these foolish questions because H5N1 or global climate change is gonna kill us all anyway? The answers are probably no and no and no. At least, many cliff edges of the past have turned out to be less close or less steep than people thought. Maybe ours are too – a point that’s probably worth making when people are in or near denial.

Why people act like sheep and what to do about that

I’m not comfortable calling people sheep or cattle (or “sheeple” – the derogatory portmanteau of “sheep” and “people” that gets a lot of use on Internet conspiracy sites). A big piece of what I do for a living is urge clients to listen harder to publics. Nobody listens to sheep.

But I agree that most people most of the time are focused on their own lives and not on the Big Issues, unless some big issue is their job or their lifework or at least their hobby. For most people, paying attention to big issues is an occasional thing.

And for most people, figuring out where they stand on big issues isn’t about reaching an independent judgment based on the data.

People’s convictions are typically grounded in four factors:

  • Reference groups – what do the people whose opinions I care about think about this issue?
  • Values – what long-term values of mine does this issue illustrate, or at least trigger?
  • Emotions – how do the various positions on this issue make me feel?
  • Precedent – what have I said or done that tells me that’s the side I’m already on?

Information? Evidence? Data? Sadly, they don’t make the list of the main determinants of our opinions. They do play a crucial role afterwards, however. We collect information as ammunition, not to figure out what we think but to tell ourselves and others why we’re right.

The most sheep-like of these four factors is of course our allegiance to reference groups. It can’t be coincidence that most people take the same position on most issues as the position their reference groups take – their friends and families, their coworkers and neighbors, the strangers they most admire. In part we pick our reference groups because we agree with them about most things. But to a far greater extent we choose to agree about most things with the people in our reference groups.

“Choose” is arguably the wrong word here. We don’t choose to agree with our reference groups as much as we find ourselves in agreement with them. This should come as no great surprise. We’re mostly reading the same columnists, watching the same TV shows, and checking the same websites. And we’re paying close attention to each other, picking up cues about what we all think.

Among other things, relying on reference groups is a shortcut. We don’t have time to study up on dozens of issues. We study up on a few – and our friends may absorb their opinions on those few mostly from us. For the rest, we absorb our opinions mostly from our friends.

Of course we may have a few opinions that diverge from reference group norms. We may display these areas of iconoclasm proudly to the group, or we may see them as shameful or dangerous secrets. Either way, if the areas of iconoclasm get too plentiful we need to find new reference groups.

For most people, iconoclasm is a pleasure (if it’s ever a pleasure) only when displayed to a group whose opinions we don’t value. Disagreeing with people whose opinions we value is painful, and we try to do it as seldom as we can. This is at least as true of people with unusual opinions as it is of people with conventional opinions. The more extreme and unpopular the viewpoint, the more intolerant its adherents are likely to be of apostasy. Those who are inclined to call everyone else sheeple are among the most insistent that their peers toe the ideological line. Anyone who spends much time surfing the Web knows that.

None of this makes us sheep or cattle or sheeple. It makes us, simply, people. We’re too busy in our lives to reach independent judgments about every issue. And we don’t want to be that independent, that autonomous, that lonely. We value our connections to other people, and adhering to reference group opinions most of the time doesn’t feel like too high a price to pay. Most of the time it doesn’t feel like a price at all; we don’t even notice we’re doing it.

What is to be done about our reliance on reference groups? Nothing.

Of the four determinants of opinions I listed – reference groups, values, emotions, and precedent – reference groups and values are exceedingly stable and resistant to change. So change efforts work on mobilizing the reference groups and values that are already there in your target audience. You appeal to people’s group allegiances and to their values. You try not to challenge either one.

It’s easier to mobilize people’s existing emotions than to change their emotions – but changing emotions is more feasible than changing reference groups or values. That’s why precaution advocacy tries to get people more upset about serious problems, while outrage management tries to get people less upset about small problems.

Precedent is the most changeable of the four factors. It’s relatively straightforward to persuade people to say or do something small that they haven’t said or done before (perhaps using reference groups, values, or emotions as your motivator). Then, having secured your foot-in-the-door, your “behavioral commitment,” you flood them with information validating what they just said or did: how wise and valuable it is, how grateful you are, above all how clearly it shows that they’re on your side! Suddenly you’re not urging them to change positions anymore; you’re urging them to stick to their (new) guns. This is what cognitive dissonance-based change strategies are all about: Get people to do/say something new that points in your direction, and then support the hell out of the new instead of challenging the old.

If you’re skillful and lucky, maybe your new supporters will bring their reference groups along with them.

What’s the job of a state health department risk communicator?

name:Eric Jens
field:Risk communicator, Georgia Department of Public Health
date:April 23, 2012
location:Georgia, U.S.

comment:

Your recent articles have been most thought-provoking, especially to me this week as I have begun my new role as Risk Communicator for the Georgia Department of Public Health. I am in the process of meeting many of the professionals within our system, and things seem to be going well.

Is there any aspect of this role that someone in my position too often overlooks or should pay more attention to?

Whenever you have time for even a short answer, I would be interested to hear it.

peter responds:

What a wonderful question to ask yourself on your first week in a new job:  What do people in your job too often neglect to do?

As you probably know, 20 years ago there almost certainly wasn’t a job called “Risk Communicator” in the Georgia Department of Public Health or any state health department. There were public information people who publicized the agency’s achievements and responded to controversies, and there were health education people who produced factual materials about various diseases and other health problems.

Despite labels like “public information” and “health education,” these communicators often functioned as advocates. The health educators advocated on behalf of behaviors the health department considered healthful (from getting vaccinated to keeping spoilable foods in the refrigerator), while the public information people advocated on behalf of the health department itself (especially when it came under attack).

Some health departments have added job titles like “Risk Communicator” without really changing what their communication professionals do. Others really have tried to rethink the job, not just rename it.

Here are some of the things I think health department risk communicators should do that they sometimes don’t realize they should do … and sometimes aren’t permitted to do:

Think explicitly about what you’re trying to accomplish.

I accept that occasionally all a communicator wants to accomplish is “information” or “education” – getting out certain facts without really caring how people evaluate those facts or what they do about them. But usually your goal is to change people’s opinions, attitudes, or behaviors in specific directions. Making that goal explicit rather than unstated (or even unconscious) can help you achieve it more effectively.

In a peculiar way, being more explicit about your communication goals also helps you be more honorable in how you achieve those goals. When communicators pretend to themselves that they’re just telling people facts, it doesn’t usually occur to them to wonder whether they’re cherry-picking the facts that help make their point and leaving out the facts that work against them. That crucial question is likelier to come to mind once we accept that there is a point we’re trying to make and there are some facts that work against us. Then we can decide consciously whether integrity requires us to include the latter facts in our communications.

In risk communication, I think three goals are paramount:

  • Sometimes you believe people are insufficiently upset about a serious risk, and your goal is to arouse more concern and thus more precaution-taking. That’s precaution advocacy.
  • Sometimes you believe people are excessively upset about a small risk, and your goal is to calm them down so they won’t take or demand precautions you consider unnecessary. That’s outrage management.
  • Sometimes you believe people are appropriately upset about a serious risk, and your goal is to help them bear how upset they are and help them choose wise rather than unwise precautions. That’s crisis communication.

Risk communication begins with an explicit decision about which of these three tasks you’re trying to accomplish.

Understand – and preach inside your agency – that outrage is principally the cause of hazard perception, not its effect.

For the most part, it is not the case that people get upset because they think a risk is serious; rather, people tend to think a risk is serious because they’re upset.

This means that if you want people to think a risk is serious and therefore take recommended precautions, you need to get them more upset about that risk. If your agency can’t stomach the word “upset,” you can settle for its wimpy cousin “concerned” – but “aware” won’t do the job. Precaution advocacy is mostly about arousing a stronger emotional response to some risk.

And if you want people to think a risk isn’t so serious and therefore stop taking (or demanding) unnecessary precautions, you need to get them less upset. Outrage management is mostly about ameliorating the strong emotional response people are already having to some risk.

Treat people’s outrage with respect, even when you think it’s greater than the hazard justifies.

Ameliorating people’s strong emotional response to a risk – their outrage – doesn’t mean ridiculing or rebutting that response. People’s outrage may be greater than the hazard justifies, but it isn’t ever random or foolish and it always deserves to be treated with respect.

In fact, one of the key jobs of health department risk communicators is persuading your technical colleagues that respecting people’s outrage is a prerequisite to ameliorating it.

Another key job is helping your technical colleagues understand where the outrage is coming from. Whenever you think people are excessively upset, always ask yourself why. And don’t settle for easy answers – “they’re technically ignorant”; “they’re being manipulated by activists”; etc.

Pay particular attention to these two possibilities:

  • Could the hazard be greater than my agency is admitting?
  • Are there things my agency is doing/saying or has done/said in the past that are exacerbating people’s outrage?

Once you think you understand where people’s outrage is coming from, you can start thinking through a respectful way to ameliorate it.

This is just as true of internal outrage as it is of external outrage. One of the most common and least recognized barriers to agencies doing good outrage management is the outrage of the agencies’ own personnel – outrage at ordinary citizens they think are responding foolishly to some risk and at critics they think are unfairly attacking them. It’s hard to be respectful and empathic about other people’s outrage when you’re drowning in your own outrage and don’t even realize it.

Really effective risk communicators don’t just figure out how best to address stakeholders’ outrage. They also find ways to identify, surface, and ameliorate their colleagues’ outrage (and their own outrage) at those stakeholders.

Don’t see yourself as the agency’s mouth. You’re also its ear.

Odds are many people in the Georgia Department of Public Health think your job is to be the department’s mouth:  They figure out what needs to be said, and you find better ways to say it.

Often it will be hard – and sometimes impossible – to persuade them that you’re not just a mouth. Whatever you do, don’t let them persuade you that you are!

One of your key jobs is telling your technical colleagues how things look to the public, and even to critics of your agency. That’s what I mean when I say you’re the agency’s ear, not just its mouth. An effective risk communicator probably spends as much time explaining the public to the agency as explaining the agency to the public.

Decades ago, public relations researchers discovered that companies that used their PR people merely as mouthpieces usually ended up with poor reputations. The most admired and successful companies were the ones that gave their PR people a seat at the policy-making table, listening to their explanations of the public’s perspective before deciding what to say … and, more important, before deciding what to do.

Public health departments have been slower to realize how crucial it is to listen to their communication professionals – and in many agencies it is the risk communication people who are struggling to make that realization happen.

The purpose of thinking like the public isn’t just to enable you to explain to your technical colleagues how the public feels. It’s also to empower you to push as hard as you dare for producing risk communication materials that respond empathically to how the public feels, instead of ignoring or ridiculing how the public feels.

Think like an investigative reporter.

Try also to think like an investigative reporter. What is your agency trying not to reveal? What questions is it trying not to answer – perhaps even trying to hide the questions themselves? What misjudgments or even misbehaviors is it trying to conceal? What grounds do critics have for mistrusting your agency, and in what ways is the agency continuing to earn that mistrust by not acknowledging its critics’ valid points?

Above all, what is it about your agency’s talking points that doesn’t make sense to you? What’s making your investigative reporter b.s. detector vibrate?

Figure out where the holes are in the story you’re being asked to tell. And then dig into them. Eventually you’ll need to decide whether and how hard to advocate for a more candid approach. Sometimes you may agree that it’s best to paper over some of those holes. Or you may lose the argument and be required to paper them over. But unless you start out thinking like an investigative reporter, you won’t even know where the holes are, and you’ll end up papering them over without being aware of them.

Think like the public; think like your agency’s critics; think like an investigative reporter. Do that every time you’re interviewing a technical colleague for a news release you have to write, and every time you’re revising a pamphlet a technical colleague has already drafted. Don’t let yourself focus too quickly on finding clearer, more effective ways to say what your agency wants said. First figure out for yourself what your agency should be saying.

Outrage in Korea about U.S. beef and mad cow disease

name:Min Won Lee
field:Student
date:April 21, 2012
email:mwmw0110@korea.ac.kt
location:South Korea

comment:

I'm a student at Korea University majoring in health science.

Nowadays I’m learning about risk and crisis communication related to mad cow disease. In 2008, South Korea had a huge demonstration against the FTA, which allows the importation of U.S. beef, including bone-in beef. And it’s still controversial.

In this case, I’d say the “hazard” is importing bone-in beef from the USA. However, I’m curious what the “outrage” is. Could I say the “outrage” is just the demonstration in 2008?

peter responds:

As you know, South Korea banned the importation of U.S. beef in 2003 after a case of mad cow disease was discovered in the U.S. An attempt to reopen the market failed in 2006 when the South Korean government found bone fragments in the U.S. beef. In 2008, the South Korean market for U.S. beef was reopened as part of the free trade agreement between the U.S. and South Korea; the U.S. made that a condition for the deal. Despite massive demonstrations in Korea, the deal stuck, and South Korea is now one of the world’s biggest importers of U.S. beef.

I don’t know whether it’s a significant hazard or not to eat beef imported to South Korea from the U.S. – though I very much doubt it.

Either way, I think I know where the outrage comes from. Among the factors at work:

  • Mistrust of the U.S. government (and perhaps also their own government) by some South Koreans, who suspect that U.S. beef exporters are less careful than they should be, less careful than the U.S. government promised, and less careful than they are with meat sold domestically. The possible presence of too much bone is one issue. The possible export of older cows (which are likelier to have mad cow disease) is a bigger one.
  • Dread at the prospect of mad cow disease. However unlikely Korean consumers are to get the disease from beef (wherever the beef comes from), the possibility of a little-understood disease that lies in wait for decades and then rots your brain is both terrifying and disgusting.
  • Anger (and perhaps shame) that the U.S. government had the clout to force the deal on the South Korean government, against the interests of the country’s domestic beef industry and against the wishes of its consumers. After the 2008 demonstration, South Korean President Lee Myung-bak apologized for failing to take public opinion into consideration when negotiating the free trade agreement … but his government didn’t back off the agreement.
  • Resentment of the contempt that many observers (in both Korea and the U.S.) showed for their resistance. One blog wrote about “Mad Korean Disease vs. Mad Cow Disease.” Others picked up on the fact that the demonstrators carried candles and labeled them “candle zombies.”

The demonstrations in 2008 were expressions of outrage by people who believed the hazard of imported U.S. beef was unacceptably high. Beyond doubt, those who were outraged thought they were outraged because of the hazard of BSE (bovine spongiform encephalopathy, the technical name for mad cow disease) from U.S. beef. That’s equally true for those who are still outraged about the agreement.

But one of the linchpins of my hazard-versus-outrage model is my claim that outrage causes hazard perception far more than hazard perception causes outrage. Even though South Korean protesters thought they were outraged because the hazard was high, I would argue that in fact they thought the hazard was high because they were outraged (for the reasons I listed above, among others).

That’s true regardless of how high the hazard was (or is). For both serious hazards and trivial ones, outrage is mostly a cause of hazard perception, and only secondarily a result of hazard perception. That’s why activists who believe a particular hazard is serious need to work to increase public outrage (a task I call precaution advocacy), while those who think that hazard is trivial need to do what they can to reduce the outrage (a task I call outrage management). Neither side gets very far if it ignores outrage and focuses instead on providing data to demonstrate that the hazard is serious or trivial.

As a student trying to understand risk and crisis communication, you might want to practice designing a precaution advocacy campaign to arouse outrage about mad cow disease in Korea or an outrage management campaign to diminish mad cow disease outrage in Korea. Better yet, do both.

Apologizing for your predecessors

name:Nicole Hunter
This guestbook entry
is categorized as:

      link to Outrage Management index

field:State government
date:April 4, 2012
email:nicolehunter1708@gmail.com
location:Australia

comment:

I am wondering what you suggest to clients who need to apologise for their predecessors’ bad decisions or misbehaviour?

I come across a few clients myself who have to wear the decisions of previous Boards, staff or even different political parties. What if the decisions they made were poor in hindsight (or even in foresight?), and now the current staff, Board or others need to explain the situation? This is especially a problem if the decision cannot be reversed.

peter responds:

Companies and government agencies have no compunctions about taking credit for their predecessors’ accomplishments. “Protecting the environment for umpty-ump years” or “proudly serving the community since nineteen-whatever,” they boast. But when it’s a predecessor’s misdeed they’re being asked to own, they want no part of it. “That was a different management.” “Those were different times.” “I was in seventh grade when that happened.”

From the perspective of your stakeholders, of course, your company or your agency is the same company or agency it was back then. Continuity is the default assumption. If you want to challenge that assumption, you have to do so wholesale, announcing “a whole new philosophy” or at least a new approach to X or Y. You can’t simply cherry-pick the prior bad decisions you want to disavow.

In fact I advise clients to go to the opposite extreme – not only embracing responsibility for prior bad decisions but making a point of the fact that they’re doing so. That enables you to acknowledge that you’re tempted to disavow your predecessor’s actions, and to preempt people’s likely assumption that that’s exactly what you’re trying to do. “I wish I could say this has nothing to do with me or anyone who’s here now. Even though it’s true that nobody here now was here then, it’s also irrelevant. We’re here now, and we’re accountable for everything our company/agency did before we got here.”

In response, some of your stakeholders may get on the other side of the risk communication seesaw and tell you it’s not your fault. That’s fine. Your position should be steadfast: It’s your organization’s fault and you’re the current representative of your organization, so of course it’s your fault.

The same analysis applies, by the way, to the current unwise decisions or misbehaviors of other tentacles of your company or agency. As I have often had occasion to point out to clients, in many cases activists have been more successful at globalization than corporations have. Say you’re running the Belgian or Peruvian arm of a multinational, and have a meeting scheduled tomorrow with a hostile activist group. Chances are the group has been in touch with activists in other parts of the world who are fighting your company on their home turf; for weeks now the groups have been cross-posting on each other’s websites and exchanging tips and tactics. “Why should we trust your promises,” someone is likely to say at tomorrow’s meeting, “considering what you did just last month in Sri Lanka!” It will help for you to know what the Sri Lanka controversy is about. It won’t help for you to complain that you’re just the company’s Belgian or Peruvian plant manager and have nothing to do with its decisions in Sri Lanka.

There’s a partial exemption for governments – not government agencies, but actual governments. Especially when one political party takes over control of the government from another, the public really does see the new government as new. In the U.S., we’re now well into the fourth year of President Obama’s term, and there are still endless debates over which current problems Obama is responsible for and which he can blame on the Bush administration.

But even here, continuity takes precedence. Obama may be able to blame Bush for some of the problems he faces, but he still has to face them. They’re his problems now. And when he’s talking to people who have suffered as a result of a prior administration’s decisions, he doesn’t get to disavow those decisions. They were the decisions of the government he now heads; he has to own them even if he would have handled things differently … and yes, sometimes he has to apologize for them.

Apologizing, by the way, doesn’t necessarily mean admitting that your predecessor did something wrong. It just means admitting that what your predecessor did worked out badly for the people you’re talking to. You know this in your own life. If someone jostles you in a crowded elevator and makes you bump into someone else, you don’t turn and say: “It’s not my fault. I was jostled!” Instead, you say you’re sorry.

Of course if your predecessor really did do something wrong, you should own up to that. But apologies are appropriate for bad outcomes regardless of good motives.

I can think of one exception, one situation where you really needn’t apologize for a predecessor’s behavior – and that’s when you took over explicitly to put a stop to that behavior. Suppose dissatisfied shareholders kicked out the old management and made you CEO, for example, or suppose the prior government was overthrown and the rebels brought you in to clean up the mess. Assuming everybody knows you represent the new regime, you’re free to say nasty things about the old regime. You still have to make right the things your predecessor mishandled. But you don’t have to apologize.

With that exception, there are two positions that won’t work for a company or agency talking about a predecessor management’s bad acts or bad decisions. You can’t ignore what the former management did as if it had nothing to do with your current situation. And you can’t simply criticize what the former management did as if it had nothing to do with your current organization. It’s your liability now: yours to acknowledge; yours to correct (if you can) and compensate for; and yours to apologize for.

That still leaves a range of possible positions that can work. Here are some of the options:

number 1

“They didn’t know then what we know now.” When talking about historical emissions and “heritage” waste sites, for example, a company can often credibly say that emissions and wastes now known to be dangerous were considered benign back in the day. Of course you shouldn’t say this if it isn’t true, if your predecessors knew or should have known that they were endangering their employees, their neighbors, and their posterity. But when it’s true, go ahead and say it – still apologetically, of course: “If only they’d realized how much harm they were doing….”

Note an implication of this position that’s likely to be more obvious to your stakeholders than to you: Given that your predecessors made ignorant errors that have turned out dangerous, it’s worth wondering what ignorant errors you might be making now that will turn out dangerous in years to come.

number 2

“They didn’t have the same values then that we have now.” This too may apply to pollution history; even if your predecessors knew the dangers of their emissions or waste products, at the time other worries were considered paramount and pollution was considered unimportant. Like past ignorance, outmoded values don’t give you a free pass on your predecessors’ behavior. You still have to own the behavior, you still have to apologize, and you still have to make amends. But you don’t have to own the outmoded values themselves; you can be simultaneously apologetic and aghast.

This is how the descendants of slave-owners should talk about slavery, for example, and how the successors (and beneficiaries) of imperialist governments should talk about their predecessors’ empires.

number 3

“They weren’t regulated then the way we are now.” Be careful not to claim #2 (your organization’s values have changed) when the truth is #3 (your organization’s constraints have changed). Even if they want to, governments can’t do things that the public will no longer tolerate, and companies can’t do things that governments or the public will no longer tolerate. Everyone is regulated by the mood of the times, the zeitgeist. Everyone needs what some of my clients have taken to calling a “social license to operate.” I urge my clients not just to point out that they face tougher constraints than their predecessors, but also to admit that they wish they didn’t – and in many cases have lobbied hard to keep the constraints as lax as possible.

Saying all this concedes at least implicitly that you’d very likely still be doing what your predecessors did if you could get away with it the way they did. That has not just the virtue of truth (when it’s true), but also the virtue of outsourcing power. It might make you feel good to say your values are more enlightened than your predecessors’ values. But it probably makes your stakeholders feel good to hear you say they have the whip hand now and your organization has reformed because it had no choice. This is what Australian mining companies should say to their aboriginal stakeholders, for example: “Our predecessors oppressed your people because they could. Nowadays we can’t, even if we wanted to.”

number 4

“I can’t imagine what they were thinking.” If a predecessor’s behavior mystifies you, say so. But don’t just say, “There’s no way we can know what motivated a decision that was made 50 years ago” – that just sounds defensive. Instead, elaborate and document your mystification: “This strikes us the same way it strikes you. It seems like an incomprehensible thing to do. Since we are responsible for the decision, and responsible for trying to make it right, we have gone back in our records to try to get a better understanding of what motivated it. And frankly, we don’t get it.”

But you need to be careful about claiming you’re mystified by a predecessor’s behavior if critics are likely to reply that they’re not in the least mystified. If you know what your critics think your predecessor had in mind, at least mention that hypothesis and say what you think of it.

number 5

“I know what they were thinking – and it’s awful!” If a predecessor’s behavior is all too comprehensible, don’t try to sound mystified. “We’ve gone back into the files and the paper trail is embarrassingly clear. The CEO at the time knew what he was doing, knew he shouldn’t do it, and did it anyway.” You’ll probably want to seek legal counsel before giving potential plaintiffs such potent ammunition – but, paradoxically, there’s a good chance your candor will actually reduce stakeholder outrage and thus the motivation to sue.

You might want to split the difference between #4 and #5. A mining client a few years ago faced a whistleblower’s revelation that a generation earlier the company had been warned by consultants that a tailings impoundment might collapse and endanger a nearby neighborhood. A lawyer’s memo still in the files had recommended releasing the information, but the site manager at the time decided to keep it secret (and keep the neighborhood unknowingly at risk) while he fixed the impoundment. When it all came out a generation later, the current management said of the prior management, in effect, “We don’t pretend to know all the factors that led to that decision, but we do know this: A facility manager who made such a decision today would be fired, period.” And then it added: “But there’s no reason why you should believe that. We realize your trust in us has taken a hit, and we will need to earn it back.”

number 6

“I might easily have made the same mistake.” Sometimes the apology-worthy thing a predecessor did wrong makes you thank your lucky stars the problem didn’t arise on your watch, because you know in your heart you might very well have done exactly the same thing. If that’s what you’re thinking, that’s what you should say. “In hindsight, it was obviously the wrong thing to do, and we owe you an apology on behalf of the prior management that did it. But I don’t want to sound superior. I’m not as sure as I wish I were that in their shoes I would have done better than they did. It looked like a tough decision at the time.”

This also works in reverse, when a well-regarded former management empathizes publicly with a bad call the new management has made. One of ex-President Bill Clinton’s finest moments came when his successor, George Bush, invaded Iraq in part because he thought Iraq’s dictator Saddam Hussein had weapons of mass destruction. When no WMDs were found and Bush was attacked for grounding a war on a false premise, Clinton pointed out repeatedly that he had seen the same intelligence data and come to the same mistaken conclusion as Bush.

All of these options, of course, are compatible with apologizing for what your predecessor did. If what your predecessor did worked out badly for the people you’re talking to, you apologize – period.

Pink slime

name:Paul Holmes
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Media
date:March 30, 2012
location:United Kingdom

comment:

Pink slime – what went wrong?

I am interested in what the industry/company could have done in advance of this crisis to insulate or prepare their business, and in what they could have done once the crisis went viral and mainstream to protect themselves.

peter responds:

Pink slime – usually referred to in the industry as “lean, finely textured beef” (LFTB) – is a beef byproduct widely added to hamburger meat. At least it was widely added until about a month ago, when the controversy went viral and supermarkets, fast-food restaurants, and school systems started abandoning the product.

LFTB is made from beef trimmings and connective tissue. The fat and meat are separated mechanically, using heat and centrifuges; then the fat is processed into tallow, and the remaining beef and connective tissue are squeezed through a tube and treated with ammonia to kill bacteria. Hamburger meat that contains any LFTB may be as much as 25% LFTB.


(Image Credit: Brian Yarvin/Getty Images)

Invented in the 1990s, LFTB was first labeled “pink slime” in 2002 by Gerald Zirnstein, then a microbiologist at the U.S. Department of Agriculture (USDA). He used the term in an email to a colleague, but the email soon became public and critics grabbed onto it for obvious reasons. Despite occasional news stories, the issue didn’t make much headway until 2011, when food activist Jamie Oliver featured it on his ABC television show, “Jamie Oliver’s Food Revolution.” By the end of 2011, McDonald’s and two other fast food chains had stopped using LFTB. The story went truly viral after a March 7, 2012 ABC News story featuring Zirnstein.

Although some have claimed that pink slime is dangerous, the USDA insists it isn’t. I’m not qualified to judge the question, but Zirnstein’s objection on ABC wasn’t health-based; he said the unlabeled use of a cheap meat byproduct is “economic fraud.” And pink slime passes muster with Doug Powell, the Kansas State University professor whose irreverent and influential food safety blog, “Barfblog,” addresses the issue at length, complaining only about the failure to label so people have a choice (a complaint with which I fervently agree).

In any case, food safety isn’t what turned the public against pink slime. Nor is economic fraud the key issue for the public. It’s disgust at the very idea of adding ammonia to highly processed meat waste and calling it hamburger – plus the disgust automatically aroused by the word “slime.” (Beyond doubt, the term “pink slime” is a brilliant piece of hostile jargon.)

Periodic outbursts of disgust

You ask what went wrong with regard to pink slime. I’m not sure anything went wrong. What happened is akin to a natural disaster. Here’s why I think so.

A lot of meat is icky. It’s icky in ways that are natural (think about excrement and bacteria) and in ways that are industrial (think about how food animals are housed and slaughtered).

People who live on farms know all that, and get used to it. People who live in cities and suburbs, where most of us live, sort-of know it too. But we’d rather not think about it. We like to pretend that meat is pristine, all the while knowing it isn’t. Our awareness is so close to the surface that “sausage factory” is a popular metaphor for places (like legislatures) where people do disgusting-but-useful things that should be done only behind closed doors.

So the meat industry has always had a choice. It could rub people’s noses in the ickiness – take school kids on slaughterhouse tours, for example. Or it could collude with the public in pretending that meat is pristine. This pretty much had to be an industry-wide decision, if not a society-wide decision; one company that came out of the closet alone would be crucified.

In the U.S., at least, the meat industry has always chosen to collude in the pretense.

An inevitable side-effect of that choice is that occasionally an especially vivid example of ickiness gets through people’s defenses, and we overreact. Our overreaction serves to protect us from further revelations; it preserves the pretense that the rest of the food supply is still pristine.

If this analysis is sound, there is literally nothing the meat industry could have done to prevent the pink slime debacle – other than coming out of the closet empathically but matter-of-factly about meat’s icky side. Coming out of the closet would also enable the industry to talk about various reductions in ickiness it has implemented (ideally giving full credit to the activists who forced the changes). It’s hard to point to improvements in a problem you’re not prepared to mention.

In short, periodic outbursts of disgust are the price the industry pays for its decision to prettify. It’s a price the industry is prepared to pay in preference to the only alternative. This time the product that got a shellacking is LFTB. Next time – and there’s sure to be a next time – it’ll be something else.

Handling the controversy

That said, I do think the meat industry and the LFTB corner of the meat industry could have handled the controversy a lot better than they did.

The industry (and USDA) responses I’ve seen run the gamut from defensive to offensive. Most of the responses are understandably but ineffectually outraged at the public’s outrage. Few if any are empathic with the public’s natural and understandable revulsion at the discovery that a significant portion of the hamburger we eat is salvaged meat waste sprayed with ammonia – perhaps a kind of meat technically (or perhaps not), but certainly not quite what we mean when we say “meat.”

Industry statements and pro-industry news commentaries are collected on a website launched by the biggest LFTB producer in the country, Beef Products, Inc. (BPI). Entitled “Get the Facts on Lean Beef Trimmings,” the website has the contentious URL “beefisbeef.com.” Much of what’s on the site really is facts, useful albeit one-sided. But too much of it is angry and unempathic. My favorite bad example is a March 23 open letter from CEO Eldon Roth, originally part of a BPI full-page ad in the Wall Street Journal. The letter’s headline captures its aggressive, unempathic tone: “‘Pink Slime’ Libel to Cost This Country Jobs.”

What would I do in Eldon Roth’s shoes?

He has basically two options. He can keep a low profile and try to hang onto as many customers as he can, hoping that the fuss will blow over without new regulations or permanent stigma and that former customers will begin to return.

Or he can make his case, empathically, respectfully, and candidly.

That would mean using the term “pink slime” a lot, sometimes even without quotation marks. Euphemisms never cut it in controversies. The only path forward is to use the uncomfortable label critics are using to talk about the uncomfortable issues critics are raising.

Tougher still, it would mean conceding that millions of people understandably found the idea of pink slime – and the television footage that went with it – seriously disgusting. I don’t suppose Roth could honestly claim that even he finds it a bit disgusting, but maybe he can find a family member to quote to that effect. The price of admission for asserting the safety and value of LFTB is acknowledging the repulsiveness of pink slime.

Roth might even go a step further and say there are a lot of things about food in general and meat in particular that many people find disgusting … or would find disgusting if they knew about them. (He has to say this carefully, so as not to sound like he’s saying, “…so why pick on us?”) Normal people tend to lose their appetites when they visit the sausage factory, he could say – and courtesy of ABC, YouTube, and the rest of the media millions have visited his particular “sausage” factory in the past month. And at least for now, they want their hamburger without pink slime.

That is of course a disaster for Roth, his employees, and his shareholders; he’s free to say so. But he needs to talk far more about people’s normal “Oh, how gross!” adjustment reactions than about his and his employees’ private disaster.

People can and usually do get over their adjustment reactions. I have a hamburger-loving relative who decides to grind her own beef (for a few weeks anyway) every time she sees a “beef-yuck” story. But you can’t hurry people through an adjustment reaction. Instead, paradoxically, empathizing with how natural the reaction is helps people get through it more quickly.

It wouldn’t help for Roth to try to generate sympathy for his company’s and his employees’ financial distress. And it certainly doesn’t help when meat industry officials try to support the beleaguered LFTB industry by dissing the hysterical public. That just makes the adjustment reaction take longer, and you can go bankrupt waiting for the customers you misperceive and mislabel as hysterical to get over it.

Similarly, it didn’t help when three meat-state governors and two lieutenant governors got together in late March to try to rescue LFTB with a media event at which they toured an LFTB plant and then – predictably – scarfed down pink-slime-containing hamburgers for the cameras. This particular exercise in mockery of the public’s concerns is often called “doing a Gummer,” after the British agriculture official who famously tried to feed his daughter hamburger at the height of Britain’s mad cow disease crisis. For some other recent Gummer examples, see our December 2011 column on “Over-Reassuring Thai Crisis Communication about the Great Flood.”


From left, Iowa Gov. Terry Branstad, Texas Gov. Rick Perry, and Nebraska Lt. Gov. Rick Sheehy eat hamburgers containing pink slime / LFTB, following a news conference in South Sioux City, Neb. Photo from CBSnews.com.

Roth should also tell us about the downsides of living without pink slime, if there are any. Will it make hamburger less nutritious, less healthy, or less safe? Will it make hamburger more expensive? Will it require raising and slaughtering significantly more cattle to replace the LFTB that will now go uneaten – and if so will that have economic or environmental impacts worth mentioning? I don’t know the answers to these questions, and Roth should be leery of overstating the societal costs of his private disaster. But he should certainly point them out … all the while forthrightly acknowledging his ickiness problem.

Finally, Roth should express his hope – not confidence, just hope – that people will be able to get past their visceral reaction to pink slime. Toward that end, he might productively acknowledge that a big piece of the public’s reaction results from having been blindsided. So a commitment to labeling (and an apology for having resisted labeling previously) might make a difference. The label needn’t say “pink slime,” but “LFTB” won’t do the job, nor will “lean, finely textured beef.” How about: “Treated with ammonia to kill bacteria”?

Supermarkets that have been persuaded to remove products with pink slime from their shelves might be persuaded to restock them alongside LFTB-free products if there were appropriate labels that allowed consumers to make their own decisions, based on cost, safety, nutrition, disgust, and whatever other factors they found meaningful. That’s the compromise recently reached (under pressure from both sides) by the Iowa-based Hy-Vee grocery chain.

Is any of this – or even all of it – likely to resuscitate pink slime? I don’t know. Maybe the product is doomed. Adjustment reactions aside, people who have been intentionally blindsided don’t forgive easily.

Coming out of the closet is a gamble. But so is the alternative strategy, keeping a low profile. One is a gamble that the public will forget the truth; the other is a gamble that the public will accept the truth. Worse than either gamble is the industry’s current strategy: counterattacking with half the truth while pretending that the other half – that lots of people find pink slime disgusting – doesn’t exist.

The widespread insistence that sources should “speak with one voice”

name:Jon
This guestbook entry
is categorized as:

      link to Crisis Communication index       link to Outrage Management index

field:Epidemiologist
date:March 16, 2012
location:Connecticut, U.S.

comment:

While taking a FEMA course on the Incident Command System, I recalled your article “Speak with One Voice: Why I Disagree.”

What has been the response to your article?

peter responds:

As far as I can tell, I haven’t made a dent.

Most of the advice I give to clients is reasonably compatible with the advice other risk communication experts give. Our clients may not take our advice, but the advice itself doesn’t deviate that much from one consultant to the next.

“Speak with one voice” is an exception. Nearly all the experts advise that sources should synchronize their messaging, internally and externally – especially in crises and controversies. Message consistency is one of the principles underlying FEMA’s Incident Command System, so I’m sure it was covered in the course you took.

I am steadfast – and almost alone – in my view that this is bad advice. My view is:

  • that people don’t freak out (even though they don’t like it) when officials or experts disagree, as long as it’s clear that the officials or experts are aware and respectful of each other’s opinions;
  • that there are benefits to allowing opinion diversity to show, especially letting the public in on the reality that everyone isn’t necessarily on the same page on every issue; and
  • that when dissenters are forced to adhere to a fake consensus, the disagreement almost invariably leaks, journalistically and psychologically, doing more damage to public confidence than if it had been candidly acknowledged from the outset.

When I present my view at seminars on crisis communication or outrage management, I get pushback chiefly on two grounds.

The first objection I typically hear is that management won’t permit anyone to express opinions that diverge even minutely from official policy. I know that’s often true. There are disagreements so fundamental that expressing them is worth some risk to your career – but that’s obviously the exception. Expressing routine disagreements isn’t worth that risk. “If your boss won’t let you explain that there’s more than one opinion on an issue, then don’t,” I tell seminar participants. “But try to let your subordinates do so when you’re the boss.” The incident commander in a crisis, of course, functions like any other boss.

The second common objection is that people in controversial or stressful situations are likely to seize on any divergence of opinion as evidence that your organization doesn’t know what it’s doing. There is some truth to this objection, I think.

Of course sometimes your organization really doesn’t know what it’s doing, and disagreements well out in a way that demonstrates that this is so. In my view it’s not a bad thing for the public to learn the truth, though I can understand why the top brass might think it is. Expressions of disagreement are especially likely to undermine your credibility and reputation – deservedly or not – when they come across as disorganized or disrespectful. When different officials insist that different positions are the official position, for example, the public rightly gets the impression that you don’t have your act together. The same is true when disagreement is expressed in a tone that sounds rebellious, surly, or contemptuous.

Sometimes there’s also a transition problem. If people are used to an agency or company that speaks with one voice, their ears will naturally prick up when someone unexpectedly departs from the party line, and they’ll start trying to figure out why.

But people quickly adjust when management stops trying to sound monolithic and matter-of-factly explains that of course many options were considered and of course there was robust debate and of course some of the “losers” in that debate still prefer the option they championed … when these realities are presented not just as natural and inevitable but as strengths that lead to better decisions.

That’s what I believe – but I’m still very much in the minority. I have examples I think prove my point (there are plenty in the article) – but so does the other side.

When I present this debate to seminar audiences, I usually conclude the discussion by pointing out that I haven’t tried to speak with one voice on the subject of speaking with one voice; instead, I have explicitly (but respectfully) disagreed with the majority view. “Did learning that risk communicators disagree on this issue make you trust risk communication as a field any less?” I ask. Nobody ever thinks it had that effect – a lesson I hope stays with some of them as they consider the pros and cons of speaking with one voice.

Let me finish with two examples, one discouraging and one that is at least partly encouraging, both drawn from my work on influenza controversies.

Together with my wife and colleague Jody Lanard, I have long been a connoisseur of the ways public health professionals distort and exaggerate the efficacy of flu vaccination in order to persuade the public to get vaccinated. (I share their goal, but I disapprove of their distortions and exaggerations.) Last year we interviewed a number of local health officials about some flu messages that had been generated on the national level and passed down – messages we considered less than honest. When we pointed out how the messages diverged from scientific evidence, some of our interviewees were surprised, even shocked and aghast.

But others gave us a verbal shrug. “Yeah, we know,” they told us, with varying degrees of candor. “But we really can’t afford to be out there alone saying something different from what CDC and HHS are saying. ‘Speak with one voice,’ you know.”

That was the discouraging example. Here’s the somewhat more encouraging one.

Since last December, flu scientists have been locked in a battle over two papers reporting successful bioengineering of the H5N1 flu virus. H5N1 (usually known as “bird flu”) is incredibly deadly to humans … but almost completely unable to transmit from human to human – which many think is the only reason why we have been spared a catastrophic H5N1 pandemic. The two research teams tinkered genetically with H5N1 to produce two new strains that are transmissible in ferrets and thus potentially transmissible in humans. The battle focuses on whether the two papers should be published with their methodologies intact, and on whether further research along the same lines should be permitted. Scientists on one side are worried about research autonomy and censorship, and excited about the possibility that continuing research could lead to breakthroughs that might help prevent a pandemic. Scientists on the other side are worried about laboratory accidents and human malevolence, fearful that continuing research could actually launch a pandemic.

It is a hard-fought battle on both sides – and it has been fought as fiercely in public as in private. To the best of my knowledge, no one on either side has suggested (at least publicly) that scientists should speak with one voice on this question.

But this second example has its discouraging elements too. In much of their H5N1 skirmishing, scientists on both sides have demonstrated nastiness toward each other and contempt for the public. The diversity of opinion hasn’t been suppressed, but it hasn’t been consistently respectful either. And while the two sides are openly making their cases – which is good in my judgment – each side has periodically tried to discipline its adherents into putting forward a united front. On H5N1 bioengineering, science seems to be trying to speak with two voices – better than one, but a far cry from the actual diversity of scientific opinion that’s out there.

Deepwater Horizon in perspective: the dynamics of blame

name: Knut
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Master’s student
date:March 6, 2012
location:Norway

comment:

I am currently writing my master’s thesis on BP and the Deepwater Horizon incident.

I was wondering if you could shed light on the following:

  • How was the crisis perceived by the stakeholders, and in what ways was BP held responsible?
  • How was BP’s crisis response adapted to #1?
  • How has the relationship between #1 and #2 affected the reputation of the company?

With many thanks for your time.

peter responds:

This is the ninth commentary on this website about the 2010 Deepwater Horizon oil spill. The other eight were all written in 2010, as the crisis was ongoing.

Now that some time has passed, I welcome the opportunity to reconsider the spill in the context of your three questions.

How did stakeholders see the crisis, and to what extent did they blame BP?

To answer this question properly I’d need to conduct a survey – or at least study the results of other people’s surveys.

But it’s pretty clear that most people in the U.S. – public and stakeholders alike – blamed BP for the Deepwater Horizon spill. Transocean, the owner of the rig, probably wasn’t well-known enough to be a satisfying villain. Halliburton, though well-known and much-disliked for other reasons, wasn’t closely enough tied to the accident. (It handled the cementing that may have contributed to the leak that led to the explosion that started the crisis.) BP was the biggest of the three companies. BP owned the oil that ended up fouling the Gulf and the shore. BP was responsible for hiring and supervising the other two companies. And BP had the principal legal liability; everybody sued BP, which then sued Transocean and Halliburton in hopes of recovering some of what it has paid out and will pay out.

Absent a smoking gun that made people feel like BP was the victim of an incompetent or dishonorable contractor, it was a foregone conclusion that Deepwater Horizon would be seen as BP’s spill.

U.S. President Barack Obama was also at risk of being blamed. In the early years of his presidency, Obama established a regulatory regime favorable to oil development in the Gulf of Mexico (perhaps trying to balance his advocacy of climate change legislation). He inherited a regulatory agency, the Minerals Management Service, that made life easy for corporate interests in the Gulf, and did little to reform the agency until after the spill. But Obama successfully passed some of the blame to the Bush administration and ducked the rest by being aggressively hostile to BP. The latter strategy might have backfired if it had been seen as scapegoating, but he got away with it.

Arguably the blame ought to be shared by the rest of the oil industry. In the wake of the Deepwater Horizon accident, several of BP’s peer companies claimed they would never have made the mistakes BP made that led to the spill, but they acknowledged that they would have been just as unprepared to cope with it.

And of course we all deserve some of the blame for wanting cheap domestic oil, and thus tacitly accepting the risks that oil exploration and development entail.

But BP got the lion’s share of the blame.

How well did BP’s crisis communications respond to stakeholder perceptions?

Abysmally.

A lot has been written about BP’s crisis communication failures in the weeks and months after the initial blowout – some of it by me. But what’s most crucial in the context of your three questions is BP’s failure to take responsibility. Since most people blamed BP for Deepwater Horizon, BP needed to do one of two things in response: either credibly explain why it wasn’t at fault or contritely acknowledge that it was. There was no way BP could do the former, and it failed miserably to do the latter.

BP did take legal responsibility. It was obvious from the outset that the company would be legally responsible; as the relevant laws are written, it was BP’s accident regardless of what other organizations might have done wrong. So BP’s lawyers, communicators, and senior executives presumably saw no downside and considerable upside to accepting legal responsibility quickly.

It did so not just quickly but impressively. BP promised early on to pay all legitimate claims, and by mid-June 2010 it had agreed to establish a $20 billion fund out of which those claims would be paid … with more to come if needed. It’s testimony to how outraged Americans were at BP that it got no perceptible credit at all for this extraordinary concession. Instead, people caviled at the qualifier, “legitimate claims,” as if it were somehow stingy of the company to imply that it might actually check to see if a claim was legitimate.

But BP failed to take moral responsibility. In late May 2010, more than a month after the April 20 blowout that precipitated the Deepwater Horizon catastrophe, BP CEO Tony Hayward tried to apologize – and stumbled badly. Here’s what he said, easily the most memorable quote coming out of the crisis:

I’m sorry. We’re sorry for the massive disruption it’s caused to their lives. There’s no one who wants this over more than I do. I’d like my life back.

Equally vivid in my mind is the memory of Hayward appearing before the U.S. House Energy and Commerce Committee in mid-June. Asked over and over again whether he could think of anything BP did or might have done that contributed or might have contributed to the catastrophe, Hayward consistently refused to go there, insisting that it was premature to try to assess blame since there were ongoing investigations. I don’t doubt that Hayward was following sound legal advice, and perhaps even sound technical advice; the blowout was less than two months in the past, after all, and there were still plenty of unanswered questions about what went wrong. But it was horrible risk communication advice. Hayward’s refusal to specify anything the company might have done wrong left an abiding sense that BP was evading its responsibility for the Deepwater Horizon explosion and spill.

Bob Dudley, Hayward’s successor as BP CEO, finally got around to apologizing for Deepwater Horizon in March 2011. He chose an oil industry conference as his venue, and began his apology by saying: “This is the first chance I have had to address such a large gathering of industry colleagues and the first thing I want to say is that I am sorry for what happened last year.” It was hard to escape the impression that he was apologizing to the industry for bringing offshore oil exploration into disrepute, not to the inhabitants of the Gulf region for damaging their ecosystem and their economy. Dudley also put a lot of stress on a promise that BP would in future be more careful to police the standards of contractors it worked with, leaving a sense that he was “apologizing” mostly for other companies’ misdeeds on BP’s watch.

Dudley’s apology was obviously an improvement on Hayward’s “I’d like my life back” – but it was far from the heartfelt acknowledgment of moral responsibility that was long overdue.

In taking legal responsibility but not moral responsibility for Deepwater Horizon, BP got the process of forgiveness exactly backwards. Forgiveness requires contrition before restitution. In fact, outraged people tend to get more outraged, not less, when a company tries to compensate them before (or instead of) apologizing; it feels like a bribe. I think that’s the main reason why BP got no credit for its $20 billion pledge. It was offering compensation without having properly apologized first.

BP once knew better. In 1990 the American Trader, a tanker carrying BP oil, had a major spill off the coast of Huntington Beach, California. BP America CEO James Ross flew to the scene, and was asked by reporters whether he considered the spill BP’s fault. Just as Deepwater Horizon wasn’t BP’s rig, American Trader wasn’t BP’s tanker, and Ross could have said, “No, we’re the victim here; that damn tanker spilled our oil!” Instead, what BP did in 1990 was the opposite of what BP did in 2010: It took moral responsibility but not legal responsibility. “Our lawyers tell us it’s not our fault,” Ross said. “But we feel like it’s our fault, and we’re going to act like it’s our fault.” (I have no source for this quote except my memory and my own prior publications; it’s best considered a paraphrase.) Whereas BP’s reputation was devastated in 2010, in 1990 the company’s reputation actually improved in the months after the American Trader spill.

Taking moral responsibility contributes to forgiveness, even without taking legal responsibility. By contrast, taking legal responsibility without taking moral responsibility has no reputational value whatever. Contrition must come before restitution.

Here’s an equally important point: Contrition must continue even after restitution is complete.

As we all know in our personal lives, saying you’re sorry isn’t something you do once and then move on. It’s something you must do again and again, until your victim allows you to move on.

Blogging recently about Whitney Houston, Bobby Brown, and Chris Brown on the website of the Canadian magazine Maclean’s, reporter Emma Teitel got it exactly right:

This latest case also highlighted the secondary sin of the Browns, the one that no one gets a free pass on, no matter how many Grammys he cops: not the abuse of a person, but the abuse of remorse. We might be able to one day forgive a celebrity for beating on a girl, that is – but not for acting like it didn’t happen. We might be able to forget – but only if he doesn’t.

Think of when a friend has done you wrong – when a person you’ve recently forgiven for something you previously thought unforgivable stops apologizing for everything he does and starts having guilt-free fun again, you begin to wonder how contrite he was to start with. In other words, even though you have forgiven him for the original sin, you can’t forgive him for forgiving himself….

What does genuine, effective remorse look like? When 22 Canadians died in the Maple Leaf Foods listeriosis outbreak in 2008, CEO Michael McCain delivered one of the best public apologies in recent history, and more importantly, he kept on delivering. A year after the outbreak, even though the company’s profits had improved since the inevitable drop, McCain took out a full page ad in three Canadian daily newspapers commemorating the anniversary of the tragedy. “On behalf of our 24,000 employees,” the ad read, “we will never forget.” What McCain understood, that the two Browns likely never will, is that the only way people can put the past behind them is if you do not. The option to forget applies only to the victim, or the audience; never the perpetrator. It’s only when this equation is satisfied, as F. Scott Fitzgerald wrote, that “forgotten is forgiven.”

To its credit, BP has continued to talk about the Deepwater Horizon blowout and oil spill; it runs lots of ads “reporting” on how the recovery is going. (After a decent hiatus, it now runs ordinary commercial ads as well.) That’s important: As Teitel points out, the rest of us can “forget” only if we can see that BP hasn’t forgotten. For the foreseeable future, BP should talk about Deepwater Horizon whenever it talks about corporate social responsibility or about issues of environment, health, and safety.

On this dimension, Exxon mishandled contrition after the 1989 Exxon Valdez spill even worse than BP mishandled contrition after Deepwater Horizon. A year or two after the Valdez spill, my family and I visited Exxon’s “Universe of Energy” pavilion at Epcot, part of Walt Disney World in Orlando, Florida. There were endless displays about the environment, but nothing about Valdez! It was an extraordinary icebreaker. Total strangers were commenting to each other, “Can you believe those bastards didn’t even mention Valdez?”

BP is doing a bit better than Exxon did; at least it’s not ignoring the spill and hoping people will forget. But it has still never properly apologized. An adequate apology has to show at least three things: regret, sympathy, and moral responsibility. You have to wish it hadn’t happened, feel bad for the victims it happened to (not for yourself), and acknowledge that you are at least in part to blame. Nearly two years after the Deepwater Horizon accident, BP has still achieved just one of the three: regret.

What has been the resulting impact on BP’s reputation?

Like your first question, this one really calls for survey data. I don’t have any of my own, and I haven’t studied most of the surveys that are available.

For years after Valdez I believed that Exxon was paying an ongoing reputational price for that spill – a price far higher than it would have paid if it had been properly contrite. Every time there’s an oil spill, I used to speculate, millions of people actively hope it’ll be an Exxon spill. Now I suspect they hope it’ll be a BP spill.

Similarly, I used to think Exxon probably had to offer higher salaries to engineers and other employees than its peer companies, a “stigma premium” to people who’d rather not have to tell disapproving friends and family that they took a job with Exxon. The alternative was to settle for less qualified new hires than it might otherwise have gotten … theoretically contributing to future accidents (among other costs). Now, I suspect, BP will have to pay a stigma premium or settle for second-best.

Please note that I have no proof for these speculations. I have shared them with lots of oil industry clients and audiences. Sometimes they tell me I’m right; sometimes they tell me I’m wrong. Odds are they don’t know either.

One difference between the two accidents is worth mentioning. After Valdez, my industry contacts often told me it was ironic that it was Exxon’s spill, because Exxon was considered a real safety leader. After Deepwater Horizon, by contrast, my industry contacts seemed unsurprised, sometimes suggesting that if something like this was going to happen to one of the majors it was pretty likely to be BP. I have no idea if this difference between Exxon’s and BP’s reputations inside the industry bears any resemblance to reality on the ground (and on the rig) – but the reputational difference is real.

Perhaps the most intriguing possible reputational impact of BP’s handling of Deepwater Horizon is the reluctance of many people to notice that the Gulf of Mexico seems to be recovering. Or let me say that more tentatively, because I have no expertise on the current and future health of Gulf ecosystems: People are reluctant to credit claims that the Gulf of Mexico is recovering.

Large numbers of Americans, I believe, are deeply committed to the view that the Deepwater Horizon accident was an irremediable environmental and public health disaster – and, moreover, that it was BP’s irremediable environmental and public health disaster, evidence of BP’s malfeasance. From the limited public opinion data I have seen, this view is held by a number of crucial stakeholder groups – especially environmentally conscious Americans who do not live or work near the Gulf Coast. It’s not just that that’s what they believe. That’s what they want to believe. They want Deepwater Horizon to have been a BP disaster. They will resist evidence to the contrary, even if the evidence might otherwise strike them as persuasive.

This is a classic outrage management problem. People who hate you want to keep on hating you. They don’t want to learn that what they hate you for didn’t turn out so bad after all (far less that it wasn’t entirely your fault in the first place). Logic says they’ll hate you less if they learn these things. “Psychologic” says they’re extremely resistant to learning these things until they hate you less.

In large measure because of BP’s failure to take moral responsibility, I believe, millions of people remain committed to blaming BP for perpetrating an irremediable disaster – and are therefore resistant to any evidence of Gulf recovery. If I’m right, then BP must find a way to help people feel entitled to keep blaming it for Deepwater Horizon even if Deepwater Horizon turns out not to have been so disastrous after all. Only by disentangling blame from disastrousness can BP open up its critics to evidence of recovery. Later, perhaps, that may open them up to blaming BP less.

Part of the problem, of course, is valid skepticism about conflicts of interest. Understanding that it would have no credibility on the topic of Gulf recovery, BP wisely set up a separate organization to fund research into impacts of the spill. But it’s still BP’s money funding the research. Does BP exercise undue influence that might contaminate the results? I don’t know. What I know is that many people assume it does, and want to assume it does – because they don’t want to believe that Deepwater Horizon wasn’t an irremediable disaster.

The animus against BP and the widespread desire not to believe that the Gulf is recovering well also affects researchers. I have heard credible stories about independent academics doing studies of spill impact, finding much less damage than they expected, and deciding not to publish, lest their colleagues think they were shilling for BP.

But there are ways to insulate BP from the research it is funding, ways that could make the results (however they turn out) credible to a skeptical but open-minded observer. The more fundamental problem is how to convert people who are determined to see Deepwater Horizon as a disaster into skeptical but open-minded observers. I don’t think that can be done without taking moral responsibility for the spill.

Conflict-of-Interest Note: I have consulted periodically for BP since the 1980s, including work on its 1990 American Trader oil spill in Huntington Beach, California. In 1992, BP published an admiring profile of me in its house magazine, Shield. My most recent work for BP was in 2003, on the Baku–Tbilisi–Ceyhan pipeline.

I would have liked to help with Deepwater Horizon, but I was never asked – so I commented publicly from the sidelines instead. But in the summer of 2011 I did have some discussions with the Gulf Coast Restoration Organization, a BP-funded operation, about what it would take to create a public that could experience good news about the Gulf as good news, rather than experiencing it as BP propaganda and refusing to credit it. I also wanted to help GCRO avoid overstating the good news, and help it fully acknowledge the bad news and the long-term ecological uncertainties. I did a preliminary consultation along those lines, and GCRO went so far as to get me under contract with its PR agency, so the paperwork was in place to bring me in. But they never brought me in.

“Panic buying” in crisis situations: China’s Fukushima run on salt

name:Gordon
This guestbook entry
is categorized as:

      link to Crisis Communication index

field:Engineer
date:March 2, 2012
location:California, U.S.

comment:

I know this might be a little old but I am new to your site.

I am not sure if you were aware but during the Fukushima disaster there was a quite interesting phenomenon that took place in China – a run on table salt as a result of a comment on the Chinese equivalent to Twitter.

Have you followed this at all? If so, do you have any observations about how the Chinese handled the situation?

peter responds:

A devastating March 2011 tsunami led to serious problems at several of Japan’s nuclear power plants in and around Fukushima. The resulting radiation releases in Japan were significant and could have been far worse. Fears that under some scenarios significant radioactivity might reach other countries – including both China and the U.S. (especially Alaska) – were widespread.

That was the context for China’s now-famous run on salt. One measure of its fame: On February 28, 2012, nearly a year later, I googled “panic buying disease rumor.” Nine of the first ten listings were about the desperate efforts of Chinese consumers to stockpile salt during the Fukushima crisis.

The English edition of the official newspaper People’s Daily began its March 18 story this way:

Worried shoppers stripped stores of salt in Beijing, Shanghai and other parts of China on Thursday in the false belief that it can guard against radiation exposure, even though any fallout from a crippled Japanese nuclear power plant is unlikely to reach the country.

The panic buying was triggered by rumors that iodized salt could help ward off radiation poisoning – part of the swirl of misinformation crisscrossing the region in response to Japan’s nuclear emergency.

The decision to stockpile extra salt was grounded in two beliefs – that Japanese radiation might get to China, and that if it did salt might help. The first belief would have been credible if the situation at Fukushima had kept worsening. The second belief was mistaken, but it wasn’t foolish.

Radioactive iodine-131 is one of the likely emissions from radiation emergencies of various sorts. I-131 bioconcentrates in the thyroid, where it can cause thyroid cancer. The best protection is to flood the thyroid with non-radioactive iodine, thus keeping the body from taking up so much I-131. The U.S. has had a recurring controversy for years over whether to advise people living near nuclear power plants to stockpile potassium iodide pills to take in the event of a nuclear emergency. The nays usually win, not because the potassium iodide wouldn’t be needed or wouldn’t work but because the authorities don’t want to frighten nuclear plant neighbors by urging preparedness.

In China as in the U.S., table salt is iodized. So it was rational to think it might help against Fukushima radiation. Unfortunately, there isn’t nearly enough iodine in table salt to do the job; people would have to consume impossible (and quite possibly deadly) quantities of salt. The World Health Organization (WHO) was widely cited at the time to the effect that it would take 80 tablespoons of salt to achieve the prophylactic effect of one potassium iodide pill.
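The arithmetic is easy to check with a back-of-the-envelope calculation. The inputs below are illustrative values I have assumed for the sketch, not WHO’s actual inputs:

  • one adult potassium iodide tablet: 130 mg of KI, containing roughly 100 mg of iodine
  • iodized table salt: roughly 30 mg of iodine per kilogram
  • one tablespoon of salt: roughly 18 grams

On those assumptions, one tablespoon of salt supplies about 18 g × 0.03 mg/g ≈ 0.5 mg of iodine, so matching a single tablet would take roughly 100 ÷ 0.5 = 200 tablespoons – several kilograms of salt at a sitting. Gentler assumptions (a child-sized 65 mg tablet, more heavily iodized salt) bring the number down toward WHO’s 80 tablespoons, but every plausible combination lands in the same impossible-to-eat range.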

According to many sources, China’s run on salt was also provoked by a fear that Fukushima radiation might contaminate the salt in the ocean. That’s not so rational. For one thing, most Chinese table salt isn’t sea salt; it’s mined on land. For another, it would take an unimaginably huge radiation release to render sea salt unusable. At that point we’d all have far more urgent problems than finding a safe source of salt.

I don’t know what role the heavily censored Weibo (or perhaps another of China’s equivalents of Twitter) played in the rumors about salt and Fukushima radiation. Most news stories say the rumors traveled mostly via text messages on mobile telephones.

The word “rumor” is usually pejorative; it’s used mostly to describe rumors that are false. But many rumors aren’t false. In this case:

  • Some of the rumors were true – e.g. that Fukushima radiation had leaked into the sea.
  • Some were possible though they never materialized – e.g. that radiation plumes from Japan could reach China.
  • Some were mistaken but rational – e.g. that the iodine in table salt could help protect against radiation-induced illness.
  • And some were a little crazy – e.g. that people should stockpile salt in case Fukushima contaminated the world’s salt supply.

I’m even less comfortable with the term “panic buying,” though People’s Daily and nearly everyone else used it to describe China’s run on salt. In a crisis situation – or what looks like a possible crisis situation – people often overreact initially. Appropriating a term from psychiatry (and from my wife and colleague Jody Lanard), I often call this initial overreaction an “adjustment reaction.” Adjustment reactions are common, automatic, temporary, small, and useful; they are not panic. (See “Tsunami Risk Communication: Warnings and the Myth of Panic.”)

One characteristic of adjustment reactions is looking for things to do to help protect yourself and your loved ones during the crisis. People need to do something to feel some control. And sometimes they choose unwisely – not because they’re panicking but because it’s hard to figure out what to do (and whom to trust) early on in a crisis.

So rumors rule the roost. And runs on commodities rumored to help are commonplace. That’s not really “panic buying.”

A few years ago, as our concern about bird flu increased, Jody and I convinced our doctor to give us prescriptions for Tamiflu. We filled the scrips, and ever since Tamiflu has been a component of our travel kits. You could argue that we’re over-cautious or even that we’re selfish – but not, I think, that we’re panicking. In January 2006 we wrote a long column on personal Tamiflu stockpiling, asserting among other things that “[i]t is clear that you’re better off with it than without it.”

Assume for a moment that table salt would have been useful if the Fukushima crisis had worsened and radiation had started drifting from Japan toward China. Given that (mistaken) assumption, it follows that the more Chinese families who already had an ample salt stockpile, the better. If salt had been useful, it would have been most useful already on people’s shelves.

Of course a run on any commodity can easily produce a temporary shortage. The ideal is for as many people as possible to buy as early as possible – before the crisis strikes – so there is time to replenish the supply for the next wave of stockpilers. What you don’t want is for everyone to wait (“stay calm” and take no precautions) till mid-crisis … and then all try to buy at the same time.

During Fukushima, some U.S. residents stocked up on potassium iodide. Supplies quickly ran out, and survivalist blogs were full of advice on how to process iodine from other, more available sources to make it usable for internal consumption. As far as I know we escaped the table salt mistake, but not the fear that Fukushima radiation might make it to our shores.

My advice to authorities for coping with adjustment reactions is not to try to prevent the adjustment reaction, but rather to try to guide it. If possible, offer people things to do that will help. Explain why what they’re tempted to do won’t help (if it won’t), even though it seems entirely rational to clear-thinking non-experts. Make sure your explanations are empathic rather than contemptuous. Tell people that you know they’re not panicking, you see the logic behind their impulse to stockpile iodized salt, but here’s why following that impulse won’t work, or isn’t needed, or both.

In addition to being empathic, arguments against stockpiling need to be sound. The arguments made by U.S. public health authorities against personal Tamiflu stockpiling were almost entirely specious.

Chinese government arguments against stockpiling salt were sound, but they certainly weren’t empathic. (And they almost certainly were highly distrusted.) Official statements and official newspapers emphasized that significant radiation was unlikely to reach China from Fukushima, that even if it did salt wouldn’t help, and that Chinese salt supplies were extensive and mostly land-based. I can’t find officials saying that it’s natural to be afraid, or suggesting other things Chinese citizens might do to protect themselves from possible radiation, or acknowledging that the iodine in table salt makes it seem like a sensible precaution until you know how much salt you’d need to consume to make a difference.

Instead, officials condemned the rumors. Here’s another passage from the March 18 People’s Daily story:

The Chinese government also weighed in Thursday, with Foreign Ministry spokeswoman Jiang Yu saying, “I do not see any necessity to panic.”

In its notice, the Development and Reform Commission also urged local authorities to take “immediate action to monitor the market prices and resolutely crack down on illegal acts, including spreading rumors to deceive the public.”

Michael O’Leary, WHO’s representative in China, called on governments and individuals to “take steps to halt these rumors, which are harmful to public morale.”

I don’t know if China has crisis-driven runs on products more than the U.S. and other countries do. But it makes sense that it might, because the Chinese government is so widely (and wisely) distrusted by its people – and especially distrusted in times of crisis.

This is, after all, the government that in 2003 falsely claimed that there was no SARS in Beijing. Beijing officials juggled SARS patients in and out of hospital rooms, even hiding them in ambulances in order to keep a visiting World Health Organization team from finding them – an image worthy of French farce if it hadn’t been so deadly. When it came to assessing how serious the SARS outbreak was and what to do about it, the Chinese people were pretty much on their own. Rumor was all they had. So when a rumor circulated that boiling vinegar might kill the SARS virus, a lot of people in China boiled vinegar.

Are Chinese people also more superstitious than Americans? The stereotype says so, but it’s hard to judge. Their superstitions look more superstitious to us; our superstitions probably look more superstitious to them. What’s certainly true is that the (earned) mistrust of government in China is higher than the (also earned) mistrust of government in the U.S. But ours is high enough; consider all the Americans who mistrust government reassurances about the safety of vaccines.

Bottom line: When your government tells you that salt won’t help, or that significant radiation won’t reach you, the question is whether you believe what you’re told. A lot of Chinese people didn’t.

Even the People’s Daily acknowledged the role of distrust in China’s run on salt, though just barely. The online version of the March 18 story runs 36 paragraphs without mentioning trust. The 37th and final paragraph reads in its entirety: “Others worry the phenomenon showed just how far people’s trust in official information has diminished.” You don’t have to be an experienced China-watcher to sense that the story must have contained more along these lines before the censors got to it.

Western news stories were predictably more willing to ascribe the run on salt to mistrust of government. Here’s the lede of a March 21 Time story:

Remember the great salt rush of China? For those who missed it, here’s a quick summary of what happened:

Following reports of radiation leaks at Fukushima’s Daiichi plant, Chinese consumers got nervous. Their rather understandable fear of radiation was compounded by doubts about their government’s willingness to share information. So they took their safety into their own hands – or tried.

There is also extensive evidence that the Japanese government suppressed a lot of alarming information (and even-more-alarming speculation) about Fukushima. If the Chinese government had wanted to be totally candid with its people, it would have had to say it had no answers to the most important questions.

The key to building trust is candor, especially admissions against interest. If you have told me in the past when something was dangerous, I’m likelier to believe you when you tell me later that something is safe.

Acknowledging uncertainty is also important – and even proclaiming uncertainty. If you sound overconfident and turn out wrong, trust is devastated.

Empathy is fundamental to trust as well. It’s possible to be coldly accurate and slowly build trust; some surgeons do that their whole careers. But trust builds a lot more quickly if you show some warmth, some humanity. Especially useful is acknowledging the validity of people’s fears. Even when those fears are technically mistaken, you still need to validate that they’re natural, normal, widespread, and rational before explaining why you think they’re incorrect. You can’t effectively tell people they’re mistaken if it sounds like you’re telling them they’re stupid.

Social media speed rumors, of course. Social media also speed corrections of false rumors. (Remember: Not all rumors are false! WHO finds out about infectious disease outbreaks in China largely by monitoring the rumor mill.) Twitter and Weibo don’t change the basics of crisis communication: Tell what you know; tell what you don’t know; tell how sure you are; do it all empathically.

It’s almost impossible to correct a false rumor you’re not willing to mention. At least the Chinese government acknowledged that people were buying up salt because they mistakenly thought it might help protect them from Fukushima radiation. My clients, both government and corporate, often hesitate to respond to rumors at all, for fear of spreading them in the process. That might make sense if you’re pretty sure the rumor is expiring, not expanding. But the protocol for correcting false rumors starts with acknowledging them. For the rest of the protocol, see my 2008 column, “Rumors: Information Is the Antidote.”

Is disfigurement an outrage factor?

name:Erich Janka
field:Risk consultant
date:February 24, 2012
location:Austria

comment:

I just read your list of “outrage factors.”

Now I wonder if you consider the “extent of disfigurement of victims” as one of these factors and if you can provide some evidence for your judgment.

peter responds:

Disfigurement isn’t on my list of outrage factors, or on any such list I’ve seen. But it’s pretty obviously related to outrage.

If forced to locate it somewhere, I’d probably stick it in under dread (and dread’s close cousin disgust – also not explicitly on the list). Certainly a lot of people dread disfigurement more than death itself. And a lot of people have an extremely strong response to other people’s disfigurement. Some can’t look and others can’t look away; both groups are surely experiencing something in the outrage category.

It’s easy to come up with examples of hazards to which many people overreact because of the disfigurement factor. Leprosy comes immediately to mind. (On the other hand, the disfigurement potential of automobile accidents doesn’t seem to register for most people. We need to drive or like to drive, so we stay in denial about the danger.)

Having used terms like “obviously,” “certainly,” and “surely,” I now need to acknowledge that I have no evidence whatever. There may be some; I haven’t scoured the literature. But I don’t have any.

It’s true that my judgments about outrage are grounded in 40 years of consulting. But accumulated experience and “anecdotal evidence” are a poor substitute for a cluster of methodologically sound studies. As researchers in medicine have come to realize, clinical experience – including mine – is usually overconfident and often mistaken. (A single unreplicated study isn’t very reliable either, of course.)

Outrage about cell tower/mast/antenna EMFs

name:Andrew
This guestbook entry
is categorized as:

      link to Outrage Management index

field:Risk management communications
date:February 7, 2012
location:Canada

comment:

We have permitted cell phone providers to install their antennae on a number of rooftops at our large college.

The top-floor occupants of one of the buildings have come forward with high levels of outrage over what they believe are dangerous levels of electromagnetic radiation in their offices. These are scientists and smart people, and they believe there is ample evidence to support their view.

We tested the area and found the levels are indeed measurably higher than background levels – but several thousand times lower than Health Canada guidelines.

There are things we can do to lower the levels of radiation (reflective paint, ceiling tiles, possibly repositioning the antennae) but they are expensive and eventually everyone will want them!

The university official in charge, my boss, believes we may have no choice but to say that we follow the guidelines, are well within range, and so are unable to do ANYTHING for you. Their outrage will, of course, grow, and sooner or later the media will attend (not that we couldn’t manage it, but it’s not desirable), and some individuals will surely refuse to come to work, and eventually other building occupants will likely become outraged.

Any advice on how to manage this would be appreciated!

peter responds:

I think your management rightly wants to stress that the EMFs produced by the cell phone towers/masts/antennae on your rooftops are way within government guidelines. I wouldn’t confine myself to just the Health Canada guidelines. Find out what the most stringent cell tower EMF guidelines in the world are, and point out (if it’s true) that the measured emissions from your antennae are well within those guidelines too.

Don’t make your worried stakeholders take your word for it! Encourage them to take their own measurements – both at the source (with due precautions against the risk of falling off the roof) and in the space they’re actually occupying.

In other words, you should establish that what’s going on at your college is going on pretty much everywhere in the world. If Canada’s standards are too lax, everybody’s standards are too lax. And if the antennae on your rooftops are a threat to health, we’re all in deep trouble.

Acknowledge, acknowledge, acknowledge

Then immediately acknowledge that this is possible – that the standards might be too lax and the antennae might be a threat after all. Maybe not likely, but certainly possible.

You can and should insist that the weight of expert opinion says the risk from cell phone towers is either very low or nonexistent. That’s what it usually means when most studies find nothing but a few studies find a possible problem. The most serious cell phone risk almost certainly comes from trying to drive or do other dangerous tasks while talking or texting.

You might also point out that some people think having cell towers nearby actually decreases the health risk to users because the phone that’s nestled next to people’s brains doesn’t have to work as hard to pull in a signal. But don’t lean heavily on this point; the science that suggests cell phones might be dangerous is as wobbly as the science that suggests cell phone antennae might be dangerous.

Despite the weight of expert opinion, a few experts do believe that the world is making a terrible mistake, or at least that the world might be making a terrible mistake and should be much more cautious about this new technology. This is the point you should stress most. There are some studies that point to a possible risk from cell phone antennae; there are some health effects with latencies so long they might not have shown up yet in the research; nothing is ever firmly and forever “settled” in science.

It does happen from time to time that mainstream expert opinion turns out to be wrong. Expert opinion about the risk of elemental mercury and ionizing radiation, to cite two famous examples, has changed radically over the years. In the 1950s children routinely played with mercury and had their shoes fitted using unshielded X-ray fluoroscopes.

Here are some other things you should acknowledge, if they’re true:

  • “We made this decision without first inviting people in the buildings, and especially those nearest the proposed antenna locations, to get involved.”
  • “When people started raising health concerns, we were initially unresponsive and sometimes came across as almost scornful.”
  • “We’re making some money from hosting the antennae, so we have a conflict of interest when we start asserting that there’s no significant risk.”
  • “The people who are expressing concern are facing this risk, if there is one, without sharing in the financial benefit. This is true even though the money is spent on college priorities, and even though many people do use their cell phones on campus and benefit from good reception.”

You should also acknowledge that anxiety about cell towers is a common concern around the world – so the people expressing such anxiety on your campus are in no way weird. And anxiety about any risk is itself a documented health hazard – so even if your concerned stakeholders turn out to be technically mistaken about the EMFs, they’re still right that there’s a health problem: their anxiety itself.

In your shoes I would go into considerable detail about the many ways in which cell phone antennae are a significant source of outrage, independent of the hazard. I have mentioned several already: the control issue implicit in the college making risk-related decisions without consulting affected stakeholders; the fairness issue implicit in the college benefiting from renting space on its rooftops without sharing the proceeds with those who will have to endure the resulting EMF exposure.

For a more detailed assessment of why cell towers arouse outrage, see “Not in Our Back Yard” by Simon Chapman and Sonia Wutzke, a 1997 application of my outrage components to mobile telephone tower controversies in Australia.

Also relevant is my 2010 Guestbook response to an inquiry about why people aren’t nearly as upset about the risk from cell phones as they are about the risk from cell phone towers. In a nutshell: Most people like their cell phones a lot and control their own cell phone use. But they dislike the towers that loom over their homes or workplaces, and they had little or no say in those towers’ placement. So they look for reasons to feel safe using their phones, but are wide open to reasons to resent the towers.

Another Guestbook entry may or may not be relevant to your situation. In 2003, I responded to a detailed Guestbook comment about the appearance of cell phone towers. As the author of the comment put it: “[T]here is certainly something about a lattice tower topped with angular forms and a dozen protruding antennas or high voltage insulators, that says ‘War of the Worlds’ to me.” Your antennae may be unobtrusive or even attractive; if they’re not, check out that comment as well.

All your acknowledgments should help reduce the outrage of the people already raising questions about antenna EMF risk, and help reduce the probable outrage of people hearing about the issue for the first time.

That’s a paradoxical prediction you may have trouble convincing your boss to accept. To people unfamiliar with risk communication, it’s far from obvious why telling people the ways in which they’re right calms their outrage instead of exacerbating it. We are all familiar with this phenomenon when we’re the ones who are outraged. We know how infuriated we get when corporations (or colleges) overstate the case for reassurance and deny or ignore the case for concern, and how substantially we calm down when they start empathically acknowledging our concerns and our good arguments instead. But it’s a tough sell to get the “reassurers” in a controversy to realize that it actually helps calm things down when they acknowledge that there are some valid arguments on the alarming side too.

My most fundamental point here is this: Even if your college isn’t going to do anything about people’s EMF concerns, it makes sense to talk to them about their concerns in ways that don’t add insult to injury – all the more so since a lot of potentially concerned people are watching the dialogue.

But better acknowledgment isn’t my only recommendation. I have two action recommendations as well.

I’m not going to recommend any engineering solutions to mitigate the risk – for three reasons:

  • It’s not my field. You’ve got better people than me to advise you on ways to reduce the electromagnetic field from your rooftop antennae.
  • If I’m reading your comment correctly, your college isn’t thinking seriously about an engineering solution anyway.
  • Engineering solutions aren’t very good at mitigating outrage. In fact, sometimes they can exacerbate outrage (and perceived hazard) by implying that even you think the hazard is serious. For a detailed discussion of the complicated impact of precautions on outrage, see my 2003 essay, “Because People Are Concerned: How Should Public Outrage Affect Application of the Precautionary Principle?”

The two action recommendations that follow might have some impact on the hazard to which people are exposed. But I don’t see them as ways to reduce the hazard, or even as ways to reduce the outrage by reducing the hazard. I see them as ways to reduce the outrage directly.

Offer to move people

First, I would seriously consider a wide-open promise to move anybody who wants to be moved because of cell antenna EMF concerns.

Such a promise will obviously give concerned stakeholders ironclad control over their own risk exposure. That increase in control will significantly reduce their outrage.

Once the right to move is ironclad, each concerned individual can safely consider the benefits of moving (eliminated or at least much-reduced cell phone antenna risk) versus the inconvenience of moving (the hassles of the move itself; learning the ropes in a new location; losing their favorite parking place; being further from their favorite lunch spots; being further from colleagues, friends, and clerical assistants; etc.). Now that they’re in the driver’s seat, they can – and in fact they must – weigh a possible risk against a certain inconvenience. The antenna EMFs will no longer constitute a risk the college is unilaterally imposing on them. Instead, it will be a risk they’re deciding whether to accept voluntarily for the convenience of not having to move.

By shifting the locus of control, the offer to move people makes the antenna risk similar in outrage terms to the risk of the cell phone itself – and I’m guessing that most of the people objecting to the antennae continue to use their cell phones.

People can still object that they shouldn’t have to move; you should move the damn antennae instead. But how worried can they be about the health risk if the inconvenience of moving deters them from taking action to address the problem? Once they’re entitled to move, people will no longer have a reason to insist (and thereby convince themselves) that the risk is intolerable. Instead, they will have a reason to ask themselves whether it’s really bad enough that they need to take you up on your offer.

Bystanders – who are potential converts to the cause – will probably find your offer to move people fair and responsive. When nearly everybody who has expressed concern decides not to move (by far the likeliest outcome), bystanders will see this as pretty strong evidence that they don’t need to add cell phone antennae to their own worry lists.

I have helped several clients make this sort of offer. One example that comes to mind was a huge indoor air quality (IAQ) controversy in a U.S. government agency occupying a downtown office building. In that case, nobody opted to move.

If anybody does choose to move, you know he or she is really seriously worried about the risk – in which case accommodating the move is good health policy and good HR policy. Unless you’ve got scads of people looking for an excuse to get out of garrets under the eaves, I think this offer will significantly defuse the controversy without significantly increasing administrative costs. It’s bound to be less costly than a big brouhaha.

Of course it can’t be a bluff; you must be willing to move people.

And it can’t be done in a nasty way: “If you’re stupid enough to be worried despite the data, we’ll move you, you jerk!”

Most importantly, it can’t be done belatedly. After outrage has taken on a life of its own, your offer may well be seen as too little too late. Eventually outraged people no longer want their concerns or grievances fairly addressed; they just want the source of their outrage punished. Outrage is much harder to ameliorate once it’s entrenched (though even a belated offer can help with bystanders).

Set up an advisory group

I would also think seriously about setting up some kind of advisory group on infrastructure risks.

If the college were seriously considering engineering steps to ameliorate the antenna EMF risk, I would suggest a narrowly focused advisory group on what to do about the antennae. That kind of narrow advisory committee was a key recommendation of my 2007 column on “Indoor Air Quality Risk Communication.” One of the most common mistakes landlords and employers make in IAQ controversies is to fix things without first consulting with the people who are most upset.

But it sounds like your college isn’t planning to “fix” anything. So a “cell phone antenna EMF risk mitigation advisory group” isn’t a good idea. You’re unlikely to accept any of its advice.

On the other hand, I think an “infrastructure risk advisory group” would make a lot of sense. Such a group would be tasked with considering a wide range of infrastructure risks – the cell phone antennae, of course, but also IAQ, fire, asbestos maybe, vulnerability to extreme weather events and other natural disasters, etc. The college administration would routinely seek its advice on various infrastructure risks, not only risks that administrators considered serious but also risks that were arousing stakeholder concern. Concerned stakeholders should also have direct access to the advisory group, so they could put an issue on the group’s agenda even if the administration preferred to avoid discussing it.

The big advantage of such a multi-risk group is its inevitable need to focus on comparative risk … and comparative everything else. With a dozen or more issues on its plate, the group would quickly see the importance of assessing each one on the various relevant metrics: the size of the risk; the size of the benefit; the cost of various mitigation options; the reliability of the data; etc. Are cell phone antennae really more dangerous than lab accidents? How much do we care about the money the college would lose if it abandoned those antenna contracts, not to mention all the complaints that would start pouring in about poor reception on campus?

Don’t make it a committee of worrywarts exclusively; involve some other folks too. But be sure to involve the most serious worrywarts – not just cell phone antenna worrywarts, but worrywarts about all sorts of infrastructure risks. It’s very instructive for people worried about X to spend time with colleagues who are just as worried about Y and Z (worries they’re likely to consider excessive or even silly).

You might even want to broaden the group’s purview beyond infrastructure risks to the full range of risks confronting college employees and students. A few hours talking about the campus rape rate or the threat of a severe flu pandemic can help put other risks in context. It would be fun and useful to ask the group to consider whether there ought to be campus safety rules restricting cell phone use, so members who are outraged about the antennae can confront the question of why they’re strangely copacetic about their own phones.

I won’t repeat here all the additional advice about advisory groups in my October 2011 column on “Advice about Advisory Groups.”

name:Gail Diamond
field:Risk management consultant (public safety)
date:October 12, 2012
email:gail@whereistherisk.com
location:St. Vincent & the Grenadines

Gail Diamond responds:

I query your suggestion, “Offer to move people,” since I believe such an offer could convey the opposite of what might be intended.

Could it be that some persons who are outraged enough to consider moving will have their concerns heightened by the offer? “If the company offered to move me then there must indeed be a problem!” I think this could possibly override their other considerations: inconvenience, peace of mind, etc.

Furthermore, if I were a bystander I too might be inclined to believe that there really is a problem. Why on earth would the company offer to move folks otherwise?

I can’t speculate about how many among the outraged group would be apt to think this way but it’s difficult for me to see the NIMBYs wanting to move as opposed to wanting the thing moved!

You indicated that such a policy option would shift the risk from being one that is imposed to one that is voluntary, and that this might help to reduce outrage. I wonder, though, about how ethical this move might be, given that it is not really a voluntary undertaking. While it might ease company conscience, how much does it cross the line by transferring out the risk? I am more comfortable with suggestions that will help to change perception and irrational fears about the effect of EMFs than with those that shift responsibility.

I’m fully persuaded by your other suggestions and would love to hear your feedback on my train of thought.

peter responds:

I think your first point has a lot of merit: Offering to move people who are worried about cell antenna EMFs might convince some of them, and some bystanders, that the risk must be significant.

At least a few would probably reach this conclusion even if management (in this case a college, not a company) went out of its way to explain that it was making the offer out of empathy for those who were worried, not because management itself was worried. “If we believed there was a significant risk,” management could say, “we’d get rid of the cell towers. Or we’d leave the nearby offices and dorm rooms vacant. We’re not doing that. We’re just offering to move anyone who is worried about EMFs from the towers to a different location, and then we’ll move someone who isn’t worried to the original location.”

Such an explanation would mitigate the misimpression that management’s willingness to move people was proof of the seriousness of the risk. But it probably wouldn’t eliminate that misimpression altogether.

At the same time, management’s willingness to move people would reassure those who already considered the risk serious that they weren’t stuck, that they could remedy the problem by moving their office or dorm room. As I pointed out in my original response, the right to move is typically sufficient to reduce many people’s outrage to the point where they choose not to inconvenience themselves by moving.

This dual effect of concessions is quite common, and I should have addressed it. Some people are grateful for the concession, so they calm down. But others see the concession as proof they were right, so they get even more upset. (There is sometimes a third group as well, that is motivated more by greed/self-interest than by outrage and sees the concession as a sign of weakness and an opportunity to demand additional concessions.)

Labeling has a similar dual effect, as I discussed in a recent Guestbook entry on California’s Proposition 37, which would require labels on foods with genetically modified ingredients. Labeling would give people anxious to avoid GM foods better control over what they eat – but labeling would also signal to people not especially worried about GM foods that perhaps they should be.

I’m not sure how to predict which effect will be bigger. In the case we’re discussing here, I think it matters that we’re talking about EMF exposures in people’s offices or dorm rooms, not their homes. Many college students are accustomed to getting a new room assignment every year, and to having little control over what room they end up living in. And employers move their people around pretty much at will; employees often protest, but for the most part they accept the employer’s right to decide where they will do their work. It’s also quite common for students and employees to request a move, for any reason or no reason, and get it.

So if a college said it would be happy to move anyone who was worried about a nearby cell tower to a different room, I doubt too many students and employees would see the offer as a tacit admission that the cell tower risk was significant.

Offers to move people are probably likelier to feel alarming when the people are long settled into homes they own. “If you’re willing to pay me big bucks to uproot my family and start over someplace else, there must be something seriously wrong with my home!” Despite this, offers to move people are often effective in NIMBY controversies. I have worked several times on such home buyouts, in connection with factory emissions, waste management facilities, contaminated groundwater, etc. A badly managed buyout program can sometimes spread the anxiety and empty the neighborhood – the effect you’re worried about. Among the characteristics of a well-managed program:

  • The company explicitly points out that fear is itself a health risk. Since a few neighbors have become anxious about the company’s emissions (or whatever the problem is), giving them a chance to move elsewhere is good for them, good for the company, and good for the neighborhood.
  • The company doesn’t leave houses vacant. It turns them over quickly to people who know about the risk and aren’t concerned about it – ideally including some of the company’s own managers.
  • The company works hard to sustain neighborhood property values. One company I worked with offered to buy out neighbors who wanted to leave, but it also offered to pay those who stayed a small sum of money each year to spend on landscaping, painting, or other external home improvements. With the neighborhood getting spiffier every year, most people chose to stay.

Regarding your second point – that it’s more ethical to remove a risk than to offer people a way to escape it – you’re obviously right if the risk is significant. But if we stipulate that the risk is tiny or nonexistent, then offering a way out to people who mistakenly think otherwise seems honorable to me. (If cell tower EMFs turn out to be a serious hazard, a lot of what I’ve said about cell tower risk communication will be way off-target.)

Of course the “way out” needs to be reasonable. “If you’re frightened, go find a different job or a different school” would feel more like a threat than an offer. But “we’ll find you a different room” feels to me like an offer. Some people might counter, “No, I like it where I am. I want you to get rid of the cell tower, not make me move.” But since changing employees’ offices and students’ rooms is pretty commonplace and not terribly onerous, I think most concerned employees and students would experience moving as a reasonable offer. And I’m pretty sure most neutral bystanders would see it as a reasonable offer.

Your second point also needs to be considered in the context of your first point. If the college were to remove the cell towers because some people think they’re dangerous, wouldn’t that send a strong message that the towers really are dangerous – a much stronger message than merely offering the concerned minority a chance to move?

Bird flu risk perception: bioterrorist attack, lab accident, natural pandemic

name:Jonathan C. Waldron
This guestbook entry
is categorized as:

      link to Pandemic and Other Infectious Diseases index

field:Dentist and retired naval officer
date:January 19, 2012
location:Georgia, U.S.

comment:

In the present information age, which coincides with high-visibility media coverage of terror acts, reporting on various influenza genetic-sequencing experiments – even in a limited way – carries a high risk. While knowledge and science cannot be kept in the dark, I believe that strict protocols need to be established at the national level to contain this information.

Of course the risk of a natural influenza pandemic is certain given enough time. Hopefully, the terror risk will raise the national priority of securing a more universal vaccine. This will not, however, end the threat, as the vaccine will not be available in much of the world. Aside from the tragic loss of life, the potential for very significant economic and perhaps political change in the new service-based world economies is unpleasant to contemplate should there be a recurrence of the 1918 pandemic, or worse, spreading quickly via modern travel.

peter responds:

The event that provoked your comment is a December 2011 request by the U.S. government that two leading scientific journals edit some details out of papers they are planning to publish for fear that the original versions might help the bad guys intentionally launch a flu pandemic.

The risk communication issues this event raises are real – the need for dilemma-sharing, for example. But the risk perception issues it raises are especially fascinating – above all the contrast among three risks: the risk of an intentional attack, the risk of a laboratory accident, and the risk of a natural pandemic. I’m going to focus my response on risk perception.

Before addressing the differing risk perceptions of these three risks, let me start by summarizing the event itself.

And before I do that, here’s a necessary terminological clarification. The disease we’re going to be talking about is commonly called “bird flu” for short. There are many kinds of bird flu. The experts call this particular one HPAI H5N1. The “H5N1” part refers to one subtype of Influenza A; all subtypes of Influenza A, not just H5N1, can infect birds. The “HPAI” part stands for “highly pathogenic avian influenza”; some subtypes of bird flu, and some strains of H5N1, aren’t deadly enough to deserve that label, and are called “low pathogenic avian influenza” or LPAI instead.

The terminology gets more complicated still when a bird flu starts infecting other species. If it does so with great difficulty, it’s still “bird flu.” But if it starts spreading easily in a non-bird population, flu experts no longer think of it as avian influenza. (This helps explain the expert resistance to the term “swine flu” for a different flu strain that went pandemic in humans, notwithstanding its swine origins.)

In this article, I’m going to refer to HPAI H5N1 interchangeably as “bird flu” or “H5N1.” File away somewhere in your mind that it’s just one kind of H5N1, a particularly deadly kind, and that the whole reason we’re talking about it is that it may not be just a bird flu anymore. At the very least, it now exists as a ferret flu in two labs.

Making bird flu transmissible in humans

In December 2011, the U.S. government appealed to the editors of the journals Science and Nature to leave out certain key details in two research papers they had accepted for publication. This unprecedented request had been recommended by the National Science Advisory Board for Biosecurity (NSABB), an expert panel that had been asked by the U.S. National Institutes of Health (NIH) what should be done about the two studies.

Both papers reported successful efforts to create strains of the H5N1 influenza virus that were easily transmissible through the air between ferrets. Ferrets are considered the best animal model for studies of human influenza; the assumption is that a flu virus that passes easily from ferret to ferret will probably pass easily from human to human as well. So it’s likely (though not definite) that these two groups of scientists created a potentially pandemic strain of H5N1.

H5N1 – bird flu – is unprecedentedly virulent in humans. Since its first appearance in 1997, it has killed about 59% of the people known to have contracted it – compared to a mere 3% for the horrific 1918 flu pandemic, and an average of about 0.1% for the seasonal flu. But only a few hundred people have caught bird flu. H5N1 spreads easily from bird to bird (though not in all species), but only very rarely from bird to person, and vanishingly rarely from person to person … so far.

If the H5N1 virus out in the real world were ever to become capable of efficient human-to-human transmission while remaining just as deadly as it is now, the result would be a pandemic that would eclipse any other disaster in recorded history. It could theoretically kill billions before a vaccine could be developed, mass-manufactured, and distributed.
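To see roughly what “billions” means, here is a back-of-the-envelope sketch. It is purely illustrative: the 59% fatality rate is the figure for known human cases cited above, but the world population and the 30% attack rate (roughly that of the 1918 pandemic) are assumptions I am supplying for the arithmetic, not numbers from the studies.

    # Illustrative arithmetic only - not a prediction.
    # The 59% case fatality rate comes from known human H5N1 cases;
    # the population figure and the 30% attack rate are assumptions.
    world_population = 7.0e9     # approximate world population, circa 2012
    attack_rate = 0.30           # assumed fraction of people infected
    case_fatality_rate = 0.59    # observed so far in known human cases

    deaths = world_population * attack_rate * case_fatality_rate
    print(round(deaths / 1e9, 1), "billion deaths")   # prints: 1.2 billion deaths

Even if you halve or quarter the assumed attack rate or fatality rate, the toll still runs into the hundreds of millions.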

Obviously, nothing like that has happened so far. Scientists have seen bird flu mutations in the wild that moved the virus in the direction of human transmissibility, but they’ve never seen all the necessary mutations at once. Some scientists have speculated that the requisite combination of mutations might be impossible, or that any combination of mutations that would make the bird flu virus transmissible human-to-human would also make it less deadly to humans.

In 2011 two teams of scientists – one in the Netherlands and the other in the U.S. – successfully forced nature’s hand. Apparently, they took mutations that had occurred individually in the wild and made them happen simultaneously in the lab. Then they passed the mutated H5N1 virus through several cohorts of ferrets via nasal swabs – and discovered to their surprise that the ferrets had started transmitting the virus to other ferrets in nearby cages just by sneezing and coughing. Thus the scientists (and the ferrets) had created a bird flu virus that was easily transmissible from one ferret to another, and reportedly still deadly to the ferrets.

If ferrets are a reliable surrogate for humans, then the two 2011 studies proved that a cataclysmic H5N1 pandemic is a genuine possibility. They didn’t prove that such a pandemic is imminent, or even that it will ever happen – only that it isn’t impossible. Assuming the two unpublished studies did not yield identical mutations, they showed two ways it could happen.

The fact that H5N1 viruses in a lab can be made to do everything they need to do to launch a pandemic doesn’t mean they’ll ever do all the right things at the same time in nature. Monkeys can pound all the typewriter keys needed to write a Shakespeare sonnet; whether they’ll ever hit them in the right order is another question. Billions of bird flu viruses have been mutating randomly for at least 14 years. So far they haven’t hit on the right combination of mutations. How likely are they to do so next week? How likely are they to do so sometime in the next 50 years? Nobody knows. But now we know it’s doable.
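For readers who want the arithmetic behind those two questions: if the right combination has some unknown probability p of arising in any given year (treating the years as independent), the chance of seeing it at least once in n years is 1 − (1 − p)^n. A toy sketch, with p values that are invented placeholders rather than estimates of anything:

    # Toy sketch of why "nobody knows": the answer swings wildly with p,
    # the unknown per-year probability of the right mutation combination.
    for p in (0.001, 0.01, 0.05):      # invented placeholder values
        for n in (1, 50):              # next year vs. the next 50 years
            at_least_once = 1 - (1 - p) ** n
            print(f"p = {p}, n = {n}: {at_least_once:.1%}")

Depending on p, the 50-year probability ranges from under 5% to over 90% – which is the quantitative face of “nobody knows.”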

None of this is what the NSABB was asked to worry about. Its focus was on whether the two H5N1 papers might constitute a roadmap for someone intent on manufacturing a pandemic virus. The “someone” who might want to do that is usually referred to as a “terrorist,” and I will sometimes use this term as well. But the list of candidates includes not just terrorist groups, but also disgruntled or disturbed individuals and nations violating the Biological Weapons Convention.

To take this risk seriously, we must posit an individual, group, or government that intends to destroy the world, or that intends to blackmail the world with the threat of destruction, or that doesn’t mind devastating its supporters as well as its enemies, or that has secretly vaccinated its own people (or at least stockpiled some vaccine). None of these, sadly, is inconceivable.

To reduce the odds of facilitating such an attack, the NSABB recommended asking the two journals to omit methodological and technical details. The NIH agreed and the request was made. The NIH promised it would figure out a way to provide the omitted details to scientists judged to have a legitimate need for them. Science and Nature have the two articles on hold while they wait to see what the NIH comes up with.

The NSABB was apparently concerned as well about criticism that the two H5N1 studies had been allowed to proceed at all. It recommended adding more information to the papers about the goals and potential public health benefits of the research. And it recommended adding information about the measures taken by the two teams to try to ensure that the virus didn’t spread to lab workers or the general public.

It isn’t entirely clear whether the NSABB also recommended a moratorium on this sort of research while scientists and policymakers debate the pros and cons. Certainly its recommendations have launched such a debate.

Parsing the risks

The two ferret studies and their possible publication raise three quite different risks:

1. The risk of a catastrophic natural H5N1 pandemic. This is the risk that led to the research, and that the research illuminates … in an alarming direction. If you’re most worried about the natural pandemic risk, you probably want the two papers published – and you probably want the studies widely discussed, replicated, and extended. Perhaps this research effort might lead to new insights about how to prevent an H5N1 pandemic, or how to cope with one. Perhaps it might spur more work on a universal flu vaccine, one that would protect people against virus strains (like pandemic H5N1) that don’t exist yet in nature.

2. The risk of a catastrophic H5N1 bioterrorist attack. This is the risk that the NSABB was most concerned about, and that the news coverage has focused on. The research doesn’t address the bioterrorism risk, but the fear was that its publication might increase the bioterrorism risk. If this is the risk that worries you most, you probably want the two papers suppressed or at least the details restricted to reputable scientists. I can’t guess whether there are actually details in the papers that would help a would-be bioterrorist (individual, group, or nation), and if so whether it’s feasible to keep those details out of the hands of terrorists once a bunch of scientists have them. But it’s not a fear we should shrug off lightly.

3. The risk of a catastrophic H5N1 laboratory accident. This risk has been less discussed than the other two, but it’s lurking in the background. The lab accident risk, of course, is present whenever scientists study dangerous organisms, however you bowdlerize their reports. So if that’s what you’re most worried about, you probably want the research suppressed, not just the papers. Or at least you want the research more heavily scrutinized and regulated. There are steps that could be taken to lessen the likelihood of a careless lab assistant or an arrogant principal investigator getting infected – or carrying a virus sample outside the lab, an infraction so common it has its own abbreviation: VIP (“vial in pocket”). At a minimum, shouldn’t H5N1 research be done in labs with the highest “biosafety level” (BSL) designation – BSL-4? (Both studies are said to have been performed in “BSL-3 Enhanced” labs.)

I’m not qualified to assess the relative size of these three risks. I’m not going to try. All three of these risks have the same potentially horrific outcome: an H5N1 pandemic. The question is which source of that outcome you think is likeliest – nature, malevolence, or human error. What you think should be done with the two papers depends largely on how you think an H5N1 pandemic is likeliest to start.

The other obviously relevant question is how useful you think access to the two research papers would be – to bioterrorists or to scientists. I’m not qualified to judge that either. (I wouldn’t be qualified even if I had access to the papers myself.) I have read arguments that enough is known already about how to bioengineer a pandemic strain that any terrorist with modest scientific acumen (and certainly any government with the resources to compel qualified scientists to do the necessary research) has no need for the papers. But I have also read arguments that that’s not so and the papers would be a huge leg-up for a would-be terrorist. I have read arguments that there’s not much to be gained for science or public health by following up on the research in the papers. But I have also read arguments that the papers could launch fruitful work on how to surveil for a pandemic strain and how to create a pandemic vaccine.

The NSABB inquiry poses five additional risks not posed by the two studies themselves:

4. The risk of calling bioterrorists’ attention to H5N1 and its potential. Perhaps the real risk is simply the idea of bioengineering an H5N1 virus to launch a pandemic; perhaps the debate over the two papers is spreading that idea more than the papers themselves would ever have done. In this sense, the NSABB’s concerns could be self-fulfilling, rousing a controversy capable of piquing the interest of terrorists in search of more weapons. (Influenza is not named on the CDC’s bioterrorism pathogens-of-concern list, though it could fit into the lowest category, Category C: “emerging pathogens that could be engineered for mass dissemination.…”) On the other hand, it’s hard to imagine that bioterrorists don’t already have bird flu on their lists of possibilities. The details the NSABB is trying to suppress might be all they would need to get started.

5. The risk of distracting policymakers and the public from the natural pandemic risk to the bioterrorism risk. Of course this is a problem only if you think the natural pandemic risk is greater than the bioterrorism risk. I imagine that’s what the authors of the two papers think. But they have a stake in thinking so, and bioterrorism isn’t their field. Still, it’s remarkable how little media and public attention has been paid so far to this new evidence about the plausibility of a disastrous natural pandemic – and how (comparatively) much attention has been paid to the possibility that bioterrorists might use the evidence itself to help them make the pandemic happen. This imbalance matches the risk communication truism that man-made hazards are usually perceived as much more risky than natural hazards. More about that below.

6. The risk of distracting policymakers and the public from other (non-H5N1) lab accident and bioterrorism risks. H5N1 is scary, but it’s not uniquely scary. There is something a little weird about worrying that the new potentially pandemic H5N1 strain might escape from the two labs that have it without worrying proportionately about the hundreds of military and civilian labs around the world that are playing with plague and other potentially catastrophic organisms. Similarly, terrorists (and governments) aren’t going to stop trying to develop biological weapons just because the NSABB succeeds in keeping them from learning how to harness H5N1. Is the H5N1 furor a much-needed pathway to broadened concern about the risks of laboratory accidents and biological weapons, or is it narrowing those concerns dangerously?

7. The risk of inhibiting the free flow of scientific research and publication – that is, the risk of censorship. It’s voluntary self-censorship, at least so far. Science and Nature are under no obligation to accept the NSABB’s recommendations. The authors of the two papers are under no obligation to accept them either; if all else fails, self-publication online has never been easier. But both editors and authors have indicated that they plan to comply, however reluctantly. Will more formal and less voluntary restrictions be forthcoming in the years ahead? Will other scientists start to self-censor? Will some avoid this research area entirely rather than face the prospect of controversy and censorship? Will the precedent expand to other research areas as well? Conversely, will the fight to protect their own intellectual freedom distract scientists from the Big Three pandemic risks (natural, intentional, and accidental) that H5N1 poses?

8. The risk of scaring people. Media coverage of the two papers (if there had been much) might have scared people about the prospect of a natural H5N1 pandemic. Media coverage of the NSABB report (and there has been some) may have scared people about the prospect of a bioterrorist H5N1 attack. I have no reason to think the NSABB intentionally diverted public attention and possible public alarm from one to the other, though it seems to have played out that way. In addition, the NSABB’s recommendation to add information about laboratory precautions to the papers may reflect a fear of scaring people about the prospect of a lab accident.

I do have opinions about some of these risks.

The risk of scaring people is of course the risk most directly related to risk communication. And what risk communication teaches is that it’s hard to scare people, that it’s useful to scare people if the risk is serious (in proportion to its seriousness), and that most people can cope with being scared. See my column on “Adjustment Reactions” and my column with Jody Lanard on “Fear of Fear.”

So I’m not very worried about the risk of scaring people.

I am a lot more worried about the risk of distracting people’s attention from natural pandemics to bioterrorism.

Just do an outrage assessment of natural pandemics versus terrorist attacks, especially in the wake of the mild swine flu pandemic (widely misperceived as a false alarm) on the one hand and the 9/11 ten-year anniversary on the other.

Natural pandemics are, well, natural; terrorism isn’t. The same technical risk – in this case H5N1 – is a far bigger source of outrage if it’s intentional or even accidental than if it’s natural. Look at people’s strong reactions to oil spills compared to their unconcern about oil seeps. Look at the difference between radon emitted by uranium-bearing rock under your house and radon emitted by a mining company’s waste pile near your house. Look at methane in your drinking water before versus after a gas company starts fracking nearby.

In addition, terrorism is more memorable and more dreaded than a naturally occurring event. And terrorism is morally relevant: it’s “evil” as well as “dangerous.” In a battle to arouse outrage, bioterrorism beats nature hands-down.

It isn’t as easy as it once was to rev people up about a possible terrorist attack. The NSABB report wasn’t big front-page news either. But a new terrorist threat strikes journalists and the public as a lot more newsworthy than a bunch of scientists and public health professionals saying yet again that they’ve done a study and they’re worried about a pandemic. Been there, done that.

Your comment suggests you’re worried about both the natural pandemic risk and the bioterrorism risk. (You don’t mention lab accidents.) You express hope that the controversy over the NSABB report may help fuel public concern about an H5N1 pandemic, perhaps leading to a more urgent search for a universal flu vaccine. I am worried about the possibility of exactly the opposite effect: that concern about H5N1 bioterrorism will preempt concern about a natural H5N1 pandemic.

(My worry about bioterrorism and lab accidents is already high, but these studies didn’t increase it. The studies did increase my worry about a natural H5N1 pandemic.)

I’d have liked to see the two studies provoke a public debate over pandemic preparedness. Instead, they have provoked a public debate over the responsibility of scientists to avoid doing or publishing studies that might turn out to be dangerous.

The furor over whether flu experts should be permitted to conduct and publish research that increases the risk of bioterrorism may deter many experts from following up on opportunities to investigate the possibility of a natural pandemic. Yes, the questions raised are intellectually interesting, and they’re potentially enormously important to public health. But the research is going to be controversial. Funding may be hard to get and harder to sustain; external review of proposed methodologies is bound to be painstaking; security precautions are bound to be burdensome; if the research ever gets done, getting it published is bound to be a fight; along the way there’ll be endless meetings and media interviews. All that isn’t going to sound like an attractive package to the typical lab scientist.

Somehow, the two studies got hijacked. They are being used to raise questions about research responsibility, about bioterrorism, even about laboratory accidents. These are all important questions. But the questions the researchers meant to raise – about the risk of a natural H5N1 pandemic – may be getting lost in the shuffle.

I’m also worried, though not as worried, about the risk of distracting people from non-H5N1 risks (non-H5N1 lab accidents and non-H5N1 bioterrorism). I’m not claiming that it’s silly to worry about H5N1 when plague is out there, only that it’s silly to worry about H5N1 and not about plague – and a daunting list of other, comparable threats. I hope policymakers and the public will see the NSABB controversy as a window on those other threats, rather than a distraction from them.

But these communication-related risks – the risk of scaring people; the risk of distracting their attention from one risk to another – are sideshows. The big risk assessment question, obviously, is the comparative risk of three possible sources of a catastrophic H5N1 pandemic: naturally occurring mutation, bioterrorist attack, and laboratory accident. My “opinion” on that would be just a guess.

How flu experts see the risks

Flu experts are the people most entitled to an opinion about the probability of a devastating natural H5N1 pandemic. I haven’t seen any recent survey data showing whether expert opinion on this question has been influenced by the two new studies. But there are no signs yet of an upwelling of expert alarm – no new calls for a Universal Flu Vaccine Manhattan Project, no announcements that any of the top people are suspending their current research in order to focus on the urgent implications of the two studies. Maybe things like that have been happening behind the scenes. Or maybe they will begin happening after the two papers are published. But I don’t see any signs of it yet.

Highly pathogenic H5N1 was first identified in Hong Kong in 1997 – but Hong Kong health officials killed every chicken they could find, and the virus seemed to disappear. In 2004 it reappeared, and this time it spread widely in birds. It also spread to a small number of people, and killed more than half of them. It was between 2004 and 2007 that flu experts were most acutely worried about the possibility of an imminent H5N1 pandemic.

But their worry subsided as the pandemic failed to materialize. Bird flu continued to kill a terrifyingly high percentage of the few people it infected, but it showed no signs of acquiring the ability to sustain a chain of human-to-human transmission. Most flu experts continued to put a high priority on monitoring H5N1 carefully – both the virus itself and every single human case. But their public warnings about the possibility of an H5N1 pandemic became more and more tentative, almost pro forma. Before 2007, flu experts sometimes talked about H5N1 as a “terrifying” virus that “kept them up nights.” I haven’t heard them talking that way in the past few years.

Of course there’s a spectrum; flu experts don’t march in lockstep on the probability of a catastrophic H5N1 pandemic. And it’s possible that some experts are very worried but have decided to keep their worries to themselves. Maybe they’re afraid of frightening the public. Maybe they’re afraid of being accused of trying to frighten the public. Or maybe they’re afraid that influenza alarmism has lost credibility. They warned about a potential bird flu pandemic in 2004–2007 and it never happened. They finally got a flu pandemic in 2009–2010 – swine flu – and it was mild. So maybe they’re reluctant to let their new (or ongoing) worry show.

The two 2011 studies have undoubtedly reignited some H5N1 concern among some experts, but I doubt the overall level of expert anxiety is anywhere near as high today as it was in 2004–2007.

Whatever the current level of flu experts’ concern about a natural H5N1 pandemic, I’m certain it is higher than their concern about an H5N1 lab accident or an H5N1 terrorist attack – though here again there’s a spectrum of expert opinion.

This is not the place to review the extensive evidence that people who work in laboratories tend to underestimate the likelihood and magnitude of laboratory accidents. It’s a truism of risk perception – and of life – that familiarity breeds contempt; if you spend your days in labs you tend to lose your visceral sense that labs are dangerous places. The problem of insufficient concern (that is, insufficient outrage) is compounded by insufficient reporting and insufficient training.

Note this overconfident statement by a bird flu transmission researcher who works in a BSL-3 Enhanced lab:

In such labs, all workers wear full-body suits and breathe through powered respirators, said Daniel Perez, a virologist at the University of Maryland in College Park who studies interspecies transmission of a different kind of bird flu, H9N2, in the same kind of facility. Air is purified coming in and out.

“There’s no chance for the virus to escape,” Perez said.

Three weeks later, Dr. Perez was equally overconfident that there will be an H5N1 pandemic some day:

Individual mutations are already found in wild [H5N1] viruses, though none yet has all the mutations required for human-to-human transmission, Maryland’s Perez said in an interview.

“But it is not hard to imagine that nature will eventually find a way to do that. It’s not a question of if, but when,” he said.

As for an H5N1 terrorist attack, understandably and perhaps inevitably it’s mainly bioterrorism experts who are worried about that. Flu experts – much less so.

I don’t take much comfort from that fact. I remember all too well the U.S. battle over smallpox vaccination in 2002–2003. Intelligence experts insisted that terrorists might have or get the smallpox virus and launch a smallpox attack, so it was essential to vaccinate the population. But they said their evidence about the probability of an attack was classified and we’d just have to take their word for it. Public health experts insisted in return that a smallpox attack was vanishingly unlikely, and vaccinating millions of people would do more harm than good. But they had no evidence at all about the probability of an attack; their hunch seemed to be motivated largely by their pride in having “eliminated” smallpox, their nervousness about having saved a few virus samples in various labs, their distrust and dislike of intelligence agencies, and other nontechnical factors.

Also crucial, in my judgment, was what I later called public health’s “Blind Spot for Bad Guys.” My 2005 column with that title uses smallpox vaccination as one of several examples where public health officials have shrugged off – or simply not noticed – the risk of terrorism.

The dominant example in that column is an incident that passed almost unnoticed at the time (and since). Another potentially pandemic flu strain, H2N2, was erroneously included as an unidentified sample in hundreds of proficiency test kits sent to hospital laboratories around the world so they could test their lab workers’ ability to identify Influenza A in lab samples. Fearful that a lab accident might launch a pandemic, public health agencies okayed an urgent Friday afternoon fax to all the labs, telling them which sample was the potentially pandemic one and instructing them to destroy it. It’s not hard to imagine a disaffected weekend tech assistant somewhere in the world reading the fax and then spiriting the deadly sample out of the lab and into the hands of Al Qaida. But the public health experts involved seem not to have noticed that possibility.

When you consider that many health professionals are famously blasé about the risk of lab accidents, this is a remarkable story. Even a lab accident risk was sufficient to completely preempt the terrorism risk in the minds of the people who wrote and approved the fax. So I certainly don’t trust flu experts to take bioterrorism risk seriously enough.

Bioterrorism experts, on the other hand, may be inclined to overestimate and overstate bioterrorism risk … and to shrug off the risk of a natural influenza pandemic. They have their own blind spots.

And that is perhaps the most important point here. Everybody has blind spots.

Technical experts tend to imagine that their own risk assessments are purely data-driven. They may realize (and may even acknowledge) that there’s a lot of uncertainty in the data. But they’re unlikely to realize how much their interpretations of the data are driven by things like values, professional biases, self-interest, and even outrage – not to mention all the cognitive biases and universal distortions that Daniel Kahneman and Amos Tversky dubbed “heuristics.”

The term “risk perception” is almost always used to refer to somebody else’s risk perception, especially when we think that the perception in question is mistaken. I “analyze” or “assess” a risk. You merely “perceive” it – which is to say, you misperceive it. Technical experts in particular talk a lot more about the general public’s “risk perceptions” than about their own.

But we are all stuck in our perceptions.

I don’t know which path to a devastating H5N1 pandemic is likeliest – random mutation, intentional attack, or laboratory accident. I do know that opinions about the two 2011 H5N1 ferret papers depend largely on which path you think is likeliest. And I know that which path you think is likeliest – and which path I think is likeliest, and which path the NSABB members think is likeliest, and which path flu experts think is likeliest – depends largely on risk perception factors that have very little to do with the evidence.

Getting health professionals to take blood-borne disease transmission seriously

name:Anonymous
This guestbook entry
is categorized as:

      link to Precaution Advocacy index

field:Health educator
date:January 18, 2012
location:New York, U.S.

comment:

I am working on a grant project with CDC and the Safe Injections Practices Coalition. The goal is to reduce the incidence of blood-borne disease transmission due to improper use of injection equipment, such as changing the needle but using the same syringe on multiple patients, accessing a multi-use vial with a syringe used on more than one patient, etc.

I’m sure you’re saying, “That doesn’t happen in the U.S., or only in rare circumstances.” The truth is, it DOES happen in an alarming number of instances, leading to outbreaks and the need for thousands of patients to get tested to determine their health status.

Anyway, that's the background.

I am trying to devise strategies to counter pervasive denial among providers who don’t think this is a problem. We’ve had training sessions where the chief clinician is insisting this is a waste of time – “we know what to do” – while his/her staff are telling us it definitely does occur.

Thus far I have recommended taking advantage of teachable moments (send reminder info when the issue is in the news; include recent headlines about injection-related outbreaks with educational communiqués). I have also suggested bringing the matter to the attention of malpractice insurers in the hope that they will help with educating their insureds and/or offer incentives for injection safety training.

We have also recruited representatives of professional societies whose members comprise the most likely offenders, and representatives of patient safety organizations onto a workgroup.

What am I missing? Can I play the cognitive dissonance card, and if so how? Any advice would be much appreciated!

peter responds:

The precaution advocacy recommendations you have come up with so far sound excellent to me. Let me elaborate on them a bit:

  • I agree with you that teachable moments are crucial; they’re always crucial in precaution advocacy. (See #14 in my column on “How to Warn Apathetic People.”) In the case at hand, you need teachable moments not just to remind apathetic people of an issue they’re not paying much attention to, but also to persuade people in a kind of denial that the problem is real. (I’ll come back to denial later.) It might help if you could get some of the health professionals implicated in your “bad example” clips to say, “Yeah, we didn’t think we had a problem either. And then, this!”
  • Insurers are a great advocate for you because they’re hard to dismiss as highfalutin experts endlessly and obsessively preaching the gospel of needle safety. When an insurer says “here’s how much we paid out in blood-borne disease transmission claims last year,” it sounds like business, not theory. If the insurer adds that it’ll cut the premium if the institution takes appropriate precautionary steps and hike the premium if the institution keeps on resisting, that sounds like business too.
  • I really like putting “representatives of professional societies whose members comprise the most likely offenders” onto your workgroup. Their presence tells resisters that they have peers who think the issue is important. And of course they ought to have solid firsthand advice on how to pierce the resistance. Consider going a step further: Add a couple of victims to the workgroup, plus a couple of perpetrators (maybe administrators whose institutions messed up, or individual perps who feel terrible about what happened and want to be poster children for “Never again!”). Helping prevent future accidents should be cathartic for the victims; it’s a great penance for the perps, and reassures them that they’re not alone. And of course both victims and perps have a unique credibility you can’t possibly match.

I’m not sure how you can best “play the cognitive dissonance card,” but your mention of incentives raises one possibility. Big incentives motivate behavior change but create no dissonance; if you pay me a lot of money to do X, I’ll do it for the money, feel comfortable about doing it for the money, and therefore feel no particular need to reassess my attitude about the value of X when there’s no money to be made.

Tiny incentives, on the other hand, don’t motivate any behavior change or attitude change at all.

The ideal incentive is in the middle, enough that I change my behavior but not enough that I’m comfortable telling myself I did it for the incentive. So I’m left wondering why I bothered. This “why did I do that?” feeling is, of course, what Leon Festinger called cognitive dissonance; it leads me to seek out information that X is a smart thing to do. Assuming there is appropriate information there to be found – and it’s your job to make sure there is – my (incentivized) behavioral commitment makes me a lot likelier to find the information, absorb it, and build a long-lasting pro-X attitude out of it. Once I have a built-in pro-X attitude, I no longer need a steady stream of incentives or a steady stream of information to keep me doing X. Now I do it because I believe in it. (But occasional reminder campaigns are wise anyway; not everyone will have developed a solid pro-X attitude.)

Diagnosing the barriers

When I tried to think of other recommendations you might consider, I soon backed up to a more fundamental question: Why are your target clinicians and administrators resisting your message?

Your comment suggests a possible answer: Maybe they think “that doesn’t happen in the U.S.” “That” in this sentence could have at least two meanings.

Maybe your audience doesn’t think nosocomial blood-borne disease outbreaks happen in the U.S. In that case, of course, your job is straightforward and comparatively easy: Convince them that such outbreaks are far more common (and far more destructive) than they imagine. It would probably pay to start by acknowledging that their opinion is widespread and natural, perhaps because most of the outbreaks are insufficiently publicized. It’s always a good idea to validate that somebody’s opinion isn’t foolish before you present evidence that it’s mistaken. This is a key step in the risk communication game I call “donkey.”

Or maybe your audience knows that nosocomial blood-borne disease outbreaks happen in the U.S., but doesn’t think the errors that lead to the outbreaks happen. That would be a very different and much more interesting error. It implies a serious misperception about efficacy: a conviction that U.S. healthcare institutions are doing everything right and still the outbreaks keep happening, so obviously the precautions aren’t very effective. If that’s your precaution advocacy problem, there’s no point in documenting or dramatizing that the outbreaks happen; your audience knows that already. Instead, you need to demonstrate (and dramatize) that the precautions work when they’re implemented, but all too often they’re not implemented, or implemented incorrectly.

There are other possibilities.

This one strikes me as among the most likely: Your audience may believe the following:

  a. Nosocomial outbreaks happen;
  b. The precautions work;
  c. Some other institutions don’t implement the precautions properly, which is why they get outbreaks; and
  d. Our institution does implement the precautions properly, so we needn’t worry.

The first three beliefs are sound; it’s (d) that’s foiling your education efforts.

If that’s what’s going on, you shouldn’t misuse your own scarce resources by trying to sell (a), (b), and (c). They’re already sold, and your job is to unsell (d).

One possibility is to confront (d) head-on. For example, you could find a way to make it safe for staff to tell the boss what they’re telling you on the sly: “Those sorts of screw-ups happen here too.” But that’s pretty confrontational, and it might just exacerbate the resistance.

So consider deflecting your challenge. Instead of claiming directly that “you” get the precautions wrong sometimes, talk about peer institutions that were overconfident about their own precautions … until the day they inadvertently launched an outbreak.

I have sometimes found it useful to skip (temporarily) both the effort to convince people that they’re vulnerable to X happening and the effort to teach them how to keep X from happening. Instead, I ask them to work on what they would do and say if X happened.

Ask your target clinicians and administrators to show you their plan for a big blood-borne outbreak – and a big blood-borne outbreak scandal – that traced back to them. If they haven’t got much of a plan, ask them to improvise one. Role-play not just rolling out their response, but also explaining how it happened, how sorry they are, and what they propose to do so it’ll never happen again.

After skeptics have spent a few hours imagining how they’d cope if X happened, they tend to be more receptive to the evidence that X might happen … and a lot more interested in hearing what additional steps they can take to make X less likely.

Of course you don’t want to imply that you think X – in this case, a blood-borne disease outbreak – is surely going to happen on their watch. You have no special reason to think it will; you only think it might – and they’re strongly defended against even that hypothesis. So put aside the questions of risk probability and risk prevention for a while, and focus first on risk response.

I have something more in mind here than any specific suggested intervention. My main point is that which interventions make sense depends on why you think your audience is resisting. It always pays to diagnose the barriers to your precaution advocacy efforts. Until you have figured out why your audience is resisting, you’re not likely to come up with the best ways to overcome their resistance.

Dealing with denial

So far I’ve focused mostly on ways to help convince clinicians and administrators that they really may have a problem (after analyzing why they think they don’t). But what if they already know they have a problem? What if they’re already aware that their institutions aren’t doing all that good a job of implementing the recommended precautions against blood-borne pathogen transmission – but they’re not willing to admit it, not to you and perhaps not even to themselves?

Your use of the phrase “pervasive denial among providers” in your comment suggests that this possibility is very much in your mind already.

So let’s take the word “denial” literally. Suppose the main barrier to your efforts isn’t apathy or overconfidence, but denial. Suppose you’re talking to people who can’t bear acknowledging, even to themselves, that they are not adequately protecting patients and staff against nosocomial blood-borne disease outbreaks.

Maybe they’re ashamed to admit they haven’t focused enough on training their people.

Maybe they’ve trained and retrained, their people keep making rookie mistakes anyway, and they’re ashamed to admit they haven’t a clue what else to try.

Maybe they’re pretty well convinced they’ve already tried everything, without success. This is a different sort of efficacy problem from the one I discussed earlier. It’s not that they don’t think the precautions work; it’s that they don’t think they’ll ever be able to get their people to manage the precautions properly. That’s a pretty uncomfortable thing to admit, so they may go into denial about it, pretending even to themselves that their people are managing the precautions just fine.

When you’re talking to people in denial, insisting ever-more-emphatically and ever-more-dramatically that they have a problem isn’t the answer. That works if what you’re up against is apathy. If it’s denial, you’ll just push your audience more deeply into denial. Nor will it help to tell them they’re in denial; for obvious reasons, people in denial deny their denial too.

So how do you lure people out of denial?

The single most important strategy is to legitimize the emotions they’re denying, thus reducing the need to keep denying them. You don’t just validate that preventing blood-borne pathogen accidents is difficult; you validate that it’s upsetting to contemplate this risk and the difficulty of preventing it. And if necessary you deflect the validation. “You probably find this hard to think about” may be too intrusive. “A lot of lab directors find this hard to think about” is more empathic; it validates what listeners are feeling without directly accusing them of feeling it.

Offering people things to do also helps reduce their denial by giving them a greater sense of control. As psychiatrists sometimes put it: “Action binds anxiety.” Offering a choice of things to do works better still, since it mobilizes not just our ability to act but also our ability to decide. It’s a common mistake to try to convince an audience that action is needed before identifying what actions are worth considering. That’s good logic, but it’s not so good psycho-logic. If people are in denial, it’s easier for them to contemplate some things they could do before they confront the need to do something.

Take a look at my seminar handouts on “16 Reasons Why Employees Sometimes Ignore Safety Procedures” and “24 Reasons Why Employers Sometimes Ignore Safety Procedures.” And since these handouts are telegraphic, check out the related website articles at the bottom of each list.

A lot of the “Attitude Dimensions of Safety” on both lists are denial-related. Employees may ignore safety because thinking about the precautions frightens them, or because taking the precautions feels cowardly to them, or because they think their friends would laugh at them. Employers may ignore safety because taking precautions now would exacerbate their guilty feelings about prior accidents, or because contemplating possible safety deficiencies arouses their ego-defensiveness, or because they’re unconsciously hostile to their workforce and believe that careless employees deserve to have accidents.

Does anything on these two lists strike you as a potential reason why improper use of injection equipment continues to be a serious problem in so many healthcare settings?

When smart people are acting stupid about safety – failing to pay attention to a safety problem that’s crying out for their attention – sometimes the problem is just apathy. But sometimes something more psychologically complicated than apathy is at the root of their safety inattention. It’s motivated inattention. Assessing the underlying motives behind their resistance is the key to designing an effective intervention strategy.

Copyright © 2012 by Peter M. Sandman
