Posted: June 5, 2012
Article Summary: The need to simplify technical content is not an acceptable excuse for “simplifying out” information that is essential to credibility – especially information that seems to contradict your message, and that will therefore undermine your credibility if you leave it out and your audience learns it elsewhere. The obligation to include that sort of information is called the communicative accuracy standard; the failure to include it might appropriately be called “misoversimplification.” The column distinguishes three levels of misoversimplification, depending partly on how controversial the issue is and partly on whether you’re on the warning (precaution advocacy) or reassuring (outrage management) side. The three levels are illustrated with infectious disease examples: whooping cough, bird flu, and polio.

Misoversimplification:
The Communicative Accuracy Standard
Distinguishes Simplifying from Misleading

At a campaign appearance just before the 2000 U.S. Presidential election, George W. Bush explained how he had defeated his primary opponent, John McCain. McCain’s advisors, he said, “misunderestimated me.” The malapropism stuck with Bush for his entire presidency, and “misunderestimation” seems fated to enter the English language as a word for really, really bad underestimation.

We need a similar word for really, really bad oversimplification – oversimplification that misleads. I cast my vote for “misoversimplification.”

This column is about how to simplify without misoversimplifying.

The Need to Simplify

Refusing to simplify isn’t an option. Experts who want to communicate effectively with non-experts have no choice but to simplify. This is as true for risk experts as it is for other experts – so simplification is a key risk communication skill.

When you’re explaining a risk to the crowd at a public meeting, for example, you must decide which technical details to include and which ones to omit. You can’t tell your audience everything you know; you know too much.

The same is true when you’re explaining a risk to a journalist. The less you simplify what you tell a reporter, the more the reporter will have to simplify what you told her (whether she understands it or not). Since you probably have better judgment than the reporter about which details are crucial to include and which simplifications are flat-out wrong, you are wise to do your own simplifying.

There is (simply) no way for public communications to retain all the details and qualifiers of professional communications. There’s not enough space; the public won’t sit still for the technical complexities, and won’t understand them if it does; reporters and editors know that and will never let those complexities through. All this is true for a public presentation or a newspaper interview, and it’s even truer for a broadcast sound bite.

My 2008 column on “Simplification Made Simple” outlines the basics of simplification:

  1. Recognize your reluctance to simplify.
  2. Make the resolution: Better you than anybody else.
  3. Figure out how simplified your message needs to be.
  4. Decide what you can’t afford to leave out.
  5. Be sure to include credibility-sensitive information.
  6. Where you can, offer multiple layers of complexity.
  7. Simplify your language.
  8. Simplify your graphics.
  9. Get help.
  10. Use signposts to keep your audience oriented.

(Most of these basics are also covered in my 1994 video, “Quantitative Risk Communication: Explaining the Data,” available on Vimeo.)

Deciding how much you need to simplify (#3 in my 2008 column) is a challenge. Interested people can tolerate a lot more complexity than uninterested people – so risk explanations for an apathetic audience may need to be extremely simplified to match the audience’s extremely limited attention span. But don’t “simplify out” a fascinating illustrative anecdote that might actually expand the audience’s attention span!

And even the generalization that interested people can tolerate more complexity has important exceptions. When “interested” morphs into “terrified,” for example, our ability to absorb complex information deteriorates; stress and anxiety can interfere with learning almost as much as apathy. Mistrust also reduces our appetite for complexity – we’re likely to see difficult-to-understand information not as an effort to enlighten us but as an effort to cow us into submission. Often, especially in emergencies, there isn’t enough time for leisurely explanations; evacuating us has to take precedence over instructing us. And sometimes the information is just beyond our comprehension, no matter how interested we may be.

While it may not be obvious how much to simplify in a given case, it’s obvious that you will need to simplify. But the obligation to simplify does not imply the right to mislead. Your goal is to simplify without misoversimplifying. The question is how.

This column elaborates on #5 in my 2008 column – the importance of not “simplifying out” information that has to stay in for the sake of credibility. One kind of credibility-sensitive information consists of things your listeners already know that reflect badly on you or your message. If you’re explaining why you think an accident is exceedingly unlikely, for example, you’d better mention that you had one a few years ago. Whatever “the elephant in the room” is – whatever people are mulling over as you speak, waiting to see if you’ll dare to mention it – mention it. At the very least, don’t imagine that simplification is an acceptable excuse for omitting it.

I want to focus mainly on a different kind of credibility-sensitive information: information your listeners don’t already know that will cast doubt on your credibility (and therefore your message) if they learn it later. The worst misoversimplifications occur when this sort of credibility-sensitive information is omitted.

Communicative Accuracy

The gold standard for how to simplify without misoversimplifying focuses on this kind of credibility-sensitive information. The standard is called “communicative accuracy.” It was named – and beautifully articulated – by one of the pioneers of communication theory, Warren Weaver, in the lead editorial of the March 7, 1958 issue of Science:

The events of the past few months have emphasized something we have known all along – that it is important for scientists to describe their activities to the public in such a way that they will be generally understandable and properly informative. This runs into the practical difficulty that some scientists, when they attempt a “popular” description of their labors and of their ideas, insist on achieving almost the same precision and completeness of statement which they would, quite properly, use in talking to their scientific colleagues….

It may be helpful to such scientists to suggest that they consider the concept of “communicative accuracy.” This concept rests upon the fact, not always recognized, that the effective accuracy of a written statement depends primarily upon the interpretation given to it by the reader. A statement may be said to have communicative accuracy, relative to a given audience of readers and hearers, if it fulfills two conditions. First, taking into account what the audience does and does not already know, it must take the audience closer to a correct understanding. The better an example of communicative accuracy it is, the more gain in understanding it will achieve – but the basic point is simply that it must gain ground in the right direction. Second, its inaccuracies (as judged at a more sophisticated level) must not mislead, must not be of a sort which will block subsequent and further progress toward the truth. Both of these criteria, moreover, are to be applied from the point of view of the audience, not from the more informed and properly more critical point of view of an expert.

It is Weaver’s second criterion that protects the audience from misoversimplification.

Every communication constructs a “knowledge structure” that the audience will use to assess future communications on the same topic. According to the communicative accuracy standard, simplification must not result in an unsustainable structure. While a particular communication doesn’t have to include all the available information on the topic, it does have to be compatible with all the available information on the topic – so that new information will fit into the structure it has created.

For example, suppose you’re making the case that dimethylmeatloaf doesn’t cause leukemia. You’re aware of nine studies that reached that conclusion, and one that reached the opposite conclusion. If you want to simplify your communication by discussing only two or three of the nine studies on the majority’s side, that’s fine. But you can’t simplify by leaving out the only study on the other side.

Your knowledge structure has to be that most but not all of the research supports your conclusion. If you leave out the discrepant study, you create a structure that doesn’t prepare people for that discrepant study. So they are misled now. And if and when they encounter the discrepant study, they must either reject it as incompatible with their knowledge structure or rebuild the structure. If they pick the first option, you have misled them permanently. If they pick the second option, you have forfeited your credibility and undermined your message.

Although the example is hypothetical, the situation isn’t. Here’s a real analogue from a recent consultation.

When a company thought it might have an asbestos problem at a facility it had purchased recently, it hired a local lab to do a screening test. The lab said yes, the sample looked like asbestos, and in high concentrations too. The company took immediate precautions to minimize employees’ exposure, and brought in a team of real asbestos experts to do a more detailed analysis. This time the news was much more reassuring: Most of the contamination was a far less dangerous asbestos-like mineral, and the actual asbestos problem was small and manageable.

Largely because rumors were already circulating about the contamination and what it might be, the company asked me to help plan what it should say to employees, neighbors, and other stakeholders. I pushed it to tell everybody about that misleadingly scary screening test. There was a lot of senior management resistance, so I laid out my case in a memo:

Anyone who hears the full story … and hears it promptly and straightforwardly reported by [the company] itself – should find that story reassuring: The company took the problem seriously, investigated the problem thoroughly, took appropriate precautions (actually excessive precautions, as it turned out), and reached reassuring conclusions.

But those who hear the story belatedly and piecemeal may well find it alarming. Worst case interpretation: The company did a secret asbestos study that reported a huge asbestos problem, so it found a different consultant to say it wasn’t a serious problem, suppressed the first study, and buried the controversy. It didn’t tell us about asbestos at all until we heard rumors and demanded to know. Then it told us about the reassuring study but not the alarming one. Only after some information about the alarming study finally leaked out did the company try to convince us that the second study was right and we should forget about the first one, the one it had kept secret.

It is a tried-and-true principle of risk communication that alarming information is seen as far more alarming when it has been suppressed. Or to put it differently: Secrecy generates mistrust; mistrust generates outrage; outrage leads people to overestimate hazard….

Companies that would never suppress alarming information they considered valid often think it is appropriate to suppress alarming information they are confident is erroneous or misleading. “Why tell people something that’s likely to scare them when the truth isn’t scary at all,” they reason. The answer is that the suppressed information tends to come out sooner or later. And once information has been unsuccessfully suppressed, it is extremely difficult to convince people that that information is misleading. The principle here is straightforward but counterintuitive: All information that seems to contradict a technical conclusion must be presented and explained at the same time as the conclusion is advanced. If you advance the conclusion first, and then apparently contradictory information you have suppressed belatedly emerges, your chances of convincingly explaining away the contradiction are slim.

I had other recommendations as well – to discuss the expert analysis with the laboratory that had done the screening test and make sure that lab agreed (if it did) that the screening test had been mistaken; to bring in another expert, preferably one with activist leanings, to confirm the expert analysis; to invite employees and neighbors to participate in any follow-up analyses that were done; to acknowledge the stigma attached to the word “asbestos”; etc. But my core recommendation was grounded explicitly in Weaver’s communicative accuracy principle: to make sure the scary screening study wasn’t “simplified out” of the company’s account of its investigation.

Decades ago I was peripherally involved in another controversy where the communicative accuracy standard could have saved a chemical company a lot of misery (and money). The company had manufactured and marketed a pesticide that seemed to be doing harm to non-targeted plants. After some owners of damaged crops filed lawsuits, the company initiated its own proprietary research to try to figure out what had gone wrong. Early results suggested that yes, the pesticide was probably at fault. But further analysis and follow-up studies convinced the company that the pesticide wasn’t the problem after all. When faced with legal discovery demands for all research evidence in the company’s possession, top management decided that the early preliminary reports had been superseded by later, better ones and didn’t have to be turned over to the plaintiffs.

In the years that followed, several judges emphatically disagreed. Heavy sanctions were imposed on the company, and even plaintiffs who had already settled were allowed to reopen their cases in light of the previously suppressed information. The technical question of whether the pesticide had done the damage became almost secondary to the legal and ethical risk communication judgment that the pesticide’s manufacturer had hidden damaging evidence it should have disclosed.

The communicative accuracy standard is a reliable guide to what you can’t afford to “simplify out.” But it’s profoundly counterintuitive. It requires you to include precisely the information you’re most desirous of excluding: information that conflicts with your core message. It requires you to do so even when you’re confident that your core message is accurate and that the questionable information you want to exclude would only mislead people. And it requires you to think ahead and think empathically – to ask yourself what it will be like for members of your audience when they learn later from someone else the information you’re considering not telling them now.

Misleading toward the Truth

I’m not talking here about consciously, intentionally misleading people – trying to make half-truths look like truths by leaving out the other side when you know perfectly well the other side has merit too. That happens a lot, of course, but it’s not complicated. It’s not an error. It’s simply dishonest.

Experts who misoversimplify aren’t usually trying to mislead their audience. On the contrary: They usually think the information they’re leaving out is likely to mislead their audience. That’s why they want to leave it out.

In previous writing I have sometimes called this “misleading toward the truth.” That term first appeared in a March 2004 column, written jointly with my wife and colleague Jody Lanard, on the U.S. Department of Agriculture’s communications after the first U.S. “mad cow” was discovered in December 2003. Mad cow disease, we wrote, is not a significant public health threat. But some of the facts surrounding the discovery of that first mad cow could easily have led people to think otherwise. And so the USDA felt entitled to say misleading things. We continued:

The USDA knew, confidently and justifiably, that mad cow disease in the U.S. was not a widespread threat to human health. It didn’t just guess this was true and get lucky. It knew it was true. But it feared that the unvarnished facts … might lead people to mistakenly conclude that the risk was sizable. To keep us from making this mistake, it carefully shaped the information it provided. It misled us about a number of relevant facts. Usually if not always, it managed to do so without quite lying….

So what’s the problem? Misleading people, even toward the truth, is a very dangerous behavior. If and when people learn they have been misled, they have great trouble thereafter believing the truth they were misled toward. If and when they discover that the company or agency they have been listening to cannot be trusted, they jump to the conclusion that the facts it withheld or papered over must be damning. In our field, risk communication, this is predictable – as sound as Sound Science gets.

Misoversimplification – or misleading toward the truth – is the antithesis of the communicative accuracy standard.

The essence of misoversimplification is the temptation to omit (or distort) information you fear the public will misinterpret if you include it. The essence of communicative accuracy is the recommendation to include information you fear the public will misinterpret if you omit (or distort) it and it later comes to light.

Both are strategies for coping with information complexity – with accurate facts that may well lead people to inaccurate conclusions. Do you acknowledge those facts and try to explain why you don’t think they mean what people might think they mean? Or do you suppress them and hope people don’t ever learn them?

The two strategies differ on at least three dimensions:

  • On an ethical dimension, suppressing awkward facts may be ethically justifiable under some circumstances, but it requires thought-out ethical justification. Acknowledging those facts is self-validating, an obviously honorable thing to do.
  • On a relationship dimension, telling people less than the whole truth as you know it reveals a kind of contempt; it embodies what Martin Buber calls an I-It relationship. Telling the whole truth is intrinsically respectful; it is I-Thou, treating people as people, not as objects or pawns.
  • On an outcome dimension, withholding information that might arouse controversy or confusion is a risky strategy. If you get away with it, you got away with it. But if people find out anyway, you’re in much worse trouble than if you’d come clean in the first place. Acknowledging the other side of the argument from the start is the more risk-averse of the two strategies.

For all three reasons, misoversimplification is always bad risk communication. But not all misoversimplifications are equally bad.

Levels of Misoversimplification

I want to distinguish three levels of misoversimplification, running from bad to worse to much worse. I will illustrate all three with public health examples.

Bad: The March of Dimes misoversimplifies whooping cough risk

The most benign level of misoversimplification is when you’re trying to arouse concern in an apathetic audience on an issue that isn’t particularly controversial.

For example, a several-years-old March of Dimes TV commercial (paid for by Sanofi Pasteur) that’s still running urges adult relatives of young children to get vaccinated against pertussis (whooping cough). This is an important and timely message, since whooping cough is making a big comeback in the U.S. right now. Other than the sound of a baby’s tortured coughing, the ad relies most heavily on this statistic: “Up to 80%” of infants with pertussis “get it from family members.” Unspoken small-type text on the screen qualifies the claim: “When a source was identified.”

Leave aside the “up to…” hedge and focus on the ad’s other qualifier, “when a source was identified.” Logic says the person who gave a child a disease is a lot likelier to be identified if it’s a family member than if it’s, say, another child in a toy store or a fellow passenger on a bus. So I checked out the study from which the 80% figure was drawn. The study authors acknowledge that they “could not identify a source case in 22.0%–51.6% of infants,” and say these unidentified sources “suggest that transmission from casual contact may be an important source of transmission.”
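A quick back-of-the-envelope calculation shows how much that unspoken qualifier matters. The sketch below (in Python) uses the study’s own range for unidentified sources; the assumption that the ad’s full 80% figure applies to every case with an identified source is my own simplification, made only to illustrate the arithmetic.

# A hedged back-of-the-envelope sketch, not a data analysis. The range for
# unidentified sources (22.0%-51.6%) is the study's own; assuming the ad's
# 80% figure holds for every identified source is my simplification.

family_share_of_identified = 0.80              # the ad's "up to 80%" claim

for unidentified_fraction in (0.220, 0.516):   # the study's reported range
    identified_fraction = 1.0 - unidentified_fraction
    # Family-source cases we can actually count, as a share of ALL cases:
    counted_family_share = family_share_of_identified * identified_fraction
    print(f"With {unidentified_fraction:.1%} of sources unidentified, "
          f"identified family sources account for {counted_family_share:.0%} "
          f"of all infant cases.")

# Result: roughly 62% at one end of the range and 39% at the other - not
# "four-fifths." Family members may lurk among the unidentified sources too,
# but so may casual contacts, which is precisely the study authors' point.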

The March of Dimes ad clearly and intentionally leaves the misleading impression that relatives account for four-fifths of infant cases of whooping cough. But I think it’s still pretty benign. I’m not giving the ad a pass because its creators stuck a “when a source was identified” qualifier on the screen; that’s just a way of misleading without actually lying. Two other factors make the ad comparatively benign (though still unwise):

  • The ad is trying to arouse concern about a genuinely serious risk in order to convince people to take an appropriate precaution (getting vaccinated). Overstated alarm about a risk you consider serious is a lot more forgivable than overstated reassurance about a risk you consider small. The example I always give: A smoke alarm that goes off when there’s no fire is a minor inconvenience, whereas a smoke alarm that doesn’t go off when there’s a serious fire is a disaster. We calibrate smoke alarms to go off too much so they won’t miss a fire. We similarly calibrate activists and public health agencies – anyone who’s in the business of issuing warnings – to err on the alarming side. Alarmist misoversimplification is still bad risk communication – but it’s not as bad as reassuring misoversimplification.
  • The ad addresses a low-controversy issue. There aren’t a lot of critics (though there are some) arguing passionately that adults with an infant in the house shouldn’t get vaccinated against pertussis. When controversy is low, people can learn by successive approximation. Earlier oversimplifications get corrected as we learn more – and we’re unlikely to feel betrayed by those earlier oversimplifications because the issue isn’t pushing people’s hot buttons. No respectable activist groups are out there accusing the March of Dimes and Sanofi Pasteur of hoodwinking people into getting a dangerous shot.

“Comparatively benign” or not, I still urge clients to avoid even this sort of misoversimplification.

Worse: The World Health Organization misoversimplifies bird flu risk

Misoversimplification is a more serious infraction when the issue is controversial. High controversy means that much of the audience is listening carefully, waiting to see if you’re going to acknowledge the other side’s best arguments or if you’re going to pretend those arguments don’t exist. And high controversy means there are opponents waiting to pounce if you choose the “pretend” strategy. So you need to be careful not to misoversimplify – even if you’re on the alarming side of the controversy.

A good bad example comes from the World Health Organization’s recent pronouncements on H5N1 avian influenza. Popularly known as bird flu, H5N1 is considered by many experts to pose a huge potential human health risk. Ever since it started spreading widely (in birds) in 2004, WHO has been in the forefront of efforts to warn the world’s governments and people to prepare for the possibility of a bird flu pandemic.

But bird flu warnings are controversial today – a lot more controversial than whooping cough warnings – and bird flu warnings from the World Health Organization are especially controversial. For one thing, bird flu hasn’t shown any signs yet of spreading easily among humans, so in hindsight many people (and many journalists) feel that WHO’s pandemic warnings in 2004–2007 were false alarms.

Perhaps even more damaging, the world did finally get a flu pandemic in 2009–2010, the swine flu pandemic. It was genuinely scary at first but turned out pretty mild, killing fewer (though younger) people than an average flu season. But WHO kept sounding the alarm about swine flu too, leading many (especially in Europe) to conclude that swine flu was a “fake pandemic” foisted on the world by the World Health Organization to help Big Pharma sell vaccine – a preposterous notion to those who understand the prickly relationship between WHO and Big Pharma. Although WHO was justified in continuing to warn that swine flu might become more virulent, it failed to acknowledge what was obvious to almost everyone: that the actual circulating swine flu virus was relatively mild.

Bird flu was temporarily newsworthy again in 2011–2012, when two teams of scientists managed to bioengineer the first H5N1 mutations capable of transmitting easily between mammals. If the two new flu strains could travel in the air from one laboratory ferret to another, there was reason to worry that they might also be able to transmit from human to human outside the lab. Questions were raised about whether the two papers should be published and the work pursued: The research might help scientists predict, prepare for, or even prevent an H5N1 pandemic – but it might also cause an H5N1 pandemic if a transmissible strain escaped in a laboratory accident or got into the hands of terrorists or unstable lab workers. In the battle that ensued, WHO was firmly on the side of publishing the papers and continuing the research – which turned out to be the winning side.

The scariest fact about H5N1 is this: An astounding 60% of people whose cases of H5N1 were laboratory-confirmed died of the disease. The specter of a disease as contagious as ordinary flu that kills three-fifths of the people who catch it is terrifying. But H5N1 isn’t as contagious as ordinary flu – at least not yet. It’s a bird flu, not a human flu. It’s extremely difficult for humans to catch. There have been millions upon millions of sick birds, but only a few hundred laboratory-confirmed human cases so far.

Note the term “laboratory-confirmed” in the preceding paragraph. The World Health Organization’s “case definition” of a human H5N1 case requires lab confirmation. Cases that never get to the lab don’t get counted. Which cases are likeliest to get to the lab? The most serious ones, obviously – the ones where the patient is likeliest to die. Mild or asymptomatic cases are far less likely to be counted. So if there have been any mild or asymptomatic H5N1 cases at all, the WHO 60% case fatality rate probably overstates the actual deadliness of the disease. The lab count would have missed some H5N1 patients who died, but it would have missed more H5N1 patients who recovered or never even got sick.

And if there have been lots and lots of mild or asymptomatic cases, the WHO 60% figure could be way, way too high – that is, H5N1 could be a lot less deadly than WHO is telling us. (A lot of mild or asymptomatic cases would also mean that H5N1 is easier to catch than previously believed. But we’re focusing here on lethality.)
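The arithmetic behind that worry is simple division. Here is a hypothetical illustration (in Python): the starting numbers are invented to match “a few hundred confirmed cases” and the 60% figure, and the counts of missed mild cases are pure what-ifs, not estimates of the real number of undetected infections.

# Hypothetical numbers only: 600 confirmed cases and 360 deaths are chosen to
# match "a few hundred" cases and the familiar 60% figure; the missed-case
# counts are what-ifs, not estimates.

confirmed_cases = 600
deaths = 360                      # 360 / 600 = the reported 60% CFR

for missed_mild_cases in (0, 200, 1_000, 10_000):
    all_cases = confirmed_cases + missed_mild_cases
    cfr = deaths / all_cases      # mild cases enlarge only the denominator
    print(f"{missed_mild_cases:>6} undetected mild cases -> CFR {cfr:.0%}")

# A modest number of missed cases nudges the 60% down a little (still far
# deadlier than any seasonal flu); an enormous number would collapse it.
# Which of those is closer to reality is exactly what the experts disputed.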

Experts know this is a silly question to argue about, since the deadliness of H5N1 now (transmitted rarely from birds to humans) isn’t especially predictive of how deadly it would be after mutating in a way that made it readily transmissible between humans. If we ever have an H5N1 pandemic, we will care deeply about what percentage of the population catches it, and just as deeply about what percentage of those who catch it have undetectable cases, mild cases, severe cases, or deadly cases. All we really care about now is that almost nobody catches it.

Nonetheless, the number of mild or asymptomatic H5N1 cases so far became a hotly debated flashpoint in the battle over whether the two bioengineering studies should be published. Experts disagreed passionately about the answer, even though they knew perfectly well that the answer made very little difference, since it wasn’t predictive of what a human-transmitted H5N1 virus might do. Both sides cited studies in support of their contentions. Most experts seemed to think the 60% figure was probably a little high; a minority claimed it was way too high.

Although the battle over the validity of the 60% case fatality rate is clearly a tempest in a teapot, it is a tempest in a teapot of which WHO is thoroughly aware. The 60% CFR is WHO’s number, grounded in WHO’s H5N1 case definition.

Keep in mind that WHO says it considers it “critical” to “increase public awareness and understanding” of why the H5N1 bioengineering studies that caused the brouhaha need to be published and the work continued. WHO wants people to realize that the high lethality of H5N1 makes studying its transmissibility all the more important.

Regrettably, WHO’s efforts to accomplish this goal consistently misoversimplify the 60% figure. Consider, for example, three World Health Organization documents released in the last week of May 2012.

Here’s the very beginning of WHO’s “FAQs: H5N1 research issues”:

Q1: What has happened recently to raise concerns about H5N1 research?
Scientists have been trying to understand why the H5N1 virus does not transmit easily between humans and to learn what genetic changes could make it easily transmissible. H5N1 infection in humans is a serious public health concern. Human cases are sporadic but the fatality rate is about 60%.

Here’s the very beginning of WHO’s “FAQs: H5N1 influenza”:

Q1: What is H5N1?
H5N1 is a type of influenza virus that causes a highly infectious, severe respiratory disease in birds called avian influenza (or “bird flu”). Human cases of H5N1 avian influenza occur occasionally, but it is difficult to transmit the infection from person to person. When people do become infected, the mortality rate is about 60%.

And here’s the very beginning of WHO’s “H5N1 research issues”:

H5N1 avian influenza is an infectious disease of birds that can be spread to people, but is difficult to transmit from person to person. Almost all people with H5N1 infection have had close contact with infected birds or H5N1-contaminated environments. When people do become infected, the mortality rate is about 60%.

None of these three documents mentions that the 60% figure is based exclusively on laboratory-confirmed cases. But merely mentioning that fact would have helped only a little. It would have made the three documents technically more honest but nonetheless misleading. They would still fall far short of the communicative accuracy standard because they would still leave people unaware that the 60% mortality rate for people who catch bird flu is a controversial number.

To satisfy the communicative accuracy standard and avoid misoversimplification, the documents needed to say:

  • Roughly 60% of laboratory-confirmed H5N1 human cases have died. This is a terrifyingly high percentage – but there have been only a few hundred cases.
  • The real death rate of H5N1 is probably lower than 60%, since mild or asymptomatic cases are unlikely to get lab confirmation. Some experts think there are a lot of mild or asymptomatic cases, but most experts think there are comparatively few. If the majority is right, the real death rate of H5N1 is a little lower than 60% but still much higher than the death rate of any other influenza strain we know of.
  • For now, H5N1 is hard to catch but very deadly. Nobody knows how its deadliness might change if it ever mutated in a way that made it readily transmissible between humans.

Instead, WHO stuck to its unadorned, unexplained 60% number, without mentioning the expert debate over whether it’s the right number.

This will undermine the credibility of the three new documents for readers who already know about the debate. But the effect on readers who don’t know better and uncritically accept the 60% figure could be even worse. If they stay interested, sooner or later they will run across information about the hotly debated number of mild or asymptomatic cases that never made it into the denominator of WHO’s case fatality rate. Their belated discovery that 60% is an overstatement may lead them to conclude that the real number is not just a little lower but much, much lower. It may lead them to conclude that H5N1 is not a serious risk after all. It may lead them to conclude that WHO is not a trustworthy source on the seriousness of H5N1 … and perhaps not a trustworthy source period.

The timing of these three WHO informational documents with their misoversimplified 60% H5N1 case fatality rate – after months of debate over the validity of that number – suggests a high level of contempt for communicative accuracy. In its efforts to raise public awareness about the H5N1 research, WHO chooses to misoversimplify.

Much worse: The international polio eradication campaign misoversimplifies polio vaccine risk

Misoversimplifying your side of a controversial issue is always a big mistake. But if you’re on the alarming side of a controversy about a risk you genuinely and rationally consider serious, it’s usually a survivable mistake. Even though accurate warnings are more sustainable and ultimately more effective than misoversimplified warnings, at least misoversimplified warnings get people to pay attention to risks they have been shrugging off. The more controversial the risk, the greater the number of people who know already or will find out later that your warning was misoversimplified, with resulting damage to your credibility. Still, any rational and sincere warning has a laudable purpose, and if it doesn’t make factually false claims it is arguably a service to the people being warned – even if it misoversimplifies a controversial risk.

A misoversimplified reassurance is more credibility-threatening than a misoversimplified warning. It aims at persuading people to pay less attention to a risk they have been worried about. Worried people are a better informed and more skeptical audience than apathetic people. So the percentage of a worried audience that will catch a misoversimplified reassurance is a lot higher than the percentage of an apathetic audience that will catch a misoversimplified warning. And those who didn’t catch it will be quickly brought up to speed by those who did.

Moreover, a misoversimplified reassurance is more culpable than a misoversimplified warning. Shrugging off a risk is usually a bigger threat to health and safety than over-worrying about one. So most people are readier to forgive misoversimplified warnings (especially about big risks) than misoversimplified reassurances (even about small risks). In the endless battles between corporate polluters and environmental activists, for example, just about everyone realizes that both sides make a lot of one-sided claims. But people are mostly okay with activists whose messages leave out evidence that the pollution might not be so bad. We get irate at companies that leave out evidence that it might be deadly.

Bottom line: Reassuring risk communication (outrage management) requires closer adherence to the communicative accuracy standard than alarming risk communication (precaution advocacy). When you’re on the reassuring side of a risk controversy, “simplifying out” the other side’s sound arguments can be devastating to your cause, even unrecoverable.

Most public health agencies spend more time warning people than reassuring people. Their risk communication stock-in-trade is trying to get the public to take serious disease risks more seriously. But they do find themselves on the reassuring side of some risk controversies – for example, controversies over the safety of vaccines.

For the record: I am fervently pro-vaccination. I am not aware of any vaccine in common use that does more harm than good; most do orders of magnitude more good than harm. This isn’t a professional risk communication judgment, obviously. It’s an amateur medical judgment. I just want my bias on the table: I believe vaccines do enormously more good than harm.

But vaccines do some harm. And when public health agencies misoversimplify vaccine safety by hiding or ignoring or understating the ways vaccines do harm, they give the anti-vaccination movement its best ammunition.

A spectacular example is the ongoing campaign to eradicate polio.

There are two kinds of polio vaccines, injected (IPV) and oral (OPV). My generation thinks of them as the Salk vaccine and the Sabin vaccine respectively. The injected vaccine is made from dead polio virus, while the oral vaccine is made from polio virus that has been weakened but is still alive.

The OPV approach has two big advantages and one big disadvantage compared to the IPV approach. The first advantage of OPV is that it’s cheaper and better adapted to conditions in the developing world – for example, it takes much less expertise to administer. The second advantage is that OPV “sheds” – the “vaccine virus” survives in the feces of people who have been vaccinated recently. So unvaccinated children who come into contact with the feces of vaccinated children (in feces-contaminated water, for example) can get second-hand protection from polio. Given that hygiene is often poor in the developing countries where polio is still circulating, and given that vaccination campaigns in those countries miss a significant number of children, the fact that shedding leads to “incidental polio vaccination” is a very significant advantage. (It also raises ethical questions, of course, since this incidental vaccination isn’t voluntary.)

The disadvantage of OPV is that it can give you polio. That happens only very seldom. The oral polio vaccine prevents vastly more cases of polio than it causes. But it does cause some cases. And thanks to shedding, even a child who has never been knowingly vaccinated can get polio from the oral polio vaccine.

When a vaccinee gets polio from the vaccine, that’s called vaccine-associated paralytic polio (VAPP). When the vaccine’s polio virus sheds and starts a polio outbreak among non-vaccinees, that’s called vaccine-derived poliovirus (VDPV). VAPP has been known to exist for decades; in a 1982 study, WHO found that “the risk was less than 1 case per million children receiving the [oral] vaccine but varied between 0.5 and 3.4 per million in the individual countries.” VDPV cases and outbreaks have been well-documented only since early in this century.
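WHO’s own per-million figures make the “rare but real” arithmetic easy to sketch (in Python below). The campaign sizes are hypothetical round numbers, not counts from any actual country’s program.

# The 0.5-3.4 per million range is WHO's 1982 VAPP figure quoted above; the
# campaign sizes are hypothetical round numbers chosen for illustration.

vapp_rate_low, vapp_rate_high = 0.5e-6, 3.4e-6   # cases per child vaccinated

for children_vaccinated in (5_000_000, 25_000_000, 100_000_000):
    expected_low = vapp_rate_low * children_vaccinated
    expected_high = vapp_rate_high * children_vaccinated
    print(f"{children_vaccinated:>11,} children vaccinated -> "
          f"about {expected_low:.0f} to {expected_high:.0f} expected VAPP cases")

# Vanishingly small odds for any individual child, yet a mass campaign should
# still expect a handful of vaccine-caused cases alongside the vastly larger
# number of cases prevented - rare, but real.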

Most of the organizations at the forefront of the global fight against polio acknowledge the existence of VAPP and VDPV somewhere on their websites.

But VAPP and VDPV are rarely if ever acknowledged when frontline vaccinators show up on people’s doorsteps in Nigeria, Pakistan, and Afghanistan, countries where polio is still circulating. And they are rarely if ever acknowledged in the continuing OPV campaigns in countries (like Indonesia) that think they have eliminated wild polio but are worried about its possible return.

VAPP wasn’t routinely acknowledged in the U.S. in the 1960s, either, when tens of millions of American children got the Sabin vaccine without any VAPP warning to their parents. But in the U.S. in the 1960s, the scourge of polio was vivid in people’s minds, and nearly everyone was incredibly grateful for the Salk and Sabin vaccines. Despite some genuine safety problems with the injected vaccine, very few parents in the 1960s declined polio vaccination for their children.

By contrast, the safety of the oral polio vaccine has been an extremely contentious issue in the developing world in recent years. Significant numbers of parents refuse to let their children be vaccinated, often because they believe the vaccine is dangerous … and sometimes because they believe it is intentionally dangerous, a western genocidal plot.

The germ of truth in that paranoid fantasy is, of course, VAPP and VDPV – plus the fact that many developed countries (including the U.S.) no longer license OPV and use IPV exclusively because of the risk of VAPP and VDPV. This isn’t hypocritical, but it is complicated. The public health consensus is that the oral vaccine makes sense in poor countries and where wild polio is still circulating, but should eventually be replaced with the injected vaccine when countries have bigger public health budgets, when they can sustain very high polio vaccination rates, and when the only polio cases they’re seeing are VAPP and VDPV. (Note: This is an oversimplification, and possibly an unintentional misoversimplification, of health officials’ very complex and evolving plans for how and when to stop using OPV.)

The risk communication question is how anti-polio campaigners in poor developing countries should respond to this germ of truth. Should they inform the parents of prospective vaccinees about the existence of VAPP and VDPV? Or should they cover it up?

The communicative accuracy standard says tell the parents. The policy of the Global Polio Eradication Initiative and its member organizations (the World Health Organization, the U.S. Centers for Disease Control and Prevention, UNICEF, and Rotary International) is to cover it up.

An impressive example of official polio misoversimplification occurred in Indonesia in 2005. The second round of a massive nationwide polio vaccination campaign was turning out far less successful than the first round, largely because parents feared that the oral polio vaccine might make their children sick. Indonesian health officials blamed the lowered vaccination rates on “sensational” media coverage of the deaths and hospitalizations of a handful of children who had been vaccinated in the first round. The Ministry of Health insisted this was just a coincidence.

David Heymann, then the WHO “Senior Advisor for Polio Eradication,” supported the health ministry’s claim, explaining:

There are 2,000 children who die each day in Indonesia and it is natural that when you are immunising, some of those children who would die normally would die.

That’s literally true. It’s carefully worded so as not to constitute an outright lie. But Dr. Heymann was explicitly trying to convince Indonesian parents that VAPP didn’t exist. He was telling one truth in order to steer the Indonesian public away from a different truth: When millions of children are vaccinated with OPV, a few of them are going to catch polio from the vaccine.

WHO’s Indonesia representative Georg Petersen seconded Dr. Heymann’s misoversimplified talking point. “This is one of the safest vaccines,” Dr. Petersen said. “But we also have to remind people that everyday in Indonesia, more than 2,000 children die of other causes. So when we vaccinate absolutely every child in the country, of course, some children who got the vaccine, might die of other causes afterwards.”

In an Associated Press story on Indonesia’s polio vaccination controversy, UNICEF’s Claire Hajaj told reporter Michael Casey that “the biggest challenge is public trust.” She went on: “The key is that community fears get addressed and they don’t turn into widespread vaccine avoidance.” Hajaj didn’t say whether she thought misoversimplification about OPV safety was building or undermining Indonesians’ trust in polio vaccination.

I can’t prove that covering up VAPP and VDPV has backfired on the international polio eradication campaign. I can’t prove that Indonesian parents would have been likelier to vaccinate their children if told candidly over the decades that VAPP was rare but real, instead of being suddenly faced with a red-hot argument in which the media said the vaccine was making some kids sick and health officials insisted (falsely) that it could only have been a coincidence. I can’t prove that public concern about vaccine safety is attributable in part to the failure of health officials to adhere to the communicative accuracy principle. And I can’t prove that public health officials and international health agencies are more widely mistrusted today and will become even more widely mistrusted in the future because of their misoversimplification habit. I am convinced that these things are true – but I can’t prove them.

What I can prove is that covering up VAPP and VDPV is official policy of the Global Polio Eradication Initiative.

The Initiative’s January 2011 “Training Manual for National/Regional Supervisors and Monitors” (a PDF apparently developed for a national polio vaccination program in Yemen) is on the Initiative’s website – and it says so.

The manual instructs readers: “You should memorize the following Questions and Answers.”

Here is #8, in its entirety. This is what trainees were to memorize to tell any parent who asked:

Question
Are there side effects from the vaccine?

Answer
No, there are no side effects. Experience shows that if children develop diseases or symptoms after vaccination, it is a mere coincidence. They would have developed these diseases anyway, with or without vaccination.

I will give one final example of polio vaccination misoversimplification. In late September 2007, the world press learned that the largest-ever known outbreak of VDPV had been occurring in northern Nigeria since 2005 – and that WHO, the CDC, and Nigerian health officials had known about it for more than a year. At the moment health officials finally acknowledged the outbreak publicly, well over half the cases had occurred after the officials themselves had known about it – that is, while the VDPV outbreak was a secret. At that point, there were over 90 known cases in Nigeria’s VDPV outbreak. Eventually there would be more than 350.

Why did officials keep the outbreak secret? Olen Kew of the CDC told Science reporter Leslie Roberts: “There were legitimate concerns that anti-polio vaccination rumors would be rekindled by an incomplete explanation of the cause of the VDPV outbreak.” In an Associated Press story by Maria Cheng, Kew went even further: “The people who are against immunisation may seize on anything that could strengthen their position, even if it’s scientifically untenable.” In other words, it’s okay to hide the VDPV outbreak so opponents won’t be able to exaggerate it.

Cheng’s story also reported that “Dr. David Heymann, WHO’s top polio official, said that because WHO considered the Nigerian outbreak to be an ‘operational’ issue, it was unnecessary to share the information beyond its scientific committees.”

But the Science article noted that several polio experts disagreed with the decision to keep quiet. Some felt that other scientists needed to know about the VDPV outbreak in order to better understand the risks of OPV. And at least one expert told Science that the Nigerian public deserved to know as well:

Polio expert Oyewale Tomori, vice chancellor of Redeemer’s University near Lagos and chair of Nigeria’s expert advisory committee for polio eradication, says he has been urging officials to go public. He worries that secrecy might fuel suspicions about vaccine safety instead of reinforcing the need to intensify immunizations in Nigeria.

Going public with a long-suppressed secret is, of course, more difficult and more damaging than not suppressing the secret in the first place. That’s the core vaccine risk communication problem the Global Polio Eradication Initiative faces today: It can’t just start including the truth about VAPP and VDPV in its future messaging. It can’t tell the truth about VAPP and VDPV without also telling the truth about its long history of intentional, deceitful misoversimplification.

“Simplifying out” information that conflicts with your message is addictive. I think that’s the single best argument against misoversimplification and in favor of the communicative accuracy standard. Once you don’t admit something you should have admitted, it gets harder and harder to admit it later. When it finally emerges, people don’t just learn it. They learn it in a way that makes it loom much bigger than it would have seemed if you’d been mentioning it all along. And they learn that you have been habitually dishonest.

My wife and colleague Jody Lanard contributed research to this column.

Copyright © 2012 by Peter M. Sandman
