Posted: October 19, 2018
Article Summary: The fundamental question this column poses is whether to post informational “warnings” about a risk that many people consider serious but most experts don’t. The column focuses on a specific example: labeling foods that contain genetically modified ingredients. The column concedes that GM food labels have a “hazard salience” effect that leads to increased concern. But the labels also have an “outrage reduction” effect – a product of control, voluntariness, familiarity, trust, and cognitive dissonance – that leads to decreased concern. Usually the outrage reduction effect is stronger and longer-lasting than the hazard salience effect. And the available evidence suggests that this is indeed the case for GM food labels, which turn out to be more calming than alarming. The column then broadens the discussion to informed consent more generally. Relying in part on the example of the Dengvaxia vaccine, it builds a case that it is wiser to provide potentially scary information about small risks than to withhold this information. Even when people overreact – that is, even when the hazard salience effect overwhelms the outrage reduction effect – the crucial need to build and sustain trust makes honesty nonetheless the best policy.

Labeling and Informed Consent

This is the 38th in a series of risk communication columns I have been asked to write for The Synergist, the journal of the American Industrial Hygiene Association. The columns appear both in the journal and on this website. This one can be found (cut substantially for length and with some minor copyediting changes) in the October 2018 issue of The Synergist, pp. 30–33.

I routinely write two versions of my Synergist columns, a longish one for my website and an abridgement for the magazine. The short version is on the AIHA website for readers who don’t want to bother with additional theorizing and examples.

Most product labels are uncontroversial. They provide information that people want to know, or that somebody (typically the manufacturer or the government) wants them to know. Labels on potentially harmful products are uncontroversial too. Foods containing peanuts, for example, need a label so consumers with peanut allergies can avoid them.

Ditto for workplace risks. A drum that contains a flammable or carcinogenic chemical needs a warning sign. Though many details have changed over the years in the regulations that govern the labeling of industrial hazards, the simple mandate that hazards must be labeled hasn’t changed. That mandate is older than industrial hygiene itself.

There’s a twofold rationale for labeling something that is risky. First and usually foremost, the label can help you avoid harm, either by avoiding the potentially harmful situation altogether or by taking appropriate precautions. But almost as important is the label’s informed consent component: If you decide to take the risk, you do so knowingly thanks to the label. The concept of “warning” embeds both rationales: Back off or proceed at your own risk.

All that makes perfect sense if the label is warning against a risk that’s sizable. But what if the risk is real but tiny, and accompanied by far greater benefits? Is it wise to label toothpastes that contain parts-per-trillion of a carcinogen, even if the label will discourage tooth-brushing and the experts are confident that such tiny concentrations of that particular carcinogen pose no significant risk? Should we tell people that the fish they’re considering buying has low levels of mercury, even if we know that such a label will probably lead them to eat more meat instead, and even if we know that, despite the mercury, eating fish entails a smaller health risk than eating red meat? And if employees or neighbors of an industrial facility are inclined to overreact to small hazards, should facility management be nonetheless obligated to notify them when those small hazards are present?

And what if there’s considerable dispute over whether the risk exists at all? Should we label a product on the supermarket shelf or a chemical in the workplace that contains a substance many people consider dangerous but most experts consider safe?

Take for example a food with a genetically modified (GM) ingredient – the example I’ll focus on in this column. According to a 2015 Pew Research Center report, 88 percent of scientists think GM foods are safe, compared with only 36 percent of the general public. Should GM foods carry a label so the 64 percent of consumers who want to avoid them can do so?

I’m ignoring some complexities here:

Ignore all that. For the sake of the argument, we’ll assume the 88 percent of scientists are right and it’s foolish to worry about GM foods. Does that mean it’s foolish to label them?

The case for labeling a presumptively safe product that some people mistakenly consider dangerous is straightforward: transparency or the “right to know.” Give people true information they want, thereby enabling them to act on their own judgments and preferences. To many people this is a self-evident ethical precept: If people want to avoid a particular risk, how dare companies deny them the relevant information merely because the companies think they’re being silly?

In poll after poll, overwhelming majorities of Americans support GM labeling – 92 percent in a 2014 Consumer Reports survey. Similar majorities would probably support carcinogen labels on toothpastes, mercury labels on fish, etc. Even people who aren’t worried about a particular product risk generally agree that if others are worried, the product should be labeled.

The case against labeling is more complicated, but not foolish. In the minds of at least some consumers, a GM label implies that there must be something wrong with GM foods. That’s especially true of a mandatory label, “or why would the government make them label it?” Assuming this implication is false, the label misleads the public, distorts the market, and unfairly stigmatizes GM foods. And insofar as genetic engineering can lead to improvements in a food’s nutrition, pest resistance, shelf life, etc., by discouraging consumption GM labels discourage these improvements.

Based on this reasoning, the food industry has traditionally opposed mandatory labeling. So have many scientific organizations. In 2012, for example, the American Association for the Advancement of Science adopted a Board statement asserting that mandatory GM labels “can only serve to mislead and falsely alarm consumers.”

What are the risk communication implications of GM labels? Do they falsely alarm consumers as the AAAS concluded, or might they actually have a reassuring effect? And can this discussion be extrapolated to the broader issue – beyond genetic modification and beyond labels – of transparency and informed consent when you think people are unduly fearful?

This is the broader issue: Assume you know something that your workforce, neighbors, or customers want to know. Even if they’re not actually asking for the information, you’re pretty sure that if they knew what you know they would think you should tell them … and if they find out later they will think you should have told them. But you’re also worried that the information will alarm them unduly, unnecessarily, and perhaps even harmfully. Should you tell them anyway? Or should you withhold the information if it’s legal to do so? And should it be legal to do so?

The wisdom of warnings is indisputable when a risk is serious and an audience is unduly apathetic. But what about the opposite situation? What should you do (and what should you be required to do) with accurate information that you’re worried will unduly alarm your audience? And how likely are you to be right that telling the truth under these circumstances will exacerbate the audience’s technically unjustified fears? These are questions industrial hygienists sometimes face. And GM labeling raises these questions squarely.

The bottom-line answer this column espouses is twofold. First, the outrage reduction effect of candid labeling is usually stronger and longer-lasting than its hazard salience effect, so honest information tends to calm people more than it alarms them. Second, even when that’s not so – even when the information leads people to overreact – the overriding need to build and sustain trust makes honesty the best policy anyway.

GM labeling law

Here’s a quick rundown on U.S. GM labeling law.

Because of widespread and slowly growing public demand for information about GM ingredients in foods, in recent years lots of states have considered laws or regulations that would require GM labels. In 2014 Vermont became the first state to pass such a law, which took effect in 2016. Everywhere else in the U.S., GM labels were not required.

Partly in response to public demand and partly in response to the Vermont law, some major U.S. companies started voluntarily putting GM labels on their products. Anticipating similar laws in other states, they decided to make their nationwide labels Vermont-compliant; they also started to think a single national regulation might be preferable to 50 state regulations.

But even as voluntary labeling increased and industry opposition to mandatory labels softened, the industry remained fiercely opposed to voluntary labeling in the other direction: labels boasting that a particular product does not contain GM ingredients. Emblazoning “No GMOs!” on labels makes the controversy more obtrusive and is therefore potentially more harmful to sales of GMO-containing products than a tiny-type label on those GMO-containing products that reluctantly acknowledges GM ingredients. Such “negative labels” are also arguably more misleading, especially when they seem to claim a competitive advantage in a product category where GM ingredients are nonexistent. “GMO-free” table salt and bottled water are only the most extreme examples.

State regulations banning or restricting “No GMOs” labels have varied widely, as have court decisions about the constitutionality of these regulations. The U.S. Food and Drug Administration currently advises (but does not require) that negative labels include language explicitly acknowledging the government’s judgment that genetic modification doesn’t make a product unsafe. Some state laws require that sort of language.

That’s how things stood in 2016. One state, Vermont, required GM labels. Some companies voluntarily adopted GM labels. And some states prohibited or regulated GMO-free labels.

Then President Obama persuaded Congress to pass a federal mandatory GM labeling law. Congress had previously considered but not passed a bill that would have outlawed state mandatory GM labeling laws. Paradoxically, the law Congress passed in response to Obama’s prodding accomplished that objective. The federal law immediately preempted Vermont’s law, even though implementation of the federal law had to wait for the U.S. Department of Agriculture to work out the details.

USDA released draft regulations in May 2018. The public comment period ended in July 2018 (while I was starting work on this column). The final regs aren’t out yet.

But the draft regs aren’t what you’d call tough.

The USDA draft regulations try to make it relatively easy for consumers to avoid GM-containing products if they’re already motivated to do so, while minimizing the chances that the labels themselves will motivate any change in consumers’ choices. I would quibble with some of the details, especially the decision to allow manufacturers to settle for a QR code instead of a text message or an icon. But on the whole, the draft regs strike me as a pretty sensible informed consent compromise, given a technology that most experts consider safe and many consumers consider dangerous.

Risk communication principles

What does risk communication tell us about the likely effects of GM labels, including labels more like the ones Vermont briefly required than the ones USDA proposes to require? A core principle of risk communication – at least of my approach to risk communication – is that outrage (fear, anger, etc.) largely determines hazard perception. If lots of people are overreacting to some risk, thinking it’s more hazardous than the expert consensus says it is, odds are their outrage is a major cause of their overreaction. So a core risk communication task when you think people are overreacting is to figure out how to reduce their outrage and thus their hazard perception.

How does labeling affect outrage? According to risk communication principles, labeling should usually reduce outrage in people who are already worried about a risk. Of the 12 outrage components on my A list, labeling moves at least four in the direction of lower outrage: control, voluntariness, familiarity, and trust. A label that tells you a product contains GM ingredients (or a carcinogen, mercury, etc.) makes you feel more in control; it makes your decision to accept the risk more voluntary; it makes you more familiar with the risk and thus more used to the risk and increasingly desensitized to it; and it makes you feel more trust in whoever has so candidly alerted you to the situation.

Another factor that makes labeling usually an outrage-reducer is cognitive dissonance. Even consumers who try to avoid products with a GM label usually end up buying such products from time to time. Each decision to knowingly choose a GMO-containing food is a risk communication message to themselves that they’re not all that worried about the risks of genetic modification. It’s hard to keep buying and eating GM foods and still remain fervent about how dangerous it is to buy and eat GM foods.

But labeling can obviously work the other way too. The decision to alert people to the presence of GM ingredients does imply that GM ingredients are something they might want to consider avoiding. This is true, though less so, even if the label has no explicit alarm-arousing word like “Warning!” And if consumers know the label is mandatory, that implies all the more clearly that the label information is negative.

Perhaps consumers who have never heard of genetic modification or bioengineering might be unaffected by such a label … though even they could guess that these terms must refer to some kind of problem. For better-informed consumers, the label inevitably makes more salient in their minds whatever they have heard previously about the controversy over GM foods. How could it be otherwise?

So GM labels have two effects that work in opposite directions: the outrage reduction effect and the hazard salience effect. The question is which is greater.

I have long thought – for genetically modified food labels in particular and for risk-related informational labels in general – that the outrage reduction effect is usually the greater of the two.

The outrage reduction effect of labeling, in fact, can be a significant problem for labels meant to warn people about serious hazards. Like a neutral informational label, a warning label also increases control, voluntariness, familiarity, and trust. And those who take note of the warning but proceed nonetheless are bound to experience some cognitive dissonance, which they will then reduce by telling themselves that the risk doesn’t especially worry them. Thus it is hard to design a warning label that keeps warning people; the outrage reduction factors built into repeated warnings tend to vitiate the intended arousal of concern.

Of course if you’re unfailingly committed to avoiding a particular product ingredient – you’re allergic to peanuts or you hate garlic or you’re terrified of GM foods – your reaction to that information on labels probably won’t wane much over time. People who always search out certain information and always act on it are more or less immune to “warning fatigue” (the technical term for the waning effectiveness of warnings). But that’s the exception, not the rule. Most of the time, most people habituate to labels, even warning labels.

The literature on warning fatigue is too complex for me to summarize here. Suffice it to say that people exposed to the same warnings again and again without experiencing any visible bad outcomes are at risk of starting to shrug off the warnings.

As I have written before, I don’t see warning fatigue as necessarily dispositive. Risk communicators work hard to find ways to keep warnings working despite the outrage reduction effect. Among other things, a well-designed label that’s intended to warn doesn’t just tell you that there’s some dimethylmeatloaf in the chemical drum. It tells you in both words and graphics what’s risky about dimethylmeatloaf. And it tells you what you should do to avoid exposure and what you should do if you’re exposed. It leaves no doubt in your mind that the experts think dimethylmeatloaf is dangerous stuff.

By contrast, a GM label just signals that if you’re really committed to avoiding foods with genetically modified ingredients, you should buy something else. You might infer from the label that the government or the manufacturer must think GM foods are dangerous. Or you might conclude – correctly – that the label is simply a response to widespread public demand for the right to know.

A key distinction here is whether the label’s creator is trying to persuade an unworried audience to start taking a risk seriously, trying to remind an already-concerned audience to keep taking precautions it knows about, or simply providing information that people can use or ignore as they prefer.

The first kind of label is the most challenging. A recent study, for example, found that text-only warning labels about the dangers of sugary soft drinks had no effect at all, while warning labels that used upsetting graphics as well (rotting teeth; a protruding stomach; a diabetic getting an insulin shot) reduced short-term soda purchases by a statistically significant but very small amount.

The second kind of label is the one industrial hygienists have the most use for. It’s a lot easier than the first kind, because no real persuasion is necessary. Your audience is already aware of the risk and already disposed to take precautions. All you need to do is tell them there’s dimethylmeatloaf in the drum and remind them about the precautions they should be taking … and maybe urge them to be diligent about it. Even so, warning fatigue is a real problem for this kind of label. People get used to the label, used to the risk, and (if they’re lucky) used to nothing bad happening even when they have neglected the recommended precautions.

The third kind of label is by far the easiest. You’re not trying to change your audience’s beliefs. You’re not even trying to remind them about their prior beliefs. You’re just providing information they can use or ignore as they prefer.

Bottom line: With regard to a neutrally phrased informational label about a risk that some people fear and others don’t, I would expect the outrage reduction effect to overcome the hazard salience effect.

It would be a risk communication coup if we could figure out a more effective way to help the public distinguish these three kinds of labels – especially the first two kinds from the third. Perhaps we could “label the labels.” Use a blood-red “duty to warn” label to tell people things they might need to know, such as whether a food contains peanuts that might kill them if they have a nut allergy. Use a sky-blue “right to know” label to tell people things they might want to know, such as whether the food contains GM ingredients (and whether it’s kosher, halal, made in the U.S., etc.). But that’s a subject for another day. For now, the key point is that “right to know” labels – the third kind – are likely to yield more outrage reduction than hazard salience.

My seminar handout entitled “Biotechnology: A Risk Communication Perspective” was copyrighted in 1999. It includes this advice:

Support labeling.

The most crucial source of individual choice for biotechnology is labeling. Public acceptance will be faster and surer if people feel they will not be unknowingly exposed to GMOs. Where labeling is feasible, industry opposition is self-defeating. Where labeling is unfeasible, the battle for acceptance will be much tougher. Product development choices should be made with this distinction in mind.

Here’s my analysis of the likely effects of GM labels: a hazard salience effect that makes some consumers a bit warier at first, and an outrage reduction effect that grows over time as control, voluntariness, familiarity, trust, and cognitive dissonance do their work – leaving people, on balance, calmer about GM foods than they would have been without the labels.

If that’s the effect of GM labels, what is the effect of the decision not to label? Presumably it has no short-term effect; it’s a nonevent. But the GM controversy doesn’t go away just because GM products aren’t labeled. Insofar as people are paying attention to the controversy, their concern might easily be exacerbated by the absence of labels and the resulting near-impossibility of protecting themselves. Worse yet, people who haven’t paid attention to the controversy until recently but are now newly worried about GMOs would tend to experience post-hoc outrage at having been hoodwinked into buying products they would have avoided (or now think they would have avoided) if only those products had been properly labeled.

In short, people who are worried about a situation or substance they consider dangerous may actually become more worried when the relevant information is (or was) kept secret and less worried when it has been shared candidly.

That’s what risk communication theory predicts. Now what’s the evidence about the actual effects of GM labels?

GM labeling research evidence

Research results about the effects of GM labeling seem to depend on a wide range of factors. Among them: whether the research measures stated intentions or actual purchases, whether the labels studied are positive (“contains GMOs”) or negative (“no GMOs”), and where and when the research was done.

Understandably, reviews of the literature tend to conclude that the evidence is “mixed.”

My overall impression after reading some of the relevant studies and literature reviews: If you ask people how they would respond to GM labels, you end up concluding that the labels will probably deter a lot of purchases of GM-containing products and motivate a lot of purchases of GM-free products. But if you look at how GM labels have actually affected sales, you end up concluding that (in the U.S., at least) there is little if any impact of positive (“contains GMOs”) labels. Negative (“no GMOs”) labels do seem to create or serve a niche market, but even they don’t significantly stigmatize the GM-containing versions of the same product.

I am less interested in how labeling affects short-term sales than in how it affects long-term attitudes (which of course affect long-term sales). As I have already noted, risk communication theory suggests that the hazard salience effect should mostly be short-term – basically an “adjustment reaction” that should diminish as people get used to the labels. The outrage reduction effect, on the other hand, should increase over time as control, voluntariness, familiarity, trust, and cognitive dissonance work their wonders. In other words, I’d expect GM labels to be a bit scary at first, but calming in the long run.

Measuring the long-term effects of labeling is difficult. It’s one thing to show people a bunch of labels and ask them how they think each label would influence their views on GMOs and their likelihood of buying a product with that label. It’s something else entirely to see what actually happens to people’s attitudes and purchases after they have moved from a shopping environment in which GM foods aren’t labeled to an environment in which they are.

The Vermont law provided an ideal opportunity to investigate the real-world effects of GM labels. For a short time, a lot of foods in Vermont had GM labels, whereas the vast majority of foods elsewhere in the country did not. We can look at how the attitudes of Vermonters toward genetically modified food ingredients changed during this period – and we can compare the results with what happened during the same period to the attitudes of non-Vermont U.S. residents.

A study published in June 2018 did just that. The study’s title tells the tale: “Mandatory labels can improve attitudes toward genetically engineered food.” Both Vermonters and residents of other states were asked their views regarding the acceptability of genetically modified foods during three time periods before GM labels were mandatory on Vermont grocery shelves and two periods afterward.

At the time of the first measurement, before mandatory labels appeared in Vermont, consumer attitudes were somewhat more negative toward GM foods in Vermont than elsewhere in the country. By the final measurement, after Vermonters had had six months to get used to the labels, opposition to GM foods had declined substantially in Vermont, while GMO concern climbed modestly in the rest of the country. In fact, six months after Vermont’s GM labeling requirement went into effect, opposition/concern was actually a little lower in Vermont than in other states.

The question used to measure opposition/concern was phrased differently for the Vermont sample than for the national sample, so it might be misleading to compare the two sets of responses. But there’s no problem comparing how the two changed. Over the duration of the study, GMO “concern” went up a little in the rest of the country while GMO “opposition” went down substantially in Vermont. By the end of the study, just six months after the Vermont labeling requirement went into effect, Vermonters’ opposition to GM foods was 19 percent lower than it would have been if the pre-labeling gap between Vermonters’ attitudes and nationwide attitudes had remained constant.
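
To make that comparison concrete, here is a minimal sketch of the difference-in-differences arithmetic the study relies on. The attitude scores below are invented purely for illustration – they are not the study’s data, and the study’s own scales and values differ:

# Hypothetical illustration of the difference-in-differences logic.
# The attitude scores are invented for clarity; they are NOT the study's data.

# Mean opposition/concern scores (arbitrary units) before and after
# Vermont's labeling requirement took effect.
vermont_before, vermont_after = 3.0, 2.6      # opposition fell in Vermont
national_before, national_after = 2.8, 2.9    # concern rose slightly elsewhere

# How each group changed over the same period.
vermont_change = vermont_after - vermont_before      # -0.4
national_change = national_after - national_before   # +0.1

# Difference-in-differences: how much more Vermont changed than it would
# have if it had simply tracked the national trend.
did_estimate = vermont_change - national_change      # -0.5

# Express the change relative to the counterfactual Vermont score
# (the pre-labeling score shifted by the national trend).
counterfactual = vermont_before + national_change    # 3.1
relative_drop = did_estimate / counterfactual        # about -16%

print(f"Difference-in-differences estimate: {did_estimate:+.2f}")
print(f"Opposition roughly {abs(relative_drop):.0%} below the counterfactual")

Because the Vermont and national samples answered differently worded questions, it is the changes – not the raw scores – that are comparable, which is why the study (and the 19 percent figure) is framed this way.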

It’s always risky to rely too heavily on a single study. Maybe something else happened in Vermont, other than the new labels, that could account for the decline in opposition to GM foods. Maybe the sorts of labels used in Vermont differed from the sorts of labels USDA will end up requiring in ways that will influence how the labels affect consumer attitudes. Maybe if the study had run longer, Vermonters would have turned against foods with GM labels. Maybe people in Vermont simply respond differently to labels than people in other states would respond.

But the Vermont study is the best evidence we have so far, and it strongly suggests that the outrage reduction effect of GM labels tends to be greater than the hazard salience effect.

Two more examples: lawn pesticides and Dengvaxia

This isn’t just about GM labels on foods. It’s about any situation where the authorities think some public is unduly concerned or likely to become unduly concerned, and are therefore tempted to withhold information for fear of feeding the concern. That’s a situation industrial hygienists sometimes encounter vis-à-vis both the workforce and the neighborhood.

Decades ago I advised the lawn pesticide industry to encourage landscapers to post signs after applying pesticides to a lawn. Passers-by who were worried about pesticide risk could choose to walk around. Passers-by who chose not to walk around would be signaling to themselves that they don’t consider lawn pesticides a significant risk. Over time, cognitive dissonance plus increases in control, voluntariness, familiarity, and trust would reduce the level of public outrage about lawn pesticides.

My advice to the lawn pesticide industry was grounded in my judgment that the outrage reduction effect of labeling usually exceeds the hazard salience effect. It’s a judgment based partly on risk communication theory and partly on my years of consulting experience. But I didn’t and still don’t have proof I was right. I never collected data or saw data on the actual effects of lawn pesticide warning signs that could confirm or rebut my opinion.

The same is true for the current and more complicated – and far more important – example in the paragraphs that follow: the Dengvaxia vaccine. As with lawn pesticides, I think informed consent could have helped remedy the literally deadly Dengvaxia dilemma. As with lawn pesticides, I haven’t got proof. (If you’re not interested in the complexities of Dengvaxia vaccination and the case for informed consent about the vaccine, feel free to skip to the next section.)

Dengue is a serious disease that’s widespread in many tropical countries. Dengvaxia is a recently developed vaccine against dengue, the only dengue vaccine available so far. People who live in places where dengue is highly endemic often catch the disease more than once. And their risk of dire complications is significantly greater the second time.

Here’s why that’s so important. Dengvaxia works well for vaccinees who have had dengue before. If you’ve had dengue once before, Dengvaxia makes you less likely to catch it a second time – and remember, it’s the second case of dengue that’s likeliest to be deadly. But for vaccinees who have never had dengue before, Dengvaxia can make their bodies react to a subsequent first case (if they catch one) as if it were a second case. For them, the vaccine does more harm than good.

I want to be absolutely clear what I mean by “more harm than good.” Most Dengvaxia vaccinees who have never had dengue before don’t catch a severe case of dengue after vaccination. So the vaccine does them no harm at all. But statistically, vaccinees who have never had dengue before are likelier to catch a severe case than if they hadn’t been vaccinated. The vaccine does them some good because they’re less likely to catch dengue at all; but if they do catch dengue, the vaccine worsens their odds of a severe case. On balance, the experts say, people never before infected with dengue would be wiser not to get vaccinated.

In countries where most people catch their first dengue case in early childhood, a Dengvaxia vaccination campaign would do more good than harm overall. But the vaccine would do more harm than good for the minority who have never been infected; that minority would be better off not getting vaccinated. Ideally, they’d be even better off waiting until after their first dengue infection and then getting vaccinated.

Unfortunately, so far there’s no feasible way to test every participant in a mass vaccination campaign to see which ones have never had a prior dengue infection and therefore shouldn’t be vaccinated. Tests exist. They’re called serology tests, and they distinguish people who have been infected in the past (seropositive) from those who have never been infected (seronegative). But the tests are too expensive and logistically difficult to implement as part of a mass vaccination program in most of the countries with serious dengue problems.

So Dengvaxia is more like low levels of mercury in fish than it is like GM food ingredients. Most experts think GM foods are harmless. But in places where dengue is highly endemic, Dengvaxia vaccination leads to a real, scientifically documented harm to a subset of the vaccinated population that’s more than compensated for (statistically) by a bigger real, documented benefit to most of the vaccinated population. The difference is that both the big benefit and the small harm from eating fish go to the same consumer. For Dengvaxia, on the other hand, vaccinees with prior dengue infections benefit, while vaccinees who have never had dengue before are harmed. A parent can guesstimate the odds that her child has had dengue before, based on local disease statistics. But she can’t actually know whether her child is better off vaccinated or unvaccinated unless her child gets a dengue serology test – and for the most part, the tests are not available.

The first country to conduct a mass Dengvaxia campaign was the Philippines. When the campaign was first launched (only for children 9 and older, and only in highly endemic parts of the country), the actual evidence of Dengvaxia’s downside for previously uninfected vaccinees was weaker then than it is now. It was already clear that vaccinated children younger than 9 had a higher risk of severe dengue – higher than the risk faced by older vaccinees and higher than the risk faced by non-vaccinees their age. Most experts already suspected that the reason younger kids didn’t fare well after vaccination was probably that too many of them hadn’t had their first case of dengue yet. Many experts were pretty confident that was true. But there wasn’t much proof. So the World Health Organization accepted age as a proxy for serostatus. It recommended not vaccinating children younger than 9, but it didn’t insist on serology testing, which it knew was impractical in most if not all dengue-endemic developing countries.

The Philippine government therefore decided to set a minimum age of 9 for vaccination. But it didn’t say anything to parents about the possibility, even the likelihood, that if their children had never had dengue, Dengvaxia might well endanger them more than it protected them.

Together with my wife and colleague Jody Lanard, a medical doctor, I have been following the Dengvaxia campaign in the Philippines since before the campaign rollout in April 2016. The only “informed consent” efforts that we have been able to find tell parents only about the common side effects of most vaccines, such as headache, injection site pain, malaise, and myalgia.

The Philippine Dengvaxia campaign was already well underway when the evidence for the vaccine’s suspected dangerous effect on seronegative children got stronger. The new evidence was published in late 2017 by Dengvaxia’s producer, Sanofi. As the news spread in the Philippines, there was an explosion of public outrage (fueled in part by the opposition political party).

Outrage, of course, increases hazard perception. So large swathes of the Philippine public and the Philippine media became convinced that Dengvaxia was killing a lot of children … and completely lost track of the much larger number of children Dengvaxia was genuinely protecting.

The outrage got so bad that the government had to cancel the dengue vaccination campaign. Even worse, public confidence in vaccine safety plummeted and uptake of other vaccines may have declined in the Philippines as parents generalized their learned distrust of public health officials.

Largely in response to the Philippine experience, along with the updated data from Sanofi, the World Health Organization and its Strategic Advisory Group of Experts on Immunization (SAGE) no longer recommend mass dengue vaccination campaigns unless every prospective vaccinee is tested first to see whether he or she has had a prior dengue infection. As a practical matter, adhering to that recommendation means waiting to launch a dengue vaccination campaign until mass testing becomes feasible some day or a different vaccine without Dengvaxia’s downside is invented.

There are several relevant WHO documents on Dengvaxia. You can read the two most recent ones, both from September 2018, here and here. Three April 2018 predecessor documents are also informative; they are here, here, and here (note: this last link launches an audio MP3 file).

These five WHO documents vary in how candid they are about the impracticability of mass dengue screening tests. They vary also in how fervent they are in their opposition to vaccination campaigns without prior testing of prospective vaccinees; by September WHO was showing some reluctant willingness to countenance such campaigns in communities with solid evidence that at least 80 percent of the cohort to be vaccinated has already had dengue.

All these WHO documents consistently concede that wherever dengue is endemic, Dengvaxia is bound to do more good than harm overall. One key statistic: Over a five-year follow-up period, looking at vaccinees 9 and older, 80 percent of whom had already had dengue, vaccination prevented 13 dengue hospitalizations for every one dengue hospitalization it caused.

Even so, WHO is now pretty strongly against vaccinating untested children, no matter how old they are. You can see its point. All vaccines have some risks, but we’re usually talking about serious side effects for one in a thousand vaccinees, or even one in a million. I can’t think of any other vaccine that endangers one vaccinee in five more than it protects that vaccinee, as is the case when kids are vaccinated with Dengvaxia without serology testing in a place where one out of five kids has never had dengue. Moreover, it’s not impossible to identify that one-in-five and vaccinate just the other four. The tests exist. It’s just financially and logistically impractical to test millions of children. Is the impracticality of testing a good enough reason to knowingly do more harm than good to one vaccinee in five?
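
To see how these numbers can fit together – an overall ratio of roughly 13 hospitalizations prevented for every one caused, alongside net harm to the untested one-in-five – here is a minimal sketch. Only the 80/20 split of prior infection comes from the discussion above; the cohort size and the per-child risks are invented, chosen so the arithmetic lands in the same neighborhood as WHO’s ratio:

# Hypothetical illustration only: the 80/20 seroprevalence split comes from
# the scenario above; the cohort size and per-child risks are invented.

cohort = 1_000_000            # children vaccinated without serology testing
seropositive_share = 0.80     # have had dengue before: the vaccine protects them
seronegative_share = 0.20     # never infected: the vaccine can backfire

# Invented five-year risks of dengue hospitalization per child.
seropos_risk_unvaccinated, seropos_risk_vaccinated = 0.020, 0.004   # big benefit
seroneg_risk_unvaccinated, seroneg_risk_vaccinated = 0.005, 0.010   # net harm

seropositive_kids = cohort * seropositive_share
seronegative_kids = cohort * seronegative_share

prevented = seropositive_kids * (seropos_risk_unvaccinated - seropos_risk_vaccinated)
caused = seronegative_kids * (seroneg_risk_vaccinated - seroneg_risk_unvaccinated)

print(f"Hospitalizations prevented among the seropositive 80%: {prevented:,.0f}")
print(f"Hospitalizations caused among the seronegative 20%:    {caused:,.0f}")
print(f"Prevented-to-caused ratio: {prevented / caused:.0f} to 1")

The point is the one made above: the aggregate ratio can look strongly favorable even though every one of the “caused” hospitalizations falls on children who would have been better off unvaccinated.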

The decisive argument in at least some of these five WHO documents is grounded in the Philippine experience: the danger that a Dengvaxia campaign could arouse public outrage and thereby undermine other public health interventions. WHO would rather see more people die from dengue than see explosions of Dengvaxia outrage that could damage public trust in vaccination and in public health more generally. That is perhaps an unkind way for me to make the point, but it’s not unfair.

As far as Jody and I can tell, neither the Philippine government nor WHO gave serious consideration to informed consent as a possible way to cut this Gordian knot. WHO does consistently (if very briefly) advocate honesty, with sentences like “Communication needs to ensure appropriate and full disclosure of the risks….” But that’s a long way from encouraging Dengvaxia campaigns if and only if those campaigns have a strong informed consent component.

Why not tell parents something like this?

The vaccine is likelier to protect your child from dengue than to endanger her. But if she has never had dengue before, the vaccine could actually endanger her instead of protecting her, because if she caught dengue sometime after being vaccinated her case might be more severe. We know that four out of five children your child’s age in this community have already had dengue at least once. There’s a test to find out if she’s in the majority that’s helped by the vaccine or the minority that’s more endangered than protected. But the test is expensive, too expensive for us to do on every child.

We wish we could test every child. But since we can’t, you have three choices. You can go to a private doctor and get the test on your own – if you can afford to do that. Or you can let us vaccinate your child without testing, knowing that the chances are four out of five that that’s the right thing to do. Or you can decide that you don’t want to give your consent for a vaccination that might do more harm than good, and tell us not to vaccinate your child.

The only article I’ve seen on the Dengvaxia controversy that seriously considers informed consent takes the position that telling parents about the downside of Dengvaxia would lead them to reject the vaccine – just as a mercury label on fish would deter fish purchases. It cites a 1990 study that found that “people are reluctant to vaccinate a hypothetical child when doing so poses a risk of death, even if the risk of death is higher without vaccination.”

Of course the author and the cited study are right that some parents would decide to take the risk of not vaccinating their kids instead of taking the (smaller) risk of vaccinating them. Doing something feels riskier than not doing something; if the outcome is bad we know we’ll feel more responsible for our actions than our inactions. That can affect our decision even when we’ve been properly informed that inaction is actually the more dangerous option. Informed consent does sometimes turn into informed refusal to consent.

There isn’t a lot of evidence about what sort of informed consent process maximizes the probability that people will make the statistically wisest decision. And there’s even less evidence about what sort of informed consent process maximizes the probability that outrage will stay low no matter what people decide and what outcomes result.

But both risk communication theory and experience with informed consent in other contexts strongly suggest to me that it’s possible to keep participation high and outrage low in a mass dengue vaccination campaign without screening tests – if the authorities are candid and empathic in explaining this painful choice and if it’s absolutely clear to parents that the choice is theirs.

The case for informed consent

Labeling is a kind of informed consent. And informed consent is usually an outrage-reducer. Parents who allow their children to be given Dengvaxia without knowing its downside rightly become outraged when they learn they have been duped – and their outrage leads them to see Dengvaxia as vastly more harmful and vastly less beneficial than it is. Parents who were told about Dengvaxia’s downside and assured that the choice was up to them, on the other hand, would have far less reason to be outraged, and far less reason to overestimate the hazard.

That’s true whether they said yes or no to the vaccine. Assume the worst. Assume that most fully informed parents would decide against this particular vaccine. At least the decision would be theirs. It wouldn’t be likely to undermine their confidence in other vaccines or their trust in public officials.

But I think most parents would say yes. I think the outrage reduction effect of informed consent is usually greater than its hazard salience effect. (Even when risk exposure is mandatory and there’s no opportunity to “consent” or not consent, candor-plus-empathy about risks still tends to reduce outrage. But that’s a different topic for another day.)

What about the times when I’m wrong – when a label or some other mechanism of informed consent arouses or exacerbates unjustified concern (hazard salience) instead of ameliorating that concern (outrage reduction)? Do I still advocate informed consent, even in situations where the information makes people likelier to say no to something the experts think they should say yes to?

I do.

And when the question came up in my consulting work, I did. Some years ago, for example, I consulted for an organization that was part of the coalition working to eradicate polio. At that time, polio vaccination in the developing world relied on the oral vaccine, which has several big advantages over the injected vaccine in such places, but one significant disadvantage: Roughly one time in a million it can give a vaccinee polio, and on rare occasions it can even lead to an outbreak of vaccine-derived polio in the community. Polio eradication campaigns had long hesitated to tell parents about this downside of the oral vaccine, lest it deter them from letting their children be vaccinated. Sometimes they outright lied about it. On this topic, informed consent messaging vis-à-vis polio vaccination didn’t inform at all. It was systematically and intentionally dishonest.

My advice to come out of the closet about the tiny but real possibility of getting polio from the oral polio vaccine was not taken.

For more detail on this example, see my 2012 column on “Misoversimplification: The Communicative Accuracy Standard” and my 2016 column entitled “U.S. Public Health Professionals Routinely Mislead the Public about Infectious Diseases: True or False? Dishonest or Self-Deceptive? Harmful or Benign?”

The “Misoversimplification” column ends with these two paragraphs:

Going public with a long-suppressed secret is, of course, more difficult and more damaging than not suppressing the secret in the first place. That’s the core vaccine risk communication problem the Global Polio Eradication Initiative faces today: It can’t just start including the truth [about getting polio from the oral polio vaccine] in its future messaging. It can’t tell the truth about [the vaccine] without also telling the truth about its long history of intentional, deceitful misoversimplification.

“Simplifying out” information that conflicts with your message is addictive. I think that’s the single best argument against misoversimplification and in favor of the communicative accuracy standard. Once you don’t admit something you should have admitted, it gets harder and harder to admit it later. When it finally emerges, people don’t just learn it. They learn it in a way that makes it loom much bigger than it would have seemed if you’d been mentioning it all along. And they learn that you have been habitually dishonest.

That’s the case for informed consent even when it may alarm people unduly – even when it may deter them from saying yes to a life-saving vaccine against dengue or polio. Over the long haul, public trust requires honesty, even when the short-term result of honesty may be unwise decisions that do real harm.

Recall that the World Health Organization now recommends against conducting Dengvaxia campaigns without screening tests because sometimes Dengvaxia endangers a vaccinee. It concedes that such campaigns would do more good than harm overall, and that refusing to conduct such campaigns will increase the total number of dengue deaths. But it worries that outrage over Dengvaxia’s downside could undermine public trust and thus end up (indirectly) killing more people than Dengvaxia would (directly) save.

I am making the same argument for informed consent – including informed consent to Dengvaxia vaccination without serology testing. In order to avoid long-term loss of trust, WHO advises governments not to use Dengvaxia at all unless there’s a feasible way to tell whether or not prospective vaccinees have ever had dengue before. With exactly the same goal in mind, I would advise governments to tell parents the truth about Dengvaxia’s benefits and risks, and then let them choose to vaccinate or not to vaccinate their children.

Whether it’s genetically modified food ingredients, or the dengue or oral polio vaccine, or mercury in fish, or tiny amounts of a possible carcinogen in a workplace chemical, my advice is the same: Tell the truth, even when it hurts. If the truth is something people already want to know, tell them. If the truth is something they will eventually wish they’d known and think you should have told them, tell them. Even if the truth is something that is all too likely to lead them to make an unwise decision, tell them. That’s the only way to build and sustain trust – trust in you, in your organization, and in your profession; trust on the specific issue you’re addressing now and on all the issues you will need to address in years to come.

Isn’t telling the truth simply a fundamental principle of ethics? I’m not an ethicist, but my understanding is that ethicists spend a fair amount of effort trying to figure out the conditions under which it’s acceptable to be less than completely honest.

I’m not entering that thicket. In the language of deontology (abstract principles) versus consequentialism (outcome assessment), my argument here is entirely consequentialist. I’m making a risk communication case, not an ethical case, for informed consent.

A fundamental principle of risk communication is that when people feel they cannot trust the source of a hazard to tell them the truth about that hazard, they become outraged. And their outrage leads them to overestimate the hazard. So it is useful as well as ethical to tell people the truth about hazards, even hazards they might be unduly worried about already. Give them control. Make their exposure voluntary. Get them used to the hazard. Earn their trust. Going under the radar may work short-term, but it backfires when word gets out. And when secrecy backfires and trust is lost, the impact affects everything you’re doing, not just the issue you were less than candid about.

When we decide not to tell people things that we’re afraid they’ll overreact to, our fear that they will overreact is usually itself an overreaction. People don’t usually overreact for long to candid information. But they do often end up overreacting to not having been told.

Here’s my argument in one summary boldface paragraph:

Surprisingly often when we’re tempted to withhold risk information because we think people will overreact in harmful ways, we’re simply wrong; the outrage reduction effect of our honesty overcomes the hazard salience effect we were worried about. Even when the hazard salience effect is the stronger of the two at the outset, people get used to hazard information. Thanks to control, voluntariness, familiarity, trust, and cognitive dissonance, the outrage reduction effect soon overtakes the hazard salience effect. And even if it doesn’t – even if undue concern leads people to unwise decisions about GM foods, vaccine downsides, etc. – the preeminent importance of long-term trust-building means that honesty is still the best policy.

All this is as true for industrial hygienists talking to their workforce or their neighbors as it is for food manufacturers deciding whether to label products with genetically modified ingredients.

Copyright © 2018 by Peter M. Sandman
