Posted: January 30, 2011
Article Summary: One of the toughest questions in risk communication is what to say – if anything – about the strongest arguments against the position you’re advocating for. The aspirational goal is presumably full disclosure. But most risk communicators fall far short of that goal, preferring to ignore or dismiss opposition arguments. The temptation not to disclose is especially powerful when you’re urging people to do something that entails small-but-scary risks; when you’re confident that the benefits to your audience greatly exceed the risks; and when you’re worried that the audience won’t see it that way if you’re completely candid. This column offers two examples: trying to convince a community to accept a new chemical factory and trying to convince parents to vaccinate their children against polio. The column discusses eight reasons why full disclosure of small-but-scary risks isn’t just the most ethical strategy. In many cases it is also the most effective.

Full Disclosure:
The Risk Communication Case
for Revealing Small-but-Scary Risks

Here’s the dilemma. Assume you are trying to convince people to take some action – we’ll call it X – that you are confident is the right thing to do. Or you’re trying to convince people to believe X, which you are confident is correct. Assume the best case: Doing or believing X is a plus for everyone: good for your target audience, good for the world, and good for you (or your employer).

If you want an example in mind right away, imagine that X is deciding to get a flu shot, or believing that everybody should get a flu shot. Or pick your own example.

Now assume that there are a few facts – A, B, and C – that seem to suggest that X isn’t such a good idea after all. A, B, and C are all accurate. They are valid truths (or at least partly valid half-truths) on the other side. But for the most part the other side is mistaken. There are a lot more facts on your side. Unfortunately, A, B, and C are likely to have a lot of impact. They’re emotionally loaded, and you’re worried that in the minds of your audience they may very well outweigh the comparatively dry facts on your side. You’re right on the merits – but you think many people will erroneously decide you’re wrong if A, B, and C get out.

Should you reveal A, B, and C, or should you withhold these inconvenient facts?

It is tempting to suppose that this dilemma simply doesn’t arise, that all accurate information improves decision-making. That assumption is the basis for much that we believe in, from education to the free market. And yet we can all think of counterexamples – pieces of accurate information that ended up distorting a decision instead of improving it; factoids that were true but nonetheless misleading because they loomed larger in people’s minds and emotions than they merited.

One class of such potential distorters is particularly relevant to risk communicators: scary facts about a situation that isn’t actually very dangerous. The benefits of X outweigh its risks, and a well-informed, rational decision-maker would therefore decide to do X (or let you do X). But some of the risks of X are very vivid and emotionally accessible, much more so than its benefits. In my language, X is low in “hazard,” but its potential to arouse “outrage” – whether mostly fear or mostly anger – is quite high. If people are accurately and successfully informed about these high-outrage aspects of X, that information is likely to make some of them see X as riskier than it really is, and riskier than its benefits justify. They will end up with the impression that the risks of X outweigh its benefits, leading them to make what is objectively the wrong decision.

In such situations, is it okay to withhold particular pieces of information that are likely to lead to bad decisions?

I think it is not okay. The rest of this column examines the question, and the reasoning behind my answer. I will cover eight reasons why full disclosure of small-but-scary risks isn’t just the most ethical strategy. In many cases it is also the most effective:

  1. Two-sided arguments are often more persuasive than one-sided arguments.
  2. Two-sided messages engender more trust, while one-sided messages arouse more suspicion.
  3. People feel betrayed when they belatedly find out facts they think you should have told them earlier.
  4. Opponents are delighted when you decide against full disclosure, leaving them the pleasure of disclosing for you.
  5. Ambivalent people go to the side of their ambivalence that is inadequately represented in their communication environment.
  6. When people are newly aware of a risk, they often go through a temporary overreaction called an “adjustment reaction” – and then they adjust and calm down.
  7. Even if one-sided risk communication might work for a single decision, it won’t work for an ongoing relationship.
  8. Risk decisions aren’t symmetrical – reassuring communications have a stronger full disclosure obligation than alarming communications.

In mid-2009 I received an email raising this question from a public health policy researcher in Washington, D.C. She wrote:

Public agencies are sometimes afraid of full disclosure of risk in cases where the benefit of an intervention (e.g. vaccines) outweighs the risk. There is a concern that full disclosure about those risks could derail achieving policy objectives to maximize public health benefit.

Is there evidence from your research (or that of others) that such a concern is justified? Or does the evidence in fact support full disclosure because honesty/transparency outweighs any hesitation triggered by full disclosure about risks? Or something else?

In the months after I received this email, two aggressive flu vaccination campaigns were launched in the U.S., one to sell vaccination against the seasonal flu and the other to sell vaccination against the H1N1 pandemic flu. In both cases – but especially the latter – questions were raised about the safety of the vaccine … and about whether public health officials could be trusted to be completely candid about the safety of the vaccine. I think most Americans rightly believe that public health agencies like the Centers for Disease Control and Prevention (CDC) would not withhold health-related information they considered important. But would the CDC withhold (or at least downplay) negative information it considered likely to sway people excessively? Should the CDC withhold or downplay such information?

I don’t know of any studies assessing how often people are misled by vivid, emotionally impactful information about small risks. But it’s commonplace – no doubt about it. And it constitutes a common rationale for less-than-full disclosure. If certain bits of accurate information will (mis)lead people to make a bad decision, the reasoning goes, better to suppress the information.

On the other side are ethical (and in some cases legal) precepts that simply require full disclosure. In most of the developed world, for example, surgeons are obligated to warn their patients of the risks of surgery – even if the surgery is essential, the risks are unlikely, and the patient is overanxious. The essence of the informed consent principle is the patient’s right to withhold that consent, however unwisely, based on information the surgeon is required to provide.

But “full disclosure” and “informed consent” aren’t the dichotomies they may seem to be. They are sliding scales. At one extreme is overtly lying about the risks in question. At the opposite extreme is laying out those risks vividly, in a personal conversation, in detail, with color pictures – and with plenty of time for follow-up questions to make sure the patient really understands. Somewhere in the middle is what surgeons typically do: give the patient a short, oral summary that emphasizes the conclusion the surgeon considers most sensible, and bury the remaining risk information in a pile of polysyllabic paperwork that the patient has to sign but isn’t actually encouraged to read carefully.

There are a range of such intermediate information strategies. You might decide not to volunteer scary information about the risks of the action you’re recommending, but to provide it if you’re asked. You might explain that the information is available, but recommend against examining it since it doesn’t really change the bottom line. You might leave it out of your TV ads and brochures, but put it in some obscure corner of your website. You might sort through the available risk information and skip the most upsetting low-probability factoids, settling instead on milder and less memorable ones to present. And so forth….

In short, there are many ways to meet your full disclosure obligation to your own ethical satisfaction while minimizing the probability of actually deterring your audience from making what you consider the wise choice.

Judicial systems face the same dilemma. On the one hand, witnesses must promise to tell (in one common formulation) “the truth, the whole truth, and nothing but the truth.” On the other hand, rules of evidence keep the jury from hearing big chunks of the truth – the truth about what a third party told the witness the defendant said (because it’s hearsay); the truth about whether the witness believes the defendant is guilty (because it calls for a conclusion); etc. In fact, truthful evidence is withheld from the jury whenever the judge considers it more “prejudicial” than “probative” – that is, likely to influence the jury’s decision more than the judge thinks it deserves. A defendant’s prior bad acts, for example, are not usually admissible in evidence. Horrific autopsy photographs are frequently excluded on the same grounds. (There are of course precedents that are supposed to constrain the judge’s decisions about whether information is more prejudicial than probative, and appellate courts that are empowered to review those decisions.)

Note that it’s not just permissible for the legal system to shield juries from truthful information that seems likelier to mislead or distract them than to aid in their deliberations. It’s obligatory. As a risk communication professional, I’m very skeptical about the legal profession’s judgment that juries shouldn’t be trusted with the whole truth.

But the burden of proof is on me. Judges literally suppress emotionally arousing truths that might influence juries excessively in the direction of making poor choices. Surgeons don’t quite suppress but certainly bury emotionally arousing truths that might influence patients excessively in the direction of making poor choices. Why shouldn’t risk communicators similarly omit or at least deemphasize emotionally arousing truths that might influence the public excessively in the direction of making poor choices?

(There’s a related issue I don’t plan to discuss in this column: whether it’s okay to overemphasize emotionally arousing but minimally significant truths that might influence the public excessively to make good choices.)

The full disclosure question is usually seen as an ethical dilemma: the ethical obligation to provide complete information versus the ethical obligation not to provide information – true information – that seems more misleading than helpful. How aggressively candid should we be about risks that might scare people out of making sensible choices? But if I’m reading her email right, the health policy researcher I quoted earlier is asking an empirical question instead: Ethics aside, is there an empirical case to be made for being more aggressively candid than we tend to be in these situations?

I think the answer is yes. But there isn’t a lot of research evidence one way or the other – mostly just examples and reasoning.

To inform our thinking, let’s consider two specific cases:

  1. Your multinational corporation is trying to site a new chemical factory in a working class neighborhood. You are confident that the benefits of your factory will greatly outweigh its risks. Of course a significant portion of the benefits will accrue to you and your company. Still, you anticipate substantial community benefits – jobs and taxes, in particular – which in your judgment clearly exceed the lifestyle damage your factory may cause (trucks, smoke, viewscape) and its exceedingly theoretical and almost certainly negligible health and safety risks. There are ways to reduce facility risks still further, but they would be prohibitively expensive. Using current technology, a disastrous explosion is extremely unlikely though not impossible; and your routine emissions of suspected carcinogens will be kept within strict government standards. But mentioning words like “explosion” and “carcinogen” would be terrifying, and mentioning the availability of additional precautions the company considers too expensive would be infuriating. How aggressively candid should you be about these truths?
  2. Your public health agency is trying to eradicate polio in a developing country. The benefits of doing so will accrue not only to the country in which you are working but to the entire world, since if the disease remains endemic in a few countries it could stage a comeback in others. You have opted to use the oral (live) vaccine – banned in most of the developed world – because it is much less expensive to administer than the injected (dead) vaccine, and because children given the oral polio vaccine shed the vaccine virus in their feces, facilitating (involuntary) second-hand “vaccination” of the vaccinee’s close contacts. You are aware that the oral vaccine infrequently induces polio in a vaccinee or a close contact. On rare occasions it can even give rise to a new outbreak of vaccine-derived poliovirus. Using the dead (injected) vaccine to prevent these uncommon but horrendous side-effects would be prohibitively expensive. You are confident that the live vaccine will prevent far more polio cases than it causes. But mentioning that the live vaccine can cause polio would be terrifying, and mentioning that developed countries use the safer, more expensive dead vaccine instead would be infuriating. How aggressively candid should you be about these truths?

For pretty much the same reasons, both the multinational corporation and the public health agency feel a strong temptation to be less than fully candid in these two situations. (Both are drawn from real situations on which I have consulted.)

The corporation is likelier to resist the temptation; it will probably come closer to full disclosure than the health agency. An increasing number of corporate communicators know from experience that downplaying facility risks does them more harm than good. When the truth comes out, as it usually does, their failure to be candid confirms the public’s suspicions about their basic dishonesty, and undermines their prospects for a successful siting. Companies still withhold inconvenient factoids when they think they can get away with it. But they are less and less inclined to think they can get away with it.

Public health agencies, by contrast, see themselves as the good guys, and they’re used to being seen by the public as the good guys. They’re not accustomed to the sorts of controversies that polio vaccination campaigns have provoked in some developing countries. Because they have altruistic motives, public health agencies feel more entitled than corporations to be less than fully candid; because they have altruistic reputations, they are less likely than corporations to get caught, and may be less harshly punished when they are caught.

But the time is coming, I believe, when public health agencies, like multinational corporations, will decide that withholding or deemphasizing inconvenient factoids is no longer the optimum public health strategy.

Why do I think so? What is the empirical case (as opposed to ethical case) for full disclosure?

The core problem with full disclosure of risks is that scary information sometimes deters people from doing X, the action that you want them to take (and that you believe is best for them to take). That’s simply true.

But in many situations this problem is greatly mitigated by countervailing factors. The rest of this column will outline some of those factors.

I want to underline that the question we’re considering isn’t whether it’s okay to lie about a risk in order to persuade people to accept that risk. Nor are we asking whether it’s okay to mislead your audience by withholding risk information that even you consider genuinely significant – facts and counterarguments that might well lead a rational person to reconsider your recommendation. People do sometimes lie or withhold significant information. But almost everyone agrees in principle that they shouldn’t.

The focus of this column is on a more debatable question: whether it’s okay to withhold inconvenient factoids about a risk, factoids that you think are themselves misleading. They’re accurate, but more emotionally compelling than logically dispositive. You believe they are likely to distort people’s judgment, not guide their judgment. In legal terms, you’re convinced that the information in question is more prejudicial than probative.

Many practicing risk communicators think it’s okay to withhold that kind of information – whether they’re working to convince people to accept a new chemical factory or a polio vaccination. I believe it is unwise to do so. I believe full disclosure about risks is usually the wiser course – not just the more ethical course, but the more effective one. Here is a list of eight reasons why.

1. Two-sided arguments are often more persuasive than one-sided arguments.

Going back at least to the 1940s, communication researchers have studied the pros and cons of one-sided versus two-sided persuasive arguments. Their conclusions in a nutshell: One-sided arguments work better when the audience is uninformed, uneducated, and inattentive; when the audience is already on your side; and when the action being promoted is temporary. But more interested and more educated audiences respond better to two-sided arguments, especially if they have reservations about your message, if they are already aware of some of the counterarguments, or if they’re likely to hear some of the counterarguments before whatever you’re trying to accomplish is completed.

There are some risk-related decisions that meet the specs for using a one-sided argument – but not too many. Most of the time, we find ourselves talking to people who are fairly interested, fairly educated, fairly aware of the other side, or fairly likely to encounter the other side before we’re done with them. Under those circumstances, acknowledging the risks we’re tempted to ignore isn’t just good ethics; it’s good persuasion.

This is certainly the case for a corporation trying to site a chemical factory. And it is increasingly the case for a health agency trying to run a polio vaccination campaign.

One key reason why two-sided messaging works better than one-sided messaging is the existence of opponents. If you’re trying to site a chemical factory, there are bound to be activists, politicians, neighbors, and others arguing against letting the factory in. Your polio vaccination efforts will also encounter opposition. Vaccination proponents may decry the existence of an anti-vaccination movement, but that movement is not going away. Vaccination communication strategy needs to allow for the reality that many prospective vaccinees (or their parents) have heard or will hear “popular” anti-vax arguments.

But even when there is no active opposition, a good case can be made that two-sided persuasion is more effective than one-sided persuasion. Experienced salespeople have learned the hard way that customers are likelier to buy when the sales pitch includes a few product drawbacks than when the pitch is one-sided ballyhoo.

This brings me to my second point.

2. Two-sided messages engender more trust, while one-sided messages arouse more suspicion.

Many people have learned to respond skeptically to virtually all one-sided messages, wondering: “What’s the catch?” Even in conversations with service providers whom we assume to be on our side (our doctor, for example), we still want to know the downsides of the recommended action. We may or may not ask, depending on how hurried the conversation is and how comfortable we feel voicing our skepticism. But asked or unasked, “What’s the catch?” is a question that one-sided arguments routinely provoke.

Answering the question if it’s asked should be a no-brainer.

Answering the question even before it’s asked is often better risk communication, because it engenders trust and allays suspicion. Maybe your audience already knows some of the case against your recommendation, and is waiting to see if you’re going to acknowledge it. Maybe your audience simply assumes there must be some drawbacks, and is wondering generically what they are. Either way, you’re better off mentioning the downsides of your recommendation without waiting to be asked.

If you think you’re talking to someone who would rather not be confused by conflicting information, you can ask permission first: “I think the benefits of X greatly outweigh its risks, but as always there are some risks. Would you like me to go over what they are?”

Talking about the risks of your own recommended course of action is what lawyers call an “admission against interest.” That is, you’re saying something your audience can tell you’d rather not say. Admissions against interest are intrinsically credible. More importantly, they increase the credibility of the rest of your message. Most of us would rather buy a used car from someone who gives us a rundown on its defects as well as its merits than from someone who insists it has no defects. (Even dishonest used car salespeople know they’d better mention some defects, though they may hold back on the most important ones.)

Bottom line: Most people smell a rat when they’re exposed to one-sided messages … even if they don’t know what the rat is. That’s a big piece of why two-sided messaging so often works better.

I don’t want to overstate this point. (I’m about to do two-sided messaging about two-sided messaging.) Sometimes people aren’t in the least skeptical. Maybe their trust in you is very high. More likely their interest in your recommendation is very low. They know next to nothing and care next to nothing about what you’re telling them, so they’re perfectly willing to take your word for it. They don’t even need you to cite much evidence. Just tell them what to do/think and off they go. Elsewhere I have called this “playing the follow-the-leader game.” If the game is follow-the-leader, it’s unwise to burden your would-be followers with excess information about why they might not want to follow you after all.

But when you’re urging people to do something that might entail some risk, the game is seldom follow-the-leader. Your stakeholders may have heard from the other side already, or may hear from the other side soon. They may simply be skeptical, sensing that there’s got to be another side. Unless you’re confident that follow-the-leader is really the game you’re playing, full disclosure – or at least something closer to full disclosure – is usually a better bet.

3. People feel betrayed when they belatedly find out facts they think you should have told them earlier.

Whether or not your one-sided messaging arouses suspicion at the outset, once people finally hear the other side (usually from your opponents) the other shoe drops. They reassess – with a vengeance.

They don’t just reassess the wisdom of your advice. They reassess you – your credibility and your integrity. Moreover, when people discover that relevant information has been withheld, their feeling of betrayal reliably leads them to overreact to that information.

This is the standard I routinely recommend to clients trying to decide whether to leave a particular fact out of their messaging: Imagine that at some future date people will learn the fact you’re planning to omit. As your audience digests this fact in hindsight, will the new information cast doubt on the truth and integrity of your original message? If so, you need to include it now.

The core implication of this standard is that you can dispense with much of the evidence that you’re right about X, while it is essential to include any indication that you might be wrong. Suppose there were 27 safety studies of your chemical factory or your polio vaccine, and 26 of them found no problem. The 27th is an anomaly, with serious methodological flaws that lead you to consider it entirely without merit. You may discuss as many or as few of the 26 studies that came out on your side as you like. But you absolutely must discuss the 27th, the worthless one that came out against you.

Here is another way I often make the same point. When somebody else reveals a piece of information that reflects badly on you, your organization, or your argument, it does roughly 20 times as much harm as when you “blow the whistle” on yourself. That’s not a hard-and-fast number, just a rule-of-thumb. But it’s useful because you can do rough math with it. If the odds are less than one-in-20 that your audience will eventually find out the inconvenient fact you’re planning to omit, then your decision to omit it is a sensible risk – still ethically suspect, but at least empirically defensible. But if the odds are, say, one-in-ten that somebody will eventually clue people in, then you would be wiser to clue them in yourself now.
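
If you like to see the rough math spelled out, here is a minimal Python sketch of it. The variable names and harm units are mine and purely illustrative; the only inputs taken from this column are the 20-to-1 harm ratio and the one-in-20 threshold it implies.

```python
# Rough expected-harm comparison behind the "20 times as much harm" rule-of-thumb.
# The names and numbers are illustrative assumptions, not data from the column.

SELF_DISCLOSURE_HARM = 1.0   # harm (in arbitrary units) if you reveal the fact yourself
EXPOSURE_MULTIPLIER = 20.0   # roughly how much worse it is when someone else reveals it

def expected_harm_of_omission(p_caught: float) -> float:
    """Expected harm of omitting the fact, given the chance of eventual exposure."""
    return p_caught * EXPOSURE_MULTIPLIER * SELF_DISCLOSURE_HARM

for p_caught in (0.02, 0.05, 0.10):
    omission = expected_harm_of_omission(p_caught)
    verdict = "omission is the better gamble" if omission < SELF_DISCLOSURE_HARM else "disclose now"
    print(f"P(caught) = {p_caught:.2f}: omission = {omission:.2f} vs disclosure = 1.00 -> {verdict}")
```

At exactly one-in-20 the two options break even, which is why odds below one-in-20 make omission an empirically defensible (if still ethically suspect) gamble.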

Even in a courtroom, where juries understand that each side’s job is to present carefully selected facts showcasing one side of the story, lawyers have learned that it’s usually smarter to acknowledge the opposition’s best arguments first, before the opposition uses those arguments to torpedo their case later. In less adversarial situations, where people expect you to inform them about all sides of an issue, it’s much more damning when they discover later that you intentionally left out some inconvenient facts.

The mistrust that results when corporations are found to have withheld inconvenient information is straightforward: We figure they misled us to further their own interests.

The mistrust tends to have a different feel to it when the source is a nonprofit organization like a public health agency. We want to believe that public health agencies have our interests at heart. They don’t withhold information so they’ll get rich. They withhold (or at least underemphasize) information for our own good, information that they think we might overreact to or misperceive. They’re not trying to deceive us. They’re trying to protect us from information they think might deceive us, even though the information is true. They don’t trust us to make wise decisions if we’re fully informed – so we can’t trust them to inform us fully. That’s why I entitled my 2009 Berreth Lecture to the National Public Health Information Coalition “Trust the Public with More of the Truth.”

If we can’t trust public health agencies to inform us fully, then we have to rely on their critics to tell us the rest of the story. And not just their responsible critics. Crank websites and neighborhood rumors gain credence – even for their most extreme claims – because of the full disclosure lapses of public health agencies.

Now put yourself in the shoes of an international agency trying to stamp out polio. If you admit that the oral polio vaccine can actually cause polio, some people who might otherwise vaccinate their children will decide not to do so – and some children you might have saved will die. You almost certainly feel ethically diminished when you refuse to admit that your vaccine can give people polio, but you figure dishonesty is not too high a price to pay for saving children’s lives. What you’re not noticing is that your dishonesty is leading some people to believe your critics’ most extreme claims – not just that your vaccine can occasionally cause polio (which is true) but also that your vaccination campaign is a western genocidal plot (which is not true). That too will cost lives, many more lives in the long run, I believe. But somehow you feel far more responsible for the lives that would have been lost if you had disclosed an inconvenient truth than for the lives that will be lost because you drove people into the arms of your critics.

4. Opponents are delighted when you decide against full disclosure, leaving them the pleasure of disclosing for you.

Even if you have no opponents, the odds that members of your audience will eventually run across the inconvenient information you decided not to disclose are probably higher than my one-in-20 standard. And when they do run across it, they may well become your new opponents, determined to spread the word. So even in an opponent-free world, failing to disclose inconvenient information is a bad risk.

When you have legions of opponents, failing to disclose inconvenient information isn’t merely a bad risk. It’s stupid. It’s especially stupid (crazy, even) when the inconvenient information isn’t actually secret – when it’s readily accessible on Google (a research tool your opponents just might have access to) or even in your own technical report or on your own website.

This is the worst of both worlds: You are revealing the inconvenient truth to your opponents, who will be sure to unearth it – while leaving them the pleasure of deciding when and how to reveal it to your undecided stakeholders, whose research is less diligent.

Remember, my assumption in this column is that you’re quite sure your side is the right side. You’re not omitting the other side’s best arguments because you’re trying to mislead the audience into making a bad decision; you’re omitting them because you think the other side’s best arguments would mislead the audience into making a bad decision. Your motives are pure.

It’s your strategy that’s flawed. The other side’s best arguments are misleading. They’re true as far as they go, but they’re more emotionally compelling than rationally dispositive … which is precisely why you don’t want to acknowledge them. But they’re all the more effective, and thus all the more misleading, when you neglect to mention them, giving your opponents the golden opportunity not just to fill in the blanks but also to emphasize your dishonesty.

This is one major reason why a scientific debate in which most of the evidence is truly on one side can often look like a toss-up to journalists and bystanders. For example, I think there’s a much stronger case that global climate change is a serious problem than that it’s a leftist fairy tale. But the opposition viewpoint does have some arguments on its side. Far too often, in my judgment, proponents of reducing the world’s output of greenhouse gases ignore or trash the other side’s case. That allows climate change skeptics/deniers to showcase their few good points in the most effective of frames: “What the global warming people aren’t telling you is….”

Exactly the same thing happens when a company ignores or trashes the few good arguments against its proposed chemical factory, or when the international public health profession ignores or trashes the few good arguments against the oral polio vaccine. If your side of a controversy is 90% right, or 99% right, or even 99.99% right, it’s far wiser to disclose the remaining ten percent or one percent or one hundredth of one percent of the truth than to ignore it or trash it, thereby giving your opponents their strongest argument on a silver platter: your failure to disclose.

When I first studied communication in the 1960s, one of the major approaches was called “inoculation theory.” Nearly 50 years later it is still a major approach. Developed by William J. McGuire, inoculation theory argues that people are likelier to resist an opposition message if they’re already familiar with it and have had a chance to rebut it in their minds … or at least to put it into context.

The analogy to medical inoculation is apt. A weakened form of a disease agent is developed, one that’s (usually) too weak to cause the disease but still strong enough to trigger the production of antibodies, thus protecting the patient from future illness caused by that disease agent. McGuire’s theory has inspired hundreds of research studies showing why it’s good communication strategy to expose people to a “weakened” form of the opposition’s best arguments. In practice, that often means acknowledging the opposition’s best arguments in the same message that incorporates your side’s even better arguments.

Put yourself in your opponents’ shoes. They know you can make a pretty strong case for the chemical factory or the polio vaccination, but they’ve got one or two dynamite counterarguments they hope will be enough to turn the tide. Their best shot is if their counterarguments come as a total surprise to the audience, a shocking revelation that casts doubt on everything you’ve been saying. So don’t let that happen. Disclosing the other side’s best arguments is your best shot at inoculating the audience against those arguments.

There’s another reason for acknowledging the true but misleading arguments of your opponents: Doing so enables you to distinguish those arguments from the opponents’ arguments that are egregiously false. This works a lot better than pretending they’re all nonsense. Thus your failure to disclose the other side’s best arguments not only makes those arguments more powerful when the other side springs them on the audience. It also strengthens the other side’s not-so-good arguments, and even its ridiculous arguments.

5. Ambivalent people go to the side of their ambivalence that is inadequately represented in their communication environment.

What I have sometimes called the seesaw of risk communication is fundamental to resolving the full disclosure dilemma. People trying to decide whether or not to do X are often torn – that is, ambivalent. They’re aware of its benefits and find them attractive, but they’re also concerned about its risks. Benefits and risks are doing battle in people’s minds.

Here’s the paradox of the seesaw: When people are ambivalent, they tend to resolve their ambivalence by emphasizing the half of it that everyone else seems to be neglecting. If everyone keeps harping on the benefits of X, I’ll focus all the more on its risks. If others are going on about the risks, I’ll rebut them in my head with the benefits. Of course people aren’t always ambivalent – but when they are, overselling the benefits of X is a lot less persuasive than acknowledging its risks alongside its benefits.

My wife and colleague Jody Lanard used to be a practicing psychiatrist. Even though her sickest patients were desperate for symptom relief, many were also understandably (and wisely) leery of psychopharmaceuticals. When Jody wanted to prescribe meds to a patient, she usually started out on the anti-drug side of the seesaw, emphasizing that psychotropic medications were a mixed blessing with lots of unpleasant side effects. She often assigned her patients to study the complete patient package insert and come back with questions. The more reluctant Jody was to prescribe, the more enthusiastic most patients became about trying the meds.

Jody’s goal wasn’t simply to use “reverse psychology” (that is, the seesaw) to lure her patients into accepting their meds. Ultimately, she wanted to guide her patients toward the seesaw’s fulcrum, to give them a balanced understanding of both the benefits and the risks of the drugs she was considering prescribing. We sometimes call the fulcrum of the seesaw “the place where grownups live.”

Some years ago I consulted for a company that manufactured fiberglass. When the International Agency for Research on Cancer announced that fiberglass was a possible human carcinogen, competitors said as little about the problem as they thought they could get away with. My client, by contrast, put out endless communications about the uncertain state of the science and the ways fiberglass users could protect themselves from the possible risk (and in some cases the possible liability). For many customers, my client quickly became the supplier of choice. I’m not sure if this was the seesaw at work, or the dynamics of trust, or some other factor or combination of factors. What I know is that many fiberglass users decided they’d rather buy from a company that was determined to keep them up to date on the cancer issue than from a company that had little to say about the issue.

A few months ago I told this story to a pharmaceutical industry client trying to decide what to tell women about the adverse health effects of birth control pills. (Although these health effects are less serious than the effects of an unwanted pregnancy, they are not trivial.) Of course every oral contraceptive company produces a patient package insert that offers customers the legally required risk information … which very few customers actually read. Would a more aggressive campaign to tell young women about blood clots and other oral contraceptive risks trigger a seesaw response, actually relieving some users’ concerns about health effects? Would such a campaign build trust? Would it diminish the impact of later anti-pill communication campaigns (by attorneys fishing for prospective plaintiffs, for example)? Would it encourage pill users to monitor their own symptoms more effectively and thus reduce the frequency of serious side effects? Would it lead juries to see things differently when pill users who later developed medical problems that might have been side effects decided to sue the company? Or would it just scare off a lot of women who would rather imagine that their birth control pills were completely safe?

All of these effects doubtless occur. Their relative frequency is what determines whether full disclosure that goes beyond a company’s legal obligation is wise or foolish. I accept that there are times when fully, aggressively, proactively disclosing the risks of a recommended course of action isn’t the best way to get people to do what you’re recommending. But far more often than my clients imagine, full disclosure (or something close to full disclosure) is exactly that: not just the most ethical thing to do, but the most effective as well.

There’s another seesaw worth mentioning here, because it comes up so often in risk controversies. It’s the seesaw of respect.

You may well consider your critics – at least your most extreme critics – to be crazies who aren’t worthy of anybody’s respect. But your undecided stakeholders are likely to be on a seesaw about that too. If you talk about your critics and their arguments with contempt, or refuse to dignify them by mentioning them at all, your lack of respect can easily make them seem more respectable (and certainly more powerful) to the rest of us. By contrast, if you are endlessly respectful of your most extreme critics and everything they have to say, we feel far freer to start rolling our eyes at their outlandish claims.

Of course being respectful doesn’t mean validating the truth of those outlandish claims. Respectfully validate your critics’ valid points, even if they represent only a tiny piece of the overall truth. Respectfully explain why your critics’ invalid points are mistaken, even if you have explained that umpteen times already. Leave the eye-rolling to the rest of us.

6. When people are newly aware of a risk, they often go through a temporary overreaction called an “adjustment reaction” – and then they adjust and calm down.

I have written before about adjustment reactions, and won’t go over the same ground here.

The main thing to remember about adjustment reactions is this: With rare exceptions, you can’t skip that step. Sooner or later, people are probably going to hear about the risks associated with your recommended behavior. When they do, they’re going to have an “Oh My God” moment, a brief overreaction. You can provoke the adjustment reaction now by acknowledging the risks, or you can hide or downplay the risks in order to postpone the adjustment reaction until later. That’s basically your only choice.

“Now” is usually the better option. For one thing, you’re there now to guide people through the adjustment reaction – to validate people’s fears and help put them into context. And there may be time for people to get through the adjustment reaction before they have to act on your recommendation.

If you’re trying to push your audience into an instantaneous and irrevocable decision, it may make tactical sense to postpone the adjustment reaction until it’s too late for people to back out. (It is to avoid such tactics that many laws provide for a period of “buyer’s remorse,” during which purchasers can legally back out of a contract they signed in haste.) But if you’re trying to build support that will last for a while, you’re better off front-loading the adjustment reaction than postponing it.

7. Even if one-sided risk communication might work for a single decision, it won’t work for an ongoing relationship.

Many risk decisions, perhaps most, are part of a series of such decisions. How candid a communicator was last time has a lot to do with how the audience is likely to respond next time.

That’s why wise organizations care about their reputations, and why trust is one of the reputational variables they track. In my recent column on “Two Kinds of Reputation Management,” I argued that for most purposes a bad reputation hurts more than a good reputation helps. I’m pretty sure that’s true of the trust component of reputation. A reputation for trustworthiness probably does confer a bit of a halo effect; when you come under attack, people may be predisposed to believe your defense. But the trustworthiness halo is unlikely to survive a concerted, high-profile attack – and there’s no way it can survive actual, documented misbehavior. (The U.S. banking industry once had a reputation for trustworthiness.)

A reputation for untrustworthiness is much more stable. Once people notice that your organization tells only the part of the truth that suits its purposes, they aren’t about to take your word for anything.

So let’s assume, hypothetically, that a brand new organization with no reputation at all is trying to decide whether it should fully disclose the fairly minor risks of a recommended action. Let’s also assume that in this particular situation suppressing the inconvenient risk information stands an 80% chance of success; the odds are four out of five that nobody will ever find out. And let’s assume that the recommended action will be over soon, so even if the truth does come out later it won’t do 20 times as much harm (as I argued in #3) because the die will have already been cast. Looking simply at this one situation, then, full disclosure would be a dumb move (ethics aside).

Now let’s look at a series of four such situations. Each time, the organization has an 80% chance of getting away with telling only the self-serving parts of the truth. The probability that it will get away with doing so four times in a row is 80% × 80% × 80% × 80% – which is about 41%. In other words, by the time our hypothetical organization has decided against full disclosure in four separate situations, the odds are better than even that it has been caught at least once.
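
Here is the same compounding arithmetic as a small Python sketch, using the column’s illustrative 80% figure (the variable names are mine):

```python
# Chance of getting away with less-than-full disclosure n times in a row,
# assuming each situation independently succeeds with probability 0.8
# (the column's illustrative figure).

p_get_away_once = 0.80

for n in range(1, 5):
    p_never_caught = p_get_away_once ** n
    p_caught_at_least_once = 1 - p_never_caught
    print(f"{n} situation(s): never caught = {p_never_caught:.0%}, "
          f"caught at least once = {p_caught_at_least_once:.0%}")

# After 4 situations: never caught is about 41%, caught at least once about 59%.
```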

Once caught, the organization has earned a reputation for untrustworthiness – a reputation that will stay with it for years to come. Now the probability of its getting away with less than full disclosure is no longer 80%. It’s close to zero, because people will be suspicious, closely checking every claim the organization makes, every word it says. Now, in fact, even full disclosure won’t come across as trustworthy. “If even these dishonest SOBs are admitting A and B and C,” untrusting stakeholders may speculate, “imagine what they must be hiding!”

This hypothetical arithmetic explains why so many organizations get sucked into a reputation-destroying pattern of one-sided risk communication. It’s hard to resist each individual 80% shot at successfully suppressing inconvenient information.

This arithmetic also explains why such organizations find it so hard to reverse course. Once you’re not trusted – either because you’ve been caught withholding relevant information or merely because people have come to sense how one-sided your information is – full disclosure no longer has much payoff for you. Your low-trust reputation is a sunk cost; people are discounting what you say anyway. So where is the incentive to change your ways?

After 40 years of risk communication consulting, I am convinced that organizations can overcome a reputation for untrustworthiness. In fact, they can overcome it rather quickly. Switching to a policy of full disclosure is obviously a prerequisite, as is implementing that policy consistently. But the toughest requirement is acknowledging that it’s a new policy. “It’s not enough to start being completely honest,” I tell my clients. “You need to point out – not just once but again and again – that you haven’t been completely honest in the past.” For a wide range of reasons, from ego to morale to liability, my clients are rarely willing to do this.

8. Risk decisions aren’t symmetrical – reassuring communications have a stronger full disclosure obligation than alarming communications.

This column is about the reasons why advocates of a potentially risky action ought to disclose the risks, even when those risks are small – why a chemical company should disclose the risks associated with the factory it’s urging people to accept into their neighborhood, and why a polio vaccination campaign should disclose the risks associated with the vaccine it’s urging people to accept into their children’s bodies.

Even if you are confident that your chemical plant or your vaccine will benefit your target audience much more than it will endanger them, and even if you are justifiably concerned that too much candor might deter people from making a sound decision, there are good reasons why you should at least give serious consideration to disclosing the risks anyway. In this column I have tried to lay out some of those reasons.

I can’t imagine writing a parallel column about the reasons why activists who are convinced the chemical plant or the vaccination campaign is more dangerous than valuable should nonetheless disclose the arguments on behalf of acceptance.

Many societal institutions are conservative with regard to risk. We calibrate smoke alarms to go off too easily, for example, preferring the hassle of an occasional false alarm to the disaster of a fire we weren’t warned about. We similarly calibrate activists to go off too easily. Within reason, we consider exaggerated warnings a public service. But exaggerated reassurances are an unmitigated disservice.

This asymmetry understandably infuriates those who are stuck on the reassuring side of a controversy.

One key reason for the asymmetry is the fact that people are more attached to what they have than to what they might get. Research by Daniel Kahneman and others has repeatedly demonstrated this phenomenon, known as “loss aversion.” It explains why “double or nothing” is usually an unattractive wager; the risk of losing $100 that’s already ours feels like a bigger deal than the lure of winning an additional $100. Faced with a choice between a benefit (something new that we might get) and a risk (something we already have that we might lose), nearly everybody focuses more on the risk. There are exceptions – entrepreneurs and gamblers, for example – but for most of us a bird in the hand is worth more than two in the bush.
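
For readers who want to see the loss-aversion arithmetic, here is a small Python sketch. It borrows the value function and parameter estimates from Tversky and Kahneman’s 1992 cumulative prospect theory paper (a diminishing-sensitivity exponent of about 0.88 and a loss-aversion coefficient of about 2.25), ignores probability weighting, and is purely illustrative rather than anything from this column.

```python
# Why "double or nothing" on $100 feels unattractive even though its expected
# dollar value is zero. Sketch of the Tversky-Kahneman (1992) value function;
# the parameters are their published estimates, the example is illustrative.

ALPHA = 0.88          # diminishing sensitivity to gains and losses
LOSS_AVERSION = 2.25  # losses loom roughly twice as large as equivalent gains

def subjective_value(x: float) -> float:
    """Subjective value of a dollar gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** ALPHA
    return -LOSS_AVERSION * ((-x) ** ALPHA)

# 50% chance to win another $100, 50% chance to lose the $100 you already have.
bet_value = 0.5 * subjective_value(100) + 0.5 * subjective_value(-100)
print(f"Expected dollars: $0.00; subjective value of the bet: {bet_value:.1f}")
# The negative subjective value is why most of us decline the "fair" wager.
```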

In the typical risk controversy, the alarming side is the one protecting what we have already, giving it a built-in advantage. The reassuring side may be offering us the sun, the moon, and the stars, but it is threatening some of what we have already. So the reassuring side has a full disclosure obligation that the alarming side doesn’t have.

Sometimes it may be possible for the reassuring side to reframe the controversy so it’s got something to alarm us about. Global warming skeptics started out on the reassuring side, downplaying the risk of climate change. But they have managed to reframe the issue so it’s largely about the risk of destroying our economy and way of life if we overreact to the unproved prospect of climate change. Similarly, a controversy over the risk of the polio vaccine intrinsically favors the anti-vax side. A controversy over the risk of polio outbreaks favors those who want to get people vaccinated.

However a controversy gets framed, the alarming side is freer than the reassuring side to overstate its case and ignore its opponents’ case.

This asymmetry doesn’t mean the alarming side in a risk controversy is free to say anything it wants. Dishonesty can be costly to credibility and reputation even when you’re sounding the alarm. Consider the heavy cost global warming alarmists have paid for the “Climategate” revelations of carefully one-sided research reports. Or consider the reputational damage done to the World Health Organization in Europe because it kept harping on the seriousness of the swine flu pandemic of 2009-2010 long after the European public could see how mild it actually was.

Even so, when you think people are insufficiently concerned about some risk and you’re trying to rouse them out of their apathy, you are granted considerable leeway to argue your case one-sidedly. Your principal obligation is to dramatize your cause as effectively as you can. If in the process of doing so you overstate your evidence a bit and pretty much ignore your opponents’ evidence, that’s probably not going to devastate your credibility. Sooner or later people will notice your one-sidedness, and will rightly categorize you as more activist than objective observer. (That can be a problem for health officials who want to talk like activists but still be seen as scientists.) But unless you go too far, sliding down the slippery slope from dramatization to exaggeration to flat-out dishonesty, there’s a respectable niche for alarmist activism.

Full disclosure simply isn’t as important an issue for the kind of risk communication I call “precaution advocacy”: trying to increase people’s outrage about serious hazards.

But full disclosure is a crucial issue for the opposite kind of risk communication, “outrage management”: trying to decrease people’s outrage about small hazards.

Precaution advocacy requires dramatization and vividness, and permits a fair amount of one-sidedness. Outrage management, by contrast, requires excruciatingly thorough disclosure of the other side’s piece of the truth, no matter how minuscule.

Full disclosure is important when you’re trying to convince people to accept your chemical plant or your polio vaccine despite its small risks. Your opponents, the people insisting the risks aren’t small at all, get to be a lot more one-sided than you do. Sorry.

Copyright © 2010 by Peter M. Sandman
