Social media and source coordination in pre-crisis and crisis communication
Date: December 4, 2008
I have tried to read most of the contents available on this site … very valuable indeed.
Currently I am putting together a few templates on disaster-related communication that can be used for training and emergency preparedness by staff and the general public. I am exploring the role of electronic communities like Facebook and what users of Facebook can identify as credible when this discussion on prevention begins.
Finally, all communication releases have to anticipate the need to work in joint groups when deciding content and timelines for information/instruction releases. Can there be effective merging of these varied activities all of which are crucial in times of disaster?
You’re raising two quite different crisis communication topics – the role of social media and the importance of source coordination.
Social media in pre-crisis and crisis communication
I am not an expert – I’m just barely a participant – in social media like Facebook. Others are way ahead of me in figuring out how these media can best be used for pre-crisis and crisis communication: to involve Millennials in the dialogue over what society should do to prevent preventable disasters and prepare for those that can’t be prevented; to motivate Millennials to launch their own prevention and preparedness activities; and to guide Millennials when an emergency strikes.
For more on this topic, see my earlier Guestbook entry on “Risk Communication and Media 2.0.” Better yet, Google something like “social media and emergency preparedness.” You’ll find a wealth of materials with titles like “How Municipalities Should Integrate Social Media into Disaster Planning,” “Social Media and Disaster Management,” and “Social Media and Your Emergency Communication Efforts.”
Social media have two characteristics that make them essential for pre-crisis and crisis communication.
First, social media reach an audience that has pretty much abandoned mainstream media information sources – people who don’t spend much time reading newspapers or watching TV news. Outreach efforts that ignore these new media will miss this key audience almost entirely. This is especially true with regard to pre-crisis communication. When disaster actually strikes, many social media users will also go to mainstream and “official” news sources.
Second, social media are enormously more interactive than the mainstream media. That makes them an ideal vehicle for fostering dialogue, collaboration, critical questioning, buy-in, initiative, creativity, and resilience. Getting Generation Y to pay more attention to traditional media wouldn’t improve pre-crisis and crisis communication much; getting old-timers like me to pay more attention to Web 2.0 would help enormously!
One less enthusiastic footnote: In some kinds of emergencies, the Web may be down as a result of cyberterrorism, a power outage, or excessive demand. So don’t put all your crisis communication eggs in one basket.
You mention credibility on Facebook. I don’t know whether this has been studied, and if so how the results came out, but I’ll bet that social media users are most responsive to other social media users – to people whose posts they’re accustomed to reading, or at least to people who sound like they’re accustomed to posting. I doubt it’ll work for a government agency to burst onto the social media scene periodically with a pre-crisis or crisis message of some sort, then disappear till the next time. To be credible, you need to inhabit these media, not just exploit them. This is partly about being familiar and comfortable with social media customs (and even lingo). But it’s more than that. Social media foster, and pretty much require, a frame of mind that’s dialogic – that’s willing to say things you’re not sure about, willing to hear things you don’t necessarily agree with, and willing to see a consensus emerge that isn’t where you started. That’s exactly the right frame of mind for effective pre-crisis and crisis communication … but it’s not always an easy adjustment for corporate leaders and government officials.
Because I don’t have a lot of firsthand experience with social media, I sent this Guestbook entry to a colleague who does: Greg Dworkin, a Connecticut physician who helps run Flu Wiki and blogs frequently (as “DemFromCT”) on Daily Kos. I have appended his response.
Source coordination in pre-crisis and crisis communication
The other issue you raise is pre-crisis and crisis source coordination: getting sources of information and instruction to work together. I feel a lot more qualified to comment on that issue than on social media use. But I should warn you that my opinion is a minority opinion. Most crisis communication experts believe that it’s absolutely crucial for emergency managers to “speak with one voice.” I think that would be nice in principle, if there were only one view on what’s happening and what to do about it. But in practice emergency managers often disagree, and the effort to speak with one voice thus becomes an effort to suppress disagreement … an effort that nearly always backfires. The people forced to hide their true opinions get passive-aggressive, the disagreement leaks out anyway, and the public ends up thinking you’re neither coordinated nor candid. For a column specifically on this topic, see “‘Speak with One Voice’ – Why I Disagree.”
While I am very skeptical about the merits of the “speak with one voice” dictum, I nonetheless believe in crisis communication coordination. Here are four conventional coordination recommendations with which I wholeheartedly agree:
- Insofar as it’s possible (and safe), all crisis information sources should be physically located in the same place. This is convenient for journalists, of course. More importantly, it facilitates face-to-face discussion among the sources to identify disagreements and reconcile them if they’re reconcilable. On this point I go even further than most other experts. I would invite even activist groups and other opponents to staff a table at the emergency information center.
- Sometimes crisis information sources sound like they disagree when they really don’t. If one municipality says to boil water for five minutes and another says to boil water for seven minutes, anxious citizens may stress out over the difference. In cases like this, it makes sense to agree on a single number and thus avoid the appearance of disagreement. If the disagreement is real but not central, it’s often possible to agree on a range. “Boil the water for 5–7 minutes” should work.
- Even when the disagreement is too important to agree on a range, it’s useful to show awareness of and respect for the other opinions that are out there. “Smith County is recommending that people do X. We’re aware that Jones County is recommending Y instead. The two counties have discussed the matter. Both approaches look appropriate, and there is no definitive answer on which is better. We’ll be following the results in both counties to see if any differences emerge.” This isn’t ideal; people would rather both counties made the same recommendation. But it’s better than looking like you’re unaware of the other county’s approach, or sounding contemptuous of the other county’s approach, or being forced to copy the other county’s approach.
- When you actually do agree, then it’s a toss-up whether it’s better to put out two closely related and coordinated campaigns or one combined campaign. The obvious advantage of a single campaign is efficiency and cost-effectiveness; one brochure with two logos costs less than two brochures with virtually the same messages. On the other hand, impact and credibility are cumulative. People are most affected by a message when they’ve heard it and seen it a bunch of times in different contexts – not the same poster or radio ad every time they turn around, but the same information in lots of different formats from lots of different sources. On balance, I urge my clients to coordinate their pre-crisis and crisis messaging without actually merging their messages.
Greg Dworkin responds:
A few things I might add to Peter’s comment:
Social media foster online community. This means that folks who are frequently online, and used to reading each other and interacting with each other, might be more likely to accept messaging that comes from “one of their own,” particularly if it’s someone previously identified by the community as knowledgeable on the topic. However, that only gets them a respectful hearing. The case still has to be made, the message still has to carry weight, and (depending on the community) you should expect lively challenges to whatever’s being said – because that’s how many communities function; it’s a feature and not a bug!
Also, all communities have varied opinions. As a native New Yorker, I would not presume to speak for New York (and I might well disagree with the Mayor if he says something I don’t think is true). The same is true for online communities. One should therefore not make the mistake of thinking that any selected individual speaks for or represents the entire community (whether you are the selector or the selectee).
Finally, the lifeblood of online community credibility is links. It isn’t just the credentials, it’s also the content. Links and references go a long way in establishing (and maintaining) credibility. Online communities haven’t abandoned traditional media so much as learned not to rely exclusively on traditional media, particularly outlets that might be a day or two behind on the news. So if the Washington Post or a government website says the same thing as an online source, there’s more comfort with both than if the information is provided by either alone. And both should be linked if they are original sources. This is an interesting variation on “similar message, many voices.”
Which media work best in different kinds of risk communication?
Date: October 26, 2008
Do you think all media are equally potent in risk and disaster communication?
Media choice is a complicated topic. There are books written on it, and I can’t cover the ground properly in a Guestbook response. But I’ll take a crack at identifying some of the factors that influence media choice in the three main kinds of risk communication (as I define it): precaution advocacy, outrage management, and crisis communication.
Media differ along a lot of dimensions. Some of the key ones are these:
- Reach/efficiency. Some media (e.g. television) reach millions of people at once. Others (e.g. door-to-door canvassing) get to them one by one.
- Directionality. The traditional mass media (TV, radio, newspapers, etc.) are one-way; it’s very hard for the audience to talk back. A lot of less-mass media are also one-way – pamphlets, for example. Websites can be two-way if they have significant interactive features; meetings (especially small ones) can be two-way if you give people a chance to participate.
- Cost. Public relations relies mostly on so-called “free media” – that is, on doing or saying something newsworthy. Websites are also free – and so is posting comments on other people’s websites and blogs, or putting your own video on YouTube. Advertising, on the other hand, can be incredibly expensive. Producing your own printed materials also runs into a lot of money if you’re trying to reach a lot of people.
- Message control. You can control what you say to a reporter, but not what the reporter does with what you say. Your advertisement or your brochure or your website, on the other hand, is all yours.
- Access control. If you’re calling people on the telephone or knocking on their doors, you control their access to the message more than they do – but they can still hang up or not answer. If you put an ad next to their favorite TV show or a billboard along the road they use every day, you’re controlling their access even more. An article in a newspaper, on the other hand, is easy for them to skip if they’re not interested. And they’ll never get to your website unless they decide to go there.
- Availability. Not everybody has a television or a computer. Not everybody knows how to read. Each media choice is readily available to some people and virtually unavailable to others.
- Speed. It can take years to write a book, months to create a website, weeks to produce a brochure, days to plan a news conference, hours to set up an interview, minutes to post on a blog, seconds to upload to YouTube.
- Emotional and sensory arousal. Big color photos have a lot more sensory and emotional impact than tiny type. Moving pictures with sound – television, a YouTube clip, etc. – are still more arousing. A meeting can sometimes be even more arousing than TV … or even less arousing than print.
- Length/detail. Television public service announcements usually last ten seconds, and almost never run longer than a minute. Sound bites on radio and TV news are about equally short. A meeting, on the other hand, can go on for hours. Or you could write a book….
Okay, given these nine dimensions – and we could easily list another nine – how should your media choice vary depending on what kind of risk communication you’re doing?
Media choice for precaution advocacy
Precaution advocacy is what you do to alert audiences that you consider insufficiently outraged about serious hazards. Since the audience is apathetic or unaware, it’s crucial for you to have access control. People don’t go looking for information to reduce their apathy or lack of awareness; you need to go where they are and take them by surprise.
Emotional and sensory arousal is important too. You’ll need all the help you can get to provoke some interest.
Reach/efficiency and availability may be important. If you’re trying to reach large numbers of apathetic and unaware people, you need media that reach people efficiently, and you don’t want to miss too many people because they lack access to the media you picked.
Message control would be nice, so you can wordsmith your message to make it as effective as possible – especially since you’re not going to have long to make your point before your apathetic audience loses interest. On the other hand, the media that provide the most message control tend to be the most expensive. And cost probably counts; precaution advocacy is all-too-often done on a shoestring budget.
You may or may not need speed. Many precaution advocacy campaigns are evergreen; next month will do as well as this month for urging people to quit smoking or wear seatbelts. Even if you’re warning people about global warming or the inevitability of an influenza pandemic sooner or later, doing it right matters a lot more than doing it fast. But sometimes the risk that people are too apathetic about is imminent – an approaching hurricane, for example – and speed matters.
You don’t really need media with two-way capability; apathetic and unaware people don’t have a lot to say back to a precaution advocacy campaign. And you don’t need length/detail; your audience won’t sit still for a long, detailed message anyway.
Ever wonder why so many precaution advocacy campaigns rely on posters, billboards, and the like? Because they meet the key precaution advocacy criteria: access control, emotional/sensory arousal, reach/efficiency, availability, message control, and cost.
Media choice for outrage management
Outrage management aims at calming people who you think are excessively outraged about a small hazard. Outraged people are extremely unlikely to calm down unless they are given a chance to express their concerns and grievances, and unless those concerns and grievances can be validated and responded to. So the single most important media requirement for outrage management is directionality: two-way media are essential.
Message control and length/detail are also important. Much of what you need to say in outrage management (after you’ve done a lot of listening) isn’t what people expect you to say – acknowledging your critics’ arguments, apologizing for your misbehaviors, sharing credit and control, etc. Audiences (and journalists) tend to misperceive, garble, or lose track of such unexpected messages, so you need a good deal of control over what you say and a good deal of time or space to spell out what you mean.
Access control may or may not matter. In many controversies, outraged people will seek you out; they want to vent even if they don’t especially want to hear from you. But if their outrage is directed elsewhere, if you’re a third party trying to calm their outrage, then you’ll need a fair amount of access control to make them aware of your messages (and your openness to their messages).
Speed, too, can be important if you’re trying to nip an emerging controversy in the bud. But more often than not in outrage management, the controversy is firmly established already and so speed doesn’t matter much anymore. Even though outrage tends to grow when it’s neglected, once people’s grievances are long-lived, you should take the time you need to think through and roll out your (already belated) effort to listen and respond.
Cost always matters, of course. But outrage management budgets tend to be bigger than precaution advocacy budgets; organizations in reputational trouble find the money they need to address the controversy. And the outrage management audience (stakeholders, not the general public) is smaller and less expensive to reach than the precaution advocacy audience.
The other three criteria matter little in outrage management. Reach/efficiency is about reaching a mass audience, whereas outrage management tends to be retail, not wholesale; the stakeholder audience is comparatively small. Availability usually isn’t an issue in outrage management, since the most useful media for your purposes are low-tech and readily available. As for emotional/sensory arousal, people are already more aroused than you wish they were.
The public meeting is the paradigmatic medium for outrage management. It’s two-way. It gives you control over your message (and gives your stakeholders control over their messages). And it can run all night if it needs to.
Media choice for crisis communication
In crisis communication people are rightly outraged (angry, frightened, etc.) about a genuinely serious hazard. So arousing outrage is unnecessary and calming outrage is inappropriate. Your goal is to validate the outrage, help people bear it, and help them choose wise rather than unwise precautions to cope with the crisis.
In any crisis, speed is paramount by definition. So is reach/efficiency. You need to get to a lot of people as quickly and efficiently as possible.
Availability is paramount too – and is often the biggest crisis communication challenge. Especially in multicultural societies and those with poor, rural communities, it can be extremely difficult to reach pockets of vulnerable populations who don’t speak the majority language or don’t have access to the mainstream media.
Message control is also important. People in crisis situations are motivated to hear what you’ve got to say, but they may be too upset to understand it easily; you’ll need to do a lot of repeating and clarifying. But while it’s important, message control isn’t often a problem in a crisis; journalists become a lot less investigative and a lot more stenographic when officials are trying to guide the public through a potential disaster.
For similar reasons, cost and length/detail are unlikely to be major crisis communication problems. In crisis situations, access to free media may be virtually unlimited.
Directionality is a toughie. People in crisis really do want to ask questions and make suggestions – and they’re a lot likelier to act wisely if the process of deciding how to act is interactive. But there simply may not be time for two-way one-at-a-time communications with large groups of people who need to take emergency action now. Under crisis conditions, people generally get the key guidance from the authorities via one-way media, and then do their own two-way communicating with each other to thrash out how they propose to respond.
Access control seldom matters once people know there is a crisis. They are actively seeking out guidance. Nor is emotional and sensory arousal needed. The crisis itself is accomplishing the arousal task for you.
In most societies, the medium that offers the best combination of speed, reach/efficiency, and availability is radio, and so radio is still the most important crisis communication medium.
When enforcement officials try to do risk communication
Name: Edna Pedroza Luna
Date: October 21, 2008
I am doing some research about the use of the theory of risk communication in food safety enforcement in the U.K. In particular, how can officers authorized to inspect premises communicate HACCP principles to food businesses, especially to owners and workers who come from ethnic minorities and whose English is limited?
HACCP is a method of assuring food safety, and moreover is required by legislation. I have found that some of the main barriers to the correct implementation of HACCP have been lack of trust and motivation. There are also communication barriers with proprietors (e.g. language or cultural issues).
Most of the literature I have found is related to health care services, the relationship between patients and doctors, and the perception of risk among ethnic minorities (especially in crisis situations). I also read your book about Strategies of Effective Risk Communication, particularly the description of the types of risk; and I have read in your website about other factors that need to be addressed. However, I have problems figuring out how to focus the evidence on the enforcement process in practice – how the use of this theory could address the issues of non-compliance of these businesses.
I think you’re asking two questions. Some of what I know might help with one of your questions. I know almost nothing about the other one.
One question is the role of risk communication in HACCP enforcement – and in enforcement generally. That’s the one where some of what I know might be helpful. I think it’s intrinsically difficult to do effective risk communication across a power imbalance – not impossible, but certainly difficult. If I have the power to make you do something, and to shut down your business if you don’t comply, then you have a good reason to do what I say (or at least to pretend to do what I say when you think I may be watching). You’re doing it because of my enforcement hammer. So you’ll be as compliant as you think you have to be, given your assessment of the risk of getting caught if you don’t comply. And of course you’ll resent my power over you. As a result, obviously, you will have very little incentive to learn (from me) why HACCP is a smart thing to do. This is a well-established finding in risk communication, and in communication generally. People who change their behavior because of coercion experience less attitude change than people who change their behavior for other reasons. There’s no cognitive dissonance: the coercion itself fully justifies the behavior change, so there’s nothing left to resolve by changing attitudes. When the coercion ends, therefore, the behavior change is likely to end too.
If my young child urges me to wear a seatbelt, I’ll do it to keep the kid happy and feel like a good daddy; then I’ll experience some cognitive dissonance about giving my child so much control over my behavior; and then I’ll reduce the dissonance by deciding that seatbelts are an important safety precaution. But if the authorities threaten to fine me if I don’t wear a seatbelt, I’ll wear it to avoid the fine; there’ll be no cognitive dissonance and therefore no incentive to decide that it’s a worthwhile safety precaution. (The example is hypothetical; my children are grown, and my seatbelt-wearing is habitual.)
There are surely other aspects of the coercive authority relationship that bear on risk communication efforts across that relationship. And there are risk communication strategies that should help cope with the problem (acknowledging the coercive relationship, for example, and conceding that nobody likes learning from an enforcement officer). It’s a question worth considering further: How to do good risk communication when you’re in a coercive authority relationship with the audience – when you’re the boss, the regulator, etc. I hope you make progress on it, and I hope you let me know what you come up with.
The other question, the one I know almost nothing about, is how the enforcement relationship is likely to be complicated by cultural and linguistic barriers when the person on the receiving end of the enforcement is a non-native speaker. As you know already, there’s a lot of research to show that immigrants are less trustful of the authorities than members of the majority culture. This is especially true of illegal immigrants, for obvious reasons; and it’s especially true of immigrants from places where the authorities are likely to be corrupt or brutal or both. So you have three kinds of additional barriers to surmount, beyond the basic barrier of being a coercer rather than a teacher: language, culture, and more-than-typical mistrust/fear of authority.
That much I know. But I’m sure you know it too. There may be more that is known, but I have done very little work on risk communications with linguistic and cultural minorities. And as you have observed, most of the readily available research on this topic is about crisis communication, not regulatory risk communication.
Overlapping definitions: “risk communication,” “crisis communication,” and “health education”
Field: Risk Communication Technical Group for Ministry of Health and ASEAN
Date: September 29, 2008
Do you have any risk communication plan for Dengue control that I can refer to?
I have been reading your articles since 1999. I have learned a lot from you.
There is currently a debate in Malaysia re: What are the differences and similarities between Risk Communication (RC) and Health Education (HE)? I have been saying that RC is a tool used during crises and HE is to warn people of risks in non-crisis situations.
I may be wrong; I stand to be corrected.
I haven’t done any work specifically on Dengue, I’m sorry to say. But I do have a lot to say about the relationships among risk communication, crisis communication, and health education, and how these terms are defined.
As I’m sure you realize, all three terms (but especially “risk communication” and “crisis communication”) have a lot of quite different definitions. The way they are typically used has varied over time. More importantly, their use varies across disciplines. I will try to tease apart the main similarities, differences, overlaps, and confusions.
“Risk communication” is the newest of the three terms. It arose in the 1980s with regard to U.S. environmental controversies. Regulators and companies were concerned that members of the public were often excessively alarmed about small environmental hazards. Risk communication developed as a set of strategies to convince people that something (a nuclear power plant, a factory’s air or water emissions, a landfill) wasn’t nearly as dangerous as people thought it was.
It was obvious from the outset that risk communication was a daunting challenge. And given this definition, it was also obvious that it was a value-laden activity subject to abuse. That is, there were grounds for worry that it wouldn’t work, and grounds for worry that if it did work it would be used not just to reassure people about small risks but also to over-reassure them about serious risks.
In the three decades since the term first came into use, its definition has expanded. It is now widely used outside the environmental arena. And it is used outside the realm of reassurance. Today a doctor trying to reassure patients about the side effects of some medication is seen as doing risk communication – but so is a doctor trying to warn patients about the dangers of some lifestyle choice (eating too much, exercising too little, smoking) or a doctor trying to explain the pros and cons of two courses of action (surgery versus medication) so patients can make up their own minds. Some people use it as an extremely general term, meaning little more than “communicating about risk.”
Others – myself among them – have tried to limit the term “risk communication” to communicating about risk in a way that is grounded in research and theory, and that respectfully takes into account:
- social, cognitive, and emotional aspects of risk perception;
- the need for public involvement and candor; and
- a diagnosis of how technically serious the risk is and how upset the audience is, and a choice of communication approaches based on that diagnosis.
We welcome the broadening of the term beyond reassurance about small risks, but we resist its broadening to include communications about risk that are not purposeful and strategic; not informed by research and theory; or not respectful, honest, and two-way.
But risk communication still carries its historical association with reassurance, especially when applied to controversies. Activists, for example, very rarely use the term to describe their own behavior; to most activists “risk communication” (both the term and the activity) reeks of corporate efforts to deflect the public’s legitimate concerns.
The tendency of government and corporate sources to be more preoccupied with avoiding “undue concern” than with raising “due concern,” even in situations where the danger is serious, helps to sustain this association of “risk communication” with reassurance and even over-reassurance. Thus there are two reasons why critics of risk communication see it as a tool of powerful interests trying to quiet distress and quell dissent: because the term got its start as a label for efforts to reassure the public when a risk is small, and because powerful interests do tend to want to reassure the public even when a risk is serious.
“Crisis communication” has its origins in public relations, where it meant and still means talking to unhappy stakeholders in a situation that is bad for the client’s reputation. To a PR person, a “crisis” is a reputational crisis for the client (and if the client is a profit-making organization, a profitability crisis as well). The term is used regardless of whether the unhappy stakeholders have actually been endangered or damaged or only think they have. If your milk company has allowed melamine to get into your product, giving kidney stones to thousands of babies, that’s a crisis. If a false rumor that there’s melamine in your product is circulating on the Internet, from a public relations perspective (though not mine) that’s also a crisis.
Most of public relations involves selling the client’s virtues to an uninterested public. When the public is interested, and the interest is negative, the everyday, routine PR people give way to the crisis communication specialists. Decent PR professionals certainly understand that companies should respond differently when they’re guilty than when they’re innocent – but in both cases the PR crisis is the simple fact that they have been accused.
But outside of public relations, the term “crisis communication” is widely used to mean communicating in a situation where it is urgently important for people to hear what you have to tell them so they can take appropriate precautions – that is, in a real crisis. In this non-PR use of the term, it’s a crisis for your audience, not just for you.
The term “emergency communication” is also used to apply to efforts to tell people what’s going on and what they should do in crisis situations. Insofar as the two terms are different, “emergency communication” is reserved for acute and often widespread dangers to life and limb – fires, terrorist attacks, imminent hurricanes, etc. “Crisis communication” certainly applies to these sudden emergencies, but it is also used for slower not-quite-emergencies, and for urgent threats to wellbeing even when they are not health or safety threats. What political and business leaders are saying right now about the danger of economic meltdown is clearly crisis communication; it doesn’t feel quite right to call it emergency communication.
“Health education” is the oldest and clearest of the three terms. Most people both in and out of the field use this term to mean either or both of two things:
- information about health that people in the target audience don’t know and will find useful to improve their health or protect themselves from unhealthful conditions or activities; and
- advocacy of health-promoting behaviors that people in the target audience already know to be wise but are insufficiently motivated to do.
Some educators would reserve the term “health education” for the first of these two tasks, the simple provision of health information. But in practice, health educators have long since discovered that apathy, fatalism, and other motivational deficiencies are as damaging to health as ignorance is. And most have long since realized that an audience that can pass a test on smoking or sexually transmitted diseases but continues to smoke or have unprotected sex is not an educational success story. Effective educators know how to move their audience, not just how to inform it, and self-aware educators know that behavior change is the goal and information is only the means.
And so the term “health education” has come to include advocacy, coexisting with other terms (“health promotion,” “social mobilization,” etc.) that are more candid about the advocacy goals of health education.
How I use the terms
For me, the most important fact about risk communication is the incredibly low correlation between a risk’s “hazard” (how much harm it’s likely to do) and its “outrage” (how upset it’s likely to make people). If you know a risk is dangerous, that tells you almost nothing about whether it’s upsetting. If you know it’s upsetting, that tells you almost nothing about whether it’s dangerous.
Based on this distinction, I categorize risk communication into four tasks:
- When hazard is high and outrage is low, the task is “precaution advocacy” – alerting insufficiently upset people to serious risks. “Watch out!”
- When hazard is low and outrage is high, the task is “outrage management” – reassuring excessively upset people about small risks. “Calm down.” (Outrage management may also be called for when high-hazard events are over and have entered the “blame phase.”)
- When hazard is high and outrage is also high, the task is “crisis communication” – helping appropriately upset people cope with serious risks. “We’ll get through this together.”
- When hazard and outrage are both intermediate, you’re in the “sweet spot” – dialoguing with interested people about a significant but not urgent risk. “And what do you think?”
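The four tasks above amount to a small decision table. Here is a minimal sketch of that mapping in code – purely illustrative, since in real situations hazard and outrage fall on continua and the labels are judgment calls, not measurements:

```python
def risk_communication_task(hazard: str, outrage: str) -> str:
    """Map a (hazard, outrage) assessment to a communication task.

    Illustrative sketch only: "high", "low", and "medium" are
    judgment calls about a continuum, not measured categories.
    """
    if hazard == "high" and outrage == "low":
        return "precaution advocacy"    # "Watch out!"
    if hazard == "low" and outrage == "high":
        return "outrage management"     # "Calm down."
    if hazard == "high" and outrage == "high":
        return "crisis communication"   # "We'll get through this together."
    if hazard == "medium" and outrage == "medium":
        return "sweet spot dialogue"    # "And what do you think?"
    raise ValueError("combination not covered by the four-task model")

print(risk_communication_task("high", "low"))  # precaution advocacy
```

Note that the function deliberately refuses to classify combinations the model doesn’t name (say, low hazard and low outrage) – a reminder that the four tasks describe the situations worth communicating about, not every possible situation.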
How do my definitions map onto others’ uses of the terms “risk communication,” “crisis communication,” and “health education”?
Obviously, I have gone along with the broadened use of the term “risk communication” to apply to all kinds of communications about risk, not just reassuring ones. I use the term “outrage management” to mean what “risk communication” used to mean – reassuring people about small hazards. I have tried to imbue my approach to outrage management with respect for the audience and its reasons for overestimating the hazard. Good outrage management doesn’t mean telling people they’re foolish to be upset; it means listening to why they are upset and trying to change the things you’re doing that are making them upset. Still, the goal is to get them less upset – a goal that is honorable only if the hazard is actually small.
My use of “crisis communication” pretty much follows the non-PR convention: talking to people in situations that are crises for them. For PR people, any high-outrage situation is a crisis, because it threatens the client’s reputation. For me, a high-outrage situation is a crisis only if it is also high-hazard. It’s not enough for people to feel endangered; for the situation to be a crisis, they actually have to be endangered as well. But I use the term a little more broadly than most crisis communicators – to apply to individual as well as group risks, and to apply to chronic as well as urgent risks … any risk that is both upsetting and dangerous. In a recent consultation on autism, for example, I used the principles of crisis communication to explain how best to talk to the parents of a child who has been recently diagnosed as autistic.
I don’t use the term “health education” much. It’s all over my risk communication map:
- When the audience’s problem is mostly motivational, health education fits into what I call precaution advocacy – trying to get apathetic people more concerned about a serious risk. Example: talking to teenagers about the long-term health effects of obesity and why they should eat less and exercise more.
- When the audience is already motivated but lacks information, then health education is in my sweet spot – dialoguing with interested people about a significant risk. Example: explaining mosquito control techniques to a community that is trying to cope with its endemic malaria.
- When the risk isn’t just significant but very serious (and often urgent), and the audience knows it and is upset about it, health education turns into crisis communication – guiding appropriately upset people through a serious risk. Example: helping newly diagnosed AIDS patients master the complexities of their medication regime.
- And when the problem is that people are excessively worried about the recommended health precaution, health education requires outrage management – reassuring over-anxious people about a small risk. Example: persuading parents to vaccinate their children despite the possible side-effects of the vaccine.
Should you tell bystanders about a crisis (or a controversy)?
|Date:||September 28, 2008|
How essential is external communication for companies facing a crisis?
The answer to this question is so obvious I keep thinking it must be a trick question, or you must have some other meaning in mind. Of course external communication is essential for companies facing a crisis!
Two kinds of situations are often called a crisis.
The first is an actual crisis (high-hazard, high-outrage), when people are appropriately upset about a situation that is genuinely dangerous. In that situation, communicating with stakeholders who are upset and endangered is obviously essential. How else can you guide them through the crisis, validating their distress (outrage), helping them bear the situation and the way it makes them feel, and helping them choose wise rather than unwise precautions? See my Crisis Communication Index for more information on crisis communication.
The second kind of “crisis” occurs when people are upset about a situation they believe is dangerous – but you’re pretty sure they’re wrong. If you’re responsible for the situation, it may constitute a reputational and profitability crisis for your organization, and thus an economic crisis for shareholders, employees, and others who depend on your success for their income. But it’s not a real crisis for your external stakeholders. Communicating with people who are upset but not endangered is often called crisis communication by PR practitioners. I call it outrage management (low-hazard, high-outrage), and it’s covered in my Outrage Management Index. When people mistakenly think they’re endangered, communicating with them is the most important thing you do. How could you possibly calm their outrage and change the things you’re doing that are precipitating their outrage without talking (and listening!) to them?
So if “external communication” means communicating with external stakeholders who are upset, then clearly external communication is essential, regardless of whether your external stakeholders’ outrage is technically sound (crisis communication) or technically unjustified (outrage management).
In thinking about what your underlying question might be, I thought perhaps it was this: When some people are outraged by a situation, whether rightly or mistakenly, how essential is it to tell others who are neither outraged nor endangered what’s going on? When you’re busy trying to manage a crisis or quiet a controversy, in other words, should you or shouldn’t you clue in bystanders who are “external” to the situation? That’s a much tougher question.
My advice here is to go public.
There are exceptions. Sometimes the crisis or the controversy is extremely limited. Nobody else is affected; nobody else is aware; nobody else is likely to be affected or become aware later. Okay, keep your response within the family, work team, or neighborhood – within the circle of people who are affected or aware.
But my clients routinely imagine that the crises and controversies they are managing are smaller than they actually are. When you’re under attack, it’s rare to make a mountain out of a molehill. But it’s common to fail to notice that what you devoutly hope will remain a molehill is already growing into a mountain. And over-reacting to the situation isn’t only rarer than under-reacting; it’s also safer. In other words, when you try to confine your response to a crisis or controversy to the people you think are affected or aware, typically others become aware, consider themselves affected (or at least interested), and hold you accountable for not having told them sooner.
Telling the world about your small crisis
Suppose for example that you were managing Fonterra. This huge New Zealand dairy cooperative is 43% owner of a Chinese company named Sanlu. Sanlu distributed milk contaminated with melamine, sending thousands of infants and children, mostly Chinese, to the hospital with kidney stones. Sanlu reportedly knew about the melamine contamination as early as December 2007, and certainly by spring 2008. Fonterra didn’t find out until early August 2008 – and didn’t tell anyone for more than a month, when it tipped off the New Zealand government, which started pushing Beijing for a recall.
Fonterra executives say they kept quiet so long because they believed that staying “within the system” was the best way to persuade Sanlu executives and local government officials to get the contaminated milk off the shelves. They were thinking about nothing but the children, they insist.
Like most observers, I find that hard to believe. But I find it easy to believe that Fonterra management imagined the crisis was small and would remain small. Here’s the logic I suspect (and they deny) they were following:
- We stand the best chance of forcing a recall and protecting children if we blow the whistle publicly. But that will do substantial damage to Sanlu’s reputation, and thus to our Sanlu investment. It’s good for the children but bad for our shareholders (who have children too).
- If we try to manage the crisis quietly – encouraging a recall without raising a ruckus – we may very well succeed. The crisis may stay small, get under control, and go away. It might even be going away already; maybe only one batch was contaminated, and the harm has been done. Even if there’s more bad milk in the system that will hurt more children, our best bet is still to exert quiet pressure. That way we will be able to protect the children (though perhaps not as quickly as if we went public) and also protect our investment.
- If we try to manage the crisis quietly and fail, the crisis could expand. More children could be sickened, and the chances of a major scandal could increase – damaging not just our investment in Sanlu but also our corporate reputation and even our survival. But probably the crisis will stay small. Better to keep quiet.
I have no evidence that that’s how Fonterra figured. But it is certainly how many companies figure. They roll the dice, hoping they can manage a small crisis quietly and gambling that it won’t get bigger and become public knowledge. Quite often they turn out wrong. The crisis grows, the secret emerges, and their problems are multiplied. The original crisis ends up bigger than it would have been if the company had acted more quickly; more stakeholders are both endangered and outraged. And the company ends up facing a second reputational and profitability crisis over its failure to communicate honestly about the original crisis.
Telling the world about your small crisis is thus literally conservative. You forgo the possibility of solving your problem privately; you pay the price of widespread knowledge that you have a problem; in the process you insure against the far higher price of being discovered having let your problem grow while you kept it secret. I think just about everyone agrees that it would have been more ethical for Fonterra to blow the whistle on Sanlu. My point is that it would have been better business too.
I have to acknowledge a nagging possible exception. In the Fonterra/Sanlu case, and in most cases, going public about your crisis helps you manage the crisis; it has reputational costs to you but it’s obviously good for your stakeholders who are getting hurt. (And in the long term it’s good for you too.) But sometimes going public will exacerbate your crisis. In such a case, external communication threatens not just your reputation but also your stakeholders’ wellbeing.
This is a particularly vivid dilemma right now, as we watch our financial institutions implode. Experts point out that many of the companies going down the tubes this month could have weathered the storm if the people and institutions doing business with them hadn’t got wind of their problems, precipitating a more complicated version of the old-fashioned “run on the bank.” Economies are built on confidence – which is pretty much the same thing as saying finance is a confidence game. When confidence wanes, our fear that the house of cards will tumble becomes a self-fulfilling prophecy. I don’t understand economics nearly well enough to comment further. It is often said that transparency is essential to the stability and viability of modern economies. Obviously this is true in many ways; more transparency earlier about the risks of securitized mortgages and derivative instruments might well have prevented the crisis we’re facing now. But how much transparency should we want once the crisis starts to emerge? It’s looking pretty obvious right now that there are some ways transparency about a manageable economic problem can create unmanageable economic problems, undermining not just individual financial institutions but entire economies.
Having admitted a possible exception, let me stress that exceptions are rare. Most of the time, telling the world about your small crisis doesn’t make the crisis bigger. It does make the crisis more costly to your reputation in the short run; it feels like a bigger crisis when more people know about it. But transparency helps you manage the crisis, and transparency protects you from later accusations of neglecting and hiding the crisis. If you’re really, really confident that your crisis isn’t going to grow – that only a few people are affected, you’re helping them already, and nobody else knows or cares – fine, manage it quietly. But remember that you have a long string of predecessors who made that same judgment and turned out wrong.
Telling the world about your small controversy
Now let’s turn to the second kind of “crisis,” the low-hazard, high-outrage one. Nobody’s seriously endangered. But some people are really upset. You know what you need to do to mitigate their outrage: listen to their concerns and grievances, acknowledge the validity of some of what they’re saying, apologize for your errors and misbehaviors, change some things that are exacerbating their outrage, give them the credit they deserve for the changes they urged, set up better accountability mechanisms so they can watch you more closely, etc. Isn’t this just between you and your outraged stakeholders? Is there any need to tell anyone else?
In principle, no. In outrage management, I distinguish your stakeholders and publics according to how invested they are in the controversy, dividing them into “fanatics,” “attentives,” “browsers,” and “inattentives.” (See my column on “Stakeholders.”)
- The fanatics are outraged but probably unreachable, whether because they’re too outraged or for other reasons (ideology, for example – or maybe fighting you is their job).
- The attentives are also outraged, but their outrage is more ameliorable. A lot of outrage management is interacting with fanatics while attentives watch, in hopes of showing the attentives that you genuinely have taken the fanatics’ objections onboard and made major concessions, and now the controversy is getting boring.
- The browsers aren’t really outraged, just mildly interested, at least for now. They’re not paying a lot of attention to the controversy; unlike the fanatics and attentives, they’re not very skeptical about what they learn.
- The inattentives are sitting this one out.
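The segmentation above can be sketched as a simple classifier. The two attributes used here – outrage level and whether the person’s outrage is realistically changeable – are my shorthand for the distinctions drawn in the list, not a formal instrument:

```python
def stakeholder_segment(outrage: str, reachable: bool) -> str:
    """Classify a stakeholder for outrage management purposes.

    'outrage' is "high", "mild", or "none"; 'reachable' asks whether
    the person's outrage (or interest) can realistically be changed.
    Illustrative shorthand only for the four-way distinction in the text.
    """
    if outrage == "high":
        # Both fanatics and attentives are outraged; what separates
        # them is whether their outrage is ameliorable.
        return "attentive" if reachable else "fanatic"
    if outrage == "mild":
        return "browser"       # mildly interested, not yet skeptical
    return "inattentive"       # sitting this one out

print(stakeholder_segment("high", reachable=False))  # fanatic
```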
If you had your druthers, you would confine your outrage management efforts to the fanatics and attentives. Since the browsers aren’t interested or skeptical, there’s no reason to tell them you’re sorry you did X and you admit Y is a problem. You have to say those things to the fanatics and attentives in your effort to manage their outrage, but surely there’s no need to clue in the browsers as well!
There are two problems with this analysis.
First, most of your dealings with fanatics and attentives are going to take place with journalists in the back of the room. The journalists will inevitably report what you say to the browsers. (The inattentives will ignore that story.) So there’s simply no way to do good outrage management with the fanatics and attentives without cluing in the browsers. If you manage the attentives’ outrage well, the browsers are going to learn about X and Y too; that’s unavoidable collateral damage. Your only alternative would be to let the attentives’ outrage fester for the sake of keeping the browsers ignorant about X and Y – and that’s far too high a price to pay.
Moreover, you’re not the only one communicating with the browsers. The fanatics have their own communication effort going, trying to convince the browsers to get more outraged – that is, trying to convert them into new attentives. X and Y are some of their best ammunition. So in most cases the question isn’t whether the browsers are going to find out about X and Y or not; the question is whether you want them to find out from you or from the fanatics. That question almost answers itself. Smart outrage managers “inoculate” the browsers against the fanatics’ best (i.e., true) arguments by conceding the validity of those arguments early and often (and apologetically).
For both of these reasons, “private” outrage management works only when the controversy is extremely small. You don’t usually need to tell your neighbors about your fight with your spouse … or your apology to your spouse. There are no browsers in a tiny controversy, just participants and inattentives. But if a controversy is big enough that there are fanatics who are deeply committed to your defeat, attentives who are following closely to see if you have learned your lesson, and browsers who are tracking the situation casually, then it’s too big to keep the browsers from knowing the embarrassing things you’re admitting to the fanatics and attentives. Journalists are bound to tell the browsers what you admitted. So are the fanatics themselves. Therefore, so should you.
Framing effects research, the risk communication seesaw, and worst case scenarios
|Name:||Knut I. Tønsberg|
|Field:||Public relations for a government agency|
|Date:||September 25, 2008|
|Email:||kit (at) helsedirektoratet.no|
I have written a book in Norwegian about risk communication, making numerous references to www.psandman.com. Thank you very much!
I have also tried to find studies and empirical research supporting the seesaw principle. I found one study that presumably contradicts it – or does it? People were asked to imagine that they had lung cancer, and asked to choose between surgery and radiation. Some were presented with cumulative probabilities of “dying” rather than of “surviving.” When “dying” was used, the number choosing surgery dropped from 44% to 18% (McNeil, Pauker, Sox and Tversky, “On the Elicitation of Preferences for Alternative Therapies,” New England Journal of Medicine 1982, 306:1259–62).
Trying to find examples that contradict the seesaw principle or situations where talking about the worst case should be avoided could perhaps help us with more guidelines for the balancing act of “riding the seesaw.” Have you elaborated on situations when talking about the worst case should be avoided?
The study you cite is one of the classic studies of framing effects, a literature pioneered by Daniel Kahneman and Amos Tversky. In this particular study, patients, grad students, and physicians were all asked to think through how they might decide whether to have surgery for a hypothetical case of lung cancer. For all three groups, surgery was significantly more attractive when it was framed as offering a 68% chance of living for more than one year than when it was described as posing a 32% chance of dying within a year.
Framing effects are very important. The many framing studies that have been done teach us that equivalent ways of expressing a problem can have very different impacts on our choice of solution – one of many ways human information processing falls short of economic models of rationality. Thus a 68% chance of living feels more optimistic than a 32% chance of dying; an “escalation” feels more dangerous than a “surge”; etc. The huge impact of framing leaves communicators (and particularly risk communicators) only three choices:
- Ignore the data on which option the particular frame you pick is likely to encourage.
- Pick the frame that is likely to encourage the option you want to encourage.
- Use several counterbalancing frames in order to minimize any framing effects.
The third option is most objective and most respectful of the audience; the second option is most effective if persuasion is the goal; the first option is most common among doctors and other de facto risk communication practitioners, who have seldom mastered the principles of framing.
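The point at the heart of the framing effect can be stated arithmetically: the two frames in the study describe the very same statistic, yet produced sharply different choices. A minimal sketch, using the percentages reported above:

```python
# A 68% chance of living beyond one year and a 32% chance of dying
# within a year are arithmetically identical -- one fact, two frames.
p_survive_frame = 0.68
p_die_frame = 0.32
assert abs((1 - p_survive_frame) - p_die_frame) < 1e-9

# Yet the share of respondents choosing surgery differed sharply
# between frames (figures as reported in the study cited above):
share_choosing_surgery = {"survival frame": 0.44, "mortality frame": 0.18}
framing_shift = (share_choosing_surgery["survival frame"]
                 - share_choosing_surgery["mortality frame"])
print(f"shift attributable to framing alone: {framing_shift:.0%}")  # 26%
```

A 26-point swing in a major medical decision, driven by nothing but the wording of an identical probability, is why framing cannot be ignored by anyone communicating about risk.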
But I don’t see how framing is a counterexample to the risk communication seesaw.
The seesaw principle says that when people are ambivalent, they tend to emphasize the side of their ambivalence that is inadequately represented in the communication environment. Suppose we continued the framing study you cited by identifying respondents who were torn between surgery and radiation, and asking this subgroup to imagine a conversation with an opinionated friend. The seesaw principle predicts that if the friend urges surgery, all the reasons for sticking with radiation will come to mind, and respondents will become more inclined to pick radiation – whereas if the friend pushes hard for radiation, ambivalent respondents will rebound toward surgery.
Obviously, sometimes ambivalent people abandon their autonomy and just do whatever somebody else recommends; this is quite likely if a trusted doctor strongly urges surgery or radiation. But when we are trying to make up our own minds and have powerful inclinations in both directions, a one-sided argument often boomerangs.
The seesaw and framing are both powerful communication phenomena. The two are neither identical nor antithetical.
Like you, I have searched for empirical research – especially risk communication research – bearing on the seesaw concept. Like you, I haven’t found any. I referenced some of the non-empirical literature in a 2007 Guestbook entry on “Origins of the risk communication seesaw principle.” (That was in response to a comment from you! Your interest in the seesaw and your frustration at the absence of empirical data on it haven’t diminished.)
Your comment also mentions worst case scenarios, and asks whether there are situations when it’s better not to tell people how awful things might get. One obvious example comes to mind: when your audience is fragile and very frightened already, and you’re worried that talking about the worst case might propel them into denial. I’m sure there are other exceptions as well. But they’re exceptions. The “rule” in pre-crisis communication is to split your attention about equally between the likeliest outcomes and the most alarming ones that aren’t vanishingly unlikely. As my column on worst case scenarios argues, that’s what you should do when you’re trying to alarm people, and it’s also what you should do when you’re trying to calm people. The last paragraph of the column reads:
And so we have come full circle. My advice to those who wish to warn us is to acknowledge how unlikely the worst case scenario is, even as they insist that it is too awful to bear. My advice to those who wish to reassure us is to acknowledge how awful the worst case scenario is, even as they insist that it is too unlikely to justify precautions. If both sides do good risk communication, they’re going to come out sounding a great deal more alike than they usually do today.
The seesaw goes a long way toward explaining why that advice is sound.
Should I have endorsed Obama on this website? Did I?
|Field:||Dentist/former naval aviator|
|Date:||September 22, 2008|
I thought I would give you some feedback as your site wisely requests same.
While I was on active duty and in the Navy Reserve, one of our tenets was to avoid discussions of any sort about the “Big Three” – politics, religion and sex – in the course of business or even in the wardroom. The obvious reason: People have emotional feelings about all three that are rarely influenced by rational discussion.
A number of people I talked to about your work objected to your endorsement for Obama on your website. [See “Risk Communication Talking Points for Hillary Clinton: Some Primary Principles for This Post-Primary Moment.”] The objection was not so much about who your endorsement was for. People felt that it was not a good decision to make any endorsement in your business capacity (especially as most small business people are Republicans – primarily over the tax and regulation issues); also, it was felt that some readers might object for emotional reasons.
I have my own choice for President, but I am not telling any of my patients.
You’re right, of course, that my political preference is nobody’s business. Nor is my opinion on anything else, except on risk communication.
But occasionally when writing about the risk communication implications of something, I decide that I have to acknowledge some substantive opinion so people can assess my risk communication opinion in that context. So when Jody Lanard and I wrote a column on the risk communication challenge facing Hillary Clinton as she sought to endorse Obama (assuming she meant it), we felt we had to tell readers which candidate we preferred personally.
It wasn’t meant as an endorsement of Obama, but as an acknowledgment of possible bias as we analyzed what Clinton ought to do.
The problem comes up fairly often. When I write about the risk communication implications of the vaccination/autism controversy, for example, I feel obliged to say that I think getting vaccinated is safer than not getting vaccinated.
Of course politics is arguably different. Or maybe not; many anti-vaccination activists feel far more strongly about that than about any political candidate ever.
I do appreciate your feedback. I will think further about the right balance between acknowledging private opinions that might influence my risk communication judgments and letting private opinions intrude on my risk communication articles.
The uncertainty of science
|Name:||Knut I. Tønsberg|
|Field:||Public relations for a government agency|
|Date:||August 4, 2008|
In the 6th point of the “Responding to Rumors” section of your column on “Rumors – Information Is the Antidote,” you write: “Good science is always tentative, and so is good risk communication.”
Risk communication is tentative – that’s understandable, and it’s one of your basic ideas as I understand them. Could you please elaborate a little on why “good science” is also tentative? Most people would perhaps reply that good science gives us facts, principles to hold onto – that science is not tentative but universal.
Science is indeed the pursuit of “universal facts and principles.” But scientific findings are always a rough draft of those universal facts and principles, subject to amendment as new data identifies errors or exceptions, different facts or better principles.
Non-scientists tend to miss this important truth, seeing science as definitive rather than tentative. That’s partly our hunger for certainty at work. But it’s mostly the fault of scientists, who make two fundamental risk communication mistakes again and again:
- Scientists try to augment the authority and prestige of the scientific enterprise by implying that science builds on a firm foundation of prior science. They know better. In fact, science builds on parts of the foundation; it keeps tearing down other parts to rebuild them better.
- Scientists try to augment the authority and prestige of particular scientific claims by implying that those claims are firm. Once again, they know better. When they write for their peers, they are at pains to address the competing claims of other scientists. But when talking to the public, they all too often ignore those claims or disparage them as not scientific at all. The more controversial the claim, unfortunately, the likelier a scientist is to commit this sin on behalf of his or her side of the controversy.
Falsifiability is the fundamental premise of all science. If there is no possibility that you could be proved wrong by new data, you’re in the realm of faith, not science. If there is such a possibility, your findings are tentative by definition. (This is an oversimplification of Karl Popper’s notion of falsifiability, but I think it captures the essence of his point.)
Of course some scientific claims have stood firm for centuries, and are likely to stand firm forevermore. Likely – but not guaranteed. Scientific claims that stood firm for centuries occasionally crumble; consider what Einstein did to Newtonian physics.
Many scientific claims have never stood firm. Rather, they are endlessly under attack. Even claims that a majority of scientists consider well-established frequently coexist with a competing viewpoint advanced by a scientific minority. If 90% of the evidence supports one position and only 10% supports the other, most scientists are likely to go with the 90% – and a smart layperson should too. But in science as in horse-racing, sometimes the long shot comes from behind and wins. And depending on the cost of erring in each direction, sometimes it makes sense to hedge your bets – for example, to take precautions against a horrific risk even though most (but not all) experts believe it can’t happen.
As a practical matter, moreover, new science is extremely tentative. Different scientists often advance competing and mutually incompatible claims simultaneously, all of them grounded in data. Further science must determine which of these claims are valid and which are false, or must find a new interpretation that integrates them and shows that they’re not mutually incompatible after all. Sometimes that further science turns up in a matter of weeks or months. Other times that further science relies on methodological or conceptual breakthroughs that haven’t yet occurred; we can (and often do) wait for decades without a good basis for choosing among competing scientific claims.
Even if there are no competing claims, a new scientific claim has “stood firm” for minutes, not for centuries. At the very moment it is published a half-dozen other scientists may be hard at work on relevant studies that can confirm or disconfirm the new study – or, most likely, complicate its interpretation without quite confirming or disconfirming it.
Wise scientists therefore take a new finding with more than a grain of salt, waiting to see whether additional studies will emerge that support or rebut it. Wise laypeople should do the same.
And good risk communicators should help them do so – which is why communicating about uncertainty is so important. (See my 2004 column on “Acknowledging Uncertainty.”)
The uncertainty of scientific findings and the inevitability of scientific controversies don’t have to lead to paralysis. In fact, once you understand that all science is uncertain, it becomes clear that uncertainty isn’t an acceptable reason (or excuse) for inaction.
In coping with scientific uncertainty, the layperson has a fourfold job:
- Figure out what the weight of the evidence says. Or if that’s too difficult, figure out what the majority of the experts think – bearing in mind that their judgments may be biased or simply mistaken.
- Figure out how much residual uncertainty there is. Science is always uncertain – but there’s a real difference between the theoretical uncertainty of “all claims are falsifiable” and the practical uncertainty of “this claim is brand new and hotly debated.”
- Figure out what the cost of being wrong in each direction is likely to be. Often (but not always) over-caution about a risk does less damage than recklessness. When the disparities in outcome are big and the choice is uncertain, it’s not stupid to choose the option that won’t kill you if you’re wrong.
- Figure out whether you can afford to wait for new evidence that may reduce (or increase) the uncertainty. If new evidence is expected soon and the situation isn’t urgent, fine, wait. But when the time comes to act, take your best shot despite the uncertainty. And then keep looking for new evidence; you may want to change your mind or reverse your course.
The risk communicator’s job is to help with these four tasks – or at least not to make them harder by implying that science is certain or that action should wait for certainty.
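The third and fourth tasks – weighing the cost of being wrong in each direction, and deciding whether to act now or wait – amount to a small expected-cost comparison. Here is a minimal sketch; all the numbers are hypothetical, chosen only to illustrate the logic:

```python
# A minimal expected-cost sketch of "figure out what the cost of being
# wrong in each direction is likely to be." All numbers are hypothetical.

def expected_costs(p_risk, cost_if_unprepared, cost_of_precaution):
    """Expected cost of taking precautions now vs. doing nothing,
    given the probability that the alarming claim turns out true."""
    act = cost_of_precaution             # you pay for precautions either way
    wait = p_risk * cost_if_unprepared   # you pay the big cost only if the risk is real
    return act, wait

# Suppose only 10% of the evidence supports the alarming claim, but being
# caught unprepared would cost 1000 units and precautions cost just 50:
act, wait = expected_costs(p_risk=0.10, cost_if_unprepared=1000, cost_of_precaution=50)
print(act, wait)  # taking precautions is cheaper in expectation here
```

On these made-up numbers, hedging toward precaution wins even though the majority view says the risk won’t materialize – the point above about taking precautions against a horrific risk even when most (but not all) experts believe it can’t happen.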
As if all this weren’t difficult enough, I need to add one final factor. People’s decisions are rightly and inevitably grounded in more than just scientific evidence. Values play a role. So does outrage. So do a variety of other aspects of the situation. (See my January 2008 column on “Who’s Irrational: When People ‘Ignore’ Risk Data.”) Risk communicators not only have to help people cope with scientific uncertainty. We have to help them integrate what they know about the science with the rest of what they know, think, and feel.
A mercury risk the regulators are more worried about than the community
|Field:||Volunteer activist, geologist|
|Date:||July 14, 2008|
My neighborhood is an extremely charming old mercury mining town, New Almaden. The EPA, the California Regional Water Quality Board, and several other organizations are naturally worried about mercury pollution coming from our town, and the park behind it where most of the mines were located.
Several years ago the County Parks Department dug up, dumped and buried tons of dirt that had more mercury (in several different chemical forms) than was judged to be safe. The main fear seemed to be that park visitors would sue the county if they went hiking, claiming their medical problems were caused by their visit to the park. People in this town haven’t shown any clusters of diseases, so they weren’t too worried. The “cleanup” was pretty messy. It left scars in the park and distributed mercury in the town. Also, there was no subsequent reduction in methylmercury in fish.
Methylmercury is the really toxic form of mercury that resulted in deaths around Minamata Bay in Japan. It is produced in nature by anoxic bacteria in the stagnant bottom waters and soils of reservoirs, lakes and bays.
The three reservoirs around Almaden were found to have fish with high levels of methylmercury. A diverse group of people – water district employees, fish and game people, a geologist, and others – was convened to decide on an action plan called a Total Maximum Daily Load plan, or TMDL.
No one wanted to take responsibility for writing up the TMDL, so the state water board staff volunteered to do it. When the TMDL came out, it required that the suspended sediments have less than half as much mercury (of all chemical species) as the background level in non-mining areas, and far less than the EPA standard for methylmercury in fish. It also required that property owners along the creek test their property for mercury, and for erodibility. This requirement caused rather a stir among the few people who were aware of it, especially the two who had property along the creek.
Tonight, we have a meeting with the principal author of the TMDL. We want to convince her to table the plan in the light of new evidence that the water circulators installed almost two years ago have reduced the amount of methylmercury in one reservoir by 95%. The fish have not yet been tested to see if they also show a reduction in methylmercury. If they do, the TMDL plan should be rewritten to require dissolved oxygen circulators in all of the stagnant areas of our watershed, and hold up on the far more expensive work of moving and burying dirt.
Tonight I will try to make sure everyone’s opinions are heard, and will make my own as clear as possible – or as my brother says, “Don’t sound like a wacko.”
I just picked up your comment – and it sounds like your TMDL meeting is probably over by now. Are you still interested in a response from me?
I found the mercury debate in New Almaden (as you described it) fascinating. If you decide to post an updated riskcomm-related comment/question on this situation – one that isn’t so time-sensitive – I would be delighted to respond.
And even if you decide not to, I’d love to know how the meeting went!
robbie lamons responds:
Our meeting did not go well, but we learned some things about the other side, and they learned some things about us.
They had not come to the meeting to receive comments, or to discuss changes to the document, because (they told us) as bureaucrats, they cannot accept comments or suggestions after the comment period has ended. So no negotiations could take place. They came only to try to get the community to accept their action plan. The only hope that property owners have is that many of their deeds say that they don’t own the mineral rights (and we hope they don’t own the responsibilities either).
The engineer who wrote the document had a pie-in-the-sky idea of how the community could get the U.S. government to pay for the cleanup. She did not understand that much of the huge expense of cleaning up the sediments may not be necessary because circulating dissolved oxygen using solar-powered circulators or river power has taken 95% of the methylmercury out of the water. (Methylmercury in fish is the entire reason for the cleanup.) Her own graph shows that methylmercury levels in fish are high in reservoirs and low in the streams above and below them. The total mercury bedload is highest in the stream below the reservoir.
She said to me, “You and I are going to disagree forever.” Well, that is true. She was there with her new boss, a geologist. Part way into the meeting she said to him, “I tried running this meeting.” Then she said to one of the community members, “And now you’re raising your hand. I give up.”
Her boss apologized to her! He said they would not delay the approval of the plan so that fish could be tested in the reservoirs that have had lower methylmercury levels for one and a half years. It almost seems as if the fish haven’t been tested on purpose.
We all left the meeting frustrated and nervous. The geologist thanked me for shaking his hand! Why was that? Did he feel guilty?
Our next negotiating strategy choices seem to be limited. We could notify the media, get the fish tested, talk to our legislators, talk to the board members who have to approve the plan, go over their heads to the California and national EPAs, or wait for it to pass and then sue them.
Have you got any ideas for changing the path of a bureaucracy?
Your situation in a nutshell: The bureaucrats are convinced that your community’s mercury problem is a serious hazard. You’re pretty sure the mercury hazard is low and controllable with much less costly measures than the ones they’re proposing. You believe their remedy poses a higher hazard (especially a financial hazard) to the community than the mercury problem itself.
I’m not qualified to judge which of you is right about the mercury hazard. It’s not rare for communities to be insufficiently concerned about a genuinely serious risk, especially if the remedy poses a financial threat. Nor is it rare for government agencies to overreact to a small risk, especially if they’re under pressure or excited about the chance to write new regs. So there’s a long history of controversies where outside bureaucrats want tougher environmental protection than the locals.
Of course there is also a long history of controversies where communities are convinced some contaminant endangers everybody’s health, while regulators keep trying to tell them the risk is minimal. That pattern is a lot more frequent than the one you’re experiencing. But neither is rare.
What’s most interesting from a risk communication perspective is the way outrage is playing out in this controversy. Whether or not your mercury problem is technically serious, it’s easy to see some of the reasons why the problem is low-outrage for most community people. Most importantly, it’s familiar – part of the comfortable and perhaps even revered history of your “extremely charming old mercury mining town” (to use the words of your first comment). And the last time bureaucrats tried to do something about it, they scarred your park and spread mercury through your town. So you’re disposed to experience the remedy as a bigger outrage than the problem … and thus to suppose that the remedy is also a bigger hazard than the problem.
Judging from your description of your recent meeting, the plan’s sponsors are acting in ways that are pretty much guaranteed to exacerbate your outrage at them, and your conviction that they’re determined to disrupt your community with arbitrary overregulation. Their disinclination to listen to your concerns, their lack of interest in your data on the effectiveness of dissolved oxygen, and their unwillingness to assess methylmercury levels in the fish were all examples of very poor outrage management.
I also get the impression from your second comment that they were experiencing their own high outrage – at you – which obviously made it much harder for them to do a decent job of addressing your outrage at them.
Despite your description of your strategic choices as “limited,” your list of possible next steps looks pretty impressive to me. It might help to organize your options into three categories:
- Strategies aimed at arousing more local support for your campaign against the plan – greater use of the media, for example. Your goal here would be to increase your neighbors’ outrage (anger) at bureaucratic arbitrariness and intransigence, as well as your neighbors’ outrage (fear/concern) about the damage an unreasonable and unnecessary mercury cleanup could do to your community.
- Strategies aimed at overturning the new plan by going over the heads of its sponsors – to the state water board honchos, to the legislature, to the state and federal EPAs, to the courts. Most of the strategies on your list so far are in this category. This approach may or may not be compatible with the first approach, arousing more local outrage – it depends on whether you have a better shot at getting the plan reversed through public pressure or through quiet diplomacy.
- Strategies aimed at changing the minds of the plan’s sponsors. It may be too late for this; the die may be cast as far as the plan’s sponsors are concerned, either because they have committed themselves legally or because they have backed themselves into an emotional corner. But you ought to consider whether there might still be time to try to reduce their outrage and seek reconsideration of the plan at their level. Note that this approach is incompatible with the other two; arousing stronger community opposition to the plan and going over the heads of its sponsors will almost certainly solidify their unwillingness to reconsider.
One possible step toward reducing the outrage both sides are feeling would be to offer to conduct a collaborative study to measure methylmercury in the fish. If you do that, I would advise negotiating the implications of the study results before you launch the study. Agree in advance what findings would justify changing the cleanup plan, and what findings would justify reaffirming it.
Evaluating risk communication
|Field:||Environmentalist, Division of Solid and Hazardous Materials, New York State Department of Environmental Conservation|
|Date:||July 10, 2008|
|Location:||New York, U.S.|
I read your paper on “Risk Communication: Evolution and Revolution.”
In the Appendix, the EPA’s “Seven Cardinal Rules of Risk Communication,” a portion of #7 reads: “Risk communication will be successful only if carefully planned and evaluated.”
My question to you is this: In evaluating risk communication, does one give a test to the audience? If yes, does one give a pretest, and another test at the end of the presentation/project?
Two distinctions are crucial in thinking about evaluating any sort of communication, including risk communication.
- The distinction between methodologically rigorous evaluation research and seat-of-the-pants, cut-the-corners-you-have-to-cut pragmatic evaluation.
- The distinction between evaluating pieces of a communication campaign you’re still drafting (“formative evaluation”) or still conducting (“process evaluation”) and evaluating the whole campaign after it’s over (“outcome evaluation”).
Formative evaluations and process evaluations almost always cut methodological corners. Outcome evaluations usually do too. There is rarely enough budget to do a rigorous evaluation.
I am a big, big believer in “sloppy” formative and process evaluation, rather than no evaluation at all. Early in the design of a campaign, it’s important to talk about your preliminary plans with people who are representative of your target audiences. A bit later, when you have some draft messages, it’s important to try them out – once again on groups typical of the people you’re trying to reach. And later still when the campaign is ongoing, it’s important to do some testing to guide your mid-course corrections.
Formative and process research typically makes use of focus groups – a methodology purists rightly insist is appropriate for generating hypotheses but not really for testing them. The samples are “samples of convenience” – that is, people it was easy to round up and talk to – rather than random samples of the populations you’re targeting, so statistical generalizations about those populations are unjustified.
Nonetheless, the difference between sloppy evaluation and no evaluation at all is bigger, in my judgment, than the difference between sloppy evaluation and rigorous evaluation. Even in a crisis, when there’s obviously no time (and probably no budget) for rigorous research, you can still take a few hours to test your draft messages on whoever’s handy and revise based on what you learn. And once you have started rolling out your campaign, you’d be crazy not to make some effort to see what’s working and what’s backfiring, and then adjust accordingly.
For more on formative and process evaluation that doesn’t necessarily dot all the methodological i’s and cross all the methodological t’s, see the “Evaluation and Coaching” summary my wife and colleague Jody Lanard prepared as part of the development of the World Health Organization’s outbreak communication guidelines.
Although formative and process evaluations almost always need to cut some methodological corners, try hard not to cut the corners that matter most. In message testing, for example, I think it’s crucial to test what you really want to know. Asking people how much they liked a message or measuring how much information they learned from it is a very poor stand-in for whether the message made them more inclined to take a specific action (quit smoking, say, or get their flu shot). Similarly, message testing often needs to be done in the context of other messages likely to be emanating from other sources.
Both of these issues come up again and again in risk communication message testing. Typically, I will urge clients to acknowledge some information they’d prefer not to mention (prior errors, current uncertainties, the seriousness of the risk, etc.). A focus group test of my recommended message against the client’s preferred message will very likely find that people would rather not hear disquieting information. For example, people may well “like” over-reassuring messages more than appropriately alarming ones. But over-reassuring messages don’t “work” if the goal of the campaign is to motivate people to take precautions. And when over-reassuring messages are tested in the context of alarming information from other sources (embedding the test messages in a mock radio newscast, for example), people’s preference for over-reassurance dissipates, and its devastating impact on trust and credibility becomes clear.
I am focusing on formative and process evaluation because I believe they are so important. Your question, however, focused on post-campaign outcome evaluation.
Here methodologically sophisticated studies may be somewhat more feasible. But only somewhat. To evaluate a campaign rigorously, you obviously need a control group. It’s usually possible to randomly assign target communities to two groups: those that receive your campaign messages and those you skip. When the campaign is over, if the targeted communities differ systematically from the skipped ones in ways relevant to your campaign, that’s pretty solid evidence that the campaign was responsible for the differences. Similarly, you can develop three or four different campaigns, randomly decide which communities get which ones, and test the differential effects.
But this means lots of communities aren’t getting your best shot. They’re getting the “alternative” campaign you’re trying to prove works less well, or they’re getting no campaign at all. And it means a big share of your budget is going into two peripheral tasks: putting out suboptimal campaigns, and testing to see if they really worked less well than your best shot.
The long-term payoff of careful evaluation research is immensely worthwhile. (See my Precaution Advocacy Index for some of the evaluation research Neil Weinstein and I have done (with colleagues) on radon communication campaigns.) If your goal is figuring out how to do more effective risk communication about smoking or HIV or radon, rigorous evaluation research is the way to go.
But if you think you already have a handle on how to do effective risk communication, and your goal is actually warning people about smoking or HIV or radon, then evaluation isn’t your core task. You’ll probably end up doing less-than-rigorous evaluation so you can focus more of your efforts on getting the word out.
Once you decide to do a rigorous study (if you do), one of many questions you will face is the one you mentioned: whether to do a pretest or not. As you probably know, there are pros and cons to a pretest-posttest study, as opposed to posttest-only. Posttest-only relies on randomization to assure that the groups started out equivalent. Pretest-posttest measures how they started out, which makes it easier to show a change as a result of your campaign. But the pretest sensitizes people to the campaign, and thus might alter its effect. The methodological gold standard is the so-called “Solomon four-group design,” which merges the two – but it’s rare to be able to afford it.
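The random-assignment step behind these designs can be sketched in a few lines – here the four cells of the Solomon four-group design, which crosses campaign/no-campaign with pretest/posttest-only. The community names are hypothetical:

```python
# A sketch of randomly assigning communities to the four Solomon cells:
# two groups get the campaign, two don't, and each of those pairs is
# split into a pretested half and a posttest-only half.
# Community names are hypothetical.
import random

communities = ["Ashford", "Brookton", "Carverville", "Dunmore",
               "Eastlake", "Fairpoint", "Glenbury", "Hartwell"]

random.seed(42)  # fixed seed so the assignment is reproducible
shuffled = communities[:]
random.shuffle(shuffled)

groups = {
    "campaign + pretest":         shuffled[0:2],
    "campaign, posttest only":    shuffled[2:4],
    "no campaign + pretest":      shuffled[4:6],
    "no campaign, posttest only": shuffled[6:8],
}
for condition, towns in groups.items():
    print(condition, towns)
```

Comparing the two pretested cells against the two posttest-only cells is what lets the Solomon design estimate how much the pretest itself sensitized people to the campaign.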
Some of the issues that arise when testing risk communications in particular are discussed in a 1993 article I wrote with Neil Weinstein, “Some Criteria for Evaluating Risk Messages,” Risk Analysis, 13:1, pp. 103–114. It’s fairly heavy reading (I wouldn’t have attempted it without Neil), and it’s not on this website.
The dangers of excessive warnings … and of over-reassurance
|Date:||July 4, 2008|
I am a student of health communication and am currently working on a report on the avian influenza outbreak in India.
I was wondering if people can be prone, with frequent episodes of avian influenza, to becoming apathetic in response to warnings. The heuristic would be something like this: The authorities and the media warned about the deadly outbreak; we were advised not to eat chicken and egg; and nothing happened – there were no human casualties. [Therefore there is nothing to worry about.]
Is this scenario likely? What precautionary measures can risk communicators take to prevent such a scenario?
People notice when warnings don’t come true, and if warnings repeatedly fail to come true, people sometimes stop taking them seriously. This phenomenon appears in popular culture (in the U.S., anyway) as “the boy who cried wolf,” and in the research literature as “warning fatigue.”
What’s most noticeable about warning fatigue, however, is how weak it is. When weather forecasters warn that a hurricane is coming, most people in the predicted path prepare; when the hurricane changes course, most people are relieved; the next time there’s a hurricane warning, most people prepare again. Similarly, activists have long known that they’re pretty safe warning that a particular industrial facility is likely to explode or its emissions are likely to increase the cancer rate; if the explosion doesn’t happen and the cancer rate stays stable, the activist group simply moves on to the next issue, undeterred and undamaged.
There’s a good reason why warning fatigue is weak. People intuitively understand that a false alarm is a much smaller problem than a disaster they weren’t warned about. We understand that it’s a minor irritation if a smoke alarm goes off when there’s no fire, but a catastrophe and a scandal if there’s a fire and no alarm. So we calibrate smoke alarms to be oversensitive; we tolerate their going off too often in order to be fairly confident that they won’t miss a fire. We “calibrate” activists and weather forecasters to be similarly conservative in their warnings – that is, to err on the alarming side.
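The smoke-alarm calibration point can be expressed as a threshold choice: because a missed fire costs vastly more than a false alarm, the cost-minimizing calibration is deliberately oversensitive. A sketch with hypothetical numbers:

```python
# Hypothetical cost model for smoke-alarm calibration: a higher threshold
# means fewer false alarms but more missed fires.

def yearly_cost(threshold, false_alarm_cost=1, missed_fire_cost=100_000):
    """Expected yearly cost at a detection threshold between 0 and 1."""
    false_alarms = 10 * (1 - threshold)   # lax threshold -> more false alarms
    p_missed_fire = 0.001 * threshold     # strict threshold -> more missed fires
    return false_alarms * false_alarm_cost + p_missed_fire * missed_fire_cost

print(yearly_cost(0.9))  # "quiet" alarm: rare false alarms, costly misses
print(yearly_cost(0.1))  # oversensitive alarm: noisy but much cheaper overall
```

With these made-up costs, the oversensitive setting wins by a wide margin – exactly the asymmetry that makes warning fatigue a smaller danger than failure to warn.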
Still, warning fatigue does happen. And warnings have other costs too; picture a major airport shut down for hours because of a bomb threat. The authorities shouldn’t warn us any more often or more urgently than necessary in order to protect us.
And when they warn us, they should be candid that the warning may turn out unnecessary. The hurricane may change course. The factory may not explode. The bird flu outbreak may be quickly contained without any human fatalities. In the last section of my column on “Worst Case Scenarios,” I make the case that warnings that dramatize the magnitude of a risk are less vulnerable to warning fatigue than those that overstate its probability. An insurance salesperson who reminds you how awful it would be if your house burned down is likely to get more renewals than one who keeps insisting your house will probably burn down.
All Warnings Begin and End in Apathy
Warning fatigue is all about the danger that the authorities will sound the alarm, nothing will happen, and people will respond to the false alarm by becoming resistant to future warnings and apathetic about the risk.
Okay, but suppose the authorities sound the alarm and then the thing they warned against does happen. People are upset – grateful to have been warned, but unhappy to have a new risk to cope with. And then things get better. The crisis has passed. For a while, people are on guard, worried that it might happen again and interested in figuring out what precautions and preparations are appropriate. Then newer problems arise and memories start to fade. Pretty soon people are apathetic again – not as apathetic as they were in the first place, but a lot more apathetic than they were at the height of the crisis, or even at the height of the pre-crisis warnings. The “New Normal” includes the risk of another such crisis … which won’t feel quite as much like a crisis as the first one did.
Or suppose it’s a chronic risk rather than an acute crisis. It comes and stays; it becomes endemic. For a while, people are focused on trying to get rid of it. Once they start to realize they can’t, they pay some serious attention to learning to live with it. And then they get used to it. Once again, apathy reasserts itself. The “New Normal” routinizes the new risk, which doesn’t feel so risky anymore.
In short, all warnings begin and end in apathy. If the bad thing doesn’t happen, people get apathetic again. If it happens and ends, people get apathetic again. If it settles in forever, people get apathetic again. Apathy is the default position. What varies is how much people learn, and put in the back of their minds, during the warning phase and the crisis phase – before they become apathetic again.
An effective warning rouses people out of their apathy for a while. They go through an “Oh my God!” moment, an adjustment reaction. During the adjustment reaction, people rehearse for the crisis to come – cognitively, emotionally, logistically, and socially:
- They pay a lot more attention to the new risk, watching for signs that it’s approaching.
- They have an emotional reaction – mostly concern/fear about the risk itself, but sometimes mixed with irritation at the authorities for burdening them with a new problem.
- They learn about the risk and the recommended precautions, decide how best to prepare, and start preparing.
- They may communicate their own suggestions – at least to family and friends, and maybe even to the authorities.
- They may take some premature precautions, as if imagining that a future crisis were already happening.
The adjustment reaction to warnings doesn’t last long. If a little time passes and nothing bad has happened yet, most people stop worrying. They put the problem out of their minds. They relax their vigilance. They forget some (but not all) of what they learned, and abandon some (but not all) of their preparations. And yes, some people scoff at the original warnings, complaining that the authorities got people all upset for nothing.
But post-warning apathy is less apathetic than pre-warning apathy. If the predicted crisis eventually arrives, people will find it less shocking than if they had never been warned, and will gear up more quickly and more effectively to cope with it.
And then after it is over, or after they’re used to it, they’ll go back to the default position: apathy. That’s why it’s so important to use a crisis as a “teachable moment.” There is a narrow window of opportunity to debrief the lessons learned and lock in commitments to prepare better for next time before everyone (including the authorities) loses interest.
Over-Reassurance: Warning Fatigue’s Evil Twin
So if you warn people, you stave off apathy for a while, and then it returns (though not completely) – whether what you warned them about happens or not.
What if you don’t warn people? What if you warn them insufficiently? What are the effects of official over-reassurance, over-optimism, and failure to warn?
I have written about this pretty often before. See particularly “Tsunami Risk Communication: Warnings and the Myth of Panic,” written with my wife and colleague Jody Lanard.
In a nutshell, here’s what happens when officials over-reassure:
- Before we know whether things are going to get bad or not, people who are unaware of the risk or who are very trustful of the authorities are reassured by official over-reassurances. They stay calm (presumably the goal), but too calm: They fail to get ready. They don’t have an adjustment reaction, don’t rehearse emotionally, don’t prepare logistically.
- People who are more aware of the risk and more inclined to be mistrustful have a seesaw reaction to official over-reassurances. Even though nothing bad has happened yet, they smell a rat. So they become all the more concerned – in a particularly undesirable way. They feel abandoned by the authorities, left alone with their fears, misled and patronized. So they are likely to overreact (because the government is under-reacting), and likely to think things are worse than they are. Paradoxically, they might even panic (though panic is rare even in the face of official over-reassurances).
- If the situation turns out benign in the end, officials will probably get away with their failure to warn. Those who bought into the official over-reassurances and took no precautions feel they were well-led. Those who smelled a rat rightly feel the officials just got lucky this time – but they’re unlikely to be able to generate much outrage at official misbehavior that turned out okay. Officials, of course, feel all the more encouraged to over-reassure again the next time some risk arises.
- If the situation deteriorates and the officials’ over-optimism is proved wrong, on the other hand, mistrust runs rampant. Those who suspected as much all along are confirmed in their judgment that they’re on their own. Those who swallowed the over-reassuring official line must cope not only with their own unpreparedness for the situation at hand but also with their feelings of betrayal by the authorities. Just when people most need official guidance on how to handle the crisis, officials have forfeited their credibility and thus much of their ability to lead the public.
- Over the long haul, a government that has been routinely over-reassuring in the past faces a new risk without essential tools. If the new situation is genuinely not very serious, it finds itself unable to be convincingly reassuring. (After all, it sounds the same when something is serious as when it is not.) By under-warning about serious risks in the past, it has set its people up to overreact to small risks now. And if it now tries to warn people about a serious risk, it faces confusion and incredulity. What does it mean when a normally over-reassuring government suddenly says something alarming? Is the situation so dire that officials finally had to admit there’s a problem? Or do they have ulterior motives for being falsely alarming now, just as they were falsely reassuring in the past?
Official over-reassurance leading to mistrust is a bigger problem than official over-alarm leading to warning fatigue. A much, much bigger problem. It’s bigger in general. It’s bigger in communications about bird flu. And it’s bigger in India, whose government (like most governments) has a long history of being publicly over-reassuring about many risks.
Bird Flu Warnings
Consider the announcement of India’s first known outbreak of the H5N1 bird flu strain, in Maharashtra in 2006. This isn’t a horrible example, just a typical one.
The first announcement was, by definition, delayed. The government announced confirmation of H5N1 in birds. That means that several days earlier, the government had to have known (and not said) that there was some deadly bird flu strain killing local poultry. A couple of days after that, they had to have known that it was H5 (because it takes less time to identify the “H” than the “N”). Only after the lab tests were complete for both the hemagglutinin and the neuraminidase did the authorities announce the presence of bird flu.
This means they missed the chance, for several days, to warn poultry farmers to ramp up their biosecurity precautions in order to reduce the risk of spread. It also means consumers were buying potentially diseased birds without having been warned.
On the other hand, once they made the announcement the Indian authorities did not understate the seriousness of the outbreak. One early Reuters story (February 18, 2006) quoted local animal husbandry minister Anees Ahmed as saying, “We are treating it as an emergency.”
This is much better than India’s risk communication during Delhi’s 2005 human meningococcal disease outbreak. As that situation got progressively worse, officials repeatedly insisted it was under control, was just “a few sporadic cases,” could be treated easily with antibiotics, didn’t necessitate a vaccination program, etc.
Avian influenza poses at least three quite separate risks:
- The risk to farmers that their poultry will be infected. A bird flu outbreak is an economic disaster not just for farmers whose flocks get sick and die, but also for nearby farmers whose flocks need to be culled in an effort to stop the spread of the virus. There are serious follow-on effects on the whole local economy, and on the diets of poor people who rely on chicken as a major source of protein.
- The risk that someone will come into contact with a sick bird and contract the disease. This is a low-probability risk even for farmers and their families. Avian influenza passes from birds to humans only with considerable difficulty; with millions of opportunities, it has successfully done the trick only a few hundred times so far. The probability of bird-to-human transmission is lower still for people who don’t spend a lot of time with birds. Still, contact with the carcass of a diseased chicken can be sufficient, so the cook faces a small but real risk. Contact with an incompletely cooked chicken dinner might be sufficient as well, so the rest of the family faces a tiny but still conceivable risk. No matter how small the probability that a particular person will catch bird flu, the magnitude of the risk is huge; roughly half the diagnosed cases have died.
- The risk that a bird flu virus that has never before circulated among humans will mutate or reassort in a way that makes human-to-human transmission efficient. Each time this happens – which is roughly three times a century – it causes a pandemic and kills millions of people. The prospect that it could happen to the H5N1 bird flu virus is the risk public health experts are focused on – a risk of huge magnitude and unknown probability. Controlling bird flu in birds is a good way to reduce the probability of this disaster. But if it happens, people won’t have to worry about contact with birds anymore; they’ll be worried about contact with their neighbors.
So every government – the Indian government included – has to make decisions about three warnings, not one:
- how urgently to warn farmers that their flocks might get the disease;
- how urgently to warn various subgroups that they might get it themselves from contact with diseased birds; and
- how urgently to warn all of us that a pandemic might devastate the entire world.
Most experts agree that the first and third risks are a lot more serious than the second one. That is, H5N1 poses a huge risk right now to poultry, a huge risk maybe someday to us all … and a small risk right now to people who come into contact with diseased birds.
In each case, excessive warnings might lead to warning fatigue and to apathy: “You told us it was coming and nothing happened!”
And in each case insufficient warnings might lead to a host of problems: apathy and lack of preparedness among those who feel more reassured than the situation justifies; mistrust among those who believe the situation is worse than the reassurances imply; outrage on everyone’s part if things get bad and we demand to be told why we weren’t properly warned.
The risk communication goal is to communicate all three risks in ways that are proportionate to the actual hazard. Tougher still, the goal is to communicate all three risks in ways that feel proportionate to the hazard – that feel neither excessively alarming nor excessively reassuring … not just now, but also in hindsight after bad things happen or after they don’t.
It’s not easy to walk this risk communication tightrope. For most officials who are trying to walk it, excessive warnings aren’t the main problem. Insufficient warnings are.
[Note: My wife and colleague Jody Lanard collaborated on this response.]
Risk communication is a type; outbreak communication is a subtype
|name:||Ahmad Rasoul Mofleh|
|Date:||July 3, 2008|
I am a medical doctor working in Afghanistan. Next week we will conduct a communication workshop for some of our emergency response teams, focusing on avian influenza, and I am expected to lecture on both risk communication and outbreak communication.
Could you please answer this question: What is the difference between risk communication and outbreak communication?
There are really three questions here: the origins and development of “risk communication”; its relationship to “outbreak communication”; and the role of “avian influenza communication.” Let me take them one at a time.
The Origins and Development of “Risk Communication”
The term “risk communication” was introduced in the U.S. in the 1980s. Its first use was with regard to environmental controversies where the actual technical risk was fairly small but the affected public was very upset. A typical example would be a polluting factory; even though most experts may judge that a particular factory’s emissions are unlikely to cause significant health effects, the factory’s neighbors may be understandably skeptical about this judgment and fearful that the emissions could end up giving their children cancer.
The first conference with “risk communication” in its title (as far as I know) was held in 1986, sponsored by the U.S. Environmental Protection Agency and others. It focused on trying to figure out how best to reassure people who are excessively upset about small environmental hazards. The conference organizers understood from the outset that telling people to “calm down” wasn’t going to do the trick – that effective risk communication would need to be respectful rather than patronizing, and two-way rather than one-way. But we had a lot to learn about how to foster the sort of mutually respectful dialogue that might actually be experienced as reassuring.
While we were busy creating this new field of risk communication, there was already a well-established field of “health communication,” which generally made the opposite assumption: that the audience was insufficiently upset about serious hazards. Health communicators aimed to persuade apathetic people to eat less, exercise more, quit smoking, etc. A closely allied field that also existed already was “safety communication”; like health communicators, safety communicators tried to arouse more concern and motivate more precaution-taking, focusing on driving, crime, workplace accidents, and similar threats.
And there was already another well-established field of “emergency communication” that focused on yet a third problem: how to guide appropriately upset people through serious situations – natural disasters, epidemics, wars, etc.
In my vocabulary, health communication and safety communication focused on high-hazard, low-outrage risks. Emergency communication focused on high-hazard, high-outrage risks. Risk communication initially evolved to address low-hazard, high-outrage risks.
But over the next two decades the term “risk communication” expanded to encompass all three tasks – alerting insufficiently upset people to serious risks, guiding appropriately upset people through serious risks, and calming excessively upset people about small risks. Today, even a technical expert who simply wants to educate the public about risk data is likely to be described as doing risk communication.
This expansion of the meaning of “risk communication” is regrettable, I think, because it obscures the huge differences in goals and methods among these various tasks. But it happened, and it’s probably irreversible.
So these days I like to talk about three principal “risk communication paradigms”:
- When hazard is high and outrage is low, the task is “precaution advocacy” – alerting insufficiently upset people to serious risks. “Watch out!”
- When hazard is high and outrage is also high, the task is “crisis communication” – helping appropriately upset people cope with serious risks. “We’ll get through this together.”
- When hazard is low and outrage is high, the task is “outrage management” – reassuring excessively upset people about small risks. “Calm down.”
This terminology isn’t universal (though I keep trying). What I call precaution advocacy is sometimes called social marketing or social mobilization. What I call outrage management is sometimes called crisis communication. The proliferation of labels is by no means over.
“Risk Communication” versus “Outbreak Communication”
So where does “outbreak communication” fit in? This term was coined just a few years ago by the World Health Organization, when it was developing its first set of “Outbreak Communication Guidelines” (453 kB, off site). In part, the new label was simply a branding exercise. WHO packaged together those aspects of risk communication that were paramount in coping with infectious disease outbreaks – informing early, acknowledging uncertainty, warning people that the situation was likely to change and early information might turn out wrong, etc.
Most or all of the outbreak communication guidelines are identical to crisis communication best practices. Nonetheless, “outbreak communication” is turning out to be a useful term to describe the way crisis communication principles ought to be applied under outbreak conditions. Infectious disease outbreaks are arguably different from other kinds of emergency situations in some ways – they tend to last longer; the fear of contagion complicates people’s emotional responses; etc. And some of what’s in the WHO guidelines addresses the particular crisis communication problems of national governments and international health agencies.
I had a hand in helping draft the WHO guidelines. And my wife and colleague Dr. Jody Lanard was hired by WHO to draft a background document on which the guidelines were built. That background document, based on a literature review and interviews with experienced outbreak communicators, is in three parts:
- The main paper (154 kB, on this site)
- A set of nine appendices (183 kB, on this site)
- A primer on evaluation and coaching (126 kB, on this site)
Bottom line, in my view: outbreak communication is a kind of crisis communication; crisis communication is a kind of risk communication.
But this over-simplifies the situation. An infectious disease outbreak is clearly a crisis, and so it makes conceptual sense to see outbreak communication as a subset of crisis communication. But in practice, outbreak communicators need precaution advocacy skills and outrage management skills too. In the midst of an epidemic, some people may remain apathetic. Alerting them to the risk and persuading them to protect themselves is obviously both an outbreak communication task and a precaution advocacy task. Others in mid-epidemic may be angry or frightened about the wrong things – they may inappropriately blame their neighbors or outside doctors or government officials, for example; or they may fear the wrong disease vector, imagining an airborne disease to be foodborne. Calming misplaced anger and fear is obviously both an outbreak communication task and an outrage management task.
And of course outbreak communication isn’t confined to communicating in mid-outbreak. Warning people about a possible future outbreak (or a possible future second wave of the current one) is surely precaution advocacy. When the outbreak has passed, reassuring people that it’s really over, reducing the stigma problems of those who were exposed or infected, and dealing with recriminations about how the situation was handled are all surely outrage management.
So while the WHO outbreak communication guidelines are grounded in crisis communication principles, it is probably more accurate to see outbreak communication as a subset of risk communication, not of crisis communication only.
Where Does Avian Influenza Communication Fit In?
And now a question you didn’t ask: What kind of risk communication is “avian influenza communication”? All kinds, I think, depending on exactly what you’re doing:
- When you warn the general public that there may someday be an influenza pandemic and people should try to prepare themselves and their families to cope with it when it happens, you’re doing precaution advocacy.
- When you warn farmers to be on the alert for bird flu in their poultry flocks, and to take various steps to lessen the likelihood of their flocks becoming infected, you’re also doing precaution advocacy.
- If there is a local bird flu outbreak (in birds) and you have to explain to affected farmers that you must kill their birds and devastate their livelihoods in order to try to contain the spread of infection, you’ll be doing crisis communication – and probably some outrage management as well.
- During the same bird flu outbreak, when you have to explain to consumers that it is safe for them to keep eating chicken as long as they cook it thoroughly, you’ll be doing outrage management.
- If a novel influenza virus crosses the species boundary from birds to people, starts spreading human-to-human, and launches a devastating pandemic, we’ll all be doing crisis communication.
- If that happens, after the pandemic runs its course there will be endless investigations of why we were so unprepared – and the risk communication task will shift to outrage management. If it doesn’t happen for a long time and there are investigations into why we wasted precious resources preparing unnecessarily, that will call for outrage management too.
Which of these six are outbreak communication? Certainly #5. Probably #3. Arguably #1 (unless you can tolerate the term “pre-outbreak communication”). Maybe all of them.
For sure, all of them are risk communication. And all of them are avian influenza/pandemic influenza (AI/PI) communication.
But what’s most important here is that an AI/PI communicator is ultimately going to need three toolkits: a precaution advocacy toolkit, an outrage management toolkit, and a crisis communication toolkit.
How should the public cope with outrage and uncertainty – and how do I cope?
|Field:||Science Editor, Die ZEIT|
|Date:||June 12, 2008|
As a science editor of Germany’s biggest weekly paper, Die ZEIT, I read with great interest your articles on the public perception of risks and how organizations can deal with them.
As I’m currently working on an article about the ups and downs of media coverage on different risks, I will surely cite your findings.
But there is one open question left for me: What would be good advice for terrified readers of articles on new (and maybe unknown) risks? As you point out, it doesn’t help to tell them: “Stay calm. The real risk has a probability of just xx percent.…”
What would be a better strategy? Or in other words: What is YOUR OWN strategy when you read about a risk that is unknown to you? How would you deal with it if you didn’t have time to verify all the scientific or technical details (as is the case for most of our readers)? Just wait and see whether it is still in the news four weeks later? Or call an authority whom you trust – and which one would that be?
If you have any personal advice, I would be happy to include it in my article.
Note to readers: I received and answered this inquiry back in April, but waited till June 12 hoping the Die ZEIT article would be published. It finally appeared (in German, obviously) on June 19. My response starts with a review of some risk perception basics, but then moves on to topics I haven’t written about much before: how I advise the public to cope with outrage and uncertainty, and how I cope with them myself.
Let me start with some basics you probably already understand.
Outrage trumps hazard in people’s perception of risk.
I distinguish a risk’s “hazard” (how much harm it’s likely to do) from its “outrage” (how upset it’s likely to make people). The essence of my “Risk = Hazard + Outrage” formula is the fact that both hazard and outrage are part of what people mean by risk.
In fact, outrage determines people’s hazard perception more than hazard does. That is, whether or not people take a risk seriously and choose to take precautions (or demand that the government or some company take precautions) depends mostly on how much outrage that risk arouses.
If a risk is coerced, dreaded, and immoral, for example, it will be considered more dangerous than if it’s voluntary, not dreaded, and morally irrelevant. Similarly, if the people responsible for the risk are dishonest and unresponsive, the risk will be considered more dangerous than if they’re honest and responsive. “Outrage components” like these five – voluntariness, dread, moral relevance, trustworthiness, and responsiveness – pretty much determine how seriously we take a risk.
So high-hazard, low-outrage risks, such as driving and smoking, leave people insufficiently concerned, while low-hazard, high-outrage risks, such as mobile telephone towers and pesticide residues on foods, tend to arouse excessive concern.
People are right to focus on outrage.
This emphasis on outrage rather than just hazard isn’t foolish. It reflects our shared understanding that outrage matters. We want to live in a world where we decide for ourselves which risks to accept; where we don’t have to endure the anxiety of facing highly dreaded risks; where institutions behave morally, tell the truth, and are responsive to our concerns. These outrage components – voluntariness, dread, moral relevance, trustworthiness, and responsiveness – are not alien, Martian values. They are our values. By making these values part of what we mean by risk, we are insisting that society take these values seriously in its decisions about risk policy and risk management.
Here’s a hypothetical example I like to give.
Imagine that every ten years or so a sniper climbs up onto an overpass with a high-powered rifle and shoots and kills a passing motorist. Then he’s good for another decade. Finally, after 30 years and three deaths, he is caught and brought to trial. Here’s his defense: “Over the 30 years in which I shot and killed three passing motorists, thousands of people died on our nation’s highways as a result of drunk driving, not wearing their seatbelts, poor highway design, and poor automotive design. Sniping is an infinitesimal part of the highway death toll. In picking on me, a mere sniper, the government is distorting the public’s understanding of the real priorities of highway safety. The money the government is spending catching me, trying me, and imprisoning me could save far more lives if the government made the rational risk management decision to let me continue killing a mere one person per decade, and reallocated the money to repainting the lane markers on highways.”
My hypothetical sniper is right on the data. Nonetheless, no jury would vote to acquit. Even after they study the data, normal people support spending more money per life saved catching and punishing snipers than repainting lane markers.
It is true, and important, that outrage leads people to misperceive hazard. If a risk is upsetting, I will imagine it’s dangerous even if it really isn’t. But it is also true, and even more important, that outrage matters apart from its effect on hazard perception. Suppose you succeed in convincing me that some very upsetting risk isn’t really as dangerous as I thought. I will still care that the risk is coerced, dreaded, and immoral; and that the responsible institutions are dishonest and unresponsive. I will still want to avoid such a high-outrage risk, and I will still want the government to regulate it strictly. In other words, people don’t just want to live in a world that minimizes hazard. We want to live in a world that minimizes outrage too.
Many of my clients are companies whose activities impose a low-hazard, high-outrage risk on their neighbors or customers. They keep asking me how to convince people that the hazard is low. I keep answering that they should focus on reducing the outrage instead.
It helps to keep outrage separate from hazard in your mind.
But you’re not asking what my clients should do about the low-hazard, high-outrage risks they are imposing on people. You’re asking what my clients’ neighbors and customers should do about those low-hazard, high-outrage risks. I have three answers:
- Give yourself permission to be upset about high-outrage risks even though their hazard may be low. We human beings are hard-wired to respond that way – and it’s good for society that we are, because it incentivizes government and industry to work to make those low-hazard risks lower in outrage. (Of course it also incentivizes them to reduce the hazard of risks that are already low-hazard, a much less desirable effect.) Feel free to get upset when some risk situation is managed in a way that seems coercive, dreaded, immoral, untrustworthy, unresponsive, etc. And feel free to demand action – preferably action to reduce the risk’s outrage, rather than its hazard.
- Try to remember that a high-outrage risk isn’t necessarily high-hazard. Even as you are giving yourself permission to get upset anyway, bear in mind that you may not actually be endangered – that the anger portion of your outrage may be right on target but the fear portion may be misplaced. Aim for double-entry bookkeeping in your mind, taking outrage seriously without letting it control your judgment about hazard.
- Pay attention to high-hazard, low-outrage risks too. These are the risks that aren’t likely to upset you – but can kill you (or hurt you, or damage the environment). They’re the risks your loved ones keep harassing you about: smoking, driving, eating too much, exercising too little, not going to the doctor or not taking the doctor’s advice, etc. Since these risks lack the crutch of outrage to capture your emotions, they need some concentrated attention from your mind. (And policymakers should do what they can to increase the outrage of such high-hazard risks.)
Coping with uncertainty is hard – and expert overconfidence makes it harder.
Everything I’ve written so far assumes you can tell whether a risk’s hazard is high or low. But often you can’t tell. Some experts (or presumed experts) are saying the risk is dangerous, others are saying it’s safe, and others are saying they don’t really know. Now what can you do?
Uncertainty is part of an outrage factor called knowability. Especially when outrage is high already for other reasons (such as dread and coercion), uncertainty makes it higher: “How dare you make me the unwilling subject of your risk experiment! If you’re not sure it’s safe, how dare you expose me to it!”
Because the experts know that uncertainty makes people anxious, they often pretend more certainty than their science justifies. That almost always backfires on them. For one thing, it leads to a worse knowability problem: expert disagreement. When different experts are making conflicting claims, all of them sounding certain, the public’s outrage goes through the roof. Expert disagreement is such a powerful source of outrage that people are often more upset when half the experts say something is dangerous and half say it’s safe than when they all say it’s dangerous.
Aside from triggering disagreement, expert overconfidence also arouses the public’s mistrust. Here’s a typical pattern in risk controversies. Most of the experts think X is safe, and most of the evidence shows they’re right. But a few experts dissent, and they have a few studies on their side too. Instead of acknowledging the dissenters, the anomalous studies, and the possibility that X might be dangerous after all, the mainstream experts pretend they’re certain and ignore everything that suggests otherwise. The dissenters rightly point out that they and their data are getting ignored. The public rightly senses that the mainstream experts are being overconfident and even dishonest. Fueled by mistrust and expert disagreement, public outrage increases – and so does the public’s perception that X is a serious hazard … which it probably isn’t.
This pattern is characteristic of many recent and current risk controversies, such as bovine growth hormone in milk and the possible link between vaccination and autism.
Sometimes the mainstream experts are on the alarming side of a risk controversy (global warming, for example). Other times they are on the reassuring side (mobile telephones, say). It can be hard to tell which side is the mainstream side. Dissenters tend to communicate more aggressively than the mainstream – so journalists may pay them as much attention, and their websites may be easier to find. Distinguishing dissenting experts from dissenting non-experts can also be a challenge.
Still, with a little research it is usually possible to figure out where the majority of the experts stand. That’s certainly a lot easier than reaching an independent judgment about which side is right. So figure out which side the mainstream experts are on, and then follow these three decision rules:
- The mainstream experts usually turn out mostly right in the end (though less completely right than they were claiming). Of course the times when the mainstream was dead wrong and the dissenters were right are memorable and very important. Paradigm shifts do happen – but they’re relatively infrequent. Bet on the mainstream.
- It’s usually a lot more damaging to think something is safe when it turns out dangerous than to think something is dangerous when it turns out safe. So the alarming side in a risk controversy deserves the benefit of the doubt, even when the mainstream experts are on the reassuring side. If most experts say X is safe but a few say it’s deadly, and if you can live nicely without X, it’s not crazy to go with the dissenters – even though you know you’re probably taking an unnecessary precaution. When the cost of being mistakenly cautious is low and the cost of being mistakenly unconcerned is high, in other words, it may be sensible to be cautious even though the mainstream experts are telling you caution is unnecessary. This strategy does have a social cost in the aggregate. If a lot of people are unnecessarily cautious about X, the X industry will pay the price. If a lot of people are unnecessarily cautious about a lot of things, we will all pay the price: excessive spending on trivial hazards and reduced attention to more serious ones. Nonetheless, erring on the alarming side makes sense.
- The second rule has exceptions. Sometimes it can be very damaging to think something is dangerous when it turns out safe. Wrongly thinking a vaccine is dangerous, for example, can leave you unvaccinated and at risk of catching a disease everyone knows is dangerous. The vaccine risk is a bigger outrage than the disease risk – but the disease risk is a bigger hazard. When the cost of being mistakenly cautious is higher than the cost of being mistakenly unconcerned (or when it’s about as high), and the mainstream experts are unconcerned, go with the mainstream – even if it means setting aside your own outrage.
How do I respond to risk?
You asked how I personally respond to risk.
For health and environmental risks, I usually follow my first decision rule. That is, rather than try to figure out the science myself or call a scientist I trust, I usually decide where most of the experts stand – and that’s where I stand too. Once in a while I rely on the second rule, deciding to go with the cautious minority on the grounds that I’d rather be wrongly cautious (and safe) than wrongly unconcerned (and endangered). And once in a while I smell a rat and decide that the mainstream experts simply can’t be trusted. (An example of smelling a rat: I got myself a prescription for Tamiflu to have in case of a pandemic even though most experts recommended against it.) But the vast majority of the time I take the precautions the mainstream experts recommend, shrugging off the dissenters’ warnings.
I am unusual – statistically weird – in that I find this fairly easy to do. Most people are more responsive to a risk’s outrage than I am. And most people are less comfortable than I am playing the odds – that is, relying on the mainstream experts while knowing that once in a while they’ll be mistaken or dishonest. My atypical risk response is probably why I became interested in risk perception and risk communication issues in the first place.
That’s for health and environmental risks. In other risk venues I am oversensitive to outrage and totally impervious to expert reassurance. For example, I avoid situations that pose a high risk (in my mind) of social embarrassment – sports and dancing come immediately to mind – even though I know the real hazard to my reputation or self-esteem is very low.
Was Hillary Clinton’s Obama endorsement good outrage management?
|Date:||June 11, 2008|
Your column on concession speech recommendations for Hillary Clinton provided a great analysis, but you’re not a very compelling speechwriter. Your article would have been stronger without “examples” of what you thought Clinton should have said.
As it happens, it seems that she did take your (or someone else’s) advice and did pretty much what you said, with stunning results. I think she did a great job (and I supported Kucinich this year). Her speechwriters did a better job of implementing your suggestions.
Perhaps you could replace your “example” passages with actual well-written quotes from her speech? That might be helpful to other politicians, and might make your article a stronger, more useful, and more inspiring reference in the future.
I accept the point that Sen. Clinton has better speechwriters than Peter Sandman and Jody Lanard. But I don’t agree that her June 7 speech “suspending” her campaign did what we were recommending.
I think she did a good job of endorsing Sen. Obama – which was, of course, the sine qua non for the speech. I give her full credit for that. Well, almost full credit. The endorsement language, while strong, was surrounded by an awful lot of language celebrating not just her supporters (which I think was appropriate) but also her causes and in some ways herself. (I didn’t count the number of times she mentioned her tally of 18 million – but I thought it was too many.) And, perhaps inevitably, her face and voice were a lot more enthusiastic during those other passages than during the Obama endorsement passages. I think the speech reads like a stronger endorsement than it sounded like, and it sounded like a stronger endorsement than it looked like.
Still, I give her credit for a strong endorsement. That can’t have been easy. And it wasn’t long-delayed, coming just three days after his delegate count went over the top. That’s pretty short, especially given the contortions surrounding the assignment of Michigan and Florida delegates. Usually when a candidate gets enough delegates, that is truly the end. But it must feel to Sen. Clinton, and to her supporters, like Sen. Obama kind-of sort-of got enough delegates – so conceding and endorsing were harder. (It would have felt the same to Sen. Obama and his supporters if Sen. Clinton had been put over the top by superdelegate endorsements.)
What Sen. Clinton didn’t do on June 7 – and what the column urged her to do – was to address explicitly her own anger, anguish, and ambivalence, and the anger, anguish, and ambivalence of her followers. The closest she came to that – and it was a terrific line – was the first thing she said: “Well, this isn’t exactly the party I’d planned….”
She said she was supporting Obama, but she didn’t say it was painful for her. She said she wanted her followers to support Obama, but she didn’t say she knew it would be painful for them. These acknowledgments would have helped her followers make the difficult transition she was making. They would have addressed her followers’ outrage.
Of course these acknowledgments can still do so in the weeks to come. One of my favorite bloggers said it best: Reconciliation is a process, not an event.
There is still plenty of time for both Sen. Clinton and Sen. Obama to consider the outrage management task they face – time for them to ask themselves how best to help Clinton Democrats reconcile themselves to becoming Obama Democrats (or at least Democratic Party Democrats), and time for them to realize that this task calls for a different sort of communication than traditional political cheerleading. There’s no big hurry.
While I wanted Sen. Clinton to make a start in her June 7 speech, arguably that was too early. We should all recall the large cohort of people on the Republican right whose early reaction to Sen. McCain’s success in wrapping up the Republican nomination was to vow never to support him. Most of them, today, are supporting him. It didn’t take a psychologically sensitive endorsement from, say, Mike Huckabee to win them over. It took time.
Nonetheless, the responses to Clinton’s June 7 speech were bimodal in a way I found illuminating.
If you read the reactions of Obama supporters on Daily Kos, for example, it’s clear that most thought it was a wonderful speech. Their enthusiasm about the speech she gave makes me wonder how Obama supporters would have responded to the speech Jody and I had urged her to give. It’s possible they would have liked it a lot less. But they were not the audience she needed to deliver.
The comments on HillaryClinton.com, by contrast, mostly insist that Sen. Clinton is simply doing what she has to do (endorse Obama), and that she is counting on her followers to “read between the lines.” For some, reading between the lines means waiting for an Obama meltdown in the coming months and a Clinton resurrection in Denver; for others it means working for a McCain victory in November and a Clinton candidacy in 2012. A minority of the comments on HillaryClinton.com take Sen. Clinton at her word when she says she wants them to support Sen. Obama. Most of this minority apologetically or angrily decline; only a tiny few say they will do so – and then must fend off accusations from the majority that they are trolls who belong on an Obama website, not a Clinton website.
Judging from these two websites alone, Sen. Clinton’s speech managed to win some affection and gratitude from Obama supporters without actually moving her own supporters into Sen. Obama’s camp… or even in his direction. If that was her goal, she succeeded. If her goal was to lead her followers into or toward Sen. Obama’s camp, on the other hand, she failed.
Of course judging from these two websites alone isn’t fair. Many less intransigent Clinton supporters have in fact moved to or toward the Obama camp. Maybe her June 7 speech helped them do so. And maybe she already has plans to address the anger, anguish, and ambivalence of her more steadfast followers when the time is right. As I said earlier, there is plenty of time.
This is a risk communication website. What’s important here isn’t whether Hillary Clinton and Barack Obama do a good job of shepherding the former’s followers into the latter’s campaign. It is whether readers of this website see the difference between outrage management and public relations. Look over the column on “Four Kinds of Risk Communication.” Then think about the plight of Clinton’s followers in terms of “Adjustment Reactions” and “Empathy in Risk Communication.” Then start browsing elsewhere on this site, looking for relevant lessons.
The task of helping Hillary Clinton’s followers reconcile themselves to Barack Obama’s candidacy is a sensitive risk communication problem, not a traditional political PR job. That’s the point Jody and I tried to make in the column. Based on her June 7 speech, I don’t think Sen. Clinton got it … and I’m afraid “Goatchowder” didn’t either. At least with respect to “Goatchowder,” that means we failed in our communication task too.
Pesticide spraying against West Nile Virus
|Field:||Local health officer|
|Date:||May 31, 2008|
|Location:||A high plains state, U.S.|
It’s spring, and the West Nile Virus (WNV) season is looming. The issue of spraying for mosquitoes is on our Council’s mind, an issue requiring both outrage management and precaution advocacy. We are in a hot county for WNV, in a hot state. I am trying to put into practice all your advice on risk communication, but sure wish I could do it better!
Any advice specifically on this issue? I haven't seen it discussed on your website.
I have been on both sides of the issue, which helps in understanding the opposing perspectives. In 2003, I didn’t feel the preponderance of evidence supported adulticide spraying and was less than enthusiastic about it. Then we got clobbered with a terrible outbreak in this area. I felt that if I had been more of a “precaution advocate” for spraying, it might have prevented many severe cases and some of the deaths.
However, realistically, the best time to spray was before any human cases had been reported, and there would have been zero political support to do so.
In 2007, our second-worst WNV year, with more data on the effectiveness of adulticiding (when done correctly and based on infected mosquito surveillance), and more data on human and environmental health risks, we were able to get the city to spray at the right time. This appears to have decreased our rate of serious illness last summer by up to 80%, compared with a nearby community that has always chosen not to spray.
While our public health agency sees this as a success, the opponents want to make sure spraying does not happen again in the future. Opponents are a small percentage of community residents – about 5% – but loud in their opposition, and more likely to show up at a council meeting than the 90+% who support spraying when it is recommended by public health officials to prevent disease.
I'd appreciate any suggestions you may have for me.
The main West Nile Virus risk communication challenge is of course precaution advocacy. Even in the areas most affected by WNV, tens of millions of people remain unpersuaded that they should wear long sleeves, use DEET-containing insect repellants, maintain screen doors and windows, etc. For some, this is simply ignorance; they haven’t heard about WNV yet, or they haven’t learned what they should do to protect against it. For others, it’s about pleasure and convenience; they have heard the WNV precaution advocacy message but they haven’t bought it yet; they’d rather enjoy a precaution-free summer and take their chances.
The more controversial risk communication issue is whether or not communities should spray insecticides in order to reduce the number of mosquitoes, and thus the number of WNV-carrying mosquitoes, and thus the number of human WNV cases. Unlike long sleeves, repellants, and screens, whether to spray or not to spray is a political decision, not a personal choice. It pits those who are worried about the possible health effects of WNV against those who are worried about the possible health (and environmental) effects of pesticides.
Precaution advocacy is relevant here as well. The people who are apathetic about long sleeves, repellants, and screens aren’t usually passionate on behalf of spraying either. They’re mostly sitting the WNV issue out. The people who oppose spraying, on the other hand, are passionate indeed. So local health officials and politicians face a lot more pressure from one side of the issue than from the other.
Thus the typical situation is exactly the one you describe. Health officials decide that the benefits of spraying outweigh the risks. Most citizens accept that judgment pretty easily, since they’re not especially worried about pesticides or West Nile Virus. But a minority sees pesticide risks as much higher than the experts believe, and thus opposes the spraying. Trying to address this group’s excessive concern about spraying (rather than the majority’s insufficient concern about WNV) is a job for outrage management.
Of course other patterns are also possible. You could make a professional judgment that spraying isn’t justified in a particular situation, and face outrage from a different minority, one preoccupied with the WNV risk. Or you could be caught between two outraged minorities, one demanding that you spray and the other demanding that you not.
For the first few years after WNV appeared in the New York City area in 1999, these alternative patterns were quite common. There were scores of articles urging readers not to “panic” about West Nile Virus; some of them flat-out claimed, against all evidence, that lots of people were in fact panicking. While panic was a misdiagnosis, many communities did experience concerted citizen demand for government action. Not just anti-pesticide activists but also environmental health experts wrote that too many local agencies were unwisely acceding to public pressure to spray, even in situations where the spraying was unlikely to be needed or useful and the pesticide risk probably exceeded the WNV risk.
Even during the height of West Nile Virus “panic,” most people were pretty apathetic about WNV. But enough were upset about the new disease to build a constituency for spraying that rivaled and sometimes even overpowered the decades-old movement against pesticide use (or “overuse,” depending on how you see the issue).
While I haven’t seen (or done) a careful study, my impression is that West Nile Virus anxiety has abated. Or at least the number of people sufficiently anxious about WNV to push hard for spraying is down, even as the apathetic majority slowly learns that WNV is yet another risk they ought to pay some attention to. This is a pretty common pattern. When a new threat provokes a flurry of interest, a minority becomes really concerned and builds a movement around the problem. Then the threat gets old-hat, endemic not just in epidemiological terms but in media terms as well. “Oh, right, it’s the start of West Nile Virus season again.” The size and influence of the obsessed minority declines precipitously, while the level of general awareness continues to rise slowly.
That’s not a bad description of what has happened with regard to West Nile Virus in the U.S. – or with regard to mad cow disease and bird flu in countries that have had first-hand experience with those threats.
So for the moment, at least, you face a public that’s okay if you spray and okay if you don’t, an activist minority that strongly wants you not to spray, and no real pro-spraying activist movement to balance the opponents. Since you’ve decided that spraying is a sound public health measure, you need to manage the outrage of the anti-spraying activists.
Some quick suggestions on how you might do that:
- Don’t focus too much on trying to convert your most determined opponents. Their opinions are unlikely to change. Pay more attention to their less deeply committed followers, who are more open to influence. You’re not mostly trying to convince the followers that you’re right and your opponents are wrong. Rather, you’re trying to convince the followers that you have taken onboard many of your opponents’ demands, that your position has moderated (if it has) whereas theirs has not.
- Actively look for ways to document your responsiveness to opposition demands. What are you proposing to do to reduce the hazards of pesticide spraying? Is there more you can do? How many of these precautions can credibly and honestly be attributed to the efforts of the anti-spraying activists? Of course they’re not demanding safer spraying; they’re demanding no spraying. You can still give them the credit for motivating you to develop a safer spraying protocol.
- Don’t understate the dangers of pesticide spraying. In fact, go out of your way to validate those dangers. That means more than vapidly claiming you understand people’s concerns. It means documenting that many of their concerns are right – that indiscriminate pesticide spraying has done enormous harm in the past, that even careful spraying can have undesirable health effects on some animal species and some individual humans, etc. Be equally forthright about the dangers of West Nile Virus.
- Frame your situation as a technical dilemma: pesticide spraying is dangerous, and West Nile Virus is also dangerous. Neither risk is huge, and neither is tiny. On balance, you have reached a professional judgment that in the current situation spraying against WNV will do more good than harm. But it will do some harm. You know that. You’re not denying it, and you’re not cavalier about it.
- Concede that there is more public passion behind the anti-pesticide position than the reduce-WNV position. Point out that this hasn’t always been true, and may not always be true, but right now your opponents can muster more speakers at a Council meeting than you can. Concede that the difference is relevant – that a democratic society should always take a passionate minority seriously. Frame this as another dilemma, this one a political dilemma: What to do when your best professional judgment points one way, a passionate minority is on the opposite side, and the majority doesn’t seem to care. The more often you say it’s not obvious whether you should defer to the minority or stick to your professional guns, the easier it will be for some supporters of the anti-spraying position to decide that they actually think you should stick to your professional guns.
- Express wishes. It humanizes you, and humanizes the situation. You wish you could find a way to protect people against WNV without doing any harm. You wish they’d come up with a new generation of insecticides that do in mosquitoes without any collateral damage to health or the environment. You wish West Nile Virus didn’t sometimes give people a deadly case of encephalitis. You wish there were more people emotionally committed to stopping WNV, the way some are committed to stopping pesticide spraying.
- Offer to involve your critics in the administration of the spraying program. Make clear that you understand they won’t give up on their efforts to stop the spraying – but in the meantime you really want them to help oversee the spraying. There are many decisions to be made on where, when, what, and how much to spray – decisions on which their input will be invaluable. You especially want them to help you develop a “pesticide alert” protocol, a procedure to warn people (especially people who are concerned) when and where you’re spraying and what they can do to avoid exposure for themselves, their families, and their pets.
- Balance your spraying program with less controversial measures to reduce West Nile Virus, such as eliminating sources of standing water where mosquitoes are likely to breed. Urge anti-spraying activists to help with these measures – working together to maximize the non-spraying part of the solution and thus minimize the need to spray. And try to get pesticide opponents to join you in pushing for individual responsibility: use of long sleeves, repellants (the ones they approve of), screens, etc. Getting involved in the effort to reduce WNV will give pesticide opponents a useful additional context for their concerns.
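The “pesticide alert” protocol suggested in the list above – warning concerned residents when and where spraying is scheduled so they can limit their exposure – could work roughly like the following sketch. This is purely illustrative; the class and message wording are assumptions, not a description of any real notification system.

```python
from dataclasses import dataclass, field


@dataclass
class SprayAlertRegistry:
    """Minimal sketch of an opt-in pesticide alert protocol: residents
    register for the zones they care about, and each scheduled spraying
    event generates a warning for the matching subscribers."""
    # Maps a contact address to the set of zones it has opted in to.
    subscribers: dict = field(default_factory=dict)

    def opt_in(self, contact: str, zone: str) -> None:
        """Register one contact for alerts about one zone."""
        self.subscribers.setdefault(contact, set()).add(zone)

    def announce_spraying(self, zone: str, when: str) -> list:
        """Return the warning messages that would go out for one event."""
        msg = (f"Advisory: mosquito-control spraying in {zone} on {when}. "
               f"Keep windows closed and pets indoors during application.")
        return [f"To {contact}: {msg}"
                for contact, zones in sorted(self.subscribers.items())
                if zone in zones]
```

In practice the delivery channel (email, text message, reverse-911) matters less than the commitment: critics who helped design the protocol can verify that every spraying event triggered the promised warnings.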
Convincing people incinerators have improved
|Field:||Local government consultant|
|Date:||May 10, 2008|
A French study – more a survey than a rigorous study – reported that people living near old incinerators may face a 23% higher risk of developing cancer. Environmental activists have gone further, repeating that any municipal solid waste thermal treatment IS incineration – ignoring figures showing emissions far below the permitted limits.
How can I fight that sort of mess and convince people that sound solutions already exist? I am facing stubborn people who just recite an old catechism, and I see no way to open the discussion or present my arguments.
Let’s start by making two assumptions:
- The old incinerators were dangerous – maybe not as dangerous as the study you cite suggests, but certainly more dangerous than the authorities said at the time.
- The new incinerators are safe – maybe not as safe as you’re suggesting now, but certainly safer than the old ones, and safer than incineration opponents argue.
I’m guessing that you want to convince people of the second assumption without admitting the first. That probably can’t be done.
In order to “open the discussion” and get a fair hearing for your arguments, you must first concede the other side’s arguments – not the ones you think are false, but the ones that have some truth to them. If you won’t admit that incineration opponents were largely right about the old generation of incinerators, the odds are vanishingly low of convincing a skeptical public that this time the opponents are wrong.
Note that I said “a skeptical public.” When talking to people who know nothing about the checkered history of incineration, you could probably get away with just telling them how wonderful your new “thermal treatment” technology is. At least until the opponents got to them and filled them in on the rest of the story, that would work just fine. But people who know nothing about incineration’s past aren’t likely to care much about its future either; they’re “inattentives” or at most “browsers.” They’re probably not going to get involved in the controversy, so they’re not an important audience for you.
On the other extreme, dedicated opponents of incineration aren’t an important audience either. They’re not likely to change their minds, any more than you are. Of course it’s still important to communicate with them. But you’re not trying to convince them of the merits of your arguments. Either you’re trying to negotiate some sort of compromise, or you’re doing a kind of theatre: talking with “fanatics” while their attentive but less committed followers are watching and listening.
That’s your core audience: “attentives.” (See my column on “Stakeholders” for the distinction among fanatics, attentives, browsers, and inattentives.) Most attentives already know that incinerators have been problematic in the past; the rest will soon learn. So you have to say so, early and often.
Here are a few other recommendations, all of them grounded in the basics of outrage management:
- Don’t shy away from the words “incineration” and “incinerator.” That’s the terminology in your audience’s mind, so it’s where your communication efforts must start. Don’t give it away to your opponents. “Thermal treatment” sounds like what it is, a euphemism. It will work no better than “biosolids” for sewage or “rapid oxidation” for explosion. Your main point, moreover, is that the new incinerators are a lot safer than the old ones. It will be easier to make the comparison stick if it’s about two generations of incinerators, not incinerators versus something else.
- Concede readily that the burden of proof is on you. It’s common sense to mistrust your new incinerators, for at least four reasons:
- Burning stuff causes emissions. Where there’s fire, there’s smoke.
- At a minimum, burning stuff is wasteful. We should try to reuse it instead. If we’re going to burn it we should use the energy.
- The old incinerators were harmful – conceivably very harmful. Remember that study with the 23% cancer rate, or 23% excess cancer rate, or whatever it showed … something alarming, apparently.
- The authorities who now tell us the new incinerators are safe are the same ones who told us the old ones were safe.
- Give credit to your critics – not just credit for being largely right about the old incinerators, but credit for forcing the improvements that led to the new incinerators. This is a powerful strategy. It has the virtue of truth: Incineration technology improved largely because it had to improve in order to stand a chance of adoption in societies that had learned to be skeptical. And it’s a very hard argument for critics to rebut. What can they say? “No, we didn’t force you to improve”? They can and will argue that you haven’t improved enough – but that puts the debate exactly where it ought to be.
- Make incinerator performance accountable. The attentive public isn’t just skeptical about the technical performance of the new incinerators; it is also skeptical about the integrity of those who are seeking to deploy those incinerators. That’s why your reassuring words won’t do the job on their own. The core of most incineration controversies is how cleanly the incinerator will actually burn. So why not install satellite monitoring equipment in the lobby of City Hall, the newsroom of the largest newspaper, and the offices of your severest critic? Why not install a minicam and put the results on the Web, 24/7? And why not negotiate an agreement with opposition groups that includes a penalty clause, so you pay them a stipulated fine every time you exceed the parameters you promised you wouldn’t exceed?
- Keep your claims moderate. In risk debates, the alarming side can afford to exaggerate a good bit, but the reassuring side should avoid exaggeration at all costs. (This is just like conservativeness in risk assessment. Exaggerated alarms give us a margin of error, whereas exaggerated reassurances are dangerous.) Your predecessors made a bad mistake when they overstated the safety of the old incinerators. Don’t overstate the safety of the new ones – even though your critics will surely overstate their riskiness.
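The accountability idea above – publishing live monitoring data and paying a stipulated fine for every exceedance of the promised emission parameters – can be reduced to a very simple calculation. The sketch below is hypothetical: the pollutant names, units, and flat per-event fine are illustrative assumptions, not values from any real agreement.

```python
def exceedance_fine(readings, limits, fine_per_event=1000):
    """Sketch of a penalty-clause check: compare a series of monitored
    emission readings against the limits the operator promised, and
    tally the stipulated fine owed for each exceedance.

    readings: list of dicts, one per monitoring interval,
              mapping pollutant name -> measured value.
    limits:   dict mapping pollutant name -> promised ceiling.
    Returns (list of exceedance events, total fine owed).
    """
    events = []
    for interval, sample in enumerate(readings):
        for pollutant, value in sample.items():
            limit = limits.get(pollutant)
            if limit is not None and value > limit:
                events.append((interval, pollutant, value))
    return events, len(events) * fine_per_event
```

The design point is transparency, not sophistication: because the rule is trivial, critics with access to the same data feed can verify the fine themselves, which is exactly what makes the commitment credible.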
Risk communication and Web 2.0
|Field:||Government risk communication specialist|
|Date:||May 5, 2008|
Have you written or looked into writing about the impact on risk communication of the social web, aka Web 2.0 (Facebook, YouTube, MySpace, etc.)?
How is risk communication adapting to this new reality? How do we use the social web in risk communication? And how do we manage it – considering the speed at which inaccurate information travels, considering the fact that governments are not necessarily comfortable associating with websites that carry all kinds of information that do not agree with their core values, etc.?
Anyone who spends any time on my website knows I am barely Web 1.0. I could enter competitions for “most old-fashioned website,” and I’m not a heavy user – barely a user at all – of the social web. When I Googled “social web” and “Web 2.0” I found a huge literature, but when I added “risk communication” to the list of search terms I found surprisingly little.
One name that kept turning up was Jay Bernhardt, Director of the National Center for Health Marketing at the U.S. Centers for Disease Control and Prevention. See for example Dr. Bernhardt’s blog – especially his October 2006 entry, “This Blog Can Save Your Life,” and his October 2007 entry, “Better Health through Social Media.” See also his PowerPoint slide set on “Health Marketing and Social Media at the CDC.”
So I wrote to Jay and asked if he wanted to respond to your inquiry; I’d answer too and we’d cross-post on his blog and my Guestbook. For me that was a pretty Web 2.0 thing to do.
jay bernhardt responds:
Web 2.0, social media, and new media are all buzz words that demonstrate one thing: online communication is changing. Online information seekers are turning to “people like me” for trusted insights and more people are valuing participatory resources where their input counts. This shift from control of information by authoritative sources to mass participation in information delivery and dissemination is creating a new dynamic for risk communication and crisis communication. People are increasingly turning to online media, including Web 2.0 sites, for information about health issues and crises. The extensive popularity and substantial influence of social media on a wide range of target audiences make Web 2.0 channels an ideal place to disseminate persuasive messaging.
CDC’s Web 2.0 Strategy
CDC is working to leverage the new characteristics of Web 2.0 and expand our communication strategy to include a combination of top-down and peer-to-peer information dissemination. For large organizations, the key to effectively using Web 2.0 will be finding the “sweet spot” where these traditional and new media strategies intersect. CDC is striving for that balance.
CDC has a variety of public health communication goals that aim to encourage healthier and safer behaviors and motivate people to become better prepared for emergencies or natural disasters. To meet these ends, there are some obvious ways of leveraging Web 2.0 channels by scanning and analyzing these sites to improve our media monitoring, real-time surveillance, and situational awareness. By listening to the chatter in user-generated sites like social networks and blogs, CDC can be better prepared to respond to an emerging issue or move toward proactive crisis management. In the true spirit of the new collaborative media, CDC is also looking for ways to empower partners and citizens to spread accurate information.
Engaging the Audience with Web 2.0 Tools
Working with blog writers, CDC has shared science news and communication tools that allow these citizen journalists to disseminate accurate and persuasive messages about important public health topics. In November 2006, CDC hosted its first webinar for blog writers, where partners and mommy bloggers alike learned about the importance of seasonal influenza vaccinations and about the communication research CDC uses to craft its messages. By sharing communication strategies, CDC helped bloggers learn how to talk about the issue, not only to inform readers, but to actually motivate them to take action. This initial participation with the blogosphere enhanced CDC’s understanding of what it takes to successfully encourage message dissemination through peer-to-peer channels.
A recent example in which CDC applied lessons learned from blogosphere engagement involves the discovery of selenium toxicity in dietary supplements in April 2008. Through a coordinated and rapid response, CDC was able to inform blog writers, including prior webinar participants and newly identified bloggers, about the risk of selenium toxicity. Many of these bloggers went on to spread the word on their own sites, increasing the reach of CDC’s messages and ultimately improving our ability to communicate risk information in a timely manner to the greatest number of individuals possible.
Another trend in social media, microblogging, offers a promising channel through which to disseminate messages, particularly those that are time-sensitive. Microblogging allows users to publish extremely short blogs of up to 200 characters, which are usually text-based and disseminated via text messaging and/or email. These microblogs offer an opportunity to relay real-time messages to individuals opting-in to receive them. Like message dissemination through traditional blogs, communication through microblogging relies on one person to inform a network of peers, allowing for exponential reach of messages as they are carried from person to person.
Finally, traditional text messaging strategies can be an effective way to communicate, particularly in an emergency or crisis situation. Spurred by the Virginia Tech tragedy, in which an informal crisis communication system was established via text messaging, many colleges and universities are now implementing crisis communication plans that use text messaging as a way to communicate to students, faculty, and staff en masse in an emergency situation. During the 2007 seasonal flu campaign, CDC piloted its first text messaging effort, allowing users to opt-in to receive text messages weekly when an informational source on flu prevalence was updated. This is just the first step in creating a comprehensive crisis communication plan for social media at CDC that will include text messaging as one of its primary channels.
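The microblog and text-message alerts described above share one mechanical constraint: a hard character limit (200 characters, per the passage above). A dispatcher therefore has to trim the message body while preserving the source attribution that makes it credible. A minimal, purely illustrative sketch, assuming a plain-text channel and a trailing source tag:

```python
def build_alert(body: str, source: str, limit: int = 200) -> str:
    """Sketch of formatting a time-sensitive alert for a character-limited
    channel: append an authoritative source tag, and if the message body
    is too long, trim it and mark the cut with an ellipsis. The tag
    format and the 200-character limit are illustrative assumptions."""
    tag = f" [{source}]"
    room = limit - len(tag)  # characters available for the body
    if len(body) > room:
        body = body[: room - 1].rstrip() + "\u2026"
    return body + tag
```

Keeping the source tag outside the trimmed region reflects a deliberate choice: in a crisis, knowing who sent the message matters more than the last few words of it.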
Challenges and Future Directions
Of course, participation in social media channels can be intimidating for any large organization with a brand to protect, including the government. Empowering individuals to communicate messages to their peers opens the door to misinformation. However, opening this door is the only way to ensure that messages reach the right people at the right time. Because individuals are more frequently turning to their peers for information rather than traditional sources of information (like the government), it is essential to deliver this information where the people are … or risk becoming obsolete.
It is true that inaccurate information will persist, but channels like Wikipedia demonstrate that accuracy is valued by the majority of individuals. Most bloggers and others who take on the role of message dissemination do care about the accuracy and reliability of the information they share. Reaching out to individuals where they are – which is increasingly in Web 2.0 and social media – and empowering them with credible and accurate information is the best way to disseminate relevant and accurate messages.
Rated by Americans as the most trusted federal agency according to a 2007 Harris Interactive poll, CDC is in an excellent position to build on our successes in social media and expand our reach further. Enhancing the impact of our risk communication efforts through increased involvement with social media channels is a solid first step.
Perhaps due to Jay’s influence, the U.S. Department of Health and Human Services (the CDC’s parent agency) is well ahead of most of the rest of the U.S. government in its attention to Web 2.0. HHS Secretary Mike Leavitt is the only cabinet secretary so far to host his own blog (on the HHS website). He started it in May 2007, as part of a five-week pandemic preparedness blogging exercise that also brought HHS into more intimate contact with some of the leaders of the pandemic preparedness corner of Web 2.0. Largely as a result of that experience, a March 2008 pandemic preparedness exercise on getting the word out via the media included online information sources such as Avian Flu Diary, FluTrackers, FluWiki, WebMD, and CIDRAP News.
I do think blogs are the easiest piece of Web 2.0 for traditional organizations to come to terms with. Dealing with Facebook, YouTube, MySpace, and the like will be much harder. Mike Leavitt has a blog, but how many “friends” does he have on MySpace?
I’m not going to be any help. I barely know what Facebook, YouTube, and MySpace are. The fact that I do know what they are probably means they’re well on their way to being passé, and people who are really au courant in Web 2.0 are already onto something newer.
So let me add a footnote about something older: how to make your own website more in tune with the spirit of Web 2.0. (It’s okay to laugh at a person with a really old-fashioned website trying to help you make your website more in tune with Web 2.0.) The essence of the Web 2.0 spirit is openness. Of course a crucial part of openness is interactivity. But it’s not enough for your website to be open to input from users. A truly “open” website should also be open to criticism, wherever it may be found. And it should be personally open, spontaneous and human.
Making your Website Interactive
Just about all of my clients have websites, and nearly all of the sites are technologically more advanced than mine. But most are even less interactive – less “social.” At least I have this Guestbook. A lot of corporate and government sites have nothing interactive other than a way to send them an email … and even that is often hard to find. Among the interactivity improvements I often recommend:
- Create a Guestbook, a vehicle for users of your site to send you questions or comments that you post with your response.
- Create a Forum, a vehicle for users of your site to communicate with each other about you. Keep the Forum unmoderated and uncensored except for commercialism, obscenity, irrelevance, and the like. Read the Forum assiduously, but add your own comments very sparingly – mostly to offer a link to something elsewhere on (or off) your site that’s on-topic. Allow criticism free rein, trusting that somebody else, often an employee, will correct any imbalance. Be tolerant even of factual errors. When necessary, correct important errors elsewhere on your site, leaving the Forum unobstructed.
- Create an address book. Everybody in your management – certainly everybody who ever gives speeches or talks to the media about controversial topics – ought to be reachable by email and snailmail. And the email and snailmail addresses ought to be easily locatable on your website, as long as the user has a name or a job title. This is a very old-fashioned sort of interactivity, and it’s baseline. However open your website is to outside comments via a Guestbook and a Forum, it’s not going to feel open if it doesn’t make it easy for people to write to somebody who said something that made them want to respond.
Making your Website Responsive and Human
Even organizations that have learned to be responsive and human in the mainstream media often have websites that are almost solipsistic – failing to acknowledge criticisms and controversies, or acknowledging them very belatedly and in incredibly one-sided ways.
For example, a client recently was the subject of an investigative piece in a major newspaper. Somehow the reporter had gotten hold of a particularly damning internal memo, and built the story from there. Once it was published, competing media naturally covered the controversy too. To its credit, the company cooperated with the reporter. It acknowledged the facts in the memo with appropriate contrition, it scheduled community meetings to address the problem openly, and it made convincing commitments to do better in future. As a result, the mainstream media coverage was balanced, and even the newspaper that first broke the story later editorialized about the company’s good response.
What’s on the company’s website? Nothing. The controversy is all over the Web, but nowhere on the company’s site.
Leaving embarrassing information off your website when that information is readily Google-able isn’t dishonorable. It’s just foolish. If a company has critics, and if a Google search will quickly reveal X and Y, then the company website should be wallowing in X and Y.
To make your website at least as responsive and human as your other media efforts, consider these recommendations:
- Make sure your site addresses all widespread (or growing) criticisms of your organization and all significant (or growing) controversies in which it is involved. If there is a criticism or controversy to be found elsewhere – in the mainstream media, on other organizations’ websites, in the social media – make sure it can be found on your site too. Feel free to rebut a false rumor or to explain your side of a debate; that’s much better than ignoring the rumor or the debate. But work just as hard to acknowledge accurate rumors and to explain fairly your critics’ arguments. And do it fast; make sure your website is as up-to-date as your responses to criticism and controversy in the mainstream media.
- Pepper your site with links to other places on the Web with content relevant to your organization. Link to news stories, especially critical news stories. Link to other organizations’ websites, especially those of hostile organizations. And make sure your Forum links to other forums about your company or agency, especially unsympathetic ones. For instance, the CDC website might consider linking to www.CDCchatter.net. (To its credit, an internal CDC online newsletter, CDC Connects, has discussed CDCchatter and linked to it.)
- Post lots of documents: the full text of a legal decision, for example, or the originals of your correspondence with a critical NGO, or the transcript of a media interview. Whenever you reference a document that’s available elsewhere on the Web, link to it. Whenever you reference a document that isn’t elsewhere on the Web, post it. Break this rule when necessary to avoid offending other organizations that expected confidentiality – but not to avoid embarrassing yourself. And not to facilitate cherry-picking: If you’re quoting from something, post the whole thing.
- Create a blog – or several blogs – for everyone in your management whose experiences and viewpoints will be of interest to others, and who is willing to expose himself or herself in ways that are spontaneous and human. No blog at all is preferable to a blog that sounds stilted and ghostwritten. (I don’t object in principle to a ghostwritten blog that’s nonetheless personal; I just think it’s extremely difficult to pull off. Getting a little editing help is something else entirely.) If a blog is any good it should trigger some dialogue, so every blog ought to have its own Forum-like opportunity to comment.
- Take your sitemap and your indexing program seriously, so people can find what’s on the site. Work especially hard to make sure people can find criticisms and controversies, especially newly hot ones. Many people will come to your site just to see if you had the courage to address the issue fairly; expecting that you probably didn’t address it at all, they won’t look very hard before reaching that conclusion. There’s little merit in hiding courageous content somewhere deep in the bowels of the site.
The goal of all these recommendations is to make your website the go-to place for anyone who wants to say something about your organization or learn something about it (warts and all). If www.xyzcorp.com is appropriately open and interactive, there will be little reason for anyone to check out www.xyzcorpsucks.com, and thus little reason for your critics to create it in the first place. But if www.xyzcorpsucks.com exists, link to it!
Do my clients take this advice? Some of it, sometimes. I have been more successful in getting companies to create subsidiary websites that interactively, responsively, and humanly address specific controversies than in getting them to open up their main sites to those controversies. I have rarely convinced government agencies to do either.
Bottom line: A wise organization makes sure that its website plays at least five roles vis-à-vis criticisms and controversies:
- It represents its own viewpoint.
- It disseminates complete and accurate information, especially about competing viewpoints.
- It acknowledges the validity of those competing viewpoints when they are valid.
- It hosts a robust dialogue.
- It links to information and dialogues elsewhere.
Few organizations’ sites are currently doing more than one or two of the five. So as organizations contemplate how they will reach out to others’ websites, to blogs, and to all the many corners of Web 2.0, they shouldn’t forget to bring their own websites up to reasonable Web 1.0 risk communication standards.
Selling fire protection
|Field:||Fire inspection firm|
|Date:||May 2, 2008|
We service fire equipment (fire extinguishers, fire alarms, sprinklers, backflows/cross connections, emergency lights, suppression systems) for various firms. To date, we have targeted large corporate entities that require our service offering. The odd time, in between large clients, we try to service smaller clients – and have found it extremely difficult to offer a service that protects life and property to clients who couldn’t care less.
Here in Canada, if we come across a client that is not interested in our service offering and shows very little respect for our offering or those of others, we shall report their establishment to the authority having jurisdiction, which is usually the local fire department.
Do you have any insights on how we can provide our services, ensuring the survivability of our clients and their premises?
I have no experience with selling fire protection. But I do know some of what you’re up against:
- Even though disastrous fires aren’t all that rare, most people have never experienced one. So a disastrous fire is a high-magnitude, low-probability risk. That’s an especially hard sort of risk to get people to take precautions against. If you pretend the probability is high, they’ll smell a rat and dismiss you as an alarmist (and a self-interested one at that). If you go on endlessly about the high magnitude, they’ll get on the other side of the seesaw and concentrate all the more on how unlikely it seems. Your best bet is to focus on how awful a worst-case fire could be, all the while acknowledging its fairly low probability – maybe even overstating how low the probability is (while giving them the actual numbers) so your customers feel impelled to say it doesn’t sound so low to them. Playing the risk magnitude/probability seesaw game skillfully is one key to getting people to protect themselves against unlikely-but-horrible risks.
- For many customers, I’ll bet, you’re not selling fire protection at all. You’re selling peace of mind. That is, a key pitch is probably not what you can do to save your prospects from disaster in a serious fire, but rather what you can do to save them from having to worry about a serious fire. Don’t overdo that pitch; you’re not claiming that people who buy your equipment no longer need to be careful about fire safety. Still, most people take precautions against unlikely-but-horrible risks at least partly in order to worry less. Some of this is propitiating the gods. Rationally we know that your equipment will make a fire less severe, not less likely. Emotionally we feel like bad things are less likely to happen to people who prepare for them properly. Both factors help alleviate anxiety – which can help sell precautions.
- Most small businesspeople face risks that strike them as a lot higher-priority than fire protection – a whole list of problems that could put them out of business. I would try telling prospective customers that you know how precarious a small business can be (you run one too), and you realize that fire protection may be the last thing on their minds. That’s a seesaw too; the goal is to get them to tell you that, on the contrary, the risk of fire is one of their ever-present back-of-the-mind worries. At the same time, of course, your reference to all their front-of-the-mind worries will help them feel understood.
- Remember that precaution advocacy is always a slog. People adopt precautions slowly. It takes them a while to get a new risk onto their “worry agenda”; then it takes them a while to decide to do something about it; then it takes them a while to decide what they’ll do; then it takes them a while to get around to doing it. At every step, reminders are essential. In other words, somebody who tells you no thank you isn’t necessarily deciding once-and-for-all not to buy fire equipment. Consider asking if it’s okay to follow up (with literature or a phone call or both) in six months or so. And look for teachable moments when fire protection is likelier to be on people’s minds – right after a heavily publicized local fire, for example.
By the way, I have some reservations about whether you should rat out failed prospects to the local fire department. It may be that you have a legal obligation (or feel an ethical obligation) to take action when you encounter a company whose fire protection inadequacies are themselves illegal or unethical. Civil engineers, for example, are bound by their code of professional ethics to blow the whistle on unsafe conditions that come to their attention. But when you’re in business to sell the solution, threatening to report the problem to the authorities feels a little too much like extortion: “Buy fire protection equipment or we’ll tell the government that you’re unprotected.”
Labeling BGH in milk
|Date:||April 14, 2008|
I’m writing to get your opinion (if you are willing) about the whole milk labeling issue that we are currently embroiled in here in Ohio.
I feel that milk labels saying things like “rBGH-free” or “synthetic hormone-free” are in the best interests of public health. But the Ohio Department of Agriculture (ODA) wants to put restrictions on the use of these labels that may make it almost impossible for dairies to alert the consumer that “hey, we don’t use that on our cows.” I believe consumers have the right to know that information (and dairies have the right to tell us), and then to be able to make their own choice about whether the potential health risk associated with drinking milk from cows injected with rBGH is too great.
The ODA additionally wants dairies to add a statement on labels that says “the FDA has determined that no significant difference has been shown between milk derived from rbST-supplemented and non-rbST-supplemented cows.” I wonder if this statement couldn’t seriously mislead consumers into thinking that long-term studies have actually been done, and lo-and-behold we now know that there is no increased health risk from long-term use of these dairy products. In fact no such long-term studies have been undertaken, and many, many public health organizations have deep reservations about the use of synthetic hormones in dairy cattle.
In one of your columns that I ran across you mentioned this issue very briefly. Though I have never read your work previously, I felt that perhaps you might have something of interest to say about this issue, so I thought I would write and ask you. I write to you as a mom, a consumer, and a concerned Ohio citizen.
[Vocabulary note for readers new to this issue: “BGH,” “rBGH,” “BST,” “rBST,” “rbST,” and other variations all refer to the same substance, bovine growth hormone (BGH), also called bovine somatotropin (BST). The “r” in front of either set of initials means it’s “recombinant” – bioengineered in a lab and fed to the cow rather than manufactured naturally by the cow itself.]
I’m not sure which column you’re referring to, but it’s probably “Between Required and Forbidden: The Value of Voluntary Precautions.” That 2004 column argued that companies and government agencies too often insist that all precautions should be either required or forbidden, neglecting the middle ground in which stakeholders (employees, consumers, etc.) get to make their own precautionary choices. I used the BGH example in one paragraph:
In more recent years, for example, a controversy over bovine growth hormone (BGH), a bioengineered product fed to dairy cattle, led some dairy companies to take steps to ensure that their cows had no added BGH (the cows do make some of their own). Some state regulators balked at letting these companies put the “no added BGH” message on the milk carton, or wanted them to add long caveats disclaiming any possible health implications. The message was true, they conceded, but without the caveats it misleadingly implied that added BGH might be harmful, whereas the government position in the controversy was that added BGH is “safe.” Even though the cartons made no actual claim of risk reduction, the regulators were reluctant to permit dairies to provide information that would permit consumers to take precautions the regulators considered unnecessary. Dairies that continued trying to cater to the market for no-added-BGH milk were often sued by Monsanto (a producer of BGH) and legally “cowed” into dropping the labels. What I want to note is that, once again, those in charge have passed up the opportunity to make a precaution voluntary and let the people (the market) decide.
Your comment raises two issues: (a) what we really know about the safety of BGH, and (b) given the state of our knowledge, what sort of labeling should be required/permitted.
The BGH safety controversy
I’m not qualified to assess the science on BGH, but I can say some things about the controversy over the science. Consider the following list of propositions:
- The bulk of the evidence fails to show a health risk to consumers from drinking milk from cows that were fed recombinant BGH, and the majority of experts in the field have reached the conclusion that milk from rBGH-fed cows is safe.
- Much of the research on which this conclusion is based was funded by organizations with an economic stake in the conclusion, or conducted by researchers with a prior commitment to the conclusion, or both – justifying some skepticism about the neutrality and therefore the reliability of the findings.
- Those who share the mainstream consensus that rBGH is safe tend to speak as if there were no contrary studies, as if there were no possible methodological quibbles with the mainstream studies, and as if there were no qualified observers with continuing reservations about the long-term safety of rBGH. All of these implications are false.
- Those who dissent from the mainstream consensus have a few studies of their own – which have methodological flaws of their own. The dissenters (at least the extremists among the dissenters) tend to speak as if their studies conclusively rebutted the mainstream consensus, as if that consensus were grounded in a corporate-government economic conspiracy with no basis in real science, and as if rBGH ought to be high on any well-informed person’s list of clear and present dangers to the survival of the species. That’s all false too.
- The mainstream consensus is likelier to be right than the dissenters. That is, the smart money is betting that milk from rBGH-fed cows is no big deal – that it poses either no health risk at all or a health risk so small it’s very hard for researchers to nail. On the other hand, the dissenters might turn out right; there might be a nontrivial risk from rBGH after all. And if the dissenters are right, the cost of having consumed lots of milk from rBGH-fed cows could be high (damaged health) – whereas if the mainstream is right, the cost of having made do with no-added-BGH milk is pretty small (somewhat more expensive milk). It is thus not irrational to follow the lead of the dissenters even if you suspect you’re probably taking an unnecessary precaution.
None of this is unique to the BGH controversy. In fact, it is all typical of many risk controversies.
For people who think BGH is probably dangerous (as I’m guessing you do), the labeling issue is a no-brainer. Obviously, dangerous food ingredients should be labeled.
Even for people who think BGH is probably not dangerous, labeling is a no-brainer if you also think (as I do) that the dissenters might turn out right in the end, and that a cautious consumer could rationally decide to avoid milk from BGH-fed cows. (For the record, I drink BGH-laced milk without hesitation – but I do lots of things a cautious consumer could rationally decide not to do.) If avoiding milk from BGH-fed cows is a rational thing to do, then labels that permit consumers to do so if they choose are obviously a social good.
The labeling issue is tougher if you’re pretty thoroughly convinced (as the U.S. Food and Drug Administration is) that milk from BGH-fed cows and milk with “no added BGH” are indistinguishable from a health perspective. You could of course support labeling anyway on strict libertarian grounds, arguing that dairies should be entitled to put any true claim they want on their packaging, even if that true claim might lead consumers to a false conclusion.
But there is a long string of precedents permitting the government to outlaw commercial claims that are technically accurate but misleading. Should we let a cigarette company accurately advertise “no added nicotine” even if we know that many consumers will mistakenly imagine the ad means that brand actually has less nicotine than others? Should we let the sellers of canned white salmon compete with the sellers of canned pink salmon by claiming on the can that their salmon is “guaranteed not to turn pink in the can”? It is arguable that we should, but it’s certainly not crazy to think we shouldn’t.
Is it fair to characterize the “no added BGH” claim as misleading? The FDA thinks it is. And certainly there is ample evidence that a boast that “our product has no X” does function in part as a warning that X isn’t a good thing for a product to have. All those labels announcing that particular foods are low in cholesterol, for example, certainly contribute to the societal consensus that cholesterol is bad for you.
Or consider the recent furor over the safety of ingredients from China. Suppose a company wished to put on its packaging the accurate information that the product “contains no ingredients from China.” Obviously, this label would provide useful guidance to consumers who wished to avoid exposure to Chinese-made ingredients. Just as obviously, the label would reinforce the widespread impression that Chinese-made ingredients are less safe, on average, than domestic ingredients. Since that impression is true (in my opinion), raising the level of public concern is part of the value of the label. But if the impression were untrue, the label would be misleading.
But that’s not the whole story. For one thing, people may and often do have non-health reasons for avoiding genetically modified products like rBGH. For people who object to genetic modification on moral grounds, the “no added BGH” label is analogous to kosher and halal labels that testify to a food’s adherence to religious standards. Governments not only permit such labels; they even police their accuracy – without necessarily endorsing the religious injunctions they permit people to obey.
More importantly from a risk communication perspective, a “no added BGH” label has two effects on people’s perception of the risk, not just one. Yes, the label does imply (to some extent) that BGH must be bad for you – and thus it contributes to anti-BGH outrage (or its milder cousin, concern). But the label also enables people who wish to avoid BGH to do so, giving them a sense of control – which reduces their anti-BGH outrage.
Consider for example the experience of the lawn pesticide industry. Signs that inform passers-by that pesticides have recently been applied have the same two outrage-related effects as “no added BGH” labels: they increase the concern of people who haven’t previously considered that lawn pesticides might be harmful; and they reduce the anger and fear of people who already consider lawn pesticides harmful, are anxious to minimize their exposure, and are infuriated when the absence of information makes that difficult. I don’t have data to prove it (and many lawn care companies disagree), but my intuition is that the latter effect is bigger than the former – and that the signs therefore reduce overall societal concern about lawn pesticides. Whether you think this is a good effect or a bad effect depends, of course, on how dangerous you consider lawn pesticides.
And there’s a third relevant effect of the lawn pesticide signs. Some people see such a sign, consider anew the possibility that lawn pesticides might be dangerous, and then shrug off their concern and take a shortcut across the grass. The act of doing so sizably reduces their concern; they have just communicated to themselves that they’re not worried about lawn pesticides. Similarly, many people presumably look at a milk carton that says “no added BGH,” contemplate the possibility that BGH might be dangerous, check out the price of the milk compared to other milk, decide to go ahead and buy ordinary milk instead … and thus convince themselves that BGH is not worth worrying about – all courtesy of a label that BGH manufacturers and conventional dairies opposed so strenuously.
All in all, I think the “no added BGH” label alarms some people and reassures others. And of course it informs everyone. I’m for it.
But I’m okay with requiring still more accurate information on the label: the information that the government thinks BGH is safe. So here’s my proposed label:
I doubt the FDA would like my proposed label much. It would probably complain that my label, too, would tend to alarm people. In fact, learning that experts disagree about whether X is dangerous can sometimes be even more alarming than learning that all the experts agree X is pretty dangerous. We tend to interpret expert disagreement as a danger signal. If the experts can’t agree, we figure, they’re going to end up debating the problem instead of solving it – so maybe we’d better steer clear until they make up their minds what’s going on.
Still, there’s an important difference between a label that scares people because it implies that BGH is dangerous (which may or may not be true) and a label that scares people because it states that there’s an ongoing debate about whether BGH is dangerous (which is certainly true). The FDA is entitled to prohibit a label that is misleadingly scary, but it ought to learn to live with one that’s scary but not misleading.
For more on the pros and cons of labeling, see my essay, “Because People Are Concerned,” especially pages 42–47.
Media coverage isn’t proportional to mortality statistics – and it shouldn’t be
|Field:||Health department planner|
|Date:||April 9, 2008|
You might be interested in the attached article. Would your response be that they’re paying attention only to the hazard and not the outrage?
Note: I can’t post the article without copyright problems. But the abstract is online.
I am really, really sick of studies that show the media pay a lot of attention to some risks that have killed very few people, while paying very little attention to some other risks that have killed many.
It’s true, of course – but it’s not news.
The thinking behind such studies is defective. As you point out, the authors are ignoring outrage. That is, they’re ignoring the basic journalistic values that say outrage factors like fairness, moral relevance, voluntariness, and trust are relevant to newsworthiness – that upsetting risks are intrinsically more newsworthy than boring risks. Instead, they assert – or simply assume – that coverage should be proportional to hazard, and that showing it isn’t constitutes proof that the coverage is sensationalistic and misleading.
But even in pure hazard terms, it’s pretty nutty to claim that media coverage ought to be proportional to mortality statistics. Some exceedingly deadly phenomena (old age, say) are well-understood, and there is little need for media coverage to alert the public to them: “Extra! Extra! The elderly are likelier than other people to die!” Other hazards with extremely low mortality (at least so far) are new, poorly understood, and ripe for precautionary action.
A potential influenza pandemic, for example, kills very few people until it becomes an actual influenza pandemic; do the authors really mean to suggest that media responsibility means waiting to cover H5N1 until it starts killing lots of people? Are they proud of the media for not giving much attention to AIDS until it had become pandemic? Would they argue that a hurricane (Katrina, say) deserves very little coverage while it’s offshore, that it needs to kill people before it merits journalistic attention?
It’s possible to come up with coherent criteria for what sorts of risks merit the most media coverage. Like most studies of its ilk, this study makes little effort to do that. Without ever building a case that mortality statistics should be the chief criterion – a very tough case to build, in my judgment – the article simply zings the media for failing to adhere to that criterion.
One could similarly demonstrate that medical school curricula and hospital budgets are not proportional to mortality. In those cases I suspect the authors would immediately explain all kinds of sensible reasons why there are better ways to allocate the curriculum and the budget. Well, there are better ways to allocate media attention, too.
None of which is meant to suggest that media coverage of risk is anywhere near optimal – just that half-baked commentary doesn’t contribute much.
For some of my own research on media coverage of risk, see “Mass Media and Environmental Risk: Seven Principles.” Published in 1994, it starts where the authors of this piece leave off, with the observation that “the amount of coverage accorded an environmental risk topic is unrelated to the seriousness of the risk in health terms. Instead, it relies on traditional journalistic criteria like timeliness and human interest.”
Good reputation and bad reputation: Are there positives that can offset the negatives of outrage?
|Field:||Consultant to mines|
|Date:||April 9, 2008|
I have your list of “Twelve Principal Outrage Components” – factors that are perceived as “safe” or “risky.” A lot of the “safe” components imply a lack of something negative rather than a positive. Is the absence of outrage the best we can hope for, or are there features that can move us into the black on the public opinion balance sheet?
I am comparing several options for an industrial process and there are inevitably some “risky” components – for example, it is industrial and it will be controlled by others. But are there features I can look for that would offset some of the negatives? Or doesn’t outrage work that way?
For the most part, I’m sorry to say, outrage doesn’t work that way. The essence of outrage is how upset (angry, frightened, concerned, etc.) some risk tends to make people. Assuming the situation is genuinely not hazardous, not at all upset is as good as it gets.
It is important to understand that reputation is two variables, not one. Positive reputation (how loved you are) is one aspect of reputation. Negative reputation (how hated you are) is a different aspect. The two are not highly correlated. Thus, it is possible to be both much-loved and much-hated. (Some U.S. examples include Microsoft, Wal-Mart, and Hillary Clinton.)
Improving your positive reputation – through local philanthropy, say – benefits a company in many ways. But it doesn’t have much impact on your negative reputation. People who disapprove of your company for doing X are unlikely to stop disapproving when they learn about that wonderful-but-irrelevant Y you’re also doing.
Outrage management, on the other hand, is designed to improve your negative reputation when a risk controversy (or any controversy) has made negative reputation a problem. Here the symmetry breaks down. Though its purpose is to improve your negative reputation, good outrage management can often improve your positive reputation as well. Even people who don’t especially disapprove of your company for doing X may greatly approve of your company for candidly admitting you did X, taking the blame, promising to improve, giving your critics the credit for your improvements, etc. Consider for example the long-lasting benefits to the reputation of Johnson & Johnson’s Tylenol brand after the company took responsibility for the 1982 cyanide poisonings, aggressively asserting that its packaging had been insufficiently tamper-proof.
Instead of picturing one “reputation” meter with positives on one side, negatives on the other, and zero in the middle, picture two separate scales: your positives and your negatives. Every organization, obviously, should want both the highest possible positives and the lowest possible negatives.
If you have to choose which one to focus on, I’d go after your negative reputation. The benefits to a company of being “less hated” are huge: higher morale, easier recruiting, less regulatory hassle, less stakeholder controversy, less shareholder anxiety, less customer resistance, etc. The benefits of being “more loved” are also real; increased customer loyalty is probably the most often documented. But in general an improved positive reputation has a lower payoff than an improved negative reputation – especially since activists and regulators with an iconoclastic bent may actually choose to attack much-loved companies right along with much-hated ones. Companies that are neither much-loved nor much-hated are likeliest to get a free ride.
So do high positives have any value at all to counterbalance the outrage when a controversy arises? Well, yes – though not as much as your question implies you’re hoping for. One major advantage: An organization with high positives is likelier to get the benefit of the doubt when something goes wrong. People want such an organization not to turn out a villain, and will try to interpret ambiguous evidence in its favor. On the other hand, if people ultimately become convinced that it is a villain, they can feel betrayed and all the more outraged.
When high positives and high negatives coexist, it’s really important not to lean on the positives as your defense when stakeholders are berating you about the negatives. We know this in our personal lives. When you’re in the middle of an angry confrontation with your daughter because she’s flunking history, she shouldn’t try to reduce your anger by pointing out that she makes her bed every morning without a fight. Right now the focus is on her poor performance in history. Mentioning her more endearing qualities (her high positives) won’t help her avoid your anger; it will only make you feel she’s evading the issue.
Similarly, a company in reputational trouble needs to focus on the negatives: what you did wrong, why it happened, how sorry you are, how understandable your stakeholders’ outrage is, how you propose to make it right, how you plan to keep it from happening again, etc. Resist the temptation to talk much about your positives right now; wait for that till after the controversy has moderated.
One of my clients is a company with very high local positives: It’s the employer everyone wants to work for; it has done endless good works in the community; etc. But for years the company’s smelter was the region’s largest point source of dioxin – and management never even realized it, much less talked about it or did anything about it. Now that the smelter is closed, the key question for the community is whether past emissions may ultimately affect people’s health.
The company’s positives will certainly help its neighbors forgive it for failing to notice, disclose, and address its dioxin problem promptly. Of course the positives won’t help much if the evidence shows significant health effects; people will notice and they will rightly be outraged. But partly because of the company’s high positives and partly because of its good outrage management, most of the community isn’t inclined to misinterpret benign health data as alarming.
The company wisely keeps its positives as high as it can. But when people want to talk about dioxin, the company talks about dioxin, and it does so with contrition. It saves communication about its good works for other moments.
Bottom line: Zero outrage is as good as it gets as far as managing your negative reputation is concerned. (Remember that I’m assuming the actual hazard is low. Zero outrage is a problem, not a benefit, if a company is harming people or the environment.) When you’re working to get your stakeholders’ outrage as close to zero as possible, don’t be distracted by an effort to enhance your positive reputation.
As a practical matter, what usually matters most isn’t your negatives compared to your positives; it’s your negatives compared to other organizations’ negatives. The crucial question is how much outrage you are arousing compared to how much is aroused by other sources of outrage in the community.
Think of this as “the law of conservation of outrage.” People vary in their capacity for outrage. Some of us are almost always calm, while others are often fearful, worried, or angry. But each individual’s capacity for outrage varies very little from month to month. We all carry around a mental list of things to be fearful, worried, or angry about. When we have energy to spare, we pick off the top of the list.
Activists do the same thing. Just like companies, activist groups have more things they’d like to do than they have time, staff, and budget to get to. So they have to prioritize. They have a list too, and they pick off the top of the list.
As a tool for predicting outrage, my handout on “Twelve Principal Outrage Components” can help you judge how high you’re likely to be on stakeholders’ and activists’ lists. As a tool for managing outrage, it can help you figure out what your outrage vulnerabilities are, so you can begin figuring out what to do about them.
If you’re nowhere near the top of anybody’s list, your job is to stay where you are – and as long as you do that, you’re not likely to have much outrage to manage … even if you can’t achieve zero outrage. If you are sitting on top of some people’s lists, on the other hand, your job is to manage the outrage until you’re not near the top anymore.
Responding to damaging rumors when the information is confidential
|Field:||Mining industry stakeholder relations officer|
|Date:||March 21, 2008|
How do we control outrage that manifests as speculation and gossip? In particular, what do we do when we can't tell the truth about a rumor – e.g., legally confidential circumstances surrounding dismissal of employees, injuries, etc.?
The all-purpose response to the question of how to deal with rumors is to answer them with the facts.
- If the rumor is true, confirm it.
- If it’s false, rebut it.
- If it’s partly true, confirm the true bits and rebut the false ones.
- If you’re not sure whether it’s true or false, say that you’re not sure, say what you’re doing to find out, say what you think so far (are you almost sure it’s true or almost sure it’s false?), and say again that you’re not sure and may turn out wrong.
- Even if you haven’t got a clue, you can still reply to the rumor: “Yes, we’ve heard that too, and we haven’t been able to confirm or disconfirm it. We’re as much in the dark about it as everybody else.”
I recently completed a column on rules for responding to rumors. It was published in The Synergist and is now posted on this website.
But you raise an important question I neglected to cover in the column: What to do when a rumor is circulating that’s causing a lot of outrage and you know the rumor is false – but you can’t say so without breaking confidentiality. So here are some suggestions for that situation.
Make sure you’re really constrained from telling the truth.
My clients often tell me some piece of information is confidential – and when I push them I learn that what they mean by “confidential” is that they’re not required to release it, not that they’re not entitled to do so. Human Resources (HR) departments in particular often have confidentiality policies that go way beyond legal requirements.
If you have information that will clear up a damaging rumor and you’re legally entitled to release it, release it!
If you decide not to tell the truth, don’t pretend you’re not allowed to.
Often you’re entitled to release the information but your HR managers would rather you didn’t, perhaps because they don’t want to set a precedent that could be used against them in all sorts of other situations where the company really doesn’t want anybody to know what happened. Or you’re entitled to release the information but only some of it is going to sound like a rebuttal. Part of the rumor is true, there’s no easy way to rebut the false bits without confirming the true ones, and on balance you end up deciding to keep mum.
Okay, that wouldn’t be my call but I’m not in your shoes. But at least you shouldn’t pretend (even to yourself) that it wasn’t your choice.
If your company chooses not to release a piece of information that’s yours to release or withhold as you prefer, say that. Acknowledge that you have information that would rebut the rumor, and explain that you have decided not to release it because…. Of course people may not believe you; they may suspect that the rumor is true and that’s why you don’t want to release what you know.
It may help to concede that you understand your reticence will lead some people to that conclusion. You wish you could see your way to releasing the information but you just can’t.
If you’re really not allowed to release the information, try to get permission to make an exception.
Find out from your legal people whose okay you’d need to break confidentiality. Not infrequently it’s the person who started the rumor in the first place – an employee who got fired and is making up the reasons, for example.
Find out also if the information about your request to be released from your confidentiality obligation is itself confidential. Often it isn’t. Then you’ll be able to say something like this: “We can’t respond to this rumor the way we’d like to because we have a legal obligation to X to say nothing about the conditions of her departure. We asked X last Thursday to release us from that obligation – here’s the letter we sent her – but so far she hasn’t agreed. We’ll be happy to give you the full story as soon as she authorizes us to do so.”
Or: “We’ve asked the legal department to find a way to relieve us of our confidentiality requirement in this instance, but they report that there is no way to do so.”
Assuming you’re stuck – you really want to release the information and you really can’t – say so.
And say it with appropriate angst; express as vividly as you can how frustrating it is not to be legally permitted to correct the record. Acknowledge also how frustrating it must be to others not to be able to find out what you know.
If appropriate, explain the history of the confidentiality rule and the rationale behind it – is it a new rule or a longstanding one, stupid or sensible (even though you wish you could make an exception this time)? And if appropriate, concede that your company has sometimes cried “confidentiality” in the past when what was going on was really just company secrecy – but this time you really do want to open the files and you can’t.
Figure out what you are allowed to say, and say it.
From time to time, one or another client of mine makes false claims about what my advice was, but we have a confidentiality agreement that forbids me to make any statements about what I told the client. Usually I can say something like this: “Without violating confidentiality, let me tell you what my standard advice is in situations like X. And here’s a URL on my website where you can find that advice spelled out.”
Similarly, your company is presumably entitled to describe its generic policies and how they would get applied in any situation where X happened. Of course you shouldn’t get so specific that you’re breaking confidentiality while pretending not to: “If an employee with the initials PS did X last Thursday, here’s how we would have responded.”
Bear in mind a last-ditch option you always have: to go ahead and break confidentiality and take the consequences.
If a false rumor is doing enough harm, that may be an option worth considering.
Was it good or bad crisis communication for Hong Kong to shut down its primary schools because of a flu outbreak?
|Date:||March 17, 2008|
I thought of you today while working on a story about Hong Kong’s decision to close all primary schools for two weeks because of a seasonal flu outbreak.
I was wondering if I could ask you some questions via email about how it was handled from a risk communication standpoint.
- Health Secretary York Chow ordered all kindergartens and primary schools to close for two weeks during an outbreak of seasonal flu. It was the most drastic move taken in Hong Kong since SARS hit in 2003. The decision came after two children with flu died, but autopsy reports show they had underlying illnesses that were likely complicated by the flu. Given the territory’s history with SARS and bird flu (and panic), do you think York should have waited a bit before telling a half million kids not to go to school?
- Could this decision spur more anxiety and panic within Hong Kong and the region? Some countries, like Thailand, are now warning citizens to be careful when traveling to Hong Kong. So, this could have negative economic fallout.
- WHO says the flu is actually milder this year than last year, so the issue may never have come to light if it weren’t for these deaths. But seasonal flu kills many people every year, so were two deaths worthy of this kind of reaction?
- What lessons from SARS/bird flu could have been applied here? And what other action could York have taken to show people he was taking the issue seriously without going this far? Is there a more balanced approach he could have taken, at least until he had a better understanding of what caused the deaths?
As my wife and colleague Jody Lanard commented to me this morning, “the ghost of SARS past and the ghost of pandemic future are very much in the air in Hong Kong right now.” Jody took the lead in drafting this answer, which reflects our shared view that Hong Kong is doing exemplary crisis communication in its response to the current high-hazard, high-outrage flu outbreak.
We believe that all human flu outbreaks are high-hazard. Flu causes a huge amount of morbidity and mortality every year. Usually these outbreaks are high-hazard, low-outrage. So not every routine flu outbreak justifies closing all the primary schools. But Hong Kong’s officials have been appropriately sensitized by SARS, bird flu, and pandemic fears – sensitized not just to the hazard that a flu outbreak represents, but also to the outrage that such an outbreak provokes in its citizenry … this year though not necessarily every year.
Hong Kong’s officials are appropriately sensitized also to the high cost of over-reassurance and delayed precaution-taking. Who can forget Secretary of Health E.K. Yeoh saying, near the start of the 2003 SARS crisis:
Hong Kong is absolutely safe and no different from any other big city in the world…. Hong Kong does not have an outbreak, okay? We have not said that we have an outbreak. Don’t let the rest of the world think that there is an atypical pneumonia outbreak in Hong Kong.
As you know, many experts believe that closing schools can be an effective way to slow the spread of influenza. But from a risk communication perspective, that’s only part of the story. Closing schools is also an effective way to demonstrate that the government is empathic with normal people’s normal fears, and that it is determined to respond quickly to the risk that provokes those fears.
Officials should certainly prefer to be accused of over-reacting rather than under-reacting. A wise government shows that it is responsive, candid, and caring – even at the possible cost of a temporary shock to the stock market or to tourism (and even at the certain cost of making it hard for many parents to get to work while their kids are stuck home). Survey results from many countries indicate that citizens often think their government cares more about trade, tourism, and market prices than about health – a view that can seriously undermine trust and morale when health problems loom.
It might be a different story if Hong Kong hadn’t gone through major outbreaks of SARS and of bird flu. It might be a different story if Hong Kong’s government hadn’t under-reacted and over-reassured at the start of its 2003 SARS crisis (though not during its 1997 bird flu crisis). It might be a different story if the specter of a possible pandemic hadn’t preoccupied health officials and journalists, on and off, for the past four years. It might be a different story if Hong Kong’s citizenry were already accustomed to the annual onset of flu season. It might be a different story if two young children hadn’t died as they did, provoking extensive media coverage and inevitable parental anxiety.
But with its citizens newly aware of the risks of flu, vividly mindful of the bird flu and SARS crises of the recent past, worried about the prospect of a pandemic, and leery of another possible government under-reaction, it was wise of the Hong Kong government to let itself be seen this time as over-reacting.
Paradoxically, the government’s decision to err on the side of caution will help local people put the current outbreak into perspective more quickly. This is the risk communication seesaw in action. A slight over-response to what looks so far like a statistically ordinary but emotionally upsetting flu outbreak is a very good way to dispel any rumors that the outbreak is out of control, or that the government is ignoring or covering up how serious it really is.
Now to your specific questions.
Should Secretary Chow have waited before closing the schools in order to avoid panicking the public?
Short answer: No.
Apparently, Hong Kong is having a fairly typical flu season in statistical terms, but the two child deaths and the clustering of the sudden school outbreaks raised parental alarm. That’s normal. Children’s deaths and illness clusters (in schools, in this case) tend to generate more concern – more outrage – than the same number of deaths in older people or the same number of illnesses spread out in the general population.
When flu strikes in schools and kills a few children, the predictable result is a lot of media coverage and an understandably alarmed public. The same thing happened during Australia’s 2007 flu season, for example, and early in the 2003–2004 flu season in Colorado, in the U.S. Do a Google search for Australia + influenza + children + deaths, and then change “Australia” to “Colorado,” and you will see headlines virtually identical to those in Hong Kong this past week.
Also, Hong Kong citizens have been successfully sensitized to flu. This is a major and recent achievement – people in Hong Kong now take influenza seriously when it flares up. All too soon they will settle into the New Normal (as we Americans have done) and will pay too little attention to flu. But not this year.
Add in Hong Kong’s special sensitization from its bird flu and SARS experiences, and you end up with a pretty decent rationale for school closings.
Singapore faced a similar dilemma about whether to close schools early in the 2003 SARS crisis. Here’s how Jody described the situation in her presentation at a 2004 WHO outbreak communication conference – held in Singapore.
Before very much was known about how SARS was transmitted, there was tremendous pressure on the government to close schools. Officials, many of whom had school-aged children, felt this pressure personally. But the Ministry of Health said there was no medically necessary reason to close the schools. Prime Minister Goh publicly described a cabinet meeting in which some members thought closing the schools was over-reacting, and some not; on balance, they decided that over-reacting was better. The education and health ministries jointly announced the school closings as a “precautionary step.” Despite the lack of medical grounds, they said, “principals and general practitioners have reported that parents continue to be concerned about the risk to their children in schools.” PM Goh added, “I think it’s useful to do so just to assure the parents that by taking all actions in order to tackle the problem [we] try to break the cycle.”
In your framing of this question, you refer to “panic” as having resulted from Hong Kong’s bird flu (1997) and SARS (2003) outbreaks. Neither outbreak was marked by panic. Hong Kongers were appropriately frightened in 1997 and 2003. In 2003, they were appropriately furious about Health Secretary Yeoh’s early downplaying of the crisis. During SARS, Hong Kongers prudently avoided crowded places, and outsiders prudently avoided Hong Kong.
We cannot find any documented episodes of panic in Hong Kong during either crisis. Every time we asked reporters to elaborate on claims that people were panicking in Hong Kong, they always responded that people were frightened, wearing masks, and avoiding public places – none of which is remotely close to panic. People felt panicky, perhaps, but on the whole they behaved prudently.
Numerous expatriate reporters in Hong Kong were in the same boat. Some wrote that they felt terrified, hunkering down in their mostly-empty hotels and begging their editors to bring them home. But they continued to do their jobs well. That’s not panic either.
Could the decision spur anxiety and panic in Hong Kong, and keep people from traveling to the region?
Closing the schools seems very unlikely to provoke anything like panic. As for its effect on parental anxiety, some parents may reason that “if they’re closing the schools it must be really dangerous!” while others may figure “it’s good they closed the schools to help keep my children safe.” (And of course many parents are mostly irritated at having to find a way to mind their kids and still get to work.)
Some headline writers might have provoked a little extra anxiety when they confused the seasonal flu with bird flu. The Bangkok Post on March 12, for example, headlined: “Hong Kong closes school after pupil dies of bird flu.” And ABC.au wrote: “Hong Kong closes schools to stop bird flu spread” (but then changed it to “Hong Kong closes schools to stop flu spread” within hours).
Secretary Chow went out of his way to keep people from “over-reacting to his over-reaction” – mostly by candidly admitting that it might very well be an over-reaction. He apologized for the late-night announcement of the school closures, which inconvenienced a lot of parents. And he let the public in on the government’s thinking as it tried to decide whether to close the schools or not. Consider for example this March 14 quotation:
“It was quite a difficult decision but we realize the number of infections is increasing,” Chow said. “It is not something based entirely on public health data at the moment. But I think the public would appreciate that what we are doing might be a little drastic – but it is reassuring to the community. We cannot wait for the figures to get bigger before we make a decision. We had to make certain assumptions that if there are now two deaths related to influenza … then we need to do something.”
The World Health Organization’s spokesman in the region, Peter Cordingley, was empathic about Hong Kong’s response, also helping frame it as an appropriate response to people’s alarm, rather than as a reason for people to become alarmed. Here’s what he said in an AFP article headlined “Hong Kong faces anger and fear over flu”:
Peter Cordingley, a WHO spokesman for Western Pacific, welcomed the Hong Kong government’s measures.
“This is not a matter to inform the WHO. It’s a seasonal influenza; it’s nasty, that’s all it is. It’s not SARS-related or bird-flu related. It happens every year,” he said.
He dismissed suggestions the government was overreacting, saying Hong Kong was understandably on edge after it was “paralysed” by the SARS outbreak.
“It’s perfectly normal there should be a high level of public concerns.…”
And what’s wrong with Thailand warning its citizens to be careful of the flu when traveling in Hong Kong? Countries often issue such warnings about places with influenza outbreaks. Here’s the start of a 2003 entry on the website of the Hong Kong Department of Health:
Travellers urged to be vigilant against disease
The Department of Health has called on people travelling out of Hong Kong to be on guard against infectious diseases and travel-related hazards.
The plea follows reports of two recent outbreaks of influenza A in neighbouring areas.
In both cases the specific advice given was the usual flu precaution mantra – wash your hands a lot, avoid crowds, don’t share personal items, etc.
Do two deaths in what’s shaping up to be a mild flu season merit so strong a reaction?
Technically, maybe not. In communication terms, probably.
As all medical reporters know, the main problem vis-à-vis seasonal flu is apathy. So occasional episodes of over-attention are welcome and valuable teachable moments. The risk communication job during those moments is to help people make productive use of their adjustment reaction – that is, to find ways to respond empathically to people’s temporarily excessive concern as they are adjusting to scary news, and to use that concern to teach people the best ways to protect themselves.
One good way to judge the Hong Kong school closings is this: Did the school closings help draw people’s attention to the largely unrecognized seriousness of the seasonal flu, its threat to children as well as their elders? Did they help draw their attention to other precautions (besides school closings) that could help protect themselves and their children? Did more Hong Kong citizens learn about washing their hands a lot, covering their mouths and noses when they sneeze, and other commonsense ways to reduce flu risks? Will they be likelier to remember and implement these precautions in some future routine flu season that isn’t getting a lot of attention and isn’t leading to any school closings? Will they be likelier to remember and implement them in the event of a pandemic?
There are data from Singapore after SARS showing that people who felt the government was listening to their concerns took more of the recommended precautions than those who felt ignored. We hope the same will be true in Hong Kong during this upsetting but apparently normal flu season. That is, we hope the government’s “responsive over-response” will help the population learn more about influenza and how to cope with it.
Would the crisis communication lessons from SARS and bird flu have led Secretary Chow to take a more “balanced” approach?
We think some of the very best risk communication lessons from SARS and bird flu have been applied here: erring on the side of caution; sharing dilemmas about difficult decisions; responding to public concerns with empathy, even when the response is not entirely based on scientific data. Secretary Chow has also done a good job of apologizing for the inconvenience of the school closings.
In the weeks ahead – and especially when the schools reopen – he will have an opportunity to reframe the issue for future years. It probably made sense this year to close the schools, he will want to say, in large measure because so many parents were upset and worried. But it was a tough call, and we probably don’t want to close the schools every year. So now we need to think through how this decision should be made in years to come – based partly on the severity of the outbreak and the importance of reducing contagion, partly on the level of public concern and the importance of providing reassurance, and partly on people’s feedback about this year’s decision.
Unlike some of his predecessors, Secretary Chow hasn’t sounded to us like a P.R. spin doctor. In fact, he has leaned very far in the opposite direction. He hasn’t emphasized that Hong Kong is “perfectly safe” and “no different from any other city,” or that “people should feel fine about visiting Hong Kong as long as they take the usual flu-season precautions.” Instead, outsiders are making those statements, while accusing him of over-reacting. That is a terrific position for him to maintain.
In other words, Secretary Chow’s “unbalanced” precautionary approach was beautifully balanced by outside experts commenting on his decision.
A more “balanced” approach would have had a substantial downside – including the risk of belatedly discovering that it wasn’t a routine flu season after all. Imagine the outcry if Secretary Chow had decided not to close the schools, and then it had turned out that the two children’s deaths were precursors of many more, due to a mutation producing a more virulent flu strain than usual.
Even as we were preparing this response, an article appeared in the March 15 Shanghai Daily with the headline, “HK breathes a sigh of relief over flu reports.” The story notes that no extra-virulent strain has been found, a finding that “eased fears of a new, deadlier strain of flu and bolster claims by both the World Health Organization and the Hong Kong government that the recent outbreak isn’t cause for alarm.”
This finding will be all the more believable to ordinary citizens of Hong Kong in the face of Secretary Chow’s “drastic” [his word] response to their earlier concerns.
Lessons from the Westland beef recall
|Field:||Food systems consultant and educational specialist|
|Date:||March 10, 2008|
Peter, thank you for your work.
I work as a consultant for micro-scale beef and dairy producers, and also as a consultant on school health and health education in the local schools. I believe the recent Westland beef recall, and the associated media coverage, would be excellent fodder for you to write an article on.
As usual, government and industry are talking their “minimal risk” talk, while big media stirs up parental and activist outrage.
Through my consulting work at Acadian Angus, I have put together a series of resources for school administrators and food services departments, and even some curriculum materials for 8th grade teachers. These resources take a unique approach to risk communication (a term I only learned of this morning) and risk assessment. I am not finding quite the same approach anywhere else.
I argue, as you do, that “Risk = Hazard + Outrage,” but what I point out in this largest-ever meat recall, and other national-scale recalls, is that both the hazard and the outrage here are a result of the national scale of production and distribution. In other words, national-scale recalls can’t occur without national-scale distribution. The outrage in these recall cases stems from a nation that is confused about the safety of a product – even though the actions that cause the actual food safety concern are usually the result of a very few individuals, or even a single farm.
My introductory resources on this issue are available in the “Community-Based Food Procurement” section of my website, www.AcadianAngus.com.
I would be very interested in your take on this national issue, and the approach I am taking.
Thanks again for your very helpful website!
For readers who missed the Westland recall, in October and November of 2007 the Humane Society of the United States gathered video evidence of “downer” cows (that is, cows that couldn’t walk on their own) being horribly abused at a Westland/Hallmark slaughterhouse in Chino, California. On January 30, 2008 the Humane Society released the video to the Washington Post and then posted it on its website. The resulting outcry immediately forced the slaughterhouse to shut down.
Two weeks later the U.S. Department of Agriculture (USDA) forced the company to recall 143 million pounds of meat – the largest beef recall in U.S. history. According to the USDA, its investigation had revealed regulatory violations that made the meat “unfit for human consumption.”
National versus local food distribution networks
You’re obviously right that a local distribution network can’t provoke a national recall – or a national food safety catastrophe. Of course you’re a local food producer yourself, but the fact that you have an axe to grind doesn’t make you any less right. Like many people, I was amazed and a little aghast at how many supermarkets, school systems, and packaged food products ended up with beef originating from this one Westland source.
It’s not the first time I’ve had my nose rubbed in the implications of our centralized food distribution system. I do some work with the National Center for Food Protection and Defense, a DHS-funded research center at the University of Minnesota. We’ve been modeling various sorts of terrorist attacks on the U.S. food supply (and figuring out how to talk to people about them). It’s a little daunting to realize that if terrorists can manage to contaminate one lettuce field just before it’s harvested, a few days later families in 40-odd states will start getting sick.
While it’s certainly true that a more decentralized system would limit the scope of any single attack, accident, or recall, it doesn’t follow that a more decentralized system would lead to fewer consumer illnesses. Are mom-and-pop farms less likely to end up putting something dangerous into the food supply than huge agribusinesses are? I don’t see much reason to think so. Certainly small farms are less (or should I say “even less”) carefully watched by government regulators. The economic pressures on small family farms are different from those on corporate farms – but no less onerous, I would think. I’m sure the best mom-and-pop operations are more responsible than Big Ag; I’m equally sure the worst ones are less responsible than Big Ag.
If anything, a huge number of tiny food safety problems would be harder to cope with – harder to prevent and harder to respond to – than a tiny number of huge ones. So from a hazard perspective, there’s not much to be gained by decentralizing the food supply.
From an outrage perspective, a huge number of tiny outbreaks or recalls would surely lead to less news coverage, less public anxiety, and less regulatory response than a tiny number of huge ones. Once in a while a small event catches the fancy of the media, the public, and the government, but most small events fly below everybody’s radar except the few people directly affected. It’s the big events, like Westland, that get our attention.
So if you want to keep up the pressure for improvements in food safety, centralization helps. And if you want to keep the public from worrying about the slipups, decentralization helps.
Of course there are lots of good reasons for patronizing local farms, and lots of good reasons for preferring policy options that encourage a decentralized food distribution system. But I doubt that food safety is one of them.
Other lessons of the Westland recall
What else can we learn from the Westland recall?
Recalls aren’t all that effective, except as a way to punish one company and warn the others.
In the U.S., at least, food recalls are pretty good at getting back whatever hasn’t been sold yet. They’re not very good at getting back what has already been sold. And of course they can’t get back what has already been eaten. This one was no exception. News stories about the amount of beef “recalled” are talking about how much they wanted back – not how much they got back.
The safety of downer cows remains an issue.
Everywhere in the world, farmers are understandably tempted to slaughter and sell sick animals instead of keeping them out of the food supply. Depending on what made the animal sick, the danger from eating it can be substantial or tiny or nonexistent. Without tests, you can’t always know what made the animal sick – or, indeed, whether it is sick at all; a downer might simply have slipped and broken its leg on its way up the chute.
The USDA’s general policy for cows is no downers in the food supply. If a cow can’t walk, it can’t be slaughtered. But there’s an exception for cows that arrive at the slaughterhouse still on their feet, pass inspection, and then go down. Those cows can still be slaughtered if a USDA vet reinspects and passes them. The Westland recall was based on a USDA finding that some of the downers at the Chino slaughterhouse weren’t reinspected as the rules required. Technically and legally, the awful mistreatment documented in the Humane Society video – moving downers along with forklifts, electric shocks, and high-pressure water hoses – had nothing to do with the recall.
Downers can be “nonambulatory” for all sorts of reasons, but the reason that gets people most anxious is mad cow disease (bovine spongiform encephalopathy or BSE). This is an extremely horrible and extremely rare disease. Since it is concentrated almost exclusively in brain and nerve tissue, the main bulwarks against it are not feeding animal brains and spinal cords to cows, and not feeding cow brains and spinal cords to people. Still, it’s true that a cow with BSE will eventually be unable to walk. So by definition it’s got to be true that a downer at a U.S. slaughterhouse is very slightly likelier to have BSE than a cow that’s walking, and the meat from that downer is ever-so-slightly likelier to carry BSE than meat from a cow that walked to slaughter.
Disgusting video works.
What happened at the Westland slaughterhouse is much more an animal welfare issue than a food safety issue. The Humane Society surely knew this from the outset – or it wouldn’t have felt okay about waiting a couple of months to release its video.
But all those Members of Congress who have since claimed to be shocked, shocked that there are downers in the U.S. food supply are onto something. Disgusting video works.
People really don’t want to eat meat from animals that have been tortured. Animal rights groups argue that all animals raised for food are tortured – that if you’re really against animal torture you need to think hard about becoming a vegetarian. Most of us aren’t willing to go that far. We have two rules of thumb. First, we don’t want to eat meat from animals whose lives and deaths have been even more inhumane than normal. And second, we don’t want to eat meat from animals whose lives and deaths we have been forced to think about. On both grounds, the Humane Society video put us off Westland meat.
That video did more than force a recall of 143 million pounds of almost-certainly-safe meat. It did more than force a large slaughterhouse to close, probably for good. It forced the USDA to give serious consideration to ways of improving government surveillance of the food supply.
The USDA’s mea culpas were ubiquitous, both in Congressional testimony and in media interviews. “We know we have to do better,” Undersecretary for Food Safety Richard Raymond told Joe Nocera of The New York Times. “We are embarrassed on our watch.”
I don’t know what that’ll mean over time. There is talk of putting more inspectors on site, though it’s not clear whether the American taxpayer really wants to pay for that. There is also talk of using video cameras for remote inspections – which might be able to accomplish the task at lower cost. Picture a centralized USDA inspector who’s watching dozens of monitors at once, looking for signs of illness or mistreatment. Picture slaughterhouse managements and employees who know that there are lots of cameras on site, but don’t know where all of them are.
Of course the Westland scandal could also blow over without much change. That has happened before, Lord knows. Still, it wouldn’t be too surprising if a horrific video of cows being tortured ultimately led to genuine improvements in the U.S. food safety inspection system.
Vaccination and autism: Responding to the Hannah Poling case
|Field:||State government policy director|
|Date:||March 9, 2008|
What I would add to this site:
More and more health departments, particularly in agricultural states, are facing intense challenges to best practices – demands to stop using monochloramine to disinfect surface water, to eliminate fluoride, or to allow increased sales of raw milk, to name a few examples. Even well-told science seems to carry less and less authority in these debates.
Is there no such thing as a neutral fact anymore? With the trustworthiness of government institutions in doubt, people construct their own understanding of hazards in the context of their values and experience. Another outcome is that people are reluctant to accept anything other than zero risk.
It would be great to engage in more national dialogue about how to create shared meaning and trust on environmental health issues.
Regarding current news coverage of vaccine safety and autism, I am interested in your guidance on the following.
- What messages might best guide parents who are hearing about the recent Hannah Poling case on the news while at the same time trying to decide whether to vaccinate their kids?
- More broadly, how can the public health community communicate the importance of making a distinction between science-based decision-making and intuition-based decision-making? In other words, none of us makes decisions based on science alone. If I am car-shopping, I may know all there is to know about model A and model Z, but I wind up buying model A because I really, really like it. The last leap in decision-making is often like that, “irrational.”
- How can we effectively combine the message from the CDC’s Dr. Gerberding, “We need to keep an open mind about the causes of autism,” and messages I’ve heard from colleagues about the need to publicly defend vaccine safety absolutely, without allowing for any possibility of exceptional cases?
I have written about vaccination and autism twice before in this Guestbook, in July 2005 and again in October 2007. A leitmotif in both discussions was that when the science is 90% or 99% or 99.9% on your side, that’s what you need to say. Saying instead that it’s 100% on your side is likely to make any exception – an anomalous case or a discrepant study – loom much larger than it should, especially in the minds of people who don’t necessarily trust you in the first place.
The Hannah Poling case has divided vaccination proponents. Some want to hold fast to the claim that there is absolutely no connection between vaccination and autism. Others want to acknowledge that there may (or may not) be rare exceptions to that generalization, and that Hannah’s case may (or may not) be one such exception. I think the second group is right – right in two ways. They are right that science hasn’t eliminated all possibility of an occasional link between vaccination and autism, perhaps specifically in children with mitochondrial disorders like Hannah’s. And they are right that admitting as much is better risk communication than denying it – more honest, more credible, more sustainable, and ultimately more conducive to getting kids vaccinated.
The 99.9% Solution
In the U.S. and the U.K., the controversy over vaccination safety has focused largely on thimerosal, a preservative that used to be used in many vaccines and is still used in some flu vaccines. In a long string of studies, scientists have searched diligently for a statistical link between thimerosal and autism – and they have failed to find convincing evidence of such a link. That adds up to a strong case that thimerosal doesn’t cause autism. It’s not quite “proof” that thimerosal doesn’t cause autism – partly because there are a few discrepant studies whose authors say they found such a link; partly because it’s impossible to prove a negative; and partly because science never proves anything once and for all. But we know, really know, that the link between thimerosal and autism is either nonexistent or extremely weak.
What do I mean by an “extremely weak” link? It remains possible (and always will) that under some rare set of circumstances – too rare to show up even in very sensitive statistical studies – thimerosal might contribute to autism. There is no convincing scientific evidence so far that thimerosal contributes to autism even in rare cases, but it’s possible. And what if it does? It would be too rare an event to have any meaning as a guide to overall public policy or parental decision-making.
In the face of an impressive body of science that fails to show a link between thimerosal and autism, and in the face of the elimination of thimerosal from nearly all vaccines anyway, many vaccination opponents have moved to a different argument. It’s not necessarily the thimerosal in vaccines, they now suggest, it’s vaccination itself that can trigger an immune response that, in some children, results in autism. Some public health professionals get frustrated when their critics shift their grounds this way. But of course it’s perfectly reasonable – it’s good science! – to abandon a hypothesis that looks increasingly unlikely and move on to one that may still seem plausible. If you set aside the thimerosal-specific studies, there still isn’t any compelling evidence that vaccination contributes to autism, but there is less evidence that it doesn’t.
The risk communication question here is straightforward. When the science is overwhelmingly on your side but there are still a few anomalous cases or studies on the other side, should you ignore or disparage the anomalies, or should you acknowledge them respectfully? Of course many public health professionals doubt there are any anomalies. But those who have studied the literature know there are some – and those who have studied the history of science know there are always some.
Many argue that acknowledging the anomalies “opens the door” – and ends up leaving the misimpression that an all-but-settled scientific question is a wide-open debate. I argue to the contrary. Failing to acknowledge the anomalies makes you look not just intransigent and closed-minded, but dishonest. If you’re 99.9% right and claiming to be 100% right, any compelling example of the 0.1% you denied makes you look like a liar.
The risk communication question (separate from ethics) is which works better toward the goal of getting kids vaccinated – claiming to be 100% right or claiming to be 99.9% right. There is ample theory and research to answer this question. In a nutshell: When trust in you is high and people will never encounter evidence of the discrepant 0.1%, the “100% right” claim works better; it simplifies the situation and dispels all doubt. But when trust in you is not so high, or when people are likely to get word of the discrepancies, then the “99.9% right” claim is far more sustainable.
The vaccination safety controversy is obviously in the latter category. As you point out, so are many other health and environmental controversies today.
Of course if you’ve been claiming to be 100% right all along, shifting to 99.9% is likely to cause a stir – a temporary backlash or adjustment reaction. It’s better to make the shift before a stunning exception (or possible exception) comes along than after. But if you’re caught flatfooted by such an exception, it’s better to make the shift in response than to keep insisting there are no exceptions.
Why are so many health professionals deeply committed to the alternative hypothesis that it’s more persuasive to claim to be completely right than to acknowledge the existence of exceptions? For some, perhaps, this may result from their clinical experience; their own patients may sound grateful when they’re overconfident and distressed when they’re nuanced. But I suspect something else is going on: ego. Many in public health are deeply injured by the distrust they experience. Their unconscious emotional commitment to being right – 100% right – about vaccines and autism may be stronger than their professional commitment to getting kids vaccinated.
I recently posed the following question to a group of public health professionals:
Suppose there were data that showed indisputably that more kids would end up getting vaccinated if you conceded that on rare occasions vaccination might lead to autism, and apologized for having implied otherwise, than if you kept denying that there could ever be a connection. Would you then make the concession and the apology?
There was silence. Then various people in the group advanced the argument that the so-called exceptions aren’t scientifically valid. “Okay,” I said, “but we’re postulating that you can get more kids vaccinated just by granting that the exceptions might be valid, not that they necessarily are but just that they might be. Given that assumption, what would you say?” I couldn’t get the group to agree to swallow its pride in order to vaccinate more children.
“Never say never” isn’t just a cardinal principle of science. It is also a cardinal principle of science communication. In their zeal to defend the safety of vaccination, and in their unconscious ego commitment to being 100% right, too many public health professionals have disregarded this principle for decades.
They have disregarded it, I should add, even in controversies less one-sided than the autism debate. While the claim that vaccination can cause autism is a very unlikely but not quite disproved hypothesis, the claim that the oral polio vaccine can cause polio is well-established. Of course the polio vaccine prevents far more polio cases than it causes. Wise parents in Indonesia, Nigeria, or any other country where polio is making a comeback should line up to get their kids vaccinated. But some parents in these countries have tragically succumbed to rumors that the vaccine is a Western genocidal plot. In the face of those rumors, and even before those rumors, public health leaders have generally decided that they can’t afford to be candid. So they do their best to hide the truth that the oral polio vaccine (no longer used in most developed countries) can occasionally cause polio, in order to advance the truth that the vaccine is a lifesaver. In the short term, their dishonesty may save lives. In the long term, it will inevitably undermine their credibility, damaging public acceptance of vaccination and of public health generally.
It’s not surprising that a profession that willingly lies about the safety of the oral polio vaccine, for good reasons, would for the same good reasons overconfidently assert that vaccination has absolutely nothing to do with autism, period, end of story.
The Case of Hannah Poling
And then along comes Hannah Poling. Hannah apparently had (or acquired) an undiagnosed mitochondrial disorder, a genetic condition that typically shows no symptoms until the sufferer is stressed – at which point a variety of awful things can happen, some of which are in the autism family. Because of a series of ear infections, Hannah’s pediatrician skipped some of her normal childhood vaccinations, then gave her five shots containing nine vaccines, all at once to catch up. At that point, this socially and intellectually thriving 18-month-old child started to deteriorate, and she has now been diagnosed as autistic. Her parents filed a claim with a federal program designed to compensate the victims of severe vaccination side-effects. The lawyers for the Department of Health and Human Services decided not to contest the case, and agreed that Hannah should receive compensation without any sort of evidentiary proceeding.
So what happened here? Maybe it’s a coincidence that Hannah deteriorated socially and intellectually right after her multiple simultaneous vaccinations. Or maybe the vaccinations were among the stressors that caused her mitochondrial disorder to turn symptomatic. It’s difficult, maybe impossible, to tell which.
This is a long way from a scientific finding that vaccination causes autism. It’s a decision by lawyers to settle a lawsuit because they considered it possible that getting so many vaccinations at once might have triggered a particular child’s mitochondrial disorder.
What are the implications of the Hannah Poling case for a parent trying to decide about childhood vaccinations? If you know your child has a mitochondrial disorder, you might want to talk with your pediatrician about which vaccinations are so important that you should go ahead and get them anyway, and which are discretionary enough that you might want to hold off. But apparently most mitochondrial disorders are undiagnosed early on. If you’re unaware of any such disorder, there aren’t any implications.
Or maybe there are. A small number of very concerned parents could worry that their child, too, might have a mitochondrial disorder – or could worry more broadly about the tenuous but not impossible link between vaccination and autism. I can see such parents wanting to delay their young child’s vaccination schedule a bit, until the child’s brain development was further along and the chances of autism were lower. I can even see an empathic pediatrician okaying such a decision, even though it puts the temporarily unvaccinated child at a small additional risk of infectious disease.
But in general I am not advocating that doctors tailor their vaccination schedules to individual parents’ concerns. Rather, I am advocating that doctors tailor their communications to parents’ concerns. In particular, I am advocating that the public health profession acknowledge that those concerns are not completely without foundation, that the evidence is only 99.9%, not 100%, on the side of vaccination.
(My wife, a former pediatrician, wants me to say the evidence is 99.999% on the side of vaccination. It wouldn’t change the principle: that one should always concede the existence of anomalies and exceptions.)
What are the implications of the Hannah Poling case for science? We know most autistic children don’t have mitochondrial disorders – or at least we know they don’t have any of the mitochondrial disorders we know how to diagnose. We don’t know for sure whether the prevalence of mitochondrial disorders is higher in autistic kids than in normal kids. There are some studies suggesting that this may be so, though no evidence that vaccines triggered the autism in those children. Is most autism a result of an underlying mitochondrial disorder exacerbated by vaccination? Obviously not. Might a few autism cases come about that way? Yeah, looks like maybe so.
The Hannah Poling case was a big deal mostly because the public health establishment has been so insistent on its “100% right” strategy. Public health communications about vaccination safety led the public to believe that a case like Hannah’s would never happen. Well, it happened. So either the experts knew it might happen – in which case they misled us on purpose. Or they didn’t know it might happen – in which case it’s important new data requiring something of a paradigm shift.
The truth is the former. They always knew people, especially people with unusual diseases, might have rare bad reactions to vaccinations. If they’d been saying so all along, the public’s reaction would have been, “oh, yeah, looks like this might be one of those rare, horrific exceptions they’ve been telling us about.”
The overreaction to Hannah’s case on the part of the public and the media isn’t the fault of the public or the media. It isn’t even the fault of the anti-vaccination activists, who naturally seized on the case as “proof” that vaccination causes autism and that the establishment has been covering up the evidence. (They’re almost completely wrong about the former, but at least partly right about the latter.) It’s mostly the fault of the public health profession, for claiming to be 100% right in the first place.
Importantly, that overreaction already seems to be dissipating. It is dissipating in large measure because the U.S. Centers for Disease Control and Prevention did an excellent job of explaining what the case meant – and what it didn’t mean – in a telebriefing for reporters on March 6. CDC Director Julie Gerberding was as “adamant” (her word) as ever that parents should vaccinate their children. But she and the experts she shared the microphone with didn’t suggest that Hannah’s case was just lawyers perpetrating junk science, or that Hannah’s vaccinations had nothing to do with her symptoms. Instead of hewing to the “100% right” line, they explained that Hannah’s case was exceptional.
Particularly impressive was Dr. Edwin Trevathan, a CDC pediatric neurologist, who said with total candor that scientists simply didn’t know whether vaccination could sometimes be the stressor that causes a mitochondrial disorder to turn symptomatic.
Of course anti-vaccination activists will continue to take the position that the Hannah Poling case proves that vaccination causes autism. But the CDC telebriefing gives me hope that pro-vaccination activists will not continue to take the position that vaccination can never, ever contribute to autism.
Read what Hannah Poling’s father said in an interview with Kathleen Doheny of WebMD Medical News:
“I want to make it clear I am not anti-vaccine,” he says. “Vaccines are one of the most important, if not the most important advance, in medicine in at least the past 100 years. But I don’t think that vaccines should enjoy a sacred cow status, where if you attack them you are out of mainline medicine.
“Every treatment has a risk and a benefit,” he says. “To say there are no risks to any treatment is not true.
“I don’t think the case should scare people,” Poling adds. “Sometimes people are injured by a vaccine, but they are safe for the majority of people. I could say that with a clean conscience. But I couldn’t say that vaccines are absolutely safe, that they are not linked to brain injury and they are not linked to autism.”
Even as Hannah’s parents claim that her multiple vaccinations damaged her health, they are prepared to concede that this was an unusual outcome, and that childhood vaccinations do far more good than harm. A public health profession that was committed to the “99.9%” right approach would think seriously about asking the Polings to star in a series of vaccination ads.
You can’t hector people into pandemic preparedness
|Field:||Pandemic preparedness activist|
|Date:||February 29, 2008|
In your article on “NIMBY,” you say that one of the reasons why people often object to a new development in their neighborhood is process. “I disapprove of your process,” you quote a hypothetical NIMBY as saying. “You’re telling when you should be asking.”
Is this not also true of pandemic preparedness? Shouldn’t we be asking people if they are ready and not telling them to be ready?
I don’t know, maybe I am just trying to wring the sponge dry, grasping at straws as I look for ways to reach others about pandemic preparedness. But when I read your article this jumped out at me.
People don’t respond well to being told what to do. “You need this, you need that, read this, read that … buy, buy, buy.” This could be just my way of seeing the world, but it seems like it’s taking a flawed Madison Avenue marketing strategy and applying it to trying to get people ready for a disaster.
Thanks for your article, though. There is a casino trying to make its way into our area. Maybe after the pandemic I will find my way over to that issue.
I have learned a lot from you over the years.
I think you’re right that we tend to lecture people about pandemic preparedness when we ought to be trying to provoke a dialogue.
As you know all too well, it’s hard to provoke a dialogue with people who start out apathetic. The rationale for monologue in what I call precaution advocacy is a lot stronger than it is in an outrage management situation like a NIMBY controversy. NIMBYs have a lot they want to say to the developers who are trying to put a power plant or a halfway house into their neighborhood. By contrast, people who aren’t paying attention to the risk of a pandemic mostly just want to be left alone.
But you’re still right!
Even apathetic people aren’t usually apathetic about the idea that they’re apathetic. They almost always have something to say about that – usually something pretty outraged.
Some of the people we call apathetic haven’t yet given much thought to pandemics one way or the other. Some of them got a bit worried about pandemics back when the media coverage was heavy, and then let the issue go when the reduced coverage implied (wrongly) that the risk must have gone down.
Telling either group what fools they are to ignore pandemic preparedness is surely not the best way to motivate them to consider or reconsider the issue.
Very few people feel that they have too few problems in their lives and are looking for more things to worry about. So people naturally get weary and defensive when we try to hector them into adding yet another problem to their worry agenda. At the very least, we should temper our hectoring with some awareness of the burden we’re trying to impose on them: “I hate to bother you with yet another problem, but have you considered pandemics…?”
Still, “you need this, you need that, read this, read that … buy, buy, buy” isn’t necessarily an ineffective message for people a little further along the path toward becoming a prepper. (For those new to the jargon, a “prepper” is a preparedness devotee. We are preppers, to a degree, at our house.)
Between apathy and committed action comes a period when people are interested, maybe even a little worried, but still really unclear about what they should do about the problem. They may look apathetic because they aren’t doing anything. But people who aren’t doing anything because they don’t know what to do are different from people who aren’t doing anything because they don’t care. The latter group will feel harassed and guilt-tripped by a list of things they should be doing about pandemics. The former group will feel grateful for the list.
(If the list is unbearably long and burdensome, of course, it’ll feel oppressive to anybody but a confirmed prepper.)
I know you know all this already, because you have put it to good use in your own efforts to spread the word about pandemic preparedness. I read your November 2007 report in the Flu Wiki Forum about your day at a Kmart entrance, telling shoppers all the things you had bought at Kmart to help you get ready for a pandemic. Let me close by quoting what you said there:
What an amazing experience! I have never had so many people say thank you, God bless you, this is wonderful … and they really meant it…. There were only a handful of people who walked on past…. I have to say that I sensed a hunger in some people. They wanted to know.
“Standing Firm” responds:
Yes, I am needed in my town. Although people don’t really appreciate that yet, they are beginning to – which both excites and scares me.
My Kmart experience was apparently frowned upon in some circles. What were they worried about? Panic??? Hardly likely – even standing in front of a major retailer. I caught people’s eye for a fleeting second, and sometimes that is the seed. Someone will eventually come along and water, maybe even me. People really did appreciate the information and handouts so it was a watershed event for me personally.
I think the article in the newspaper did more good, but first someone had to generate the news story.
What I have sometimes found myself doing is trying to take the pandemic problem onto my shoulders. When we do this, people will not own the problem themselves. They will allow us to carry the weight so they don’t have to.
How did I communicate the risk successfully? I demonstrated concern by acting on my concern. It was the evidence of action that told people something was up. It was not my words that grabbed them.
The hands-on display at Kmart – that visual worked, and no posters in the background explaining the science were necessary. Because I was acting on what I believed, and acting publicly, people trusted what I knew.
Any evidence of a Madison Avenue approach will quickly destroy the trust implicitly needed in this type of communication.
I’m not as down on Madison Avenue as you are. In a consumer society, the people who know the most about the science of persuasion are likely to be advertising professionals. They’re going to use their expertise to sell widgets. Why shouldn’t we use it to sell pandemic preparedness?
Still, I agree with you that even a whiff of Mad Ave is off-putting. Hectoring people doesn’t work, and in the long run neither does slick seduction – at least not if it shows.
Alberta’s oil royalty: The industry’s risk communication mistakes
|Date:||February 21, 2008|
During a recent workshop, you briefly described your views on communication from the oil and gas industry during Alberta’s royalty review. Could you please elaborate?
First some background for readers unfamiliar with the issue.
The Canadian province of Alberta has long depended on oil and gas revenues as its principal source of prosperity. The industry is the province’s biggest taxpayer and employer; since the provincial government owns most of the oil and gas, the industry also pays a substantial royalty for the privilege of harvesting it.
The government recently took another look at its royalty rates. An independent panel said the rates were too low, and in October 2007 Premier Ed Stelmach announced plans to increase them, effective 2009.
I’m not an economist, and I have no judgment of my own on whether the increase was too big, too small, or just right.
The impression I have from what I’ve heard and read is that the increase for Alberta’s hottest industry segment, oilsands, was nowhere near high enough to lead to any kind of slowdown; if the goal was to maximize the royalty (a reasonable goal), that part of the increase was probably lower than it could have been. With respect to several more “mature” and less potentially profitable segments of the industry, on the other hand, the government might have overplayed its hand. The increase there may have been high enough to motivate the industry to reallocate some of its production resources elsewhere in the world. That would reduce the total royalty for those segments. It would also cost the province jobs and taxes, and it would leave some oil and gas deposits half-harvested, with the hard-to-reach parts left in the ground.
In several January 2008 presentations to Alberta energy industry audiences, I criticized the industry’s communications during the royalty debate, accusing it of “bad risk communication, grounded in bad audience segmentation.” Inasmuch as different companies had somewhat different communication postures, this is a broad-brush assessment. But I think it’s generally accurate. And for readers not particularly interested in oil and gas royalties in Alberta, it’s also a good example of how to think through a concrete risk communication challenge.
My analysis divides the Alberta public into six segments. (Let’s leave aside Alberta government officials and publics outside of Alberta; what follows is complicated enough already.)
Audience #1: Supportive of a big royalty increase, and confident the increase won’t cause economic damage
This audience believed that the province should take a higher percentage of the profits from oil and gas development, and that it could safely do so without having a perceptible effect on industry economic activity. That was exactly the position taken by the government’s independent panel, so it was obviously a reasonable position for a non-expert citizen to adopt, although the industry considered it dangerously mistaken.
Industry communicators decided (or assumed) that Audience #1 was by far the most important segment of the public. The industry devoted nearly all its communication efforts to “educating” this audience – that is, to rebutting the independent panel’s conclusions and arguing that the proposed royalty increases would inevitably lead the industry to look elsewhere for more profitable opportunities.
The rebuttal made several risk communication mistakes, including three very important ones:
- The rebuttal failed to distinguish economically marginal oil and gas fields from the Athabasca oilsands. The oilsands were (and are) promising enough to withstand a substantial royalty increase. Anyone who knew anything about the oilsands knew that the industry wasn’t about to abandon that deposit. Implying otherwise cost the industry’s more general economic claims a lot of their credibility.
- The rebuttal failed to acknowledge a more basic credibility problem. “Of course we were bound to claim that the royalty increase was too high, no matter what,” industry statements should have conceded. “And we were bound to warn that any increase endangered the prosperity of the province. We understand that nobody is going to take our word for it on either point.”
- The rebuttal’s tone was wrong. Too many industry statements came across as threats or bluffs, not just information. Among other things, it would have helped for the industry to acknowledge that what it was saying might sound to some like a threat or a bluff.
But the fundamental problem here was the industry’s excessive focus on Audience #1. I don’t have survey data on the relative size of my six audiences. But I would bet that Audience #1 was a smaller and less important segment of the Alberta public than several others on my list.
Audience #2: Opposed to a big royalty increase for fear of economic damage
This audience agreed with the industry view that raising the royalty substantially would reduce industry economic activity and might devastate the provincial economy.
Obviously, there wasn’t any need for the industry to convince this audience of anything. Communications aimed at Audience #2 were rightly intended to mobilize supporters to communicate more with others.
Of course now that the royalty increase has been announced, Audience #2 is a key audience. People employed by support industries whose jobs may now be threatened, for example, are understandably outraged – not just at the government for raising the royalty, but at the industry for losing the argument. The same goes for communities whose economic base may be about to disappear.
Audience #3: Supportive of a big royalty increase despite agreeing that economic damage will result
Audience #3 agreed with Audience #2 that a substantial royalty increase could reduce industry economic activity, but thought that on the whole that would be a good thing. This audience included environmentalists whose opposition to further oil and gas development was principled and unalterable. Importantly, it also included ordinary Albertans who had experienced several decades of boom and were unhappy enough with its downside that bust had begun to look like an attractive alternative.
There was relatively little for the industry to say to Audience #3 with regard to the royalty controversy. These Albertans supported very high royalty rates – the higher the better – precisely because they would deter further oil and gas development. So industry warnings about the danger of reduced oil and gas development were beside the point.
I’m not implying that opponents are never an appropriate target for communications, or that the Alberta oil and gas industry should give up on communicating with its critics. Some people in Audience #3 may have misimpressions about the negative impacts of oil and gas development – and over time it may be possible for the industry to correct those misimpressions. More promising is the opportunity for the industry to say that its critics’ concerns are legitimate, that it is taking those concerns on board, and that it is putting policies into place to reduce the negative impacts of oil and gas development. All this applies not just to concerns about environment and health, but also to concerns about lifestyle and other downsides of an oil and gas boom.
But the royalty debate was not a promising venue for addressing these concerns. Audience #3 was simply using the royalty issue as an opportunity to advance its more fundamental goal: slowing or reversing oil industry growth in Alberta. So far as the royalty debate was concerned, people who looked forward to reduced industry economic activity were basically the opposition, not really an audience for industry communications.
Audience #4: Ambivalent about the boom, and therefore about a big royalty increase
This audience was torn between the positions of Audience #2 and Audience #3. Audience #4 agreed that a substantial royalty increase could produce a significant reduction in industry economic activity, and was of two minds about whether that was a good thing or a bad thing.
I suspect this audience was crucial – and is still crucial. On the one hand, these Albertans are benefiting from Alberta’s oil boom. And they know it. On the other hand, they are suffering from Alberta’s oil boom. And they know that too. They experience the traffic jams in the streets of Calgary, prices shooting through the roof throughout the province, a labor shortage that makes it hard to find a mechanic or a fast food restaurant, more and more of the landscape bearing the stigmata of oil and gas development.
Whenever people are ambivalent, communication happens on a seesaw. That is, ambivalent people focus on the half of their ambivalence that everybody else seems to be ignoring.
How did the seesaw affect Audience #4? The industry emphasized that a royalty increase and a resulting industry slowdown could have disastrous economic impacts on the province. That propelled Audience #4 to the opposite seat on the seesaw: the attractions of a slowdown.
An oil industry that understood the dynamics of the seesaw would have said, in essence: “Maybe it wouldn’t be such a bad thing for us to slow down for a while. That’s what we’re hearing from a lot of our neighbors. Many people tell us that despite its economic benefits they’re getting weary of the oil boom….” Such a message would have been extraordinarily counterintuitive to industry communicators, of course. But it could have changed the royalty debate in a big way, because it could have helped Audience #4 bear in mind the costs of a slowdown: “Are you kidding?” Audience #4 might have responded. “We don’t want a recession in Alberta!”
Audience #5: Outraged at the industry, and therefore supportive of a big royalty increase
Outraged people are more interested in “getting even” than in “getting rich.” Albertans who were outraged at the oil industry were thus more deeply committed to punishing the industry than to benefiting Alberta. So they basically ignored the question of how the royalty would affect Alberta’s economy. They supported a royalty increase in the hope that it would damage the industry.
It’s important to understand the distinction between outrage and substantive opposition – in terms of this analysis, the distinction between Audience #3 and Audience #5. Obviously the two overlap. Substantive opponents often get outraged, and outraged people almost always glom onto substantive arguments. But there’s still a fundamental difference between people who disagree with you and people who hate you – between people whose considered opinions put them on the opposite side from you about a particular policy and people who are on the opposite side because they will never, ever be on your side about anything. And there’s a fundamental difference between discussing the merits of the case with a substantive opponent and ameliorating the antipathy of an outraged stakeholder.
The Alberta oil and gas industry has lots of survey data (not to mention firsthand experience) documenting the high and growing level of public outrage.
Industry leaders find that outrage baffling. They see themselves as public-spirited, generous benefactors of the province. They know that oil companies are widely hated elsewhere, but at home in Alberta they have always felt admired. Losing the public’s admiration feels awful. I haven’t spent that much time with the industry top brass in Alberta. But it isn’t hard to sense the levels of emotion: under the calm, worry; under the worry, irritation; under the irritation, hurt.
Understandably, industry leaders are having a tough time absorbing the reality of being widely disliked even on their home turf. So they are having a tough time recognizing the need to acknowledge Albertans’ grievances – and then to find ways of mitigating those grievances. And they’re having a tough time coping with their own worry, irritation, and hurt – their outrage at their neighbors’ outrage.
During the debate over royalties, Albertans in Audience #5 masqueraded as members of Audience #1 or Audience #3. That is, they tended to claim that increasing the royalty would be good for Alberta, either because it wouldn’t reduce industry economic activity or because it would. Actually, they didn’t care much whether the royalty increase was good for Alberta or not, as long as it was bad for the industry.
The great irony here, of course, is that an excessive royalty is far likelier to hurt Alberta than to hurt the multinational oil companies, which can easily transfer economic assets elsewhere. The efforts of Audience #5 to punish the oil industry will fail. Its desire to punish the oil industry will remain strong until its outrage is addressed.
Audience #6: Apathetic about the royalty controversy
Audience #6 wasn’t really paying much attention to the royalty controversy.
In any controversy, the apathetic audience is almost always the largest audience. But it’s not usually an important audience, because it’s content to sit the issue out.
The danger is that a message aimed at some other audience can provoke the apathetic audience into taking a position … on the other side.
That happened in Alberta’s royalty debate. Industry warnings of possible economic disaster were aggressive enough to attract the attention of millions of Albertans who don’t normally follow public policy issues. And the warnings were self-serving and unconvincing enough to turn that attention into opposition. Of course plenty of Albertans in Audience #6 remained steadfastly apathetic. But those who got interested became mostly opponents. Some discovered their ambivalence and joined Audience #4. Others discovered their outrage and joined Audience #5.
The long-term stakes for oil and gas development in Alberta extend far beyond the royalty debate. On one specific issue after another, the industry will be making its case to a population that includes some who are substantively opposed (Audience #3), many who are outraged (Audience #5), and many more who are ambivalent (Audience #4). The industry already knows how to debate substantive issues with its substantive opponents. If it is to prosper, the industry will need to learn how to diagnose outrage and ambivalence, and how to address them.
The industry’s failure to address stakeholder outrage was a key problem in the royalty controversy. It is still a key problem today.
Honesty as strategy
|Field:||Head of communications, Environment, Health, and Safety Department, University of North Carolina at Chapel Hill|
|Date:||January 19, 2008|
|Location:||North Carolina, U.S.|
I have been involved in marketing and communications for thirty years, and have represented hospitals, manufacturers, banks, etc. I have always tried to get my clients (and employers) to be proactive and as transparent as possible in communicating with the public. It is always a tough sell, because invariably companies and organizations just won’t cross that line for fear of stirring up too many questions and problems.
I have also been a consultant for a number of universities in pandemic emergency planning and invariably I can’t get them to develop any extensive communications program with their internal publics of students, staff and faculty, much less their external publics of parents and alumni. I think it would be so beneficial for them to be seen as a proactive university in helping all of their publics plan for such an event, yet they are so afraid that they will create too much of a reaction.
So, in preparation for an in-service day presentation on communications for our department, I read your book on outrage. I was very pleasantly surprised to see such an emphasis on telling the truth and inviting “outsiders” into the issue and the conversation.
I also have a degree in theology, and so for a long time I thought I might just be imposing my moral and ethical beliefs on my companies and employers in trying to get them to act proactively and honestly with their publics. So it was very nice to know that someone as highly regarded as you promotes honesty and openness, not only as a moral issue, but as the most effective communications method.
In addition to that, I found your analysis of risk and outrage quite helpful in risk communications.
So thank you very much for all of your insight and work. You have made a significant contribution to businesses and organizations, as well as the citizens of this country.
Thank you for your comment – what a lovely way to start the New Year!
Clients (and friends) do sometimes accuse me of being a closet ethicist. And occasionally during a seminar break someone from the audience will ask me if I am Born Again, or a Buddhist, or a practitioner of whatever religion that person adheres to and sees reflected in my risk communication advice. As a non-practicing Jew, I always experience this as a huge compliment. But I continue to argue that risk communication principles – transparency, responsiveness, and the rest – are grounded not in ethics or religion but simply in understanding what works in today’s combative communications environment.
Of course it’s possible to come up with an exception, a situation in which the morally right course of action diverges from an organization’s self-interest (even its enlightened, sustainable, long-term self-interest). Such exceptions do occur. But it’s remarkable how seldom!
Some of my favorite riffs on the practicality of doing the right thing…
- Secrecy, I tell my clients, is an extremely risky strategy for coping with information that reflects badly on an organization. The rule of thumb is that such information does roughly twenty times as much harm if revealed by an outsider (a journalist, a whistleblower, or an activist) as if revealed by the organization itself. It follows that secrecy pays for itself if and only if the organization can sustain a 95% success rate at keeping secrets. If an organization successfully hides 98% of its secrets, then one-time-in-50 a secret will come out and do twenty times as much harm – leaving the organization still ahead of the game. But if an organization successfully hides only 90% of its secrets, then one-time-in-10 a secret will come out and do twenty times as much harm – and the organization would be better off revealing all its own secrets. At least in the developed world, I argue, very few organizations today can manage a 95% secrecy success rate.
- Similarly, I often point out that exaggeration works for activists but not for corporations. There is a good reason why this is so. If an activist group exaggerates how dangerous something is, that helps protect us. It’s a conservative bias, like a smoke alarm that’s oversensitive and sometimes goes off when you’re cooking dinner. But if a company exaggerates how safe something is, that endangers us – it’s like turning off the smoke alarm when a serious fire is starting. So the public gets angry at corporate exaggeration and shrugs off activist exaggeration. Since exaggeration works well for activists, it requires ethics for them not to exaggerate too much. Since exaggeration backfires for companies, ethics shouldn’t be necessary. Intelligence should be sufficient.
- The entire risk communication approach I call outrage management arose as a business recommendation, not an ethical one. When companies (or government agencies, universities, etc.) do something that arouses outrage in their stakeholders, they pay a quantifiable price associated with the damage to their reputations. Customer loyalty goes down, shareholder appeal goes down, employee morale goes down, etc. But of course outrage management has a price, too – paid partly in cash, partly in organizational discomfort and ego damage. If the cost of outrage management is higher than the cost of outrage, I tell my clients, go ahead and endure the outrage. But track its cost carefully, because it’s probably going up. Activists are already doing the essential work of changing the payoff matrix by making outrage more costly to your organization. When you see that the two curves are going to cross soon, that stakeholder outrage will soon be costing you more than outrage management would cost you, then think about acknowledging, apologizing, sharing control, sharing credit, and the rest.
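The break-even arithmetic in the secrecy rule of thumb above can be sketched numerically. This is just a minimal illustration of the text’s own figures – the twenty-times multiplier and the sample success rates come from the passage; the one-unit cost of self-disclosure is an arbitrary scale:

```python
# Expected-harm sketch for the secrecy rule of thumb.
# Assumption (from the text): a secret revealed by an outsider does
# roughly 20 times the harm of the same secret revealed voluntarily.
# We set voluntary self-disclosure to 1 unit of harm per secret.

OUTSIDER_MULTIPLIER = 20
SELF_DISCLOSURE_HARM = 1.0


def expected_harm_of_secrecy(success_pct: int) -> float:
    """Expected harm per secret if the organization tries to hide it.

    success_pct is the percentage of secrets successfully kept;
    the rest leak and each leak does OUTSIDER_MULTIPLIER units of harm.
    """
    leaks_per_100 = 100 - success_pct
    return leaks_per_100 * OUTSIDER_MULTIPLIER / 100


# Secrecy "pays" only when its expected harm is below the 1 unit
# you'd incur by simply revealing the secret yourself.
for rate in (90, 95, 98):
    harm = expected_harm_of_secrecy(rate)
    print(f"{rate}% success rate -> expected harm {harm:.2f} per secret")
```

At a 98% success rate the expected harm is 0.40 units, so secrecy comes out ahead; at 90% it is 2.00 units, worse than simply disclosing; and 95% is exactly the break-even point the text identifies, where expected harm equals the 1 unit cost of voluntary disclosure.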
It’s worth noticing that the business case for honesty and outrage management is much stronger than it was a generation ago. When today’s CEOs were just starting out (in the early days of the Freedom of Information Act, long before Sarbanes-Oxley and Google), corporate secrecy was a much safer bet. It was harder for critics to learn your secrets, and harder for them to spread the word if they did. Corporate exaggeration was also more tolerated. Most important, stakeholder outrage was less costly – and thus outrage management was less advisable. I think the same sorts of corporate behaviors that arouse outrage today aroused outrage a generation ago. But outraged stakeholders found it much harder to meet up with others of like mind, much harder to win the attention of the media, and much harder to convert their outrage into powerful pressure for change.
Whereas stakeholder outrage might have been just a routine cost of doing business a generation ago, today it is often (and increasingly) unacceptably costly. So it’s outrage management that needs to be seen as a routine cost of doing business.
Take a moment to feel some sympathy for CEOs who are having trouble recognizing this sea-change in their business environment. Most companies have always paid attention to ethical principles. But business ethics used to diverge pretty significantly from Sunday School ethics. On their way to the top, today’s business leaders made what felt like necessary compromises – and learned to live with them. Now along comes risk communication, insisting that ethical principles and business self-interest are much more aligned than they used to be, that the necessary compromises of the past are simply bad business today. It’s not hard to understand why CEOs might find themselves resisting this message.
By the way, we owe the improved alignment of ethical principles and business self-interest mostly to activists and to new technologies. It’s not something business accomplished; businesses are just learning to respond to it. And it’s not something outrage management accomplished; outrage management (and risk communication) got hot because of it.
Back when the payoff matrix was different – when corporations and governments didn’t have to listen so much to their stakeholders – I found myself interested chiefly in helping activists arouse outrage. My early writing is all about how to push large institutions to be more candid, responsive, and accountable to their stakeholders, and how to build a public constituency to help with the pushing. I still do that … and it still needs doing. But now that there is a pretty good business case for corporations and governments to be more candid, responsive, and accountable to their stakeholders, I am also interested in helping them realize that this is so and learn how to meet the need.
Some of my activist colleagues from the old days think I changed sides, “jumped ship” as an old friend wrote me recently. But I see it as a two-step process. Social change works a little like a caterpillar. First activists must change the society so that powerful institutions are under increased pressure to be more candid, responsive, and accountable. Then those institutions must change their organizational cultures so they can be more candid, responsive, and accountable. Together, these two changes help produce the kind of society in which I want my children to live.
Copyright © 2008 by Peter M. Sandman