Posted: December 29, 2001
This page is categorized as: Crisis Communication
Article Summary: Accustomed to naturally occurring diseases, the U.S. Centers for Disease Control and Prevention (CDC) had a difficult time coping with the anthrax bioattacks of late 2001. Since risk communication was one of its core problems, it asked me to come to Atlanta and help. This four-part “column” grew out of my Atlanta notes. If the CDC was adjusting to bioterrorism, so was I. I put aside my usual outrage management recommendations and developed 26 recommendations specifically on the anthrax crisis. These became the basis for my (sadly) expanding work in crisis communication, and for the crisis communication CD-ROM and DVD Jody Lanard and I ultimately put out in 2004. This column was my first extended discussion of most of these crisis communication recommendations, and it is my only published assessment of the CDC’s anthrax communication efforts.

Anthrax, Bioterrorism, and
Risk Communication:
Guidelines for Action

(Page 4 of 4 – Return to page 1)

18. Give people things to do.

One of the most vivid characteristics of most people’s response to September 11 was a powerful desire to do something – something to help the victims, and something to protect themselves and their loved ones. A significant piece of the “misery” that was (in my judgment) more prevalent than fear in the aftermath of 9/11 was the absence of something to do. Action binds anxiety. This is yet another reason why it is wiser to “recruit” people’s worry and hypervigilance than to try to “allay” it.

Even more important, action reduces denial. This is partly because action reduces the need to deny; if I can do things to protect myself, I don’t need to pretend there’s nothing to worry about. But it is also important to remember that the relationship among information, attitudes, and behavior isn’t what we sometimes imagine. The common-sense view is that information shapes attitudes and attitudes shape behavior. But as Leon Festinger’s Theory of Cognitive Dissonance established back in the 1950s, it usually works the other way: Behavior shapes information-seeking, and attitudes eventually catch up. In other words, if you can get people to do things to protect themselves, they will search out information that makes sense of what they are doing (resolves the dissonance). They will thus teach themselves that the danger is real (otherwise the action would be unnecessary) and that the danger is manageable (otherwise the action would be futile). This is exactly what we want people to believe. The educational process jump-starts more easily with action than with information.

Of course some emergencies come and go too quickly for cognitive dissonance to play much of a role; there isn’t time for information-seeking and dissonance reduction while evacuating a burning building. But most crises roll out slowly enough for dissonance reduction to be an important outcome of action – and an important reason for getting people to act.

Medical logistics aside, in risk communication terms it would have helped to ask people to give blood in the days and weeks after September 11, instead of sending them home to stew in front of their TVs. And in the anthrax attacks that followed, it would have helped if there had been a mass vaccination program, or if people had been encouraged to stockpile antibiotics, or if they had been urged to spray their mail with Lysol. Something! I am not recommending that the authorities recommend any behavior they judge to be medically unwise … or even medically unhelpful. Recommending useless anodynes isn’t just an ethical problem. It’s a credibility problem. But I think it is a priority task to find things that are medically acceptable for ordinary people to do about bioterrorism.

This is something else President Bush got right in his November 8 speech from Atlanta, which urged volunteer action as an antidote to fear: “All of us can become a September the 11th volunteer,” he said.

I am turning now to some less obvious – and less confident – recommendations.

19. Give people a choice of actions to match their level of concern.

The best risk communication gives people a choice of things to do – the act of choosing is itself an assertion of control that binds anxiety, prevents panic, and reduces denial. Ideally, your menu of protective responses ranges around a recommended middle. “X is the minimum precaution; at least do X. Y is more protective, and we think wiser; we recommend Y. Z is more protective still, and we think a little excessive – but if you’re especially vulnerable or especially concerned, by all means go that extra mile and do Z.”

The X-Y-Z choice tells people how concerned you think they ought to be: the level of concern represented by protective response Y. But it also gives people permission to be more or less concerned than you think they ought to be – and for whatever level of concern they are experiencing, it prescribes a set of precautions. For those of us who are excessively fearful, you are not trying to “allay” our fears – which to the best of my knowledge cannot be done directly. Instead, you are helping us manage our fears, by giving us precautions to take that match our level of fearfulness. Paradoxically, that allays our fears.

Remember Semrad. Help the patient recognize the pain, then bear it, then put it into perspective. When the experts try to allay our fears with reassuring words, they leave us alone with our fears – alone in the dark with the monster – exactly the opposite of Semrad’s advice. Telling us what precautions we can take helps us bear our fears. Then we will be better able to put the fears into a proper, even if shifting, perspective.

I understand that many professionals consider it inappropriate, even unethical, to prescribe for a fear they deem fanciful. Even if the prescription is harmless, you may feel you have no right to dignify mistaken concerns. Obviously you shouldn’t do anything you consider unethical. But I want to make sure you understand my point. When people are fearful – whether their fear is justified or not – precaution-taking is a lifeline. And the absence of any prescribed precaution is terrifying.

Furthermore, I am not mostly talking about prescribing placebos for fanciful fears. I’m talking about prescribing real protective strategies for uncertain risks. It isn’t rare – in fact, it’s the norm – to find yourself in a situation where you know enough to give advice but not enough to insist that any behavior other than your recommended behavior is flat-out wrong. Yet some behaviors are flat-out wrong. So you prescribe a range: your recommendation surrounded by less and more protective strategies that you consider not unreasonable. You must do this, of course, in a way that does not send a double-message. “It’s nothing to worry about but take antibiotics anyway” is a double-message. “It’s probably nothing to worry about, but if you want to take antibiotics just to be on the safe side, that’s okay too” is not.

CDC Director Jeffrey Koplan provided a superb example in a November 4 interview with Wolf Blitzer on CNN. Koplan was explaining that CDC now recommended that many people who had been started on antibiotics should stop, because it had been determined that their exposure was negligible after all. But he gave them permission to ignore the recommendation and keep taking the meds for the full sixty days: “Some of them may decide that, for the sense of their own security, they want to. That’s a decision they may make.” Paradoxically, it is easier to decide to come off Cipro when CDC says, in effect: that’s what we recommend – but if you’re really worried, go ahead and stay on it.

Of course any recommendation is received in the context of the recommender’s reputation. Though not widely known, CDC is widely respected. Its reputation is as a credible, possibly even over-protective, health-oriented agency. This is the ideal positioning for the sort of prescription-in-the-alternative I am proposing. Compare CDC’s enviable reputation to an agency in a more difficult position – let’s say the Nuclear Regulatory Commission advising on the safety of a nuclear facility after a terrorist attack. The NRC is often seen, fairly or not, as an apologist for the nuclear industry. If the NRC were to say, “the reactor is probably safe, but if you want to move out of the area, go ahead,” most neighbors would see this as a double-message and a tacit admission of unacceptable risk; the evacuation might well be panicky, and the NRC would surely be crucified.

20. Harness the hypervigilance … to disentangle it from the paranoia.

Let me deal with the easy part of this recommendation first – hypervigilance. Hypervigilance is an unpleasant but necessary and appropriate response when danger is present … or close. It’s all about watching for trouble. What will happen next? What might happen next? What precautions can I take? What do I need to know in order to protect myself?

The best response to hypervigilance is to harness it, to approvingly suggest things to look for and worry about and protect against, and protocols for doing so.

One useful way to harness hypervigilance about anthrax was teaching people how to distinguish inhalation anthrax from influenza. This had a medical purpose as well, of course; it’s better if people with flu don’t rush to the hospital fearing anthrax. But a way to be vigilant for anthrax is useful in risk communication terms as well. (The fact that anthrax in its early stages is hard to distinguish from flu is part of what makes it so dreaded. If it turned your nose purple, people would know what to look for and calmly make periodic nose color checks.) I am similarly enthusiastic about the search for a clinical diagnostic tool for anthrax; a home test kit would be better still.

Hypervigilance is an appropriate response to a new, scary threat – a much healthier response than denial. That’s why you should try to harness it, not disparage it or “allay” it. But it still has high psychic costs. Even if it doesn’t mushroom into panic (which it rarely does) or flip into denial (which it often does), it gives you headaches and ulcers. It is generally conducive to misery. The point is that if hypervigilance is accepted, legitimated, and harnessed, it settles faster into something tolerable: the new vigilance.

The problem is that hypervigilance often comes intertwined with paranoia. It certainly did in the wake of September 11. In fact, one of the most lasting effects of September 11 may turn out to be an emerging American paranoia – a pervasive new awareness that someone wants to kill us. Prior to the 9/11 attacks, most Americans luxuriated in the assumption that we were largely immune to terrorism. Maybe we felt too powerful to attack; maybe we felt too lovable to attack. Whatever we felt, we learned we were wrong. “They want to kill me. ME! Someone really wants to kill ME!” No data about lightning strikes or billions of pieces of mail or lottery tickets or car crashes can mitigate this newly discovered truth. It isn’t about the odds. It’s about the fact.

Any therapist will quickly add that paranoia includes not just a sense that others mean you harm (leading to some mix of fear and injured self-esteem), but also a projection of the suppressed desire to harm them back, or first. This, too, applies in the case at hand. Our paranoia is the reality that others want to kill us, plus our projected desire for revenge, our own suppressed homicidal rage toward them. It does no good to accuse people of these feelings, of course; that only pushes them further beneath the surface. But it is helpful to acknowledge them indirectly: “Some people are so scared and angry they can hardly bear it.”

(I’m not claiming that if you think this is psychobabble you must be in denial. Maybe it’s psychobabble.)

The hypervigilance is rational. The paranoia isn’t. The only way to disentangle them is to legitimate and harness the hypervigilance. If I am told my hypervigilance is foolish and inappropriate, if I don’t have anything to do, anything to watch for, any way to protect myself, then the paranoia reigns supreme.

21. Ask more of people.

Many of the worst public statements about anthrax have implied a dichotomy: Either the risk is zero or the risk is unacceptable. This assumes, of course, that any non-zero risk is unacceptable. That is a huge mistake. Zero risk is unattainable, and so non-zero risk must be acceptable. It was clear fairly early in the anthrax crisis that environmental sampling would inevitably yield results positive for anthrax even in situations where the risk was negligible. It was essential to say so.

We must all get used to living with small numbers of anthrax spores. And we must all get used to living with the possibility of bioterrorist attack. Saying anything else is extremely harmful.

It is also insulting. People can tolerate considerable risk. We tolerate the risks we have no choice but to tolerate. More important, we tolerate the risks we choose to tolerate. Voluntariness is in fact a key factor in outrage management. The same hazard can be up to four orders of magnitude more acceptable if the risk is voluntary than if it’s coerced. Consider two ski trips. In the first, you decide to go skiing; in the second, someone rousts you out of bed in the middle of the night, straps slippery sticks to the bottoms of your feet, and pushes you down the mountain. Here’s what I want you to notice: The experience on the way down the mountain is exactly the same – sliding down a mountain is sliding down a mountain. Nonetheless, the first trip is recreation, and an entirely acceptable risk; the second is assault with a deadly weapon, and an unacceptable risk. The difference is who decides. And that’s just skiing. The amount of risk people will voluntarily accept to accomplish an important task is obviously far greater than the amount they will accept for recreation. In a war, we tolerate huge risks.

I think it is telling that some of the most objectionable statements about anthrax risk have come from officials of the Postal Service. There is a connection between Postal Service management saying that they “will do whatever it takes to assure that the mail remains safe” and the fears and resentments of postal workers. Obviously that’s not all that is going on there. Several postal workers died, after all; morale and trust in the Postal Service weren’t high to begin with; the contrast between insufficient early precautions at post offices and excessive precautions at the Capitol raised the specter of class and even race. But what is most crucial, I think, is that the Postal Service promised and is still promising to make the mail safe – instead of asking us, with regret, to bear the reality that the mail is less safe than it was.

What nobody seems to have said to the postal workers at the Brentwood facility is this: “It isn’t fair that sorting mail has turned into a dangerous job, but it has – not as dangerous as firefighting or policing or soldiering, but more dangerous than it should be. We will try to reduce the risk, but we can’t eliminate it. Nobody should take this risk unwillingly. Those who want to be reassigned will be reassigned, without penalty of any sort. We hope we will have enough volunteers to keep the station open.”

This is about Semrad again. Ask us – postal workers and postal patrons – to bear the risk. Give us permission to find it unbearable, but expect us to be able to bear it, and help us bear it. Don’t tell us it’s not there.

The authorities can help us rein in our paranoia by not over-reassuring us; by asking us to share the burden of practical worry; and by noticing that we can bear more than they seem to think. They have diagnosed us wrong, and are aiming their messages at a problem we do not have: the inability to bear danger. We are sturdy despite our fears. Ask more from us. Harness our hypervigilance.

And back to firmer ground for the finale.

22. Never use the word “safe” without qualifying it.

The dichotomy between zero risk and unacceptable risk (in Semrad’s terms, unbearable risk) plays out most frequently in the misuse of the word “safe.” Any time that word is used without an adjective or a number or a comparison, it is being misused.

I tell my clients that the correct answer to the question, “Can you assure me that X is safe?” is always no. No X is safe. Some X’s are pretty safe, or safer than Y, or safer than they used to be, or safer than the regulatory standard – but not safe.

You can’t say X is “acceptably safe” either. That’s not your call. The question of “how safe is safe enough?” is not a technical or medical question. It is a values question, answered for society as a whole by the political process, and for the individual by that individual.

The seesaw operates here. The question people are likely to ask first is: “Is X safe?” The question you want them to ask is: “How safe is X?” If you answer the first question by insisting that you cannot certify that X is perfectly safe, people will quickly back off their insistence on a dichotomous view of risk and ask you how safe it is. Then you can answer the question – or, if it hasn’t got very good answers yet, you can present what you know and share the dilemma. By contrast, if you tell people the first question is a stupid question, they’ll never get to the smart question. And if you give a falsely reassuring answer to the stupid question – “yes, it’s perfectly safe” – you will undermine your credibility and feed their paranoia.

This, too, is part of asking more of us, and of helping us bear what you are asking of us. Once again, President Bush got it right in his Atlanta speech. “The first attack against America came by plane, and we are now making our airports and airplanes safer….” “We’re imposing new licensing requirements for safer transportation of hazardous material.” “…by making our homes and neighborhoods and schools and workplaces safer….”

23. Find a non-zero standard for anthrax.

Even though risk isn’t a dichotomy, some risk choices are: you take antibiotics or you don’t; you close down a post office or you don’t. Risk management often requires drawing a bright line somewhere along the risk dimension, distinguishing the risks you will treat this way from the risks you will treat that way.

Note about “conservative”: This is a dangerous word to use except with technical or medical audiences. Professionals at CDC know that a “conservative” risk estimate or risk management strategy is one that errs on the side of caution. But to most laypeople a “conservative” estimate is a low estimate, so a conservative estimate of the size of the risk is one that probably understates the risk – exactly the opposite of what the word means to professionals. Don’t bother trying to explain this; just don’t use the word.

For anthrax, recommending where to draw the line is CDC’s job. If the agency doesn’t yet know enough to draw the line confidently, it still has to recommend a preliminary one while it tries to learn more. Ideally this preliminary standard should be conservative (see the note above); it’s less harmful to correct an overly cautious risk assessment than an overly optimistic one. And it should be tentative, grounded in dilemma-sharing; even as they specify the preliminary standard, officials should predict that it may change and they may wish in hindsight they’d set it differently. But some preliminary standard has to be there. (Don’t wallow in the tentativeness. Acknowledge it regretfully but matter-of-factly. Show us that you can bear it.)

In my work with environmental risk controversies, I’ve seen only four different ways of drawing this line that seemed to work. Here they are in order of preference to the public.

1. Draw the line at zero. This works fine if you can live with a demand for zero anthrax spores … which you can’t. So you’d better make one of the other three work.

2. Draw the line at a No Observed Effect Level (NOEL). This is the best replacement for zero; you have grounds for reasonable confidence that nobody or virtually nobody will be endangered by this number of spores – or whatever the algorithm is: this density of spores of that diameter. You are looking for a NOEL. The sooner you get one, even a tentative one, the better.

3. Draw the line at background. Apparently urban background for anthrax spores is zero, which is too bad. But what about background in, say, Crawford, Texas? It makes a kind of intuitive sense to say that you won’t make postal workers (for example) tolerate a risk any greater than people who live in an animal-handling town but aren’t themselves animal handlers.

4. Draw the line As Low As Reasonably Achievable (ALARA). People like ALARA the least of the four, but they live with it: “We’d like to get lower and we will in time, but this is the best we can do now.” ALARA has the advantage of being grounded in technology rather than in science.

I’d rather you found a NOEL. If you can’t, you’ll have to do something with background and/or ALARA.

24. Be careful with risk comparisons.

I define “hazard” as the seriousness of a risk from a technical perspective. “Outrage” is the seriousness of the risk from a nontechnical perspective. Experts view risk in terms of hazard; the rest of us view it in terms of outrage. The risks we overestimate are high-outrage and low-hazard. The risks we underestimate are high-hazard and low-outrage.

When technical people try to explain that a high-outrage low-hazard risk isn’t very serious, they normally compare it to a high-hazard low-outrage risk. “This is less serious than that,” the experts tell us, “so if you are comfortable with that, you ought to be comfortable with this.” In hazard terms, the comparison is valid. But the audience is thinking in outrage terms – and viewed in outrage terms, the comparison is simply false. “This” is lower hazard than “that,” but it is higher outrage.

An anthrax attack is high-outrage and (for most of us, so far) low-hazard. You can’t effectively compare it to a low-outrage high-hazard risk, such as driving a car – which is voluntary, familiar, less dreaded, and mostly under our own control. Even naturally acquired anthrax fails to persuade as a basis for comparison. People are justifiably more angry and frightened about terrorist anthrax attacks than about natural outbreaks, even if the number of people attacked is low. Imagine an oil company trying to justify an offshore oil spill by explaining that more oil seeps into the marine environment via cracks in the ocean floor than the company could ever manage to spill.

Even a volatile risk comparison can work if it is clear that you are trying to inform the public’s judgment, not coerce it. But too often in the anthrax crisis, comparisons were used to coerce judgment. Two suggestions should help. (1) If you’re trying to inform our judgment, the natural thing to do would be to bracket the risk: bigger than X, smaller than Y. If you only tell us it’s smaller than Y, we can tell we’re being cornered, coerced. (2) Tell us the reasons why it makes sense that we are more worried about anthrax than, say, auto accidents or flu deaths: the outrage factors; the paranoia (“They’re trying to kill me!”); the fact that this crisis may be a precursor of future, much worse ones.

Here’s another quote from my wife’s rewrite of Timothy Paustian’s web site:

You say to yourself, “I’m not that important. Terrorists wouldn’t send me a package, so why am I so anxious?” Partly because terrorism is so outrageous; because the threat is invisible; because we all feel pretty united and connected to each other right now; and because it’s so random. We all feel more threatened by a sniper on the loose than by a predictable number of car crashes. We feel there is a new kind of sniper on the loose. It’s not irrational to be more anxious than the actual numbers suggest, so don’t let anyone call you hysterical. So how to manage your worry?….

When outrage is high, risk statistics are as sensitive as risk comparisons. The same two suggestions can help. As a bad example, look at the way anthrax mortality numbers were used. Inhalation anthrax turned out to be less reliably fatal than originally thought (it’s not a guaranteed death sentence after all), but still more fatal than most diseases – more fatal than smallpox, for example. The former fact was recited endlessly; the latter fact was seldom mentioned. It wasn’t hard to detect that the numbers were being used to reassure rather than to inform.

Another example, already discussed, is smallpox vaccination risk. Those who want to make the risk seem big merge the high-risk groups with the general population; those who want to make it seem small focus on the risk to healthy adults. The generic issue here is how to choose the denominator in a risk fraction. The risk of getting killed by a tornado is far greater if your denominator is golfers during a sudden Kansas storm (a small group at high risk) than if it’s the U.S. population overall (a big group at low risk). If you’re trying to enlighten your audience, you will offer several different risk estimates. Any single estimate will reflect the source’s choice of one denominator among many. That choice is likely to have been influenced by the source’s policy preferences. And the audience is likely to sense the bias.
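To make the denominator effect concrete, here is a minimal arithmetic sketch – the numbers are purely hypothetical, chosen only for illustration, not actual tornado statistics:

\[
\text{risk} \;=\; \frac{\text{deaths}}{\text{size of the denominator group}}
\]
\[
\underbrace{\frac{5}{10{,}000}}_{\text{golfers caught in a sudden storm}} = 5\times10^{-4}
\qquad\text{versus}\qquad
\underbrace{\frac{50}{280{,}000{,}000}}_{\text{U.S. population overall}} \approx 1.8\times10^{-7}
\]

The same hazard looks more than three orders of magnitude bigger or smaller depending solely on which denominator the source chooses – which is exactly why a single estimate tells you as much about the source as about the risk.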

25. Identify and legitimate misimpressions before correcting them.

People are superb “still-thinkers.” Suppose we think “red.” You tell us the truth is “blue.” We are likely to go on still thinking “red,” or perhaps thinking “red” and “blue” simultaneously, even if they are incompatible. It is hard to correct misimpressions. Your chances are worst if you just keep insisting, “BLUE!” They’re better if you say, “blue, not red.” They’re best if you say something like this: “Lots of people think red. It makes sense that they think red, for the following reasons…. But it turns out blue is true, not red. Strange, but true.” In other words, you take on the burden of proof. You explain why you have the burden of proof – red is common sense; or red is what we have always thought before; or red is true in many more familiar situations; or red feels right; or, as is common for my corporate clients, blue seems self-serving and therefore skepticism about blue is merited. Then you argue the case for blue.

Years ago when I used to teach writing courses to undergraduates, one of the grammar problems I found most resistant to change involved the distinction between “it’s” and “its.” The only way I found I could successfully teach this distinction was by explaining why the error made sense. If “John’s” is possessive, “it’s” should be too – but the contraction came first historically, so we made an exception. (The other illogical exception, “yours,” is much less of a problem, because “your’s” isn’t a word at all.) You’re not stupid, I told my students; the word is stupid. Then they could learn to do it right.

Similarly, I have clients in the biosolids industry. “Biosolids” are human excrement, treated to make them safe (so my clients tell me) and then spread on fields as fertilizer. If you want to convince people that this is an acceptable thing to do even with food crops, you must start by acknowledging what we have all known since roughly the age of two about eating our own wastes. This is a tough sell even if you acknowledge that common sense and toilet training are arrayed against you; if you don’t acknowledge these things, it’s nearly impossible.

Several such tough sells arose during the anthrax attacks. For example, people who were tested with nasal swabs naturally assumed the testing was a measure of their own risk. It was actually part of an epidemiologic effort to map the extent of spore contamination. To correct this error, it was necessary first to identify and legitimate it: “Everybody figures this is to find out if they’re infected. It isn’t. It’s just to see if the area is exposed. We tested your shelf, your floor, your wastebasket, and your nose – and a positive in any of the four means there was anthrax around; so a negative in your nose doesn’t mean there wasn’t.” Convincing people to stick to their antibiotic regimen for the full sixty days also required telling them why they would be tempted not to (you’re not feeling sick; the side-effects of the medication are driving you crazy; lots of people nearby weren’t even given the antibiotics) – before telling them why they shouldn’t give in to the temptation (the spores can be inactive for weeks and weeks, then come to life and make you sick).

26. Watch out for your own outrage.

My clients like the thought that their stakeholders’ risk responses are distorted by outrage, ego, denial, paranoia, whatever. But they prefer to imagine that their own responses are purely science-based.

It would be very surprising indeed if this were the case. My typical corporate clients are understandably outraged at the activists who are attacking their competence and integrity, and undermining their profits – and at the public that gives the activists more credence than my clients believe is deserved. The parallelism here is perfect. The public is too outraged at the company to respond rationally to the data that the hazard is low. The company is too outraged at the activists and the public to respond rationally to the data that they are unwittingly exacerbating the public’s outrage.

It’s a little different for a public health expert or a scientist in a bioterrorism crisis … but only a little. CDC has no profit to protect, and people aren’t questioning CDC’s competence and integrity with anything like the vigor with which, say, a corporate polluter is attacked. CDC is the good guys. But CDC is so used to being the good guys that even a mild rebuke stings … and provokes outrage. That outrage is typically suppressed as unprofessional. But unacknowledged and unattended to, it can provoke a lot of inept risk communication: impatience, defensiveness, passive aggression, lack of empathy, etc.

Technical people, in my experience, tend to be profoundly uncomfortable with emotion and human complexity; that’s partly why they become technical people. It’s worse for engineers – my most common clients – but epidemiologists (for example) are by no means immune. So a lot of technical people don’t much like any sort of public communication; they’d rather talk to colleagues. But risk communication and outrage management are especially objectionable. I think most technical people have a deep commitment to keeping emotion from influencing their work. This is, paradoxically, an emotional commitment.

I have reams of data that normal people respond to risk in terms of outrage. I have less data – but some, and it wouldn’t be hard to get more – that more psychotherapeutic concepts like panic, denial, misery, hypervigilance, paranoia, and the rest are also relevant. But to many technical people, these concepts are as alien, unfamiliar, off-putting, and even frightening as pathogens are to the rest of us. And so I have become accustomed to technical clients insisting that the public should stop ignoring the technical data about the hazard … all the while ignoring my data about the outrage.

It all gets tougher, of course, under stress – and during a bioterrorist attack, extraordinary stress is a given.

There are a variety of strategies for coping with your own outrage – from vacations to “letting off steam” somewhere safe. Perhaps the most important point to note is that many of the strategies I have recommended for coping with your audience’s outrage can’t work very well if your own outrage isn’t under control. They’ll come out sounding sarcastic or saccharine or cynical or condescending. And of course the first step in coping with your own outrage is recognizing that it’s there.

And if you are a CDC communications person, you have three levels of outrage (and other psychological phenomena – misery, for example) to manage: the public’s, your scientists’, and your own. It won’t be easy, but I hope these 26 recommendations – and the risk communication model that underlies them – will help.

Copyright © 2001 by Peter M. Sandman
