Research funding and trustworthiness
Date: December 3, 2002
Location: New Jersey, U.S.
In regards to your Sound Science article, how much weight should then be placed on testing by pharmaceutical companies? Are there some industries where generally the source of funding has less of an influence on the results? What about studies that are financed by NSF?
I don’t know enough to assess the record of the pharmaceutical industry with regard to the trustworthiness of its research results … or to compare it with other industries.
Obviously, any industry has an incentive to gild the lily and hide the warts. Just as obviously, there are countervailing forces, including the fear of getting caught. My guess is that most companies most of the time are pretty scrupulous about reporting what they have to report to government agencies, even when they paid for the research and conducted it themselves.
I doubt, for example, that a major pharmaceutical company would routinely doctor its reports to the U.S. Food and Drug Administration in a way that was unequivocally illegal. But I don’t for a moment doubt that a company would routinely put the required information in the most positive light it could justify; or that it would routinely bury potentially alarming information where it was hard to find; or that it would leave out any alarming information it wasn’t clearly required to submit.
In other words, I think my clients are usually “honest” in the limited sense of being able to make a coherent case that they weren’t actually dishonest. That’s a long way from proactively telling everybody everything they might want to know, however.
Research conducted and paid for by third parties (like the National Science Foundation) is likely to come closer to transparency than to this sort of minimalist honesty. Research conducted and paid for by critics is likely to be more transparent still. A fairly benign report from a critic is worth more than a completely benign self-report.
Dealing with abusive stakeholders
Field: Public health researcher
Date: November 21, 2002
A question about agitators:
In a hypothetical scenario, a small rural community are concerned about a local environmental problem. Nobody knows for sure whether the problem is hazardous to their health but it is thought unlikely. The hazard is natural (and very unusual). The local health agency have been open, frank, communicative and responsive. The community and local agencies have together discussed solutions and appear content with the options available. One community member is particularly outraged and calls the local unit daily about his concerns. Recently in one of these phone calls, he was abusive to the unit manager. This is despite the fact that the unit are extremely responsive to his concerns and all possible solutions have been offered. What can be done to protect workers from harassment while at the same time safeguarding the process of risk communication?
Without knowing more, I can’t be sure that your abusive stakeholder is as much of an outlier as you suggest, or that the agency has been as responsive and communicative as you suggest. I’m going to assume in the rest of this answer that you’ve got it right. But keep asking yourself whether this individual might be the most visible of a significant and significantly unhappy cohort, and whether your responses might look different from that cohort’s perspective than they look to you.
Assuming a lone dissenter who is abusive …
Your first priority should be the safety of your staff. Words like “abusive” and “harassment” are often used hyperbolically, but if you mean them literally, they add up to dangerous; call the cops and do what they suggest. Your second priority should be the mental status of the abuser. If he seems crazy (not just quirky, even quirky in an offensive way, but crazy), call the mental health professionals and do what they suggest. I have no expertise in these areas, except to note that it is a mistake to treat a law enforcement problem or a mental health problem as a risk communication problem!
If he’s just a pain in the rear, not a danger to self or others, try to diagnose why he’s letting himself get so out of control. Is it outrage? Or is it strategy? (Your use of the word “agitators” suggests he might have decided this is a good way to wrest concessions from your agency or support from the community.) A third possible motive to consider is self-esteem. Is this fight adding meaning to an otherwise unsatisfactory time in his life? To some extent, your response should depend on what you think is going on for him.
Of course there is commonsense advice that applies to all angry interactions, whatever the underlying mix of outrage, strategy, and self-esteem. Two that come to mind first: Acknowledge his strong emotion before you segue to the substantive issues he is raising. And ask him to be less abusive in a way that focuses mostly on how hard it is for you to take it, not on how wrong it is for him to dish it out.
Don’t forget to look after the wellbeing of staff members who are bearing the abuse (including yourself). To some extent, stakeholders have a right to unload their anger at agency staffers, as long as it isn’t actually scary. Staffers need to debrief what happened and their reactions so they won’t unload back on the stakeholder.
Independent of your lonely troublemaker’s motives and behaviors, your main task is to preserve the larger community’s sense that you are responding well to the issue. I would see your interactions with him as partly theatre, and pay at least as much attention to the audience’s likely reactions as to his. There is a seesaw operating here. If you seem to be dismissing him, taking the position that you are not going to dignify an abusive attack with a substantive response, the audience is likely to see some merit in his concerns and forgive his abuse. If you are endlessly responsive, on the other hand, the audience is likely to see him as abusive and not worth the attention you are giving him. You need to be the last, not the first, to reach the conclusion that he ought to be ignored.
Another implication of the “theatre” metaphor: Build a public record. Once you decide that a compromise or a decent working relationship is extremely unlikely, stop focusing on trying to achieve one, and focus instead on being able to prove later that you did try. Write memos to the file after telephone conversations. Send him letters that will make your efforts and his intransigence clear to any third party who eventually reviews the correspondence. Eventually, the time may come when you can appropriately refuse to take his calls or answer his letters – but not until you have built a record of your efforts to work things out, and not until the rest of us think you should have blown him off a long time ago.
Silicone breast implants (1)
Field: Former breast implant patient
Date: August 5, 2002
What I would add to this site:
I would like to see you tell the REAL story about breast implants. … If you need a starting place, let me know! You could save many lives. Or is that less important than promoting the interest of big business?
I take exception to your remarks about breast implants. Long after breast implants should have been removed from the market until a safer implant is designed, they are continuing to be marketed in increasing numbers. An estimated 300,000 women received implants last year.
The so-called studies are seriously flawed … many examined only medical records, not the women.… In the case of the studies that are on-going, women who develop problems and have their implants removed are removed from the studies … women who develop problems and tell their doctors are ignored. Those of us who have been ill have begged to be studied to no avail.
Many people would like to think the implant uproar was/is about money for the women. … It’s not! … It’s about the lives of mothers, daughters, sisters and wives vs. money being made by the medical profession and implant manufacturers.
Many women have lost everything to breast implants. Remarks such as yours perpetuate the myth that implants are safe when in fact there are many risks.
Please do more research before making such comments.
I have received three emails about silicone implants this week. I think something I wrote on the subject must have been quoted in an on-line discussion. It is probably my “Yellow Flags” column, in which I used the implant controversy as an example of what happens when a company/industry ignores or suppresses tentative bad news: When the “yellow flag” information finally comes out, people tend to see it as more damning than it actually is, precisely because the company ignored or suppressed it.
What made my example offensive, I think, is this sentence: “There is now good evidence that silicone breast implants probably do not cause systemic disorders.” The writers of the three letters dispute this evidence. They see my uncritical acceptance of it as ignorant and irresponsible … or worse.
They have a point. I am not an expert on medical hazards, only on people’s cognitive and emotional reactions to them. In my own field, risk communication, I sometimes take positions contrary to the conventional wisdom. Outside my field – and the hazards of silicone breast implants are definitely outside my field – I rely on what the majority of the experts are saying. But I shouldn’t repeat the claims of the majority without explicitly acknowledging that these claims are in dispute … and that I am unqualified to judge the dispute.
It is awful to be seriously ill. It is awful even if everyone agrees on what made you ill. It’s more awful if nobody knows what made you ill. And it’s the most awful if you think you know, and others – maybe your doctor, maybe your friends, maybe a courtroom verdict – tell you you’re wrong. I can imagine few experiences so embittering.
History is full of cases where victims lost the fight to get society to acknowledge the source of their victimization … and then months or years or decades later they were proved right after all. Sometimes what turned the tide was the slow, self-correcting advance of science; sometimes it was the piercing of a cover-up conspiracy. Often it was some of each; the conspiracy hid “yellow flags” and thus delayed the discovery that the problem was real and serious.
History is also full of cases where the victims turned out wrong in the end. Here too there are often cover-ups. Industry hides the yellow flags and thus delays the discovery that the problem isn’t much of a problem … and makes that discovery hard to believe.
In my column, I used silicone breast implants as an example of the second of these two paradigms. That’s what most scientists think it is. But they’ve been wrong before, and they could be wrong this time. It could be an example of the first. And I have no business sounding like I know it isn’t.
Silicone breast implants (2)
Field: Silicone breast implant survivor
Date: August 5, 2002
It is very easy to see where your money is coming from. The original company manufacturing silicone breast implants knew as early as 1965 that the gel traveled all over the body – to the brain, lungs, kidneys, every major organ of the animals on which they experimented. Yet they claimed it was “inert” and lied to plastic surgeons, telling them that it was “safe”; those same doctors, in turn, lied (and are still lying) to their patients. There is very adequate proof of the movement of the silicone gel – still – but we wouldn’t want to kill the goose that laid the golden egg, now, would we?? Although my very ruptured implants were removed more than 2 years ago, my blood level for silicon is more than double what is normally seen in the general population; I have developed siliconomas in the palms of both hands, and 3 in the general chest/abdominal area. Thousands of other women have also had this problem, as well as many worse conditions – and not just women have had problems, but men with penile and testicular implants. Many, many people with joint and bone implants have also learned the hard way that the silicone causes immune problems, as well as sloughing off.
“Know of which you speak” before spreading propaganda such as you have just written.
I did consult in the mid-1990s for one silicone company that had manufactured implants. Three impressions are relevant to this comment – though none of the three impressions is necessarily true. I may have been misled.
First, I got the impression that my client genuinely believed its implants were not responsible for the serious illnesses patients were accusing them of causing. My client felt like a victim, not like an evildoer. This is independent of whether society finally decides that implants are a major hazard or a minor hazard or no hazard at all.
Second, I got the impression that my client did less research on implant health risks than it should have done, that it paid less attention than it should have paid to early indications of possible problems, and that it didn’t publicize those early indications nearly enough. This, too, is independent of how great a risk we finally decide implants actually are.
Third, I got the impression that implants were never a significant profit center for the silicone industry, and were never expected to become one. I suspect arrogance and self-deception and defensiveness are more central to the story than greed.
But again, these are impressions only. My former client would deny some parts of these three impressions. Carolyn, the author of this comment, emphatically denies other parts.
Silicone breast implants (3)
Date: August 5, 2002
Let me begin by saying that you do not know what you are saying in regard to silicone breast implants. I also believed the articles published by the media (perhaps paid for by DOW?) and ignored the whole issue of silicone implants and the possible link to diseases. I received implants in 1987. Much to my detriment I didn’t do any research and now I am sick. I am not a person who sues doctors or corporations. For years now my health has been going downhill. I thought I was just aging fast. Imagine my horror to find out that my symptoms match exactly the ones that thousands of other silicone implant victims are experiencing and have experienced for over a decade. My horror and anger were also multiplied by finding the implant insert online from 1985 (two years before my surgery): the insert noted the association between silicone and autoimmune diseases. My surgeon did not bother to share this with me, nor has he ever contacted me to warn me to seek medical care. I am now on the other side of the fence. Perhaps if you had this same experience your article would not be so nearsighted or biased. Our government needs to study ALL of us who received implants. We need a true medical study, not one paid for by the corporation/s that criminally subjected us to harm.
This is the third of three comments I have received about implants. My responses are cumulative.
One thing to add, specific to this third comment: I fervently agree that safety research conducted by industry is hard to trust. Even research conducted for industry by presumably neutral experts is vulnerable to bias, and to charges of bias, because of its funding. It is remarkable how much of what we know about the risks of industrial chemicals, medicines, factory emissions, and the like comes to us from interested parties. A large part of our regulatory regime is grounded in the assumption that government agencies that require manufacturers to conduct such studies can rely on the results.
Of course research conducted under contract to plaintiffs’ attorneys is also suspect. Each side’s researchers tend to find what their funders would prefer them to find. I doubt that outright fraud is the main culprit here; human self-deception is.
As the comment suggests, more independently funded research would help a lot. But it would also cost the taxpayer a lot – and even government may not be truly independent. A less costly improvement would be to try harder to insulate industry-funded research from industry control. A number of companies have begun to experiment with research advisory boards populated not just by supporters and neutrals but by active critics as well. If the researchers are instructed to bring all research questions to the board, and only bills to the company, the chances of bias are much reduced. Arguably the researchers will still know who butters their bread, but this knowledge is less likely to contaminate research results when the research process is accountable to a diverse and skeptical board. I don't know how much of the silicone research was overseen by a mixed group of proponents, opponents, and neutrals, but probably not as much as should have been.
Reporting undesirable events
Field: Sustainable Industries Division, E.P.A.
Date: June 6, 2002
I attended a seminar of yours in Brisbane, Australia in 1996 – a fascinating experience which I still cherish (the audience seemed to be mainly engineers and scientists – many in denial?). I work in a government regulatory agency and continue to witness matters in which the outrage hasn’t been diagnosed, and I can find it difficult to engage the participants with the concept.
I recently met a person who is preparing a public environment report for their employer, a large chemical manufacturing plant. The person was grappling with the language for indicators/targets for undesirable events (e.g., no-one wants to say their target is, say, 10 accidental emissions). I vividly recall your seminar in which you mentioned the dilemma for the police in having a target for, I think, child murders. I later checked my notes from the seminar and I recorded “Zero is the only acceptable moral goal” but it was acknowledged that this can’t be achieved. The next thing I wrote down was “muddy sneakers on table” – I can only laugh and imagine your response was via an anecdote.
I have thought about other words for target – milestone, progress, next step, interim, provisional, short-term etc. – but it’s difficult to get a suitable meaning. Perhaps there isn’t one, and how these indicators are presented is a case of the motivation/attitude and the circumstances of the report. A business-as-usual approach might be to just stumble through using conventional words.
An alternate approach could be to pull out these indicators and present them differently: perhaps using the type of principles you set out in “giving bad news.” A theme could be “things we don’t want but are yet to avoid in our business.” Within this context the individual meaning of terms such as target, goal etc. becomes less significant because of the power in the message of the theme.
Your comments are most welcome.
Here’s the anecdote you don’t remember:
Your child comes in from playing outside, takes off his muddy sneakers, and plops them down right in the middle of the dining room table. “Get your muddy sneakers off the table!” you explode. “Why?” he inquires. “The dining room table is an excellent location for my muddy sneakers – good infrastructure, near transportation routes, etc.
“Furthermore, you have a dozen layers of wax on this dining room table. The probability of mud from my sneakers penetrating all that wax and damaging the wood beneath is less than one in a million – de minimis, below regulatory concern. And in the unlikely event of any damage to the wood, I have an emergency response plan: a tissue I am ready to deploy. There is therefore no rational reason to get my sneakers off the table…. But since you appear to be irrationally concerned, Mom and Dad, and since I want to be a good citizen of this family, to go that extra mile to be responsive, I am prepared to make you a more-than-generous offer: I’ll take my sneakers off the table after dinner.”
At this point I ask the audience members to raise their hands “if that works in your household.” No hands? “We understand that muddy sneakers stand virtually no chance of damaging a dining room table. We nonetheless feel justified in making the kid get his sneakers off the table because we share a moral value that muddy sneakers don’t belong on dining room tables, whether they are doing any harm or not.”
The point, of course, is that many risk controversies are as much about good versus evil as about safe versus dangerous. This is especially true of environmental risk controversies; the word “pollution” was a staple of moral discourse long before it was borrowed by environmentalists, and most of us consider pollution a moral misbehavior. Factory emissions, for example, are not just harmful. They’re wrong. They’re wrong even when they’re not harmful. And while degrees of harm can be balanced against degrees of cost and benefit, degrees of morality can’t. It may not be possible to achieve zero emissions. But if emissions are morally wrong, then zero is the only morally acceptable goal.
I don’t think it matters much what a company calls its non-zero goal – though I agree that phrases like “short-term milestone” and “partial progress” can help to signal that the non-zero goal is just a step toward the ultimate goal of zero. What really matters, as your last paragraph suggests, is to communicate that the ultimate goal actually is zero, and that you wish you could get there more quickly … and, of course, to make genuine progress in the direction of zero.
Ten years ago my clients routinely embraced a goal of zero workplace accidents, but mocked the goal of zero emissions as foolish and unscientific. Today many more accept both goals. Both are hard to achieve – in many cases impossible to achieve (without shutting down altogether), at least so far. But companies that aim at zero get closer than companies that don’t.
When companies say they’re aiming at zero, stakeholders judge them by how close they get and how hard they try. When companies decline to aim at zero, stakeholders simply question their moral comprehension, and feel little need to examine their performance.
Coping with employer irrationality about safety
Field: Government safety inspector
Date: May 23, 2002
Thank you for posting your website. There’s terrific insight provided on the task of communication and risks. I’ve watched the “Risk = Hazard + Outrage” video several times and each time go away with a little more understanding of something new.
The interview Motivating Inattention captured my attention on why some employers ignore safety opportunities even though they are demonstrated cost effective measures. The thought that management may not be rational about safety just as employees may not be rational was one of those insights. I’d like to obtain your recommendations for overcoming some of the reasons mentioned such as guilt, low perceived status of safety, holding management accountable for something they never paid attention to and others.
I have to admit I don’t have a lot of on-the-job experience figuring out how to cope with employer irrationality about safety. Clients are a lot likelier to hire me to address their employees’ bad safety attitudes than their own. And the companies with the really self-defeating safety attitudes don’t hire me at all. So I’m guessing.
But look at some of the reasons why employers might ignore safety improvement opportunities even when they are cost-effective:
- Guilt – If the safety problem is solvable, then the employer is accountable for not having solved the problem earlier. It’s psychologically more comfortable to consider the problem inevitable.
- Ego – Worrying about safety is a low-stature activity. A boss who likes making deals, not installing non-skid floor coverings, is likely to resist focusing on a safety investment, even if the return on the investment is high.
- Hostility – Many employers are angry at their workforce; many more are contemptuous of the workforce. There may be some satisfaction in all those accidents, some sense that careless employees deserve to get hurt.
What these motives have in common is that they are all unacknowledged and “unacknowledgeable.” That is, you can’t just tell a manager, “I bet you’re feeling guilty, or preoccupied with your own stature, or hostile to your employees.” The manager will deny it, and in the process will become even less likely ever to recognize it. (Some probably experienced just that reaction as they read the previous paragraph.)
The solution, then, is to get the unacknowledged motive into the room with you without forcing the client to own up to it. The key strategy is called counter-projection. Attribute the motive to someone else: “I worked with a client last month who felt really bad that he hadn’t discovered this approach sooner.” Or lay it on a generalized other: “A lot of senior managers might feel that worrying about safety is something of a comedown, not what they went to B school for.” Or take it on yourself: “It’s amazing how irresponsible employees can be sometimes!” The reasonableness or unreasonableness of the motive can then be addressed without triggering so much defensiveness from the client.
A lot of other outrage management strategies come into play once the unacknowledged motive is in the room – from the seesaw to a kind of absolution. (We are better able to act counter to our ignoble motives if we’re less ashamed of possessing them.) The ultimate goal is to help the client recognize the motive in himself/herself (even if the client can’t quite acknowledge it to you), then bear it and put it into perspective … and thus be less under its sway.
The Service Approach
Field: Consultant & executive coach
Date: May 20, 2002
Remember to serve. When you approach a “bad news” situation, use the power of a service attitude rather than trying to force and control; that will only result in a reciprocal backlash of power. A service approach (How can I best serve employees, customers, etc.?) will most likely result in a complementary response – respect, gratitude, and cooperation.
I rather like this approach. Still, I have three reasons for offering clients a more complicated rationale for addressing stakeholder concerns:
- I think it’s important (and not always easy) for companies to understand how their own self-interest is well-served by good management of stakeholders’ outrage.
- When companies and their managements are themselves outraged, they can’t easily adopt a service approach; if anything they want to disserve the targets of their outrage.
- A successful consulting business requires a less obvious formulation.
I’m not sure of the relative importance of these three explanations.
Misleading connotations of the word “outrage”
Field: Former risk manager for schools, government and consultants
Date: April 1, 2002
Location: New York, U.S.
You don’t know me, but when you first spoke and released your video tape at the Salt Lake IH conference in what seems like another lifetime, I had found someone who verified and explained my own approach to dealing with “outrage” down in the trenches at public meetings, on TV and with hostile groups I had to defuse after someone else (usually a lawyer or CIH) had inflamed their outrage. I bought all of your stuff at the time (gee, has it been over ten years?) and polished my own approaches, building on my proven style with your insights. You may want to look at the book “The Day America Told the Truth” and the trust scale based on what they claim is valid data. I found revealing my early life with a firefighter/paramedic background in any situation created more trust and allowed me time to build the presentation format you had defined. I have some other insights we might share by email some time.
I saw your CDC presentation and my first thought was “at last the government has put its ego aside long enough to find the real experts in this and other fields,” and that gave me great comfort (qualified of course, knowing even a broken clock is right twice a day). I see you have developed the seesaw and other aspects more fully than when I made your first IH tape mandatory for my staff of 45 safety and IH consultants.
My comment is of course one I suspect you hear often.
The word “outrage” has an anger aspect to it in the common person’s mind, usually associated with betrayal. So, I see why you selected it to represent the different value systems and risk ranking that persons acquire.
Since all anger comes from fear (if you ask the five whys), I might suggest that it might be time to evolve the word outrage to one more clearly understandable with the word alone and not requiring your opening presentation (unless that is what you intended).
I find “outrage” to contain fear, distrust, and value as core components. Perhaps the phrase “fear trust value scale” might be useful in some context. oops… there goes my old IH clinical habit energy again ha ha.
I offer that as food for thought from one who has tested and benefited from your work in the field. When the TV reporters tried to engage me in NYC at the 9/11 cleanup and later for the anthrax interviews locally, I again turned the twist of fear into confidence using the old Sandman formula. You can see a few of the interviews at a client’s site www.hivroom.com and grade my approach.
So, thank you again for all of your work, and humor, and of course the thing people don’t often think about…….your courage……as after all that is what you are trying to tell them to get……courage to do what they know is right, even if they are the first ones in government to do it and is it not the formula of their boss or the SOP.
Kudos on the CDC description of your expertise … it was well earned.
Thanks again and consider the suggestion as communication is part of our business and clarity is a benchmark.
Thank you for your very flattering comments. And thanks even more for being out there doing similar work. There are times when I enjoy feeling like the Lone Ranger, but in the final analysis I’d rather be part of a movement.
I have to agree that the word “outrage” is less than perfect. When I coined it, I had in mind a sort of “righteous anger” – a how-dare-you-do-this-to-me reaction that seemed then and still seems crucial in many risk controversies. I wanted to focus on the righteousness aspect as an antidote to the widespread misperception that the essence of the problem was irrational fear. My point was that it might be mistaken to be fearful in high-outrage low-hazard situations, but it is surely rational to be wrathful – and that when a company is mistreating people, they will inevitably suppose it might well be endangering them too.
Obviously the term works less well in situations where mistreatment isn’t the key to the interaction – where people are more fearful than angry, or angry mostly because they are fearful. I often hear from clients that their stakeholders “aren't really outraged (at us), they're just upset (about the situation).” Then it takes me a while to explain why the term “outrage” works in those situations too … since it really doesn’t.
And that’s English-speaking clients. As soon as I leave the English-speaking world, finding a usable translation for “outrage” becomes a significant hassle.
Still, I haven’t come up with a word that works equally well in both my mistreatment paradigm and your fearfulness paradigm. And even if I did, I’m not sure I’d switch now. Having created jargon and written/spoken endlessly about it, I think I’m stuck with it.
While I’m confessing, my signature formula “Risk = Hazard + Outrage” causes other problems as well. My definition of “hazard” is exactly what risk assessors mean by “risk.” They’d be a lot happier if I left their jargon alone when creating my own; they’d much prefer “Something = Risk + Outrage.” But I really wanted to broaden the definition of risk, not just say we ought to think about things other than risk too. And what would I call that Something? I can’t live with “perceived risk” because that implies that outrage distorts risk judgment, that it leads people to make a mistake, a misperception. I toyed for a while with “Risk Acceptability = Risk + Outrage” but decided against it; “Risk Unacceptability” would make more sense here, but it’s ugly. In the end I stayed with R = H + O.
Communication now about possible future terrorism
|name:||Mary Jo Deering|
|Date:||March 23, 2002|
I’ve known your work and appreciated it for years. Thanks for looking at this area. My question, or perhaps interest, is about “preventive risk communication” as part of preparing for a terrorist event. What should/can responsible people/organizations/agencies be saying/doing NOW to lay the groundwork for a calmer public response to an event? (I believe the public should also be involved in this preventive communication.) Is anyone working on this?
On a different note, people interested in a strategic look at information flows might go to http://ncvhs.hhs.gov/nhiilayo.pdf. This report, “Information for Health: a Strategy for Building the National Health Information Infrastructure,” was recently submitted to the HHS Secretary.
The question of what should be said now about possible terrorist attacks in the future is obviously important. The answers are not so obvious. But for what it’s worth, here are a few of my ideas.
- It is critically important to keep saying that there will be future terrorist attacks. As time goes by without another one (at least another big one), people naturally tend to imagine the danger is past. It’s important to say this, too, and to warn against it. I thought from the start that panic was a small problem and denial a big one. Now, less than seven months after 9/11, people are already more attentive to security inconveniences than to security inadequacies. Denial masquerading as complacency is the biggest threat to preparedness.
- I would like to see much more attention paid to terrorist plans or attempts that have been foiled. The media seem to give government claims of foiled terrorist plots minimal attention – because they think the claims are mostly hype? – and the public seems largely unaware of them. Foiled plots are the best way to make the case that future terrorist attacks are likely – that many or even most can be prevented (which justifies the prevention effort) but some will inevitably succeed (which justifies the preparedness effort). The government should do more to make sure its claims along these lines are accurate and credible.
- Much more needs to be said about the resilience of Americans and American institutions, our ability (and obligation) to survive whatever terrorists can produce. One important reason why people are tempted to deny the high likelihood of future terrorist attacks is that they imagine the attacks to be huge and unsurvivable. We overestimate the risk’s magnitude and thus feel psychic pressure to underestimate its probability. This is reminiscent of the widespread Cold War assumption that in the event of a nuclear attack the only possible response would be to “bend over and kiss your ass goodbye.” Preparations for The Day After were dismissed as macabre, as defeatist, or as right-wing absurdity. Catastrophes on the scale of 9/11, and even much bigger catastrophes, are simultaneously horrific and survivable. Work, real work, is needed to prepare to cope with such attacks and their aftermaths. This work is possible only if Americans believe it is worth doing.
- People need to know in some detail what terrorism emergency preparedness looks like. Of course there are things that can’t be said for security reasons. But even after allowing for security needs, I think the government has been far too vague since 9/11 about how we would respond to particular sorts of attacks. Undoubtedly part of the reason is the government’s fear of being seen as insufficiently prepared (by American citizens more than by prospective terrorists). There seems to be a real effort to ramp up preparedness while continuing to claim to be more than adequately prepared. But the desire to hide inadequacies is getting in the way of providing credible detail about what has been accomplished and what more can be accomplished. This absence of detail feeds the false assumption that preparedness isn’t possible, which in turn feeds the denial and impedes the preparedness.
- An issue of special importance, I think, is the prospect of emergency quarantines in the event of bioterrorism. Anthrax isn’t infectious. But if a future terrorist attack uses smallpox or some other infectious agent, limiting the epidemic will require shutting down highways and airports. In all likelihood these intrusive and terrifying steps will need to be taken while the diagnosis is still tentative; by the time we know it’s smallpox it’ll be too late. Americans need to be thinking and talking about this possibility now. I think the government is afraid to raise the issue, but its failure to raise the issue now will make a successful quarantine, if one is needed, much more difficult to achieve. (This is a good opportunity for the seesaw approach. I’d like to see the government suggest that a quarantine may simply be too restrictive of freedom for Americans to accept – and let the public insist that it’s preferable to a smallpox epidemic.)
Crisis communication versus risk communication
|Field:||Public health communication|
|Date:||March 5, 2002|
What is the difference between crisis communication and risk communication? How are they related? Is the difference merely semantics or is there a real difference?
In principle, crisis communication is about things that have already gone badly wrong – you’ve killed or injured (or at least nearly killed or injured) some people, the rest of us are anxiously waiting to find out whether we need to evacuate or not, and dozens of reporters are demanding answers. Risk communication, on the other hand, is about things that might go wrong (the accident that might happen some day) or things that might have gone wrong (the cancers that some people think your emissions may have caused).
It’s a clearer distinction in principle than it is in practice. This is true for at least two reasons. First of all, a situation that your company’s technical experts see as a hypothetical risk may be seen by some of your stakeholders as an ongoing health, safety, or environmental crisis. And second, a situation that your company’s technical experts see as a hypothetical risk may be seen by corporate management as an ongoing reputation crisis. The second complexity is one I experience often. Clients frequently call me for help with the “crisis” posed by an angry neighborhood meeting or a hostile activist group. In my language they need help with risk communication (and outrage management), not crisis communication.
Several things are different in a real crisis, as opposed to a reputation crisis:
- Warning people takes priority over reassuring people. Crisis communication has a lot to do with what you say to victims and potential victims. Reaching the apathetic and persuading them to protect themselves is important. So is advising the attentive about how best to protect themselves. (Calming panic may also play a role, of course, but panic is rarer than people imagine.)
- Communication logistics are likely to be difficult. In a genuine emergency, telephone lines may be clogged (and so may the roads). The people you need to reach may be hard to reach; the people trying to reach you may not get through. It can be a huge problem just to find someone who knows what’s going on and isn’t too busy managing the emergency to explain it.
- The audience is a lot bigger. Most risk communication focuses on a relatively small number of stakeholders. But a real crisis is of interest to a much wider public. And in the very biggest crises – September 11, for example – everyone feels like a stakeholder.
- Outrage management matters less. People tend to suspend their outrage during a crisis. They’re too much at risk, and too dependent on your managing the situation, to cast blame. In fact, one way you know the crisis is well on its way to being over is when people start expressing their outrage at why you let it happen or how you handled it.
Bottom line: Lots of people talk about risk communication problems as if they were crisis communication problems. But a real crisis poses a different set of communication challenges from the reputation “crisis” of a risk controversy.
Anthrax, politicians, and PR
|Date:||February 9, 2002|
What I would add to this site:
Your analysis of statements by cabinet level spokespersons on anthrax – some were dreadful.
I thought your discussion of anthrax and bioterrorism was excellent. The lessons learned about risk communication from the 2001 anthrax attack need to be incorporated into planning for future bioterror attacks. Alas, we almost certainly will have to face something like this again and it might be much worse. So your reflections are very valuable.
I think Tommy Thompson has endured enough derision for commenting that the first anthrax case might have resulted from an exposure while out fishing in Florida. From a scientific standpoint this was vanishingly unlikely, which he probably knew when he said it and certainly knows now. More to the point, false reassurance is strategically unwise – it boomerangs, ultimately adding to people’s paranoia. That, I suspect, he didn’t know then and probably doesn’t know now.
It is hard for anyone to grasp the paradox of the seesaw: that sharing and legitimating people’s worry is actually more reassuring than inventing spurious reasons why they should calm down. But it’s especially hard for politicians – because in the world that politicians most often occupy, it simply isn’t true. False reassurance works pretty well when people aren’t all that upset anyway, when they’re not paying much attention, when they are publics rather than stakeholders. A politician who shared and legitimated a concern that was felt only weakly and only by some would risk deepening and broadening that concern instead of ameliorating it. Secretary Thompson had a typical politician’s response to an issue that wasn’t a typical politician’s issue. He wasn’t being stupid, or even dishonest; he was using his expertise in an arena where it didn’t apply.
Besides politics, the other profession that systematically does poor stakeholder relations (and thus poor risk communication) is PR. Which is why I was delighted to get this comment from Ford Rowan. Ford’s agency, Rowan & Blewitt, is a national leader in “issue management” and “crisis management” – which is where stakeholder relations and public relations intersect most painfully. And Ford comes from the PR side. A decade ago we worked together for several clients; that is, several clients hired us both so they could mix-and-match the advice we gave. Our recommendations were by no means the same, but they weren’t diametrically opposed either. I came away seeing Ford as a “risk communication sympathizer” among PR people. Apparently he still is.
Risk communication for government emergency responders
|Field:||State government communications adviser|
|Date:||January 22, 2002|
I look after issues communication management on behalf of a marine safety authority in Australia, which is involved in preventing (if possible) and co-ordinating recovery from environmental and other marine incidents.
Thus, my focus is slightly different from that of a corporate communicator – the organisation I represent is very unlikely to be blamed by the public for any incident that occurs. Nevertheless, when an oil spill or other crisis does emerge, we are first in line for media comment and community liaison. The outrage is no different – it is just rarely directed at us.
How can I adopt your methods to better suit the business I am in? That is, when we aren’t at fault, are responsible for co-ordinating recovery, but are unlikely to be blamed by the public, are there some ways to maximise the benefit and effectiveness of our issues communication?
First of all, let me dispute your premise. Although stakeholders aren’t likely to get outraged at a government marine safety authority for causing an oil spill, they are quite likely to get outraged at you for failing to prevent one, or for mismanaging the emergency response or the cleanup. Of course in mid-emergency there will be little outrage expressed at your agency; stakeholders feel too dependent on you to dare to criticize. But when the crisis is over, accumulated outrage can safely be articulated. How well you manage the outrage then, and how well you have been managing it all along, will determine how much flak you get.
As I write this, the Enron bankruptcy is big news in the U.S. – and lots of the outrage is aimed at regulators who (the public believes, and so do I) didn’t do their jobs properly. Similarly, there is now considerable criticism of the U.S. Centers for Disease Control and Prevention (CDC) for mishandling the anthrax attacks (this time I disagree), even though no one blames CDC for causing the attacks.
I’ll go further. It is quite common in my consulting that the company responsible for causing the problem is doing a pretty good job of outrage management, while the government agency responsible for managing the solution is doing a pretty poor job of outrage management. Stakeholders get angry at the agency – which then defends itself by scapegoating the company. I tell my corporate clients that if they have to choose, they are better off with people outraged at the company itself than with people outraged at a government agency with the power to blame or punish the company!
Even if there is no outrage aimed at your agency, it will still be true that how you frame the event will have a great deal to do with how much outrage is aimed at the company. That is, if you are lucky enough to be seen as the white-hatted saviors of the marine environment, then you get to tell us whether we should see the spill as a result of corporate misbehavior, or as an inevitable consequence of an oil-based economy in which we are all complicit, or as an extraordinary piece of bad luck, or what….
And don’t forget that outrage isn’t just anger; it is also fear, concern, disgust…. As the trustworthy good guys (your assumption), you get to tell us whether to see the spill damage as temporary or permanent, economic or environmental or aesthetic, catastrophic or tolerable, etc. To do a good job helping us make these distinctions, you need to understand risk communication.
The seesaw, for example, is absolutely crucial here. Suppose you judge that people are overreacting – in particular, that they are drawing the natural but in this case inaccurate conclusion that because the spill is ugly it must be environmentally disastrous. (I’m not saying this is true of all oil spills.) The seesaw tells you to spend a lot of time acknowledging the ugliness. It tells you to make your claim that the marine environment will recover subordinate to your acknowledgment that it looks irrecoverable … and disgusting. “Even though the experts tell us we were lucky, and there probably won’t be much permanent damage to the marine environment, looking at the shoreline right now makes that hard even for me to believe.” This is a sentence you are unlikely to produce without an understanding of risk communication.
And I hasten to add: It’s a sentence that probably won’t work if your premise is wrong and people are directing more of their outrage and mistrust toward your agency than you suppose.
Graduate work in risk communication
|Field:||Health education specialist|
|Date:||January 14, 2002|
I have a question for you. I recently watched a CDC teleconference on Risk Communications, and the presentation included a brief segment showing you talking to a class. I was very impressed by your delivery, and I have become very interested in the area of risk communications. So much so that I am considering going back to school to do my doctorate. As a health education specialist, I was anxious to find a new area of study that would help me grow both in my field and as an individual, and I could not find an area of study that motivated me. Over the past year I have become more and more familiar with the importance of risk communications, and I have decided that this is the area I would like to learn and know more about.
Can you assist me in locating credible schools and/or programs in this area that you can recommend? If one cannot get a degree in this area, then I would consider putting together my own program and going through a credible institution such as Walden University. I would appreciate a swift reply, since I need to identify a school to apply to and look for financial aid and grant money. Once again, the segment of the CDC presentation where you were a presenter was very, very interesting and has sparked my interest in this area. I hope you receive this email. I am not very “comfortable” with the internet and email thing… I am in the process of trying to overcome my “computer” phobia. That’s it for now, until I hear from you. I will search your website to see when and where you give seminars. Are you scheduled to be in the Washington, DC metropolitan area (Virginia, Maryland & DC) anytime soon? If so, I would be interested in knowing.
I wish I could name a bunch of universities that you could consider for doctoral programs in risk communication. I can’t … even if you weren’t committed to staying in the Washington D.C. area.
The essence of the problem is that risk communication is too hot a consulting field to attract and hold full-time faculty. A number of the leading risk communication consultants (myself among them) are ex-academics who left because they found consulting more exciting and more profitable. There are some very good people left in universities – but they mostly are researchers who do only a little teaching (e.g., Caron Chess at Rutgers) or specialists in related fields who intersect risk communication but don’t focus on it – people in journalism, health education, risk assessment, etc. There are also a few programs that focus on the undergraduate level. None of these is ideal for your purposes, and none of them is near D.C.
My prediction: If you can find someone you want to study with who’s in a major university, that person will be able to figure out a doctoral program for you that will work administratively – but odds are you’ll have to take a lot of preliminary coursework in things you don’t need and you’ll have trouble finding much preliminary coursework in things you do need. It will probably make more sense to study on your own … with or without a degree-granting institution’s involvement. This web site should help (and maybe help you over the computer phobia hump as well). But bear in mind that there are other approaches to risk communication than mine. For a pretty good bibliography – again via your computer – check out http://dccps.nci.nih.gov/DECC/riskcommbib/.
You also ask when I am giving a seminar in the D.C. area. I have nothing on my schedule in/near D.C. now other than proprietary work for clients. But I do get asked from time to time to give one-day seminars that aren’t proprietary, usually sponsored by a professional organization. One I do every year is a Professional Development Seminar for the American Industrial Hygiene Association, scheduled at the start of its annual conference in May/June. This year’s is in San Diego, but maybe some year soon it will be near D.C.
Where outrage stands on various theoretical frameworks
|Field:||Doctoral candidate, psychology + risk communication|
|Date:||January 11, 2002|
What I would add to this site:
Perhaps you would like to venture some comments about where you see ‘outrage’ in relation to other approaches, e.g,. cultural risk or the psychometric framework?
Your site is a very worthwhile addition to the risk perception/communication arena, especially as your focus is upon practical applications. As you can imagine, PhD research can be wonderful on paper and hard to apply to real situations: the psychometric tradition is wonderfully descriptive, but difficult to use in practice.
There are very few people actually engaged in research at this level in Ireland (as far as I know, I’m one of only about 5 or 6). There are no undergrad programs offered, little at postgrad level, and industry is slow to take it up, so it’s good to see it actually being taught. Perhaps your enthusiasm for it will drift across the Atlantic eventually…!
It’s also good to read comments from practitioners who find risk communication both interesting and worth doing – it makes me feel that I'm not wasting my time!
Thanks for your kind words. I see more and more risk communication research being done in Europe and the U.K. (but maybe not Ireland); also more and more risk communication practice. As in the U.S., the two aren’t as connected as they ought to be.
I find myself often considered a seat-of-the-pants practitioner-type by the academics and a wooly-headed academic-type by the practitioners. That might mean I've found a useful intersection. Or it might mean I've fallen through a crack.
The difficulty I have trying to position my own work vis-à-vis the various theoretical paradigms convincingly demonstrates that I'm not much of a theoretician.
I owe a huge debt (pretty obviously) to the psychometric work of Slovic and others. When I do research – which isn’t much, any more – it's usually in that framework.
But when I go beyond my data – which is pretty often these days; well, okay, it’s always been pretty often – I tend to think more globally/integratively/heuristically than analytically. The researcher-me has noticed that the cultural approach hasn’t accrued much empirical support … surely far less than the psychometric approach. But the consultant-me likes creative leaps more than regression analyses. Just whose creative leaps will ultimately inspire research that accounts for the biggest slice of the variance is anybody’s guess, of course.
Meanwhile, I admit I try to have it both ways. I read a good deal of empirical research – but what I absorb is more the studies whose results feel right to me than the studies whose methodologies are most nearly air-tight. When clients want to ignore my advice, I sometimes remind them that there is an emerging science of risk communication; if they respect “data” the way they claim they do, they ought to pay more attention to the data about outrage. But when clients are listening to my advice, I go beyond the data, even beyond theory, and rely on experienced intuition; I tell them so, but I do it.
Copyright © 2002 by Peter M. Sandman