Posted: November 10, 2001
Article Summary: This EPA booklet has long been out of print. It predates my articulation of the hazard-versus-outrage distinction, but contains much of the thinking that went into that distinction. In fact, every time I reread it, it reminds me of principles and examples I ought to reinstate in my presentations.

Explaining Environmental Risk

TSCA Assistance Office, Office of Toxic Substances
U.S. Environmental Protection Agency, November 1986 (booklet)

Dealing With The Public

1. Risk perception is a lot more than mortality statistics.

If death rates are the only thing you care about, then the public is afraid of the wrong risks. That is, public fears are not well correlated with expert assessments or mortality statistics. This is often seen as a perceptual distortion on the part of the public, but a more useful way to see it is as an oversimplification on the part of many experts and policy-makers. In other words, the concept of risk means a lot more than mortality statistics.

Virtually everyone would rather drive home from a party on the highway than walk home on deserted streets. Even if we do not miscalculate the relative statistical likelihood of a fatal mugging versus a fatal car crash, the possibility of getting mugged strikes us as an outrage, while we accept the possibility of an auto accident as voluntary and largely controllable through good driving. (Eighty-five percent of all drivers consider themselves better than average.) Similarly, a household product, however carcinogenic, seems a lot less risky than a high-tech hazardous waste treatment facility – the former is familiar and under one’s own control, while the latter is exotic and controlled by others.

Risk perception experts (especially psychologists Paul Slovic, Sarah Lichtenstein, and Baruch Fischhoff) have spent years studying how people interpret risk. The following list identifies some of the characteristics other than mortality that factor into our working definitions of risk. Remember, these are not distortions of risk; they are part of what we mean by the term.

Less Risky                          More Risky
Voluntary                           Involuntary
Familiar                            Unfamiliar
Controllable                        Uncontrollable
Controlled by self                  Controlled by others
Fair                                Unfair
Not memorable                       Memorable
Not dread                           Dread
Chronic                             Acute
Diffuse in time and space           Focused in time and space
Not fatal                           Fatal
Immediate                           Delayed
Natural                             Artificial
Individual mitigation possible      Individual mitigation impossible
Detectable                          Undetectable

The very same risk – as experts see these things – will be understood quite differently by the lay public depending on where it stands on the dimensions listed above. Some thirty percent of the homes in northern New Jersey, for example, have enough radon seeping into their basements to pose more than a one-in-a-hundred lifetime risk of lung cancer, according to estimates by the U.S. Environmental Protection Agency and the State Departments of Health and Environmental Protection. But despite considerable media attention (at least in the beginning), only five percent of North Jersey homeowners have arranged to monitor their homes for radon, and even among these few the level of distress is modest – compared, say, to the reaction when dioxin is discovered in a landfill, objectively a much smaller health risk. State officials were initially concerned about a radon panic, but apathy has turned out to be the bigger problem.

The source of the radon in New Jersey homes is geological uranium; it has been there since time immemorial, and no one is to blame. But three New Jersey communities – Montclair, Glen Ridge, and West Orange – have faced a different radon problem: landfill that incorporated radioactive industrial wastes. Though their home readings were no higher than in many homes on natural hotspots, citizens in the three communities were outraged and fearful, and they successfully demanded that the government spend hundreds of thousands of dollars per home to clean up the landfill. The state’s proposal to dilute the soil nearly to background levels and then dispose of it in an abandoned quarry in the rural community of Vernon has provoked New Jersey’s largest environmental demonstrations in years, with thousands of residents swearing civil disobedience sooner than let the trucks go through. In nearby communities threatened by naturally occurring radon, meanwhile, the concern is minimal.

It doesn’t help to wish that people would confine their definitions of risk to the mortality statistics. They won’t. Mortality statistics are important, of course, and policymakers understandably prefer to focus on the risks that are really killing people, rather than the risks that are frightening or angering people because they are involuntary, unfamiliar, uncontrollable, etc. But successful risk communication begins with the realization that risk perception is predictable, that the public overreacts to certain sorts of risks and ignores others, that you can know in advance whether the communication problem will be panic or apathy. And since these differences between risks are real and relevant, it helps to put them on the table. Merely acknowledging that a risk seems especially fearful because it is unfamiliar or unfair will help. Doing something to remedy the unfamiliarity or unfairness will help even more.

Just to make things more complicated, risk perception is not linear, not for anybody. That is, you can’t just multiply how probable a risk is by how harmful it is to get how badly people want to prevent it. (If you could, there would be no insurance industry and no gambling industry.) In general, people will pay more to protect against low-probability loss than to pursue low-probability gain – but if the price is low enough to be dismissed as negligible, even an infinitesimal chance at a big payoff looks good.
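A minimal numerical sketch of that nonlinearity, using hypothetical round figures (the probabilities, dollar amounts, premium, and ticket price below are illustrative assumptions, not numbers from the booklet): if people really valued risks at probability times consequence, nobody would pay an insurance premium above the expected loss, and nobody would buy a lottery ticket priced above the expected payoff.

\[
\underbrace{0.001 \times \$200{,}000}_{\text{expected annual fire loss}} = \$200
\qquad\qquad
\underbrace{10^{-7} \times \$5{,}000{,}000}_{\text{expected lottery payoff}} = \$0.50
\]

Yet homeowners routinely pay premiums well above \$200, and the \$1 lottery ticket still sells. Both prices exceed the expected value, which is the point: willingness to pay is not simply probability multiplied by harm.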

Risk judgments are also very responsive to verbal cues. Doctors, for example, are much more likely to prescribe a new medication that saves 30 percent of its patients than one that loses 70 percent of them. A pollutant or an accident that will eventually give cancer to 10,000 people sounds very serious, but one that will add less than one tenth of one percent to the national cancer rate sounds almost negligible. There is in fact no “neutral” way to present risk data, only ways that are alarming or reassuring in varying degrees.
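A rough, worked illustration of how both framings can describe the same facts (the thirty-year span and the figure of roughly one million new U.S. cancer cases per year are assumptions supplied for the arithmetic, not data from the booklet): if the 10,000 eventual cancers accrue over about thirty years, then

\[
\frac{10{,}000 \text{ cases} / 30 \text{ years}}{1{,}000{,}000 \text{ new cases per year}}
\approx \frac{333}{1{,}000{,}000}
\approx 0.03\%,
\]

an increment to the annual cancer rate well under one tenth of one percent, even though 10,000 victims is, in absolute terms, a serious toll.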

Finally, people’s perception of risk is greatly influenced by the social context. Our responses to new risks, in fact, are largely predictable based on our enduring values and social relationships. Do we like or dislike, trust or distrust the people or institutions whose decisions are putting us at risk? Do our friends and neighbors consider the risks tolerable or intolerable? Are they enduring higher risks than ours, or escaping with lower ones? All these factors, though they are irrelevant to the mortality statistics, are intrinsic parts of what we mean by risk.

2. Moral categories mean more than risk data.

The public is far from sure that risk is the real issue in the first place. Over the past several decades our society has reached near consensus that pollution is morally wrong – not just harmful or dangerous, not just worth preventing where practical, but wrong. To many ears it now sounds callous, if not immoral, to assert that cleaning up a river or catching a midnight dumper isn’t worth the expense, that the cost outweighs the risk, that there are cheaper ways to save lives. The police do not always catch child molesters, but they know not to argue that an occasional molested child is an “acceptable risk.”

Government agencies build their own traps when they promulgate policy (and public relations) in the language of morality, depicting food additives or chemical wastes or polluted water as evils against which they vow to protect the innocent public. It is not at all obvious which environmental “insults” (another term with moral overtones) a society should reject on moral grounds and which it should assess strictly in terms of impact. But an agency that presents itself and its mission in moral terms should expect to be held to its stance. And an agency that wishes to deal with environmental risk in terms of costs-and-benefits instead of good-and-evil should proceed gently and cautiously, aware that it is tromping on holy ground.

Nor is morality the only principled basis for questioning the costs-and-benefits premises of risk assessment. Just as the moralist challenges the rightness of trading off certain risks against costs or benefits, the humanist challenges the coherence of the tradeoffs. How, the humanist asks, can anyone make sense of a standard that tries to put a cash value on human life? Or, indeed, of a standard that assumes that a hundred widely scattered deaths per year are equivalent to a one-in-a-hundred chance of obliterating a community of 10,000?

Similarly, the political critique of the premises of risk assessment begins by noting that “the greatest good for the greatest number” has always been a convenient rationale for the oppression of minorities. Democratic theory asserts that individuals and groups should be free to bargain for their own interests, and should be protected from the tyranny of the majority. There is nothing unreasonable about the suggestion that equitable distribution of risks and benefits – and of the power to allocate risks and benefits – is often more important than the minimization of total risk or the maximization of total benefit. It may be efficient to dump every environmental indignity on the same already degraded community, but it is not fair.

3. Policy decisions are seen as either risky or safe.

Like the media, the public tends to dichotomize risk. Either the risk is seen as very frightening, in which case the response is some mix of fear, anger, panic, and paralysis; or the risk is dismissed as trivial, in which case the response is apathy.

In their personal lives, people do not necessarily dichotomize risk. Most of us are quite capable of understanding that the picnic might or might not be rained out, that the boss might or might not get angry, even that smoking might or might not give us lung cancer. Of course quantified probabilistic statements are genuinely hard to understand, especially when the probabilities are small, the units are unfamiliar, and the experts disagree. But beyond these perplexities lies another issue of enormous importance to risk communication. While people may (with difficulty) master a probabilistic risk statement that concerns what they should do to protect themselves, they are bound to resist probabilistic risk statements that concern what others (government, say) should do to protect them. On my own behalf, I may choose to tolerate a risk or to protect against it, but for you to decide that my risk is tolerable is itself intolerable. Quantitative risk assessments, risk-benefit calculations, risk-cost ratios, and risk-risk comparisons are all hard to hear when we bear the risk and someone else makes the decision.

4. Equity and control issues underlie most risk controversies.

Trust and credibility are often cited as the key problems of risk communication. Certainly few people trust government and industry to protect them from environmental risk. This is just as true of the passive, apparently apathetic public as it is of the activist, visibly angry public. The former is simply more fatalistic, more prone to denial, more completely drowned in undiscriminating chemophobia. The activist public, in other words, distrusts others to protect its interests and thus chooses to protect its own. The far larger passive public is passive not because it believes others will protect its interests, but because it doubts it can protect its own. Both publics listen to the reassurances of government and industry – if they listen at all – with considerable suspicion.

But to say that trust is the problem here is to assume that the goal is a passive public that doesn’t mind being passive. If the goal is an actively concerned public, then the problem isn’t that people are distrustful, but rather that government and industry demand to be trusted. Translate the question of trust into the underlying issue of control: Who decides what is to be done?

Any environmental risk controversy has two levels. The substantive issue is what to do; the process issue is who decides. So long as people feel disempowered on the process issue, they are understandably unbending on the substantive issue, in much the same way as a child forced to go to bed protests the injustice of bedtime coercion without considering whether he or she is sleepy. It isn’t just that people oppose any decision they view as involuntary and unfair, regardless of its wisdom; because the equity and control issues come first, people typically never even ask themselves whether they agree on the merits. Outraged at the coercion, they simply dig in their heels. It is hardly coincidental that risks the public tends to overestimate generally raise serious issues of equity and control, while most of the widely underestimated risks (smoking, fat in the diet, insufficient exercise, driving without a seatbelt) are individual choices.

Specialists in negotiation and conflict resolution have long understood this relationship between substantive issues and the process issues of equity and control. Consider for example a community chosen by the state government to “host” a hazardous waste incinerator. Justly offended at this infringement of local autonomy, the community prepares to litigate, frantically collecting ammunition on the unacceptability of the site. Both their anger and the legal process itself encourage community members to overestimate the risk of the proposed facility, to resist any argument that some package of mitigation, compensation, and incentives might actually yield a net gain in the community’s health and safety, as well as its prosperity.

In interviews with community members faced with such a situation, the control issue tends to overshadow the risk assessment. But when citizens are asked to hypothesize a de facto community veto and envision a negotiation with the site developer, they become quite creative in designing an agreement they might want to sign: emissions offsets, stipulated penalties, bonding against a decline in property values, etc. It is still too early to tell whether a negotiated hazardous waste treatment facility is feasible. But thinking about such a negotiation becomes possible for community members only when they feel empowered – that is, when the issue of outside coercion has been satisfactorily addressed.

On this dimension people’s response to information is not much different from their response to persuasion. We tend to learn for a reason – either we’re curious, or we’re committed to a point of view and looking for ammunition, or we’re faced with a pending decision and looking for guidance. These three motivations account for most information-seeking and most learning – and none of them exerts much influence when an individual citizen is offered information about, say, a Superfund clean-up plan. A few stalwart souls will read out of curiosity, though it won’t take much technical detail to put a stop to that. Activists will scour the plan for evidence to support their position or for evidence that their position wasn’t properly considered. (Activists know what they think and believe they can make a difference.) And those charged with litigating, funding, or implementing the plan study it in order to do their jobs.

And the general public? Why learn if you feel powerless to do anything about what you have learned? On the other hand, when the public has felt it was exercising real influence on a decision – the ASARCO smelter in Tacoma comes to mind – it has shown a surprising ability to master the technical details, including risk assessment details.

Not that every citizen wants to play a pivotal role in environmental decisions. We have our own lives to lead, and we would prefer to trust the authorities. If the issue is unimportant enough we often decide to trust the authorities despite our reservations; if the crisis is urgent enough we may feel we have no choice but to trust the authorities, again despite our reservations. The gravest problems of risk communication tend to arise when citizens determine that the issue is important, that the authorities cannot be trusted, and that they themselves are powerless. Then comes the backlash of outrage.

5. Risk decisions are better when the public shares the power.

People learn more and assess what they learn more carefully if they exercise some real control over the ultimate decision. But this sort of power-sharing is, of course, enormously difficult for policy-makers, for a wide range of political, legal, professional, and psychological reasons. Interestingly, corporate officials may sometimes find power-sharing less unpalatable than government officials. Corporations have a bottom line to nurture, and when all else fails they may see the wisdom of sharing power in the interests of profit. But government officials have no profit to compensate for the loss of power, so they may find it harder to share.

“Public participation,” as usually practiced, is not a satisfactory substitute for power-sharing. To be sure, telling the public what you’re doing is better than not telling the public what you’re doing. Seeking “input” and “feedback” is better still. But most public participation is too little too late: “After years of effort, summarized in this 300-page report, we have reached the following conclusions.... Now what do you folks think?” At this point it is hard enough for the agency to take the input seriously, and harder still for the public to believe it will be taken seriously. There is little power-sharing in the “decide-announce-defend” tradition of public participation.

The solution is obvious, though difficult to implement. Consultations with the public on risk management should begin early in the process and continue throughout. This means an agency must be willing to tell the public about a risk before it has done its homework – before the experts have assessed the risk thoroughly, before all the policy options have been articulated, way before the policy decisions have been made. There are dangers to this strategy: people will ask the agency what it proposes to do about the problem, and the agency will have to say it isn’t sure yet. But on balance an agency is better off explaining why it doesn’t yet have all the answers than explaining why it didn’t share them years ago. In fact, not having all the answers can be made into an asset, a demonstration of real openness to public input. The goal, after all, is to enlist the rationality of the citizenry, so that citizens and experts are working together to figure out how great the risk is and what to do about it.

Of course no responsible agency will go public without any answers. What’s important is to propose options X, Y, and Z tentatively, with genuine openness to V and W, and to community comments that may eliminate Z. A list of options and alternatives – and a fair and open procedure for comparing them and adding new ones – is far more conducive to real power-sharing than a “draft” decision.

This sort of genuine public participation is the moral right of the citizenry. It is also sound policy. Undeterred by conventional wisdom, lay people often have good ideas that experts can adapt to the situation at hand; at a minimum, lay people are the experts on what frightens them and what would reassure them. When citizens participate in a risk management decision, moreover, they are far more likely to accept it, for at least three reasons:

  • They have instituted changes that make it objectively more acceptable.
  • They have got past the process issue of control and mastered the technical data on risk; that is, they have learned why the experts consider it acceptable.
  • They have been heard and not excluded, and so can appreciate the legitimacy of the decision even if they continue to dislike the decision itself.

6. Explaining risk information is difficult but not impossible, if the motivation is there.

High school teachers have long marveled that a student who couldn’t make sense of Dickens’s A Tale of Two Cities had no trouble with Hot Rod’s far more complex instructions on how to adjust one’s sparkplugs for a fast start on a rainy day. Motivation makes the difference. When people have a reason to learn, they learn.

It is still possible for communicators to make the learning easier or harder – and scientists and bureaucrats have acquired a fairly consistent reputation for making it harder. At Three Mile Island, for example, the level of technical jargon was actually higher when the experts were talking to the public and the news media than when they were talking to each other. The transcripts of urgent telephone conversations between nuclear engineers were usually simpler to understand than the transcripts of news conferences. To be sure, jargon is a genuine tool of professional communication, conveying meaning (to those with the requisite training) precisely and concisely. But it also serves as a tool to avoid communication with outsiders, and as a sort of membership badge, a sign of the status difference between the professional and everyone else.

Like any piece of professional socialization, the tendency to mystify outsiders becomes automatic, habitual more than malevolent. It’s hard for a layperson to get a straight answer from an expert even when nothing much is at stake. When a potentially serious risk is at stake, when people are frightened or angry or exhausted, when the experts aren’t sure what the answers are, when the search for a scapegoat is at hand, effective communication is a lot to expect.

In many risk communication interactions, in short, the public doesn’t really want to understand (because it feels powerless and resentful) and the experts don’t really want to be understood (because they prefer to hold onto their information monopoly). The public finds it convenient to blame the experts for obfuscation, and the experts find it convenient to blame the public for obtuseness. These motivational issues are probably more important than the traditional concerns of clarity in determining whether real knowledge will pass from expert to public.

Within the traditional concerns of clarity, the major issue is simplification. Even assuming a public that wants to understand and an expert who wants to be understood, risk information must still be simplified.

Insofar as possible, of course, it is wise to simplify language rather than content. That is, take the extra words to make hard ideas clear. Unfortunately, neither the expert source nor the lay audience is usually willing to dedicate the time needed to convey complex information a step at a time. So inevitably simplification becomes a matter of deciding what information to leave out. Experts are famous for their conviction that no information may be left out; unable to tell all, they often wind up telling nothing.

In fact, there are three standard rules of thumb for popularizing technical content.

  • Tell people what you have determined they ought to know – the answers to the questions they are asking, the instructions for coping with the crisis, whatever. This requires thinking through your information goals and your audience’s information needs, then resolutely keeping the stress where you have decided it should be.
  • Add what people must know in order to understand and feel that they understand the information – whatever context or background is needed to prevent confusion or misunderstanding. The key here is to imagine where the audience is likely to go off-track, then provide the information that will prevent the error.
  • Add enough qualifiers and structural guidelines to prepare people for what you are not telling them, so additional information later will not leave them feeling unprepared or misled. Partly this is just a matter of sounding tentative; partly it is constructing a scaffolding of basic points on which people can hang the new details as they come in.

Applying these three rules isn’t easy, but it is a lot easier than trying to tell everything you know.

The hardest part of simplifying risk information is explaining the risk itself. This is hard not only because risk assessments are intrinsically complex and uncertain, but also because audiences cling tenaciously to their safe-or-dangerous dichotomy. One path out of dichotomous thinking is the tradeoff: especially risk–benefit, but also risk–cost or risk–risk. But there is solid evidence that lay people resist this way of thinking; trading risks against benefits is especially offensive when the risks raise moral issues and the “victims” are not the ones making the choice. Another alternative to dichotomy is the risk comparison: X is more dangerous than Y and less dangerous than Z. But as we have already noted, risk means a lot more than mortality statistics, and comparing an involuntary risk like nuclear power to a voluntary one like smoking invariably irritates more than it enlightens – as does any risk comparison that ignores the distinctions listed at the start of this section.

The final alternative to dichotomy is to provide the actual data on deaths or illnesses or probability of occurrence or whatever. This must be done carefully, with explicit acknowledgment of uncertainty, of moral issues, and of non-statistical factors like voluntariness that profoundly affect our sense of risk. Graphs and charts will help; people understand pictorial representations of probability far better than quantitative ones.

Don’t expect too much. People can understand risk tradeoffs, risk comparisons, and risk probabilities when they are carefully explained. But usually people don’t really want to understand. Those who are frightened, angry, and powerless will resist the information that their risk is modest; those who are optimistic and overconfident will resist the information that their risk is substantial. Over the long haul, risk communication has more to do with fear, anger, powerlessness, optimism and overconfidence than with finding ways to simplify complex information.

7. Risk communication is easier when emotions are seen as legitimate.

It follows from what we have been saying that an important aspect of risk communication is finding ways to address the feelings of the audience. Unfortunately, experts and bureaucrats find this difficult to do. Many have spent years learning to ignore feelings, their own and everyone else’s; whether they are scientists interpreting data or managers setting policy, they are deeply committed to doing their jobs without emotion.

At an even deeper level, scientists and bureaucrats have had to learn to ignore the individual, to recognize that good science and good policy must deal in averages and probabilities. This becomes most obvious when a few people feel threatened by a generally desirable action, such as the siting of a hazardous waste facility. Experts who are confident that the risk is small and the facility needed may well try to sympathize with the target community – but their training tells them playing the odds is a good bet, somebody has to take the risk, the decision is rational, and that’s the end of the matter.

Thus the most common sources of risk information are people who are professionally inclined to ignore feelings. And how do people respond when their feelings are ignored? They escalate – yell louder, cry harder, listen less – which in turn stiffens the experts, which further provokes the audience. The inevitable result is the classic drama of stereotypes in conflict: the cold scientist or bureaucrat versus the hysterical citizen.

Breaking this self-defeating cycle is mostly a matter of explicitly acknowledging the feeling (and the legitimacy of the feeling) before trying to explain anything substantive – because any effort to explain substance first will be experienced by people as just another way of not noticing how they feel. The trick, in other words, is to separate the feeling from the substance, and respond to the feeling first. “I can tell you’re angry about this” won’t eliminate the anger – nor should it – but it will eliminate the need to insist on the anger, and will thus free energy to focus on the issue instead. “A lot of people would be angry about this” and “in your position I would be angry about this” are even more empathic remarks, legitimating the anger without labeling the citizen. All three responses are far more useful than pretending that the anger isn’t there or, worse yet, demanding that it disappear. Techniques of this sort are standard practice in many professional contexts, from police crisis intervention to family counseling. Training is available; risk communicators need not reinvent the wheel.

It helps to realize that experts and bureaucrats – their preferences notwithstanding – have feelings too. In a public controversy over risk, they are likely to have very strong feelings indeed. After all, they consider themselves moral people, yet they may be accused of “selling out” community health or safety or environmental protection. They consider themselves competent professionals, yet they may be accused of egregious technical errors. They very likely pride themselves on putting science or public service ahead of personal ambition, yet they may be accused of not caring. They chose their careers expecting if not gratitude at least a calm working environment and the trust and respect of the community. Instead they are at the center of a maelstrom of community distrust, perhaps even community hatred. It hurts. The pain can easily transform into a kind of icy paternalism, an “I’m going to help you even if you don’t know what’s good for you” attitude. This of course triggers even more distrust, even stronger displays of anger and fear. Risk communication stands a better chance of working when both sets of feelings – the expert’s and the community’s – are on the table.

Feelings are not usually the core issue in risk communication controversies. The core issue is usually control, and the way control affects how people define risk and how they approach information about risk. But the stereotypical conflict between the icy expert and the hysterical citizen is nonetheless emblematic of the overall problem. The expert has most of the “rational” resources – expertise, of course; stature; formal control of the ultimate decision. Neither a direct beneficiary nor a potential victim, the expert can afford to assess the situation coldly. Indeed, the expert dare not assess the situation in any other way. The concerned citizen, meanwhile, has mainly the resources of passion – genuine outrage; depth of commitment; willingness to endure personal sacrifice; community solidarity; informal political power. To generate the energy needed to stop the technical juggernaut, the citizen must assess the situation hotly.

A fundamental premise of “Explaining Environmental Risk” is that risk understanding and risk decision-making will improve when control is democratized. We will know this is happening when citizens begin approaching risk issues more coolly, and experts more warmly.

