Posted: August 2004
Article Summary: Back in the 1980s, Vincent Covello and I gave back-to-back presentations on risk communication as part of a U.S. Environmental Protection Agency lecture series. When Tony Wolbarst of EPA decided to collect the presentations into a book, he offered everyone a chance to revise and update. Vincent and I decided to merge our efforts into a single article on the state of risk communication, based loosely on what we had said originally plus what we now consider important. The result is a pretty good overview of the shared opinions of two well-seasoned practitioners.

Risk Communication: Evolution and Revolution

Anthony Wolbarst (ed.), Solutions to an Environment in Peril
Baltimore: Johns Hopkins University Press, 2001, pp. 164–178

Over the past thirty years, our country has witnessed a tremendous take-back of power over environmental policy by the public. In the 1970s, people were largely content to leave control in the hands of established authorities, such as the Environmental Protection Agency. In the 1980s, however, the public reasserted its claim over environmental policymaking. People became visibly upset, distressed, and even outraged when they felt excluded.

In this crucible, the current version of risk communication was born. It was created, in part, to guide the new partnership and dialogue of government and industry with the public. It addressed a fundamental dilemma made clear by that dialogue: The risks that kill people and the risks that alarm them are often completely different. There is virtually no correlation between the ranking of hazards according to statistics on expected annual mortality and the ranking of the same hazards by how upsetting they are. There are many risks that make people furious even though they cause little harm – and others that kill many, but without making anybody mad.

Risk communication is a scientifically based discipline that confronts this dilemma. Where data indicate that a hazard is not serious, yet the public is near panic, it can be used to calm people down; for this kind of situation, its goal is to provide reassurance. But it can also help generate a sense of urgency where data indicate that the hazard is serious, yet the public response is one of apathy. It has been effective, for example, in motivating people to buckle up their seat belts, to quit smoking, to test for radon in their houses, and to evacuate their homes during an emergency. Generating that urgency means shaking people by their collective lapels and saying: “Look, this is dangerous. It could kill you. Do something!”

This is the general context for the interest in risk communication that began in the 1980s and continues to this day. Other factors that have contributed to its rapid growth include significant increases in:

  • public interest in health, safety and environmental issues, and media coverage of them;
  • the demand for information generated by public concern about risks from past, present, and future activities;
  • the number and reach of right-to-know laws relating to exposures to risk agents;
  • mistrust in risk management authorities, and public demands for the right to participate as a full partner in all phases of risk assessment and risk management;
  • awareness by governments and industry that risk controversies often threaten the achievement of their organizational goals; and
  • awareness by all sides that the public’s response to a risk can be amplified or attenuated by those who wish to manipulate it – that is, that risk communication is a useful tool for advocates of particular outcomes.

Risk communication was formulated in response to these changes. In the process, several important obstacles had to be overcome: inconsistent, overly complex, confusing, or incomplete risk messages; the lack of trust in information sources; selective reporting by the media; and psychological and social factors that affect how information is processed.

Obstacles to Risk Communication Effectiveness

The first of these obstacles derives largely from the uncertainty, complexity, and incompleteness of environmental data. To make effective decisions, risk managers need to know the potential harm posed by threats to health, safety, or the environment. Risk assessments are designed to provide this information. Unfortunately, large gaps remain in our understanding of risk. Most environmental risk assessments to date have focused on cancer, for example; there has been much less study of other types of adverse health impacts (e.g., reproductive effects). In addition, most risk assessments have addressed single chemicals, with much less attention to mixtures of chemicals. This lack of data presents a critical challenge.

Even on those occasions when one can place precise quantitative error bars on a risk estimate, the “lesson” of the assessment may be only that the risk in question is somewhere between serious and nonexistent. So the decision whether to take action to avert a possible catastrophe (e.g., global warming) may need to be made before the magnitude and likelihood of the catastrophe can be estimated with anything approaching confidence.

Largely because of gaps in knowledge, risk assessment seldom provides exact answers. In this sense, it suffers from the same weaknesses as many other fields of scientific inquiry. A variety of confounding factors (e.g., smoking, drinking, exercise, and diet) often make it difficult, if not impossible, to reach definitive conclusions about cause and effect. This is especially the case for health risk assessments, where direct testing on humans is usually ethically prohibited (an important exception being controlled clinical trials). As a result, the outcomes of most risk assessments are best seen as estimates, with varying degrees of uncertainty about the actual nature of the risk. These uncertainties can justify conflicting interpretations of the data, typically grounded as much in value judgments as in scientific judgments.

A second major obstacle to effective risk communication is distrust. Sources of distrust include disagreements among experts; lack of coordination among risk management organizations; inadequate training of experts and spokespersons in risk communication skills; insensitivity to the requirements for effective communication, public participation, dialogue and community outreach; mismanagement and neglect; and a history of frequent distortion, exaggeration, secrecy, or worse on the part of many risk information providers. A complicating factor is that while industry and government risk communicators often see the lack of trust and credibility as their central problem, activists tend to see the undermining of undeserved trust as a major achievement.

Often the problem of risk communication is not so much how to regain trust as how to function without it. Important lessons can be learned here from the ways companies deal with each other – in contract negotiations, for example – where accountability, not trust, is the dominant value. Organizations that accept the obligation to prove their contentions to the satisfaction of their critics find a wide range of mechanisms to help them do so, such as third party audits. Ironically, it is because they rely more on accountability and less on trust that such organizations come to be viewed as trustworthy.

A third obstacle is selective reporting by the news media. The media are critical to the delivery of risk information to the general public (though they are much less important for communicating with involved stakeholders). A major conclusion from research focused on the news media is that journalists are highly selective in reporting about risk, and particularly inclined toward stories that involve people in unusual, dramatic, confrontational, negative, or sensational situations (e.g., natural disasters, technological disasters, emotionally charged town hall meetings). They tend to focus their attention on issues that play to the same “outrage factors” that the public uses in evaluating risks. For example, they look for stories involving dreaded events (e.g., cancer among children), risks to future generations, involuntariness, unclear benefits, inequitable distribution of risks and benefits, potentially irreversible effects, and cases where trust is lacking. They pay much less attention to stories about risks that affect many more people each year but are less dramatic (e.g., heart disease and diabetes).

In addition, many media stories about risk contain substantial omissions, or present oversimplified, distorted, or inaccurate information. Studies have revealed, for example, that media reports on cancer risks often fail to provide adequate statistics on general cancer rates for purposes of comparison, or enough information on detection, treatments, and other protective measures.

Some of these problems stem from characteristics of the media and the constraints on them, such as tight deadlines limiting the pursuit of information, and lack of time or space to deal with the complexities and uncertainties surrounding many risk issues. Others arise from how journalists view their role. They often see their job as simply to present opposing views as equally as possible, without judging their merits; thus truth in journalism is quite different from truth in science. In addition, people in the media are often forced to rely heavily on sources who are easily accessible and willing to speak out; others tend to be ignored. And reporters often do not have the scientific or technical background and expertise needed to evaluate the complex data and the disagreements that surround many debates about risks. Consequently, their stories may contain inadvertent distortions of reality, or tend to mislead, or even be just plain wrong.

The fourth major obstacle to effective risk communication derives from the psychological and social factors that influence how people process information about risk. At least seven such factors can be identified.

The first consists of the mental shortcuts – or heuristics – that all of us (including experts) use to calculate the probability that an adverse action or event will happen. As a result of these heuristics, people may make biased judgments, or use only a small amount of the available information in making decisions about risk. We tend to assign greater probability to events of which we are frequently reminded (e.g., in the news media, scientific literature, or discussions among friends or colleagues), for example, or to events that are easy to recall or imagine through concrete examples or dramatic images.

A second psychological factor that affects our processing of risk information is apathy. In many cases people just lack motivation, and are simply not interested in learning about a risk. Apathy can indicate true lack of interest, serve as a psychological defense mechanism, or be based on a prior negative experience. For example, people may not be willing to become actively engaged if they perceive a lack of relevance or an absence of opportunities for meaningful participation and dialogue.

A third factor is overconfidence and unrealistic optimism, which often lead people to ignore or dismiss risk information. A majority of the population, for example, consider themselves less likely than average to get cancer, get fired from their job, or get mugged; only 50 percent, of course, can be “less likely than average” to do anything. Overconfidence and unrealistic optimism are most influential when the risk in question is voluntary, and when high levels of perceived personal control lead to reduced feelings of susceptibility. Many people fail to use seat belts, for example, because of the unfounded belief that they are better or safer than the average driver. In a similar vein, many teenagers engage in high-risk behaviors (e.g., drunk driving, smoking, unprotected sex) because of perceptions, supported by peers, of invulnerability and overconfidence in their ability to avoid harm.

A fourth factor is the difficulty people have in understanding information that is probabilistic in nature, or relates to unfamiliar activities or technologies, or is presented in unfamiliar ways. Studies have demonstrated, moreover, that subtle changes in the way probabilistic information is framed – such as describing a treatment as offering a 90 percent chance of survival rather than a 10 percent chance of dying – can have a major impact on how we view the risk.

A fifth factor is the public’s desire and demand for scientific certainty. People are averse to uncertainty, and find a variety of coping mechanisms to reduce the anxiety it causes. This aversion often translates into a marked preference for statements of fact over statements of probability – the language of risk assessment. Despite protests by scientists that precise information is seldom available, people want absolute answers; they demand to know exactly what will happen, not what might happen.

A sixth factor is the reluctance on the part of people to change strongly held beliefs, and their willingness to ignore evidence that contradicts them. Strong beliefs about risks, once formed within a particular social and cultural context, change very slowly, and they can be extraordinarily persistent in the face of contrary evidence.

A last, but very important, social/psychological determinant of how we process risk information involves the factors that affect how we judge the actual magnitude of a risk. These components of judgment are often thought of as distortions in public risk perception; they are, perhaps, better seen as aspects of public risk assessment. Beginning in the 1960s, a large research effort focused on the complexity of the considerations involved in nonscientific risk perception/assessment. A major conclusion of this research was that typically there is only a low correlation between the level of physical risk in a situation and the amount of worry that it arouses. Much more important in determining people’s responses, it was found, is the presence of what are now called “outrage factors.” These include:

  • Voluntariness. Risks from activities considered to be involuntary or imposed (e.g., exposure to chemicals or radiation from a waste or industrial facility) are judged to be greater, and are therefore less readily accepted, than risks from activities that are seen to be voluntary (e.g., smoking, sunbathing, or mountain climbing).
  • Controllability. Risks from activities viewed as under the control of others (e.g., releases of toxic chemicals by industrial facilities) are judged to be greater, and are less readily accepted, than those from activities that appear to be under the control of the individual (e.g., driving an automobile or riding a bicycle).
  • Familiarity. Risks from activities viewed as unfamiliar (such as from leaks of chemicals, or radiation from waste disposal sites) are judged to be greater than risks from activities viewed as familiar (such as household work).
  • Fairness. Risks from activities believed to be unfair or to involve unfair processes (e.g., inequities related to the siting of industrial facilities or landfills) are judged to be greater than risks from fair activities (e.g., vaccinations).
  • Benefits. Risks from activities that seem to have unclear, questionable, or diffused personal or economic benefits (e.g., waste disposal facilities) are judged to be greater than risks from activities that have clear benefits (e.g., jobs, monetary benefits, automobile driving).
  • Catastrophic potential. Risks from activities viewed as having the potential to cause a significant number of deaths and injuries grouped in time and space (e.g., deaths and injuries resulting from a major industrial explosion) are judged to be greater than risks from activities that cause deaths and injuries scattered or random in time and space (e.g., automobile accidents).
  • Understanding. Poorly understood risks (such as the health effects of long-term exposure to low doses of toxic chemicals or radiation) are judged to be greater than risks that are well understood or self-explanatory (such as pedestrian accidents or slipping on ice).
  • Uncertainty. Risks from activities that are relatively unknown or that pose highly uncertain risks (e.g., risks from biotechnology and genetic engineering) are judged to be greater than risks from activities that appear to be relatively well known to science (e.g., actuarial risk data related to automobile accidents).
  • Delayed effects. Risks from activities that may have delayed effects (e.g., long latency periods between exposure and adverse health effects) are judged to be greater than risks from activities viewed as having immediate effects (e.g., poisonings).
  • Effects on children. Risks from activities that appear to put children specifically at risk (e.g., milk contaminated with radiation or toxic chemicals; pregnant women exposed to radiation or toxic chemicals) are judged to be greater than risks from activities that do not (e.g., workplace accidents).
  • Effects on future generations. Risks from activities that seem to pose a threat to future generations (e.g., adverse genetic effects due to exposure to toxic chemicals or radiation) are judged to be greater than risks from activities that do not (e.g., skiing accidents).
  • Victim identity. Risks from activities that produce identifiable victims (e.g., a worker exposed to high levels of toxic chemicals or radiation; a child who falls down a well; a miner trapped in a mine) are judged to be greater than risks from activities that produce statistical victims (e.g., statistical profiles of automobile accident victims).
  • Dread. Risks from activities that evoke fear, terror, or anxiety (e.g., exposure to cancer-causing agents; AIDS) are judged to be greater than risks from activities that do not arouse such feelings or emotions (e.g., common colds and household accidents).
  • Trust. Risks from activities associated with individuals, institutions or organizations lacking in trust and credibility (e.g., industries with poor environmental track records) are judged to be greater than risks from activities associated with those that are trustworthy and credible (e.g., regulatory agencies that achieve high levels of compliance among regulated groups).
  • Media attention. Risks from activities that receive considerable media coverage (e.g., accidents and leaks at nuclear power plants) are judged to be greater than risks from activities that receive little (e.g., on-the-job accidents).
  • Accident history. Risks from activities with a history of major accidents or frequent minor accidents (e.g., leaks at waste disposal facilities) are judged to be greater than risks from those with little or no such history (e.g., recombinant DNA experimentation).
  • Reversibility. Risks from activities considered to have potentially irreversible adverse effects (e.g., birth defects from exposure to a toxic substance) are judged to be greater than risks from activities considered to have reversible adverse effects (e.g., sports injuries).
  • Personal stake. Risks from activities viewed by people to place them (or their families) personally and directly at risk (e.g., living near a waste disposal site) are judged to be greater than risks from activities that appear to pose no direct or personal threat (e.g., disposal of waste in remote areas).
  • Ethical/moral nature. Risks from activities believed to be ethically objectionable or morally wrong (e.g., foisting pollution on an economically distressed community) are judged to be greater than risks from ethically neutral activities (e.g., side effects of medication).
  • Human vs. natural origin. Risks generated by human action, failure or incompetence (e.g., industrial accidents caused by negligence, inadequate safeguards, or operator error) are judged to be greater than risks believed to be caused by nature or “Acts of God” (e.g., exposure to geological radon or cosmic rays).

These findings reveal that people often perceive/assess risk more in terms of these “outrage” factors than in terms of potential for “real” harm or hazard. For the public, Risk = Hazard + Outrage.

This equation reflects the observation that an individual’s perception or assessment of risk is based on a combination of hazard (e.g., mortality and morbidity statistics) and outrage factors. When present, outrage often takes on strong emotional overtones. It predisposes an individual to react emotionally (e.g., with fear or anger), which can in turn significantly amplify levels of worry. Outrage also tends to distort perceived hazard. But as we have stressed, the outrage factors are not only distorters of hazard perception. They are also, independently, components of the risk in question – and may themselves be perceived accurately or inaccurately.

The outrage model indicates that a key to resolving risk controversies lies in recognizing the importance of the various outrage factors that we have just discussed. Thus a fairly distributed risk is viewed as being less risky, and therefore is more acceptable, than an unfairly distributed one; an activity that provides significant benefits to the parties at risk is more acceptable than another with no such benefits; an activity for which there are no alternatives is more acceptable than one for which there appear to be better alternatives; a risk that the parties affected can control – through voluntary choice, the sharing of power, or the acquisition of knowledge needed to make informed choices – is more acceptable than a risk that is beyond their control; a risk that people can assess and decide voluntarily to accept is more acceptable than an imposed risk.

These statements are true in the same sense that a risk that is quantifiably small is more acceptable than a large risk. That is, risk is multidimensional – and its mathematical size (its hazard) is only one of the dimensions. Since people vary in how they assess risk acceptability, they will weigh the outrage factors according to their own values, education, personal experience, and stake in the outcome. Because acceptability is a matter of values and opinions, and because values and opinions differ, discussions of risk may also be debates about values, accountability, and control.

If the outrage model is accepted as valid, then a broad range of risk communication and management options become available for resolving risk controversies. Because outrage factors such as fairness, familiarity, and voluntariness are as relevant as measures of hazard probability and magnitude in judging the acceptability of a risk, efforts to reduce outrage (to make a risk fairer, more familiar, and more voluntary) are as significant as efforts to reduce hazard. Indeed, if “Risk = Hazard + Outrage” is taken literally, then making a risk fairer, more familiar, and more voluntary does indeed make the risk smaller, just as reducing hazard makes it smaller. Similarly, because personal control is important, efforts to share power, such as establishing and assisting community advisory committees, or supporting third party research, audits, inspections, and monitoring, can be powerful means for making a risk more acceptable.
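
To make the arithmetic of the model concrete, here is a minimal, purely illustrative sketch in Python. The authors offer “Risk = Hazard + Outrage” as a conceptual claim, not a quantitative formula, so everything numeric below – the 0-to-1 scales, the equal weighting of factors, and the example scores for a hypothetical industrial leak versus sunbathing – is an assumption introduced for illustration only.

    # Illustrative sketch only. "Risk = Hazard + Outrage" is a conceptual model;
    # the scales, weights, and scores below are hypothetical.

    # A few of the outrage factors discussed above, each scored from
    # 0 (the factor contributes little outrage) to 1 (it contributes a lot).
    OUTRAGE_FACTORS = ["voluntariness", "controllability", "familiarity", "fairness", "trust"]

    def perceived_risk(hazard, outrage_scores):
        """Combine a hazard score (0-1) with the average of the outrage scores (0-1 each)."""
        outrage = sum(outrage_scores.get(f, 0.0) for f in OUTRAGE_FACTORS) / len(OUTRAGE_FACTORS)
        return hazard + outrage  # the model read literally: Risk = Hazard + Outrage

    # The same modest hazard yields very different perceived risk once outrage differs.
    industrial_leak = perceived_risk(0.2, {"voluntariness": 0.9, "controllability": 0.9,
                                           "familiarity": 0.8, "fairness": 0.8, "trust": 0.9})
    sunbathing = perceived_risk(0.2, {"voluntariness": 0.1, "controllability": 0.1,
                                      "familiarity": 0.1, "fairness": 0.2, "trust": 0.2})
    print(industrial_leak, sunbathing)  # roughly 1.06 versus 0.34

On this literal reading, reducing outrage – making a risk fairer, more familiar, more voluntary – lowers the perceived risk just as surely as reducing hazard does, which is the point made above.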

Four Stages of Risk Communication

Not surprisingly, it has taken governments, companies, and others time to absorb these ideas. Indeed, “risk communication” has gone through four evolutionary stages in the process, each with its own general philosophy and approach.

The first stage was simply to ignore the public. This is the pre-risk-communication stage, prevalent in the United States until about 1985. The approach is built on the notion that most people are hopelessly stupid, irredeemably irrational. So you ignore them if you can; mislead them if you absolutely have to; lie to them if you think you can get away with it. Protect their health and environment, but by no means let them in on risk policymaking, because they’ll only mess things up.

For a long time, the public was content to be ignored. But this approach stopped working in the mid- to late 1980s. A movement began to take back power over environmental policy; and increasingly, when the public was ignored, controversies became larger. That was the experience of the nuclear power and chemical industries; the biotechnology industry is learning the same lesson today.

Since ignoring the public no longer worked, we advanced to the second stage, and the first level of true risk communication: learning how to explain risk data better. This is where many organizations are still to be found today. Although there has been real progress, it remains an uphill battle for spokespersons to explain risk numbers – such as parts per billion – so that people understand what they mean. Techniques are improving for explaining (for example) that a risk that is numerically small is trivial in importance, that people do not need to worry about it, that they should ask that their tax dollars be spent averting other, much more serious risks instead. Risk communicators have found that it is also clearly worthwhile to learn how to deal with the media, how to reduce or eliminate jargon, and how to make charts and graphs better.

Most important, risk communicators have discovered that motivation is the key to learning. While risk communication materials for the general public should be presented at the sixth-to-ninth-grade level to be comprehensible, people take in even sixth-to-ninth-grade material only when they are motivated. When they are sufficiently motivated, they manage to learn even very complex material. We all know high school boys who cannot make sense out of A Tale of Two Cities but have no difficulty figuring out from an article in Popular Mechanics how to adjust the spark plugs on their car. By any known readability test, Popular Mechanics is more difficult than Dickens. In a similar way, motivated people without high school diplomas can make their way through a highly technical 300-page environmental impact report to zero in on the one paragraph that they believe undermines the entire argument of its authors.

For some risk problems, such as radon, where the hazard is large and the controversy is minimal, doing a better job explaining data is one of the most important pieces of the puzzle. When people have control over a particular risk, and response to it is voluntary, there is need for action at the individual level. Health departments are accustomed to this type of issue, but for many organizations, including environmental regulatory agencies, it is new. Here, the main challenges are motivating attention and explaining the risk numbers.

When the hazard is not great but people are extremely outraged, on the other hand, explaining the data better seldom does much good (and motivating attention is unnecessary). You can never produce a video that will transform people who are trying to throw rocks at you into a pacified, well-informed audience.

That is why risk communication moved on to the third stage, which is built around dialogue with the community, especially with interested and concerned, even fanatic, stakeholders. The publication of the Seven Cardinal Rules of Risk Communication (see the Appendix) by EPA as a policy guidance document in 1988 was an important third-stage event; its central premise is that what people mean by risk is much more complicated than what technical experts commonly mean by risk.

In the third stage, a profound paradigm shift took place. For the first time, risk was properly seen as consisting of two almost independent, basic elements, hazard and outrage. The advantage of the “hazard + outrage” concept was that it served to reframe the problem. It allowed risk policy makers to consider in their decisions all of the factors that are included in the public’s definition of risk. This new, expanded concept of “risk” also pointed to the need for real dialogue among all the interested parties. It led to the then revolutionary idea that the essence of risk communication is not just explaining risk numbers – it is also reducing (or increasing) outrage. The problem is not mainly that people do not understand the numbers, but rather that they are (or are not) angry or upset.

Third-stage success requires listening: if you have a substantive action to offer in response to a risk situation, and you want people to listen to it, you have to listen to them first. In addition, you cannot just acknowledge people’s outrage – you must communicate that they are entitled to be outraged, and why.

An excellent example of this occurred in the late 1980s. Medical waste was floating up on the shorelines of the northeastern United States, and the public’s response was powerful. In New Jersey, the Department of Environmental Protection kept telling the people that the stuff was not dangerous, but the public kept on insisting that it was still disgusting, and a battle erupted. In Rhode Island, by comparison, the Commissioner of Health handled the same issue more deftly. He went public and said (in essence): “This is an outrage; this is unacceptable. The people of Rhode Island will not, and should not, tolerate any hypodermic syringes washing up on our shores. We are going to do absolutely everything in our power to stop it, even though the hazard from it is essentially nonexistent – even though there is a negligible risk to health. We are going to turn our budget upside down if we have to. We will put a stop to it no matter what it takes and no matter how much it costs.” The public’s reply was: “Thank you for your response … but maybe we should wait a minute. If it’s really a negligible risk, how much of our money are you planning to spend?” Psychiatrists call this “getting on the other side of the resistance.” When you share, and even exaggerate, people’s concerns, they pick up that you believe that their anger and their outrage are legitimate. That frees them to feel something else. In Rhode Island, they came to see that the risk was extremely small, and they began wondering whether it was worth spending a lot of money on it.

Stage four comes about when you really believe in stage three, and discover that stage three requires fundamental shifts in an organization’s values and culture. Stage four involves treating the public as a full partner. It works somewhat like psychotherapy. First, you must commit yourself, at least tentatively, to the goal – in this case, third-stage risk communication. Then you try to carry it out. Then you discover that, for the most part, you can’t. And then either you give up, or you recognize that a change in how you deal with others often requires a change in how you deal with yourself.

Only limited progress has been made toward achieving these organizational adjustments. The fourth stage of risk communication is difficult to achieve, largely because it is very hard for individuals and organizations to change. Habit and inertia propel us in the direction of old behavior.

Another reason for limited stage-four progress is that technical fields are dominated by people who, by disposition, prefer clear boundaries, logical approaches, and unemotional situations. They typically do not like negotiation, dialogue, and partnerships with members of the public.

A third reason involves the convictions and principles of the people who work in the environmental field. Most chose to do so because they want to save lives, protect people from hazards. They are convinced that they know precisely what is needed to do that, so they have good reasons for resisting any competing definition of what is needed. They want to deal with risks in a scientific, factual way, and not with people and their psychological problems.

Then, too, there is the skepticism of many people in government and industry who doubt that risk communication can do much good. They believe that people are irrational and hysterical, and that they will stay that way, regardless of how you try to get through to them.

A fifth reason for limited stage-four progress concerns power. At the core of the third and fourth stages is empowerment of the public. Every profession has at its center an impulse to hoard power, and to resist attempts to usurp that power.

Organizational climate is also important. Researchers have found that people in bureaucracies are often very adept at distinguishing real policies from those that are merely rhetoric. Is the organization’s commitment to dialogue sincere? If the dialogue process fails, will it harm my career? Will it affect my performance appraisal? Will I get the time, the staff, the training, and the budget to do the job well?

The seventh, and probably the most important, reason involves one’s comfort level and self-esteem. Managing the risk my way feels good (to me). Sharing control and credit with citizens – especially with angry, hostile, irrational citizens and with the activists who have inflamed them – feels rotten (to me). Even in the face of evidence that reducing stakeholder outrage and settling risk controversies are usually more profitable strategies than continuing a fight, managers frequently put a higher premium on protecting their own comfort and self-esteem than on achieving their organization’s substantive goals. The outrage they choose to reduce is their own.

Except for the first stage (ignoring the public), the various stages of risk communication build on one another. However, they do not replace one another. We still must be concerned about explaining the data better, and about making dialogue happen – even as we try to change our organizations.

If this fourth stage is ever fully realized, risk communication will turn out to have been even more revolutionary an idea than we first thought. We knew that it was going to alter the ways in which the public deals with organizations about risk, and in which organizations deal with the public. What we did not realize was that it would transform the way organizations think of themselves as well. We have discovered, at the most fundamental level, that engaging in meaningful, respectful, and frank dialogue with the public involves changes in basic values and organizational culture.

Appendix: The EPA’s Seven Cardinal Rules of Risk Communication

Rule 1. Accept and involve the public as a legitimate partner.

Two basic tenets of risk communication in a democracy are generally understood and accepted. First, people and communities have a right to participate in decisions that affect their lives, their property, and the things they value. Second, the goal of risk communication should not be to defuse public concerns or avoid action. The goal should be to produce an informed public that is involved, interested, reasonable, thoughtful, solution-oriented, and collaborative.

Guidelines: Demonstrate respect for the public by involving the community early, before important decisions are made. Clarify that decisions about risks will be based not only on the magnitude of the risk but also on factors of concern to the public. Involve all parties that have an interest or a stake in the particular risk in question. Adhere to the highest moral and ethical standards; recognize that people hold you accountable.

Rule 2. Listen to the audience.

People are often more concerned about issues such as trust, credibility, control, benefits, competence, voluntariness, fairness, empathy, caring, courtesy, and compassion than about mortality statistics and the details of quantitative risk assessment. If people feel or perceive that they are not being heard, they cannot be expected to listen. Effective risk communication is a two-way activity.

Guidelines: Do not make assumptions about what people know, think or want done about risks. Take the time to find out what people are thinking; use techniques such as interviews, facilitated discussion groups, advisory groups, toll-free numbers, and surveys. Let all parties that have an interest or a stake in the issue be heard. Identify with your audience and try to put yourself in their place. Recognize people’s emotions. Let people know that what they said has been understood, addressing their concerns as well as yours. Recognize the “hidden agendas,” symbolic meanings, and broader social, cultural, economic or political considerations that often underlie and complicate the task of risk communication.

Rule 3. Be honest, frank, and open.

Before a risk communication can be accepted, the messenger must be perceived as trustworthy and credible. Therefore, the first goal of risk communication is to establish trust and credibility. Trust and credibility judgments are resistant to change once made. Short-term judgments of trust and credibility are based largely on verbal and nonverbal communications. Long-term judgments of trust and credibility are based largely on actions and performance. In communicating risk information, trust and credibility are a spokesperson’s most precious assets. Trust and credibility are difficult to obtain. Once lost, they are almost impossible to regain.

Guidelines: State your credentials, but do not ask or expect to be trusted by the public. If an answer is unknown or uncertain, express willingness to get back to the questioner with answers. Make corrections if errors are made. Disclose risk information as soon as possible (emphasizing appropriate reservations about reliability). Do not minimize or exaggerate the level of risk. Speculate only with great caution. If in doubt, lean toward sharing more information, not less – or people may think something significant is being hidden. Discuss data uncertainties, strengths and weaknesses – including the ones identified by other credible sources. Identify worst-case estimates as such, and cite ranges of risk estimates when appropriate.

Rule 4. Coordinate and collaborate with other credible sources.

Allies can be effective in helping communicate risk information. Few things make risk communication more difficult than conflicts or public disagreements with other credible sources.

Guidelines: Take time to coordinate all inter-organizational and intra-organizational communications. Devote effort and resources to the slow, hard work of building bridges, partnerships, and alliances with other organizations. Use credible and authoritative intermediaries. Consult with others to determine who is best able to answer questions about risk. Try to issue communications jointly with other trustworthy sources such as credible university scientists, physicians, citizen advisory groups, trusted local officials, and national or local opinion leaders.

Rule 5. Meet the needs of the media.

The media are a prime transmitter of information on risks. They play a critical role in setting agendas and in determining outcomes. The media are generally more interested in politics than in risk; more interested in simplicity than in complexity; and more interested in wrongdoing, blame and danger than in safety.

Guidelines: Be open with and accessible to reporters. Respect their deadlines. Provide information tailored to the needs of each type of media, such as sound bites, graphics and other visual aids for television. Agree with the reporter in advance about the specific topic of the interview; stick to the topic in the interview. Prepare a limited number of positive key messages in advance and repeat the messages several times during the interview. Provide background material on complex risk issues. Do not speculate. Say only those things that you are willing to have repeated: everything you say in an interview is on the record. Keep interviews short. Follow up on stories with praise or criticism, as warranted. Try to establish long-term relationships of trust with specific editors and reporters.

Rule 6. Speak clearly and with compassion.

Technical language and jargon are useful as professional shorthand. But they are barriers to successful communication with the public. In low trust, high concern situations, empathy and caring often carry more weight than numbers and technical facts.

Guidelines: Use clear, nontechnical language. Be sensitive to local norms, such as speech and dress. Strive for brevity, but respect people’s information needs and offer to provide more information. Use graphics and other pictorial material to clarify messages. Personalize risk data; use stories, examples, and anecdotes that make technical data come alive. Avoid distant, abstract, unfeeling language about deaths, injuries and illnesses. Acknowledge and respond (both in words and with actions) to emotions that people express, such as anxiety, fear, anger, outrage, and helplessness. Acknowledge and respond to the distinctions that the public views as important in evaluating risks. Use risk comparisons to help put risks in perspective, but avoid comparisons that ignore distinctions that people consider important. Always try to include a discussion of actions that are under way or can be taken. Promise only that which can be delivered, and follow through. Acknowledge, and say, that any illness, injury or death is a tragedy and to be avoided.

Rule 7. Plan carefully and evaluate performance.

Different goals, audiences, and media require different risk communication strategies. Risk communication will be successful only if carefully planned and evaluated.

Guidelines: Begin with clear, explicit objectives – such as providing information to the public, providing reassurance, encouraging protective action and behavior change, stimulating emergency response, or involving stakeholders in dialogue and joint problem solving. Evaluate technical information about risks and know its strengths and weaknesses. Identify important stakeholders and subgroups within the audience. Aim communications at specific stakeholders and subgroups in the audience. Recruit spokespersons with effective presentation and human interaction skills. Train staff – including technical staff – in communication skills; recognize and reward outstanding performance. Pretest messages. Carefully evaluate efforts and learn from mistakes.

Copyright © 2001 by Vincent Covello and Peter M. Sandman
