Posted: November 10, 2001
Article Summary: A recurring question among my clients is: “Why can’t we just explain the data so people won’t be outraged any more?” This article reports some research on the efficacy of technical information as a way to shape risk perception. The results are not encouraging to my clients’ fondest hopes.

Testing the Role of Technical Information
in Public Risk Perception

RISK: Issues in Health and Safety, Fall 1992, pp. 341–364
(Page 1 of 4)

Introduction

Experts and laypersons have long disagreed about which risks to human health and safety and the environment should be of greatest concern. Experts in environmental health are most concerned about public overestimation of low-probability risks, although alleged underestimation of high-probability risks, e.g., radon, also concerns them. They assume that risk overestimation is the basis of citizen disagreement with experts, and that overestimation is due to ignorance of technical facts.1  The solution to the conflict, then, is to reduce public ignorance through education about the toxicity, exposure routes and health effects of environmental toxicants.

These are, however, assumptions. It is not clear that knowledge or ignorance of technical facts drives risk estimation, or that risk estimation is the central factor in public risk perception.2   It is even less clear whether providing citizens with technical risk information will alter their perceptions of risk or their views of how well government agencies are protecting the environment. Several studies suggest that agency behavior – for example, in coping with the effects of Chernobyl3 or in planning for a nuclear waste repository at Yucca Mountain4 – has far more impact on public views than agencies’ technical information.

The study on which this paper is based5 compared the effects of three variables in hypothetical newspaper stories:

  • the amount of technical information provided,
  • the extent to which government officials were responsive or unresponsive and citizens were calm or upset,6 and
  • the magnitude of the risk, e.g., concentration of a hazardous chemical in water or number of households exposed.

This paper reviews methodological and conceptual challenges of testing the effect of technical information, reports results of one test of technical detail on perceived risk and perceived appropriateness of government action, and suggests approaches for future research.

Possible Roles of Technical Information

Study of the effect of technical information on risk perception faces several challenges. One must determine what effects should be expected and what kind of information is pertinent. Scientists advocating more communication of technical information to the public presume that information will lead citizens to see the risks the same way experts do, e.g., view low-probability risks as insignificant. Scholars who document the public’s lack of knowledge about science also imply that improving scientific literacy will reduce disagreements between experts and citizens.7

Some scholars propose that people update their knowledge in a Bayesian fashion from hazard-related information. Thus, perceived seriousness of a risk in a followup survey is believed to be a weighted average of earlier perceptions of risk seriousness and the message people see in information they have received in the interim. One study found, for example, that quantitative information about radon is superior to qualitative information in reducing gaps between objective and subjective risks.8   Yet, information-updating studies have not examined the effects of different topics of technical information, nor can Bayesian theorems predict such effects. In other words, these studies tell us that providing information may make a difference but do not suggest what information makes the difference.
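
To make the weighted-average formulation concrete, here is a minimal sketch (the notation and the weight w are mine, not drawn from the studies cited):

    R_{\text{follow-up}} = w \, R_{\text{prior}} + (1 - w) \, M, \qquad 0 \le w \le 1

where R_prior is the earlier perceived seriousness of the risk, M is the seriousness implied by the message received in the interim, and w is the relative weight given to prior beliefs. Nothing in this form, however, says which topics of technical information shape M or how strongly.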

Allan Mazur has argued that the more people see or hear about the risks of a technology, e.g., as measured in overall media coverage of the topic, the more concerned they will become. This effect, he suggested, would occur whether the coverage was positive or negative; the mere mention of risks, well- or poorly-managed, was enough to make the risks more memorable and thus increase public estimates of risk.9   The same effect might occur when technical information appears in a single news story, if readers construed the inclusion of such information as a signal that the issue deserves considerable attention and concern. This signal would be all the stronger because technical information is not a common attribute of news stories.10   Alternatively, inclusion of technical jargon could be interpreted as an attempt to hide something, justifying and provoking extra concern. Some studies contradicted Mazur’s thesis for effects of overall media coverage;11 other hypotheses have not been tested. All of them, if true, imply that sharing technical risk information with the public would increase perceived risk, the opposite effect from that proposed by many risk professionals.

Yet another possibility is that technical content might interact with other attributes of the news story to affect risk perceptions. For example, technical detail might make a story more credible, hence a frightening story scarier and a calming story more reassuring. One test of this hypothesis found no such interaction, and no direct effect of technical detail on readers’ alarm or comfort.12   Hypotheses about how technical information might combine with such story attributes as topic or media outlet have yet to be formulated, much less tested.

Clearly there are several, potentially contradictory, plausible effects of technical information on risk perception. In addition, there are several possible kinds of technical information that might exert these effects. Officials and experts who call for public education rarely specify which kind of data they expect to work and may not know themselves how to proceed. However, it is difficult to imagine circumstances under which officials would fail to tell the public about potential exposure routes and health effects of chemicals involved in an environmental spill, for example. So, the pertinent comparison is not between zero and some information, but between some and more (or different) information.

Several potential comparative issues about what information to include confront the designer of research on the effects of technical information:

Detail vs. length of discussion

Compare a statement that short-term exposure to high levels of a chemical “can cause a wide range of health effects” to one that it “can cause a wide range of health effects, such as loss of muscle coordination, weakness, restlessness, and irritation of the eyes and skin.” Would differing effects of these statements, if any, be due to the greater detail or the greater length of the second? This confounding is exacerbated if the manipulation of technical detail includes sub-details (e.g., exposure routes as well as health effects). Each contrast across the stories potentially widens the gap in story length. We would need to compare a story with technical detail to another story of equal length but without technical detail.

Detail varying in kind vs. amount

One story may detail health effects, while another discusses dose-response relationships. Are any observed differences in dependent variables due to amount of detail or to story length or to kind of information?

Neutral vs. alarming or reassuring information

For example, would otherwise identical accounts of health effects, with one including cancer, foster identical risk perceptions?

Technical information’s effects on perceived risk vs. audience size

A science- and jargon-filled story may cause readers or viewers to stop paying attention, or it may alarm or reassure them.

Technical information vs. scientific certainty

Experts often argue that laypeople are too prone to seek certainty, while greater knowledge leads to understanding that certainty does not exist. In this view the value of providing more detail is that it grounds generalizations in data, theory, and caveats. Detail makes otherwise misleadingly simple statements more accurate and credible. On this view, the second of the following two versions of health effects data should heighten readers’ sense of uncertainty about health effects:

  • “Scientific research has linked long-term PERC exposure to some kinds of cancer in test animals”
  • “Scientific research has linked long-term PERC exposure to liver cancer in mice and leukemia in rats. Although no evidence has been found concerning cancer in humans, EPA considers PERC a ‘suspected human carcinogen.’”

However, these presumed effects of information about uncertainty may not occur. Citizens who fail to find certainty, whether or not they are told directly that it does not exist, may become alarmed despite otherwise believing a technical statement. If many laypeople believe science should produce certainty,13 uncertainty in technical information could even reduce credibility, and thereby raise perceived risk. Thus the effects of more information must be carefully separated from the effects of uncertainty engendered by the information.

Potentially intervening variables

For example, the trustworthiness of the person or organization supplying technical information may affect its impact. Several studies14 have found that the public sees wide differences in the credibility of various institutions on environmental issues. Environmentalists are usually most credible, industry least credible, and government moderately credible. Credibility also could be affected by whether the source communicates as expected. For example, New Jersey citizens said they would find state officials more credible if they declared an environmental problem to be dangerous than if they said it was not,15 and the same is probably true of industry. In contrast, environmentalists should be most trusted when they say something is safe. Technical information challenged by (trusted) opponents should be less credible than information left unchallenged, or even supported, by opponents.

Channel that conveys technical information

Suppose that news stories with details about exposure pathways and health effects do not affect risk perceptions. We cannot conclude that the same information would be ineffective if conveyed through other channels. Technical information may have a stronger effect in more direct and personal interactions, such as public meetings or one-on-one conversations. These situations allow for questions and answers to clarify the data, time to add metaphors and other comparisons, and the building of a potentially trusting relationship. Even other printed material may be more effective than press stories, whose readers include many people who are not actively looking for risk information, or indeed for anything other than entertainment. Fact sheets and brochures passed out to interested parties may be processed more easily than media accounts; they focus on a single topic, and their readers are more highly motivated. In contrast, detail communicated through the mass media may only reduce story readability, introducing, e.g., the concept of probability, without enough information to make sense of this new idea. These differences may affect risk perceptions.

Clarity with which technical detail is conveyed16

Clarity encompasses many attributes, such as jargon, sentence length, sentence complexity, tone, organization of ideas and active vs. passive voice. This, too, could affect risk perceptions.

Assessing the effect of technical information on risk perceptions involves a number of challenging issues. The pilot study described below involved a government source using a mass media channel to convey information, varying in detail and (slightly) in length but not in kind, about a spill of a potentially carcinogenic chemical. Researchers varied the magnitude of the risk and statements of the government spokesman about agency action. This provided a test of the hypothesis that different messages about risk from a given source can elicit different reactions. The study was a first step toward clarifying the direct and interactive effects of variables discussed above, with a focus on two:

  • Do laypeople recognize more detailed technical information (as defined by experts) as more detailed?
  • Does reading more detailed technical information about an environmental problem affect lay views of the risks or of the government information source managing the risks?

Research Design

Hypothetical news stories about a spill of perchloroethylene (PERC) were developed, each with a “low” or “high” value for each of three treatment variables: outrage, risk magnitude and technical detail. Appendix I contains examples of these stories. The channel used was a newspaper story because such stories are widely used to disseminate environmental information. Also, environmental professionals see newspaper stories as potentially very distorting of lay risk perceptions, but dominant and unavoidable because most Americans get most of their information through television broadcasts or newspapers.17
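
Crossing three two-level factors gives a 2 × 2 × 2 factorial design with eight story versions. The sketch below simply enumerates those conditions; the variable names and labels are illustrative, not taken from the study materials.

    from itertools import product

    # The three treatment variables, each manipulated at two levels,
    # per the research design described above.
    factors = {
        "outrage": ["low", "high"],           # agency responsiveness and residents' distress
        "magnitude": ["low", "high"],         # toxicity, exposure levels, households exposed
        "technical_detail": ["low", "high"],  # depth of exposure-route and health-effect information
    }

    # Enumerate the eight story conditions of the 2 x 2 x 2 design.
    conditions = [dict(zip(factors, levels)) for levels in product(*factors.values())]

    for i, condition in enumerate(conditions, start=1):
        print(f"Story version {i}: {condition}")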

The source of technical information in these stories was the New Jersey Department of Environmental Protection and Energy (NJDEPE), which sponsored the study.18   Government agencies also are the single largest source of media stories on the environment.19

The “outrage” factor varied the agency spokesperson’s behavior. He did or did not willingly share information, promise review of regulations, and arrange for well-water and exposure testing. The story also varied reported levels of residents’ distress. The stories thus included two kinds of reported behavior: that of the agency spokesperson and that of residents. Such “person on the street” reactions to government statements are typical of news stories on environmental issues. These two sets of behaviors may have joint, separate, or offsetting effects on risk perception, just as may two government actions, e.g., to share information and review regulations. However, one must first see whether outrage in general affects risk perception before analyzing the effects of subvariables. To keep story length workable, critiques of technical information by other groups were not included.

The “magnitude” variable altered the estimated toxicity of PERC, the estimated exposures resulting from the spill, and the number of people exposed. Risk magnitude varied overall between the two versions by nearly five orders of magnitude (approximately 80,000-fold).
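
As a quick arithmetic check of that characterization (my calculation, not the authors’):

    \log_{10}(80{,}000) \approx 4.9

so an 80,000-fold difference does indeed fall just short of five orders of magnitude.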

Technical information was more or less detailed in its presentation of facts about exposure pathways, health effects, and evidence for those health effects. The cancer health effects example provided above20 was included in both pilot and field tests. The example for non-cancer effects21 was included only in the pilot tests. Varying the amount of detail given for the same technical topics avoided confusing these effects with those from differing kinds of information. Technical details were reviewed for accuracy by NJDEPE scientific staff. The amount of detail in the two versions was a plausible reflection of detail likely in actual news stories. The study only partly controlled for the possible effect of story length. High-detail stories were about 14% longer (105 vs. 92 lines) than low-detail stories. However, both filled nearly two columns22 and readers may not see them as differing much.23

The issue of whether detail raised or lowered perceived uncertainty was not addressed. Because resources were limited, each of the three manipulations (detail, outrage, risk magnitude) bundled several sub-variables; the technical detail manipulation, for example, included potential uncertainty (about whether PERC caused cancer in humans). This meant the study also could not separate the effects of uncertainty from the effects of other sorts of technical detail.

Table 1 details the variables measured in this design in the field study. The instrument (Appendix II) included statements intended to measure risk aversion. Previous research24 suggested that controlling for this variable might provide a more powerful test of risk perception hypotheses.

Table 1.25  [Omitted]

This research was supported by the New Jersey Department of Environmental Protection and Energy (NJDEPE), Division of Science and Research (DSR). The views reported here do not necessarily reflect the views of the agency.

Dr. Johnson is a Research Scientist in the Risk Communication Unit, NJDEPE DSR. He holds a B.A. (Environmental Values and Behavior) from University of Hawaii, an M.A. (Environmental Affairs) and a Ph.D. (Geography) from Clark University.

Dr. Sandman is a Professor in the Environmental Communication Research Program (ECRP), and in the Program of Human Ecology, at Rutgers University. He holds a B.A. (Psychology) from Princeton University and an M.A. and Ph.D. (Journalism) from Stanford University.

Dr. Miller is a Research Associate at ECRP. He holds a B.S. (Marketing and Economics) from Ohio State University, an M.S. (Psychology) from Idaho State University, and a Ph.D. (Psychology) from University of Utah.

Copyright © 2001 by Branden B. Johnson, Peter M. Sandman, and Paul Miller
