Posted: January 4, 2004
Article Summary: Scientists are from Mars and everyone else is from Venus. In this column, Jody Lanard and I focus on ten systematic differences between scientists and the public that make things difficult when scientists try to do risk communication. Among them: Many scientists don’t approve of communicating with nonscientists; many scientists overvalue rationality, and mistrust – and even disdain – emotion; many scientists fail to allow for the public’s mistrust; many scientists do not trust the public. Understanding these differences can help scientist-communicators overcome them and find common ground with their audience. And understanding these differences can help members of the public make allowances when scientists mishandle their risk communication efforts.

Scientists and the Public:
Barriers to Cross-Species Risk Communication

This column got its start as a presentation by one of us (Lanard) at a conference on “Conveying Science into Policy: Science Communication and Environmental Decision-Making.” Sponsored by the GreenFacts Foundation, the conference was held at the Atomium, Brussels, October 16, 2003. For a written version of the original speech, see Scientists and Risk Communication (PDF).

A Mandarin translation of the original speech (PDF), translated by Hangzhou Neuro-Hemin Biotech Co., Ltd (China), is available on this site.

Scientists are different from other people – sometimes in ways they are aware of and proud of, sometimes in ways they fail to notice or try to disavow. This column will address some of the differences, and how they affect risk communication. One person attending the conference where this was first presented called the topic, “Scientists are from Mars, everyone else is from Venus!”

We could just as easily – and productively – stereotype any of the other main actors in risk controversies: public officials, corporate managers, journalists, activists, lawyers.… But today we’re going to focus on scientists.

With some stunning exceptions, the vast majority of scientists are notoriously poor communicators, except when talking to each other. Since scientists are often the only people around who actually understand technical data, this deficiency makes risk communication more difficult than it would otherwise be. This column will discuss some reasons why scientists typically use poor communication techniques when talking with nonscientists.

1. Many scientists don’t approve of communicating with nonscientists.

Some thirty years ago, Rae Goodell did a doctoral dissertation studying what she called “the visible scientists” – the ones who write popular books and magazine articles, and who gladly take calls from journalists. Her most important finding was that scientists lose stature in each other’s eyes when they seek, or even accept, public attention. Respectable scientists shun the limelight.

No doubt the pressure to shun the limelight has diminished some in the ensuing decades. Under editor Franz Ingelfinger, the New England Journal of Medicine (the most prestigious medical journal in the U.S.) refused for many years even to consider for publication any research that had already been shared with the media. Other science journals had their own versions of the so-called “Ingelfinger Rule.” Some still do; others have loosened or even abandoned their prohibitions.

Of course the most eminent scientists can get away with becoming public figures, and always could; consider Einstein. But less eminent scientists who want to rise in the ranks are well-advised – and are literally advised by their scientific mentors – to steer clear.

2. Many scientists don’t expect themselves to be able to communicate.

Some of this is selection at work. Sometime in their schooling, at college or often long before, many scientists-to-be make the discovery (or is it the decision?) that they’re not very good with people. So they decide to focus on something more comprehensible than people … and a scientist is born. Among the appeals of science are precision, impersonality, and replicability. Scientific principles don’t get moody, don’t require small talk, don’t keep changing their minds for no particular reason.

Obviously not all scientists are inarticulate and interpersonally insensitive. And not everyone who is inarticulate and interpersonally insensitive in youth is doomed to stay that way. Still, a disproportionate number of inarticulate and interpersonally insensitive bright young people get funneled into science. Long before they are scientists, they’re already convinced that communicating across the species boundary between themselves and normal people is something they don’t like and don’t do well.

This sense that they are poor communicators coexists in most scientists with a firm conviction that communication isn’t an actual field. “If I’m not good at making contact with ordinary people,” they say, “that must be about personality or style. I must lack some innate talent for glibness. But that doesn’t mean I need to study communications; I just need to explain the data better.” After the 1979 Three Mile Island nuclear accident, Metropolitan Edison engineering vice president Jack Herbein, who managed the accident, explained to one of us (Sandman) why he didn’t take his PR specialist’s advice (“risk communication” wasn’t really a term yet in 1979). “PR isn’t a real field. It’s not like engineering. Anyone can do it.” Many articles have since been written about the poor communication during Three Mile Island.

3. Many scientists over-value rationality, and mistrust – and even disdain – emotion.

This is a much more complex phenomenon than the first two, and it will take longer to describe. We’ll start with an example.

In the midst of the 2003 SARS outbreaks, Nobel laureate and virologist David Baltimore wrote widely quoted columns in the Los Angeles Times and the Wall Street Journal blaming the media for public fears about SARS. He even labeled the phenomenon SAMS: “Severe Acute Media Syndrome.” His real quarrel with the media, though, was their willingness to cater to the emotionality of the public. He wrote that “Openness … breeds fear and overreaction,” and that “those of us professionally devoted to rational analysis” need to do something about SAMS.

One of the most interesting things about Dr. Baltimore’s columns is how far outside his area of expertise he was. He might have benefited from a few hours with the published research on his new field. He might have learned how the media actually cover emergencies – as the crisis gets more serious, sensationalism gives way to over-reassurance. He might have learned how normal people actually adjust to scary new information – we personalize the risk at first (and are disdained for doing so), become temporarily more anxious (and are called “hysterical”), and take precautions beyond those the experts consider necessary (and are told we’re being irrational) … but we rarely panic and we usually adjust pretty quickly, even in the face of all that ridicule. He might have learned about the constructive role emotions actually play in risk perception and risk communication – genuinely emotionless people are often psychopaths; those who temporarily lose access to their emotions are suffering from a variety of denial psychiatrists call “isolation of affect.”

So Dr. Baltimore’s rant demonstrates not just an overattachment to rationality and a fear of emotion, but also a kind of one-upmanship or scientific arrogance: “Hard” scientists imagine that the “soft” sciences aren’t science at all. Virologists are likely to feel entitled to opinions about risk communication without having studied the field, without even having noticed that there is a field; risk communicators are unlikely to feel that way about virology. (The irritation we are venting here is real, but so is the phenomenon that irritates us!)

In their dealings with the public, scientists irrationally overvalue strict rationality. This is a carry-over from their research work, where scientists are appropriately pretty religious when it comes to being rational and sticking to the data. Perhaps inevitably but certainly mistakenly, they imagine that everyone should learn the way they learn: doing studies, reading journals, attending conferences. So when they try to talk to nonscientists, their model is the scientific lecture or journal article. “Educating the Public” in the minds of most scientists seems to mean the cool, straightforward, unemotional, one-way transmission of facts. Curiously, they do not seem to know what every learning theorist, teacher, and parent knows about how normal people learn new information, especially distressing information.

Not everyone who shares Dr. Baltimore’s ignorance about risk communication data is as angry as he is or as inclined to blame others. Writing also about SARS, a group of scientists led by prominent infectious disease experts Roy Anderson and Christl Donnelly expressed their earnest desire to find a way to prevent public alarm about scary new risks. In a May 7, 2003 article in the medical journal Lancet, updating the evolving case fatality rate for SARS to a number far higher than the media’s routinely reported 4%, Anderson and Donnelly wrote: “The epidemic has shown the need for communication of risk that will inform and warn the public, in a way that will improve personal protection, without inducing raised anxiety and fear.”

This is the Holy Grail of risk communication – informing and warning people, getting them to take appropriate precautions about scary new information, without actually making them anxious. Like trying to break up with your boyfriend without hurting his feelings, it simply can’t be done. Equally important, social scientists have long known it can’t be done. Physical scientists (and medical researchers) who continue to aspire to this unachievable goal demonstrate a profound emotional attachment to avoiding other people’s emotions. Their deep commitment to basing public policy and even individual behavior on “data” alone is aggressively ignorant of the data that explain why that won’t work.

The scientist’s longing for a risk communication “magic bullet” is exactly parallel to the public’s longing for a pharmaceutical magic bullet – a vaccine or other drug to alleviate all their fears. It’s a reasonable thing to want. But a rational response begins with the sad awareness that it doesn’t exist – and then proceeds to inquire what other approaches may work better.

What is it that Drs. Anderson, Donnelly, and Baltimore don’t seem to know?

  • That normal people’s response to risk is emotional and “rational” (cognitive) at the same time.
  • That you can’t give people scary information without scaring them.
  • That people are usually able to tolerate anxiety and even fear, without escalating into terror or panic.
  • That risk communication professionals have evolved techniques for helping people do so.
  • That demanding that people stay unemotional about risk isn’t one of the techniques that work.
  • That demanding that the media suppress alarming content also isn’t one of the techniques that work.
  • That those who want to educate the public should first study how the public learns.
  • That all of the above generalizations are supported by data.
  • That scientists who ignore such data and decry “irrationality” – that is, emotionality – in society’s response to risk are acting out of their own emotions, compounded by ignorance and arrogance.

There. That felt good.

4. Many scientists disavow their own emotions. This may lead them to project their emotions onto the public and the media.

Many scientists reading this column will proudly agree with us that scientists tend to emphasize rationality and ignore or disparage emotion in their efforts at public communication. Many scientists may even accept, reluctantly, that this approach runs counter to the data on how people learn and how risk is best communicated. What will stop them in their tracks is our claim that these tendencies are rooted in the scientists’ own emotions. “Us, emotional? Absurd!”

When dealing with the public, many scientists suppress their own emotionality, and with it huge hunks of their very humanity. They suddenly forget the way they feel when a peer-reviewer tears a draft article to shreds or a rival gets a coveted grant. They forget the venom and sarcasm with which they sometimes criticize each other at conferences. They forget even the affection, understanding, and compassion they have learned to let show with their friends and families. When they face the public they see themselves as emotion-free data-based advisors, the Great Wise Ones, deserving of infinite and unquestioning trust because their pronouncements are backed by Sound Science (you can just hear the capital letters) – which they often let the public believe is more certain than it is. (As a matter of principle, scientific knowledge is always incomplete. When scientists do not want people to rely on current knowledge, they say “further research is needed” or even “it’s too soon to speculate.” When they do want people to rely on current knowledge, they invoke Sound Science.)

And they wishfully see the public as an interested, ignorant, passive audience – empty vessels waiting to be filled with knowledge. Not surprisingly, the public falls short. It may not be interested. Worse, it may not be ignorant and passive; people may have prior knowledge (some of it false), opinions of their own, information sources other than scientists, things they want to answer back, precautions they want to take that are different from the precautions scientists are recommending.

Then, when the public reacts with alarm, anxiety, fear, skepticism, distrust, or disobedience – when the public doesn’t automatically “learn” what the scientists want them to learn – supposedly emotionless scientists often get annoyed with the public. They may not know they’re annoyed; since annoyance is an emotion and emotions are to be avoided, they may experience their annoyance as disapproval. But the rest of us have no trouble detecting their frustration and irritation. Words like “irrational” and “hysterical” begin to creep into their rhetoric.

We have all experienced interactions with people who are trying hard not to show what they’re feeling – not even to know they’re feeling it. They tend to get rigid, cold, critical, self-righteous, and dogmatic: “I’m not angry, I’m just right!” They also tend to see everyone around them as hyper-emotional. Psychiatrists call this projection. Unable to acknowledge emotions in themselves, they see them instead in others, whether they’re there or not. Especially in a risk crisis, experts are likely to be feeling a range of emotions – not just irritation at that uncooperative public, but also anxiety: What if we turn out wrong about the data? What if we mishandle the crisis? Little wonder these experts over-diagnose panic in the public.

Scientists need to learn four things about emotions. First, it isn’t irrational or hysterical to be alarmed, anxious, fearful, skeptical, distrustful, or disobedient when scientists impersonally impart “facts” about a new risk. Second, interpreting these reactions as irrational or hysterical is itself an emotional response (we’ll resist saying it’s irrational or hysterical) on the part of scientists. Third, scientists’ own unacknowledged emotions are likely to get projected onto the public. And fourth, scientists work hard not to notice that the first three points are true, seeing themselves as much less emotional than they are and the public as much less rational than it is.

5. Many scientists fail to allow for the public’s mistrust.

Most people can readily tell the difference between a listener who doesn’t understand what you’re saying and a listener who doesn’t buy what you’re saying. Many scientists, interestingly, have trouble making this distinction. They are so confident of the correctness and objectivity of their opinions that they interpret all audience doubts and disagreements as some variety of ignorance, irrationality, or emotionality, presumably inflamed by the media or unduly influenced by activists.

Among themselves scientists are extremely respectful of skepticism. Demands for proof, criticisms of methodology, and citations of conflicting evidence and alternative hypotheses are always a legitimate response to a scientific claim. In fact, scientists are famous for their habit of self-doubt, for qualifying and even over-qualifying their own opinions by listing all the reasons they might not be right. (Check out the “Discussion” section of any journal article.) Not that we are certain this is the case. Further research is needed.

One would expect scientists to be similarly tentative in their public pronouncements. And sometimes they are. Senator Edmund Muskie famously lost patience with the endless “on the other hands” of his scientific advisors and asked an aide to find him a one-armed scientist. It isn’t rare for scientists to insist on hedging their claims, even in the face of a public that would rather they were more confident and a sponsor (whether corporate, government, or activist) that is also pushing for an unequivocal answer. This is scientists at their best: sticking to their uncertainties when everyone around them is begging them to say they’re sure.

But when everyone around them is questioning whether they know what they’re talking about, then scientists tend to abandon their uncertainties. This is one of the seesaws that pop up all over risk communication. Scientific sources tend to abandon their habitual tentativeness and become over-confident when their opinions are questioned and their advice is resisted – especially when the questioning and resistance come from nonscientists, and even more if the nonscientists are frightened or angry. Having temporarily forgotten all the uncertainties that surround their claims, they then disparage their skeptical nonscientific audience as ignorant or – the biggest dis of all in the scientist’s vocabulary – “irrational.” This defensively contemptuous response to feeling disrespected themselves is perfectly normal, human, and emotionally understandable … though surely it is unscientific and maybe even “irrational.”

It also ignores the context in which the disrespect occurs. The public has many reasons to mistrust the pronouncements of scientists. That’s a column in itself – see “Trust Us, We’re Experts.” In a nutshell, the public has a more accurate understanding than most scientists do of the pressures that may contaminate scientific conclusions. (Economic pressures predominate for corporate scientists, ideological pressures for activist scientists, political pressures for government scientists.) The public is accustomed to hearing from competing scientists on opposite sides of a risk controversy, both of them expressing confidence in the correctness of their own positions. The public is also accustomed to hearing from revisionist scientists, back a second or a third time to contradict (or at least modify) their earlier claims. The honest ones say “we were wrong before, and we sounded a lot more confident than we should have sounded,” while the less honest ones try to correct the record without quite acknowledging that there is a record that needs correcting.

Scientists’ diagnosis of the public as hysterical, panicky, ignorant, or irrational is usually a self-serving misdiagnosis. The reality is often much more threatening to the scientists’ self-image: The public doesn’t trust them, and often for good reason.

All of this applies most clearly to scientists who see themselves as neutral – government scientists, academic scientists, and the like. Corporate scientists, by contrast, expect to be mistrusted. They know their employers are presumed to be the bad guys, and they walk into a public meeting expecting their data to be challenged. They don’t like it, but they’re not surprised, and they learn not to take offense (although they may still act defensive). Activist scientists expect to be trusted and generally are. They get credit for working for the good guys, and for trying to warn the public, not trying to reassure it. People may doubt their technical competence, but not usually their altruism or their integrity. It is the scientists who see themselves as not having an axe to grind who are likeliest to experience public mistrust and misinterpret it as public ignorance or irrationality.

6. Many scientists collude with the public’s desire to be over-reassured.

When the public is skeptical, scientists tend to act over-confident as a defense. What about when the public is too frightened to be skeptical, when the public is begging for immediate answers and premature reassurance?

This is when the scientist’s natural inclination to be tentative would ideally merge with the risk communicator’s skill at helping people bear their fears, to yield an approach that is compassionately candid about the real uncertainties of the situation. Too often, instead, scientists give in to the pressure. Instead of sounding defensively over-confident because the public is offensively skeptical, they end up sounding patronizingly over-confident because the public seems dependent and fragile.

In a crisis, the public – which means most of us, most of the time – is in a state of ambivalence. On the one hand, we long for reassurance, for easy and quick answers, for magic bullets. We want to be passive and taken care of; we want to be told everything will be okay. These yearnings are a kind of regression to a less mature coping level than our normal adult selves. This is an understandable and inevitable reaction to a crisis – but it is only half of our reaction.

The other half of our ambivalence in a crisis is our resilient desire to take charge, to be involved, to have input, to learn how to help ourselves and others, to be altruistic, to fight the problem. This half of our ambivalence represents our desire to respond on a more mature coping level, as adults.

If they understood normal people better, scientists (and officials) could choose consciously which side of this ambivalence to ally with. Instead, they typically see only one option. They see only the immature half of the public’s ambivalence, and almost automatically they collude with that half, representing themselves as overly confident, overly reassuring, and overly wise. When they turn out wrong, this approach backfires, of course. But even when they turn out right, this approach does not help inspire the public’s optimal mature coping abilities.

We believe it is the scientists who have the responsibility to break this cycle: by insisting on uncertainty; by sharing dilemmas; by appealing to the more adult half of the public’s ambivalence: its ability to cope with and bear hard situations.

Failing to do so is sometimes a failure of nerve, especially when officials (or employers) are sending clear signals that a confidently reassuring statement would be a good career move right now. But when experts over-reassure, we believe, the main culprit usually isn’t their fear of angering their bosses. It is much likelier to be their fear that the media and the public will over-react if they are candid, and their resulting conviction that the best way to maintain public calm is to spin the story in a reassuring direction. They believe the actual risk is smaller than they believe the public will think it is if they tell the public everything they know. By de-emphasizing the possible worst case, the level of uncertainty, and the alarming bits of preliminary data, they’re not trying to mislead. They’re trying to lead. But time after time this backfires.

As we write in early January 2004, scientists at the United States Department of Agriculture are working valiantly to reassure the American public and meat importers in other countries that American beef is safe, notwithstanding the recent discovery of the country’s first known instance of Mad Cow Disease. They are almost certainly right in their claim that Mad Cow Disease will remain extremely rare in the U.S.; surely it is a far smaller health risk to American beef-eaters than E. coli (not to mention cholesterol). They may or may not succeed in persuading us all that this is the case. The experience of other countries (and other risk controversies) suggests that December’s events will put some people off beef for a while, but most of us will come back soon enough regardless of what the USDA tells us. But the USDA is unnecessarily slowing the process, and leaving an unnecessary residue of skepticism and mistrust, by mishandling their risk communication: by asserting that American beef is absolutely “safe,” by giving us the impression that they think the one Mad Cow that got tested was probably the only Mad Cow in the country, by pretending that the existence of regulations is tantamount to 100% compliance with those regulations, by ignoring or discounting the claims of more alarmist scientists, and above all by endlessly comparing their Sound Science with everyone else’s “knee-jerk over-reactions.”

7. Many scientists do not trust the public.

Just as the public doesn’t always trust science, scientists often do not trust the public.

You may have read about the Red Wine Paradox (also called the French Paradox). This is the popular name for the observation that, despite a high-fat diet, the French have lower rates of heart disease than people in many other countries, possibly because they drink a lot of red wine. Research studies have usually shown that moderate red wine drinking helps protect against heart disease.

From a risk communication perspective, the most interesting thing about the Red Wine Paradox is the obligatory sentence in nearly every research report and news story, disavowing the obvious meaning of the finding: “But that doesn’t mean people should start drinking red wine.” Well, everything in the study suggested it meant exactly that! To a reader, this sort of disavowal looks like the scientists are drawing an irrational conclusion – or refusing to draw a rational conclusion – from straightforward data. When confronted about it, scientists explain that they don’t want to encourage excessive drinking. Some people, they say, might take the findings as license to overdo it. So they deny or at least hedge the meaning of the findings. This reveals a patronizing, insulting lack of trust in us as adults. Scientists are so fearful of provoking alcoholism in potentially out-of-control drinkers that they don’t want to tell moderate drinkers and nondrinkers that a little wine is a good thing! So we see the spectacle of rational, data-based scientists afraid to draw the logical conclusions of their own work, out of mistrust of the public.

Some recent research has shown that red wine might not explain the French Paradox after all. (Yet another example of the permanent tentativeness and changeability of all science!) But what we are going to call The Red Wine Paradox Paradox is still valid: When research findings yield conclusions that might be misapplied, or applied in socially unacceptable ways, scientists often avoid drawing those conclusions. They do so because they distrust the public’s ability to use the results wisely.

Some of the best examples of the Red Wine Paradox Paradox come out of social science, especially research results bearing on gender, ethnicity, sexual orientation, and other hot-button topics. More often than not, research that might give aid and comfort to antisocial ideologies simply isn’t done; if it’s done, the studies aren’t published; if they’re published, their implications are disavowed.

But risk controversies are also vulnerable to the Red Wine Paradox Paradox. One of us (Sandman) has a client that owns a former battery recycling facility, which emitted substantial amounts of lead when it was operating. The client commissioned a study to compare bone lead deposition in people downwind and upwind from the facility. To help reduce the inevitable public skepticism if the results showed no difference, the academic researchers were given total independence. Ultimately, the results showed no difference – and the researchers became suddenly reluctant to get the study published, because they didn’t want to look like they were helping to defend polluters and excuse pollution.

One final example of the Red Wine Paradox Paradox. In September 2003, research was published showing that obese people have less of a particular hunger-fighting hormone (called PYY-3-36) in their blood than people of normal weight. Moreover, infusions of the hormone reduced people’s appetite by thirty percent; that is, both obese and normal people ate 30% less at an all-you-can-eat buffet when given the hormone treatment a few hours before.

Sounds like a possible replacement for self-control, doesn’t it? Take your hormone and eat as much as you want, since you won’t want as much. Well, apparently that implication cannot be allowed to stand. So the article on the discovery in NewScientist.com quotes one of the researchers to the effect that PYY “will not be a wonder treatment” that will allow people “to continue eating high fat foods and drinking beer.” That is probably true. There are very few wonder treatments. But then the article ends with this unsubstantiated sentence: “The hormone would only be effective when combined with lifestyle changes.”

8. Many scientists see themselves as outside the communication process.

The public’s response to scientific information is largely a response to the scientists who are imparting that information. Are they friendly or cold, compassionate or uninvolved, accessible or distant, robotic or human? As risk communication expert Vincent Covello has long argued, “people don’t care what you know until they know that you care.”

Scientists tend to discount this fundamental reality. They have factored out, or failed even to consider, their own role in the communication environment; they seem to be examining the spectacle of the public and the media from outside the system.

They forget Heisenberg: the observer is part of the system being observed.

9. Many scientists don’t notice when they have gone beyond their expertise.

We have already shared our irritation at David Baltimore for his willingness to opine on media behavior, psychology, and risk communication without having done his homework. There is of course a long line of eminent scientists who, like Baltimore, have felt that their eminence in one field gave them standing in another. Linus Pauling and William Shockley come to mind, among others. We suppose that in a society that takes seriously the political opinions and product endorsements of singers and actors, scientists must be forgiven for going beyond their expertise.

What is less forgivable is the tendency of scientists to fudge the difference between scientific conclusions that are grounded in data alone and action recommendations that are grounded jointly in data and values. Scientists seem to imagine that their ability to collect and interpret data gives their values special standing. Worse, they seem not to notice when they have gone beyond the data, beyond their interpretation of the data, into the realm of values.

In risk controversies, the data tell us how safe X is. And the data tell us how confident we can be about our assessment of the safety of X: what the margin of error is on the studies that have been done, what dissenting experts have to say about the methods and findings of those studies, what components of the risk haven’t yet been studied at all. Scientists sometimes neglect some of these data questions, but they are data questions, and scientists are uniquely qualified to guide us through the answers.

But the data do not tell us how safe is safe enough, or how confident is confident enough. Nor do the data tell us whether we should tolerate higher levels of risk or higher levels of uncertainty for some sorts of hazards than for others – for example, whether risks become less acceptable when they are involuntary or unfair, or when there is a history of dishonesty or unresponsiveness, or when moral issues are intermixed with the safety issues. Risk acceptability is a values question, a “trans-scientific” rather than a scientific question. In democratic societies, policy choices about risk acceptability are made by the society as a whole through the political process, while individuals remain free to make their own choices about their own personal risk.

Risk decisions should be guided not just by data, but also by values. Absent values, the data alone do not yield a decision. Scientists who insist that the data are the only proper basis for decision-making are being – yes – irrational. And scientists who imagine that their own values are part of the data are way, way out of their fields. Should Europeans accept unlabeled American food imports with genetically modified content? Should people in cities afflicted by SARS wear masks in crowded subways? Should Japan import meat from countries with less precautionary Mad Cow Disease regulations than its own? Risk data are relevant to the answers to these questions. But values are also relevant, and scientists’ values are no more authoritative than anyone else’s values.

10. Many scientists forget to start where their audience is.

No communication principle is more fundamental than this one: Start where your audience is. This isn’t just a risk communication principle; it is fundamental to all of communication.

Princeton psychologist Daniel Kahneman and his late partner Amos Tversky spent decades elaborating this principle. Kahneman won a Nobel Prize for his research in 2002, the closest risk communication has come to having its own Nobel Laureate. (David Baltimore might want to read some of his research.) The basis for Kahneman and Tversky’s work is that people use everything they’ve got to help them assess a new piece of data. Prior knowledge, experience, and emotion “anchor” and “frame” the new information, affecting not just how it is interpreted, but even how it is perceived.

By the way, Kahneman and Tversky and their successors have found that scientists are just as vulnerable as the rest of us to these framing effects, and to all the other “heuristics” and “mental models” that influence how new information is taken in. For example, doctors are less likely to prescribe a medication described as losing 30% of patients than the same medication described as saving 70%, although the two formulations are statistically equivalent. And even risk assessment specialists imagine that risk and benefit are negatively correlated (they are actually positively correlated, though weakly), and therefore tend to overestimate risk when they consider the benefits trivial, and underestimate risk when they consider the benefits substantial.

What is important here – and extremely well-established – is that people aren’t empty vessels, just waiting to be filled with whatever scientists decide to tell us. We come to the interaction with preconceptions, biases, and emotions. Some of this baggage may be false or misleading; some of it is valid and helpful. All of it is where we start. And research about how people learn consistently shows that new information is hard to absorb if it is at odds with what we bring to the table.

In response, communicators have long since learned to find out what their audience brings to the table. Some communication strategies are grounded in stressing those aspects of the message that are compatible with people’s prior opinions, predilections, and feelings. Other strategies are aimed at addressing the conflicts that cannot or should not be papered over. It helps, for example, to validate the rationale behind erroneous preconceptions before explaining why they turn out, surprisingly enough, to be untrue or inapplicable to the situation at hand. It helps to acknowledge why it is normal and appropriate for people to feel the way they feel (“I feel that way too sometimes”) before warning that the feeling may lead them to act in ways they could come to regret. It helps not only to show your understanding of people’s starting assumptions, but to ally empathically with how those assumptions came to be – are they grounded in last year’s scientific “truth,” in well-known folk wisdom, in skepticism because science has been wrong before, or even in mistrust because scientists have been dishonest before?

In other words, it helps to tell people stories about themselves, directly or indirectly – to talk not just about external reality but about them, how they are likely to see that reality and why. What doesn’t help is to assume they have no feelings, no preconceptions, nothing except their ignorance to be replaced with your scientific knowledge.

11, 12, 13, etc.

There are many other differences between scientists and the public that further complicate risk communication efforts. A few more:

  • Scientists generally like details; they want to report everything they know and everything they did. For the public, on the other hand, a little methodology goes a long way, and unless it’s a full-bore crisis the details are best forgotten. Just tell us the bottom line, and be ready with the details if we ask. But Lord help your credibility if you fail to volunteer a detail that seems to conflict with your main point; critics will be all over it, and all over you, in a microsecond. The details that merely bolster your case, however, are best summarized and held in reserve, not belabored.
  • Scientists are taught implicitly that good scientific communication should be complicated and hard to understand. A scientist who writes clearly is often assumed by his or her peers to be oversimplifying. Even if the complexities haven’t actually been omitted, only explained well, the professional bias holds: If it’s clear it’s got to be inaccurate. If it goes beyond clear, if it’s genuinely interesting, charges of pandering are bound to result.
  • Scientists talk in numbers. They’re number people; they mistrust words as intrinsically imprecise. The vast bulk of the public, meanwhile, mistrusts numbers and has difficulty interpreting numbers. We are a literate but innumerate society; we do better with words. We do best with concrete words, with examples, anecdotes, analogies, parables – whereas scientists, when they must resort to words, prefer abstractions, generalizations, and jargon. And some of the scientist’s jargon consists of weird meanings for words ordinary people use in ordinary ways: “surveillance,” “community,” “contaminant,” “significant,” “conservative,” even “positive” and “negative.” Nor do pictures resolve the conflict. The public learns from illustrations, photos, cartoons; the scientist provides charts and graphs.
  • Scientists, typically, are in no hurry. After all, a scientific team may devote a decade or more to finding a definitive answer to one tiny question no one else has even thought to ask. If a reporter calls to find out what the scientists have learned, how could it matter if they wait till next week to return the call? The public (and the media), on the other hand, oscillate between total absence of interest and intense, urgent interest.

Feel free to add to the list. But more importantly, try to do something about the list. Yes, scientists are from Mars and everyone else is from Venus. To some extent that’s inevitable. But citizens who understand why scientists are the way they are can make allowances. And scientists who understand why scientists are the way they are can try harder to find common ground with the public. Their risk communication will be the better for it.

Copyright © 2003 by Jody Lanard and Peter M. Sandman
