Note from Peter Sandman: What follows is the unedited transcript provided by the Nieman Foundation for Journalism, which I have very lightly edited so it makes sense. It is still very much a transcript, not a polished article. A shorter version, abridged by the Nieman Foundation (not by me), was published in the Spring 2007 issue of Nieman Reports.
Moderator from Nieman Foundation: OK, we’re almost there; thank you for trying to listen to me.
I think I’ve learned three things so far – I’ve learned a little bit more, but the three most important ones for the next session are the influenza virus is complicated, we know less than we knew thirty years ago, and we shouldn’t think too much about the signs. We should jump right into preparing, which of course leaves me with a lot of questions about how you are supposed to think about it. It’s an elusive threat; how do we deal with this?
I could not think of a better speaker to guide us through some of these questions. Peter Sandman is a Risk Advisor to WHO and an internationally known risk consultant and communicator. I’ll just give you one of his great lines, which came during the [inaudible] break: “We observed that there was no panic, just panic about panic by officials.”
Peter Sandman: Thank you. I’m on a cordless mike and I’m not going to stand behind the podium and I have no PowerPoint slides. I do have, however, a page of notes and I will occasionally write on the blackboard.
I want to try to accomplish four things in the next hour or nearly hour. One of the things I want to do is to talk to you about some of the basics of risk perception. Some of the material that some of you may have read very recently in the current Time cover story on risk perception, which is, I think, very good. Anybody here from Time?
PS: Well, it’s very good anyway!
The second thing I want to do is talk about the basics of risk communication or, at least, the basics of my approach to risk communication, which is in some ways shared by other risk communication people and in other ways kind of idiosyncratic.
The third thing I want to do is talk about what all that has to do with journalism, to which in some ways, the answer is, “Not much.” I want to make some distinctions between risk communication and risk journalism and maybe [inaudible] some kind of [inaudible] between the two.
The last thing I want to do – and I won’t do these in order, I’ll kind of switch them – is to talk about what all of this has to do with pandemics.
But in large measure for the next hour, we’re going to back off an exclusive focus on pandemics and talk more broadly about risk perception and risk communication and risk journalism. I will not finish – nobody else has either – but I will end on time. I’m not going to allow any time for Q and A. I will take questions and comments throughout the hour; so if you have a comment or you have a question, do not wait for the Q and A. I’m not going to finish anyhow, so like good journalists do, we’ll cut from the top. I will take your questions and comments as we go.
[Inaudible comment from audience]
I want to start with a number; it’s the only number I’m going to use in this hour, so if you don’t like numbers, get through this one and it’s clear sailing from here on in.
Here’s a number that captures what I think is most important in risk perception. If you make a long list of hazards, and you rank order them in order of something like ‘expected annual mortality’ – how many people they kill a year – and then you take the same list of hazards, and you rank order them in order of how upsetting they are to people, the correlation between the two rank orders is approximately 0.2. Those of you who have studied statistics know you can square a correlation coefficient to get the percentage of variance accounted for. If you square 0.2, you get 0.4 – a glorious 4% of the variance.
Or here it is without numbers: the risks that kill people and the risks that upset people are completely different. If you know this is deadly, that tells you almost nothing about whether it’s upsetting. If you know a risk is upsetting, that tells you almost nothing about whether it’s deadly. It could be worse: it could be a negative correlation. That would mean that if it’s upsetting, that in itself proves it’s safe. And we haven’t got a negative correlation; we have a tiny, forgettable positive correlation.
So essentially the two variables are unrelated. That’s the basic premise – that’s the sort of universal axiom of risk perception. If you replace mortality with morbidity in the calculation – you’re no longer killing people, you’re just hurting them or making them sick – and you correlate morbidity with public concern, you get once again 0.2. If you use ecosystem data; if you correlate ecosystem data to the public concern, the correlation comes out to 0.2. If you’re not worried about any of those and you focus on something like economic damage, which the current administration likes to do, you correlate economic damage with public concern, that correlation also is 0.2. It doesn’t matter what your measure of harm is, across a wide range of hazards, the correlation between how much harm that hazard does and how upset people get about it is this absurdly low 0.2 correlation.
So the key intellectual question in risk perception is, “Why is the correlation so low?” The key practical question in risk communication is, “How do we get it higher?” which is not just one question. It’s at least two questions. Half of that problem of getting the correlation higher is figuring out how to get people to get more upset when the risk is serious; half of that problem is figuring out how to get people less upset when the risk is trivial.
All of this leaves aside, of course, that sometimes we don’t know whether the risk is serious or the risk is trivial, and that’s going to be hotly debated. But even when we know, we have this piss-poor correlation. Are you with me so far?
Now, a long time ago, trying to make sense out of this universal very low correlation, I came up with new terminology to describe it; that terminology has become almost standard.
This terminology actually sent my children to college. I said, “Let’s take the concept of risk, and let’s divide it in half. Let’s consider the technical side of risk – whether it’s likely to kill you, hurt you, or damage the ecosystem – let’s call that ‘hazard.’”
And then I said, “Let’s take the other half of risk – the culture half of risk rather than the scientific half – that is whether its likely to upset you, anger you, or frighten you; let’s call that ‘outrage.’”
And I came up with the formula: risk is equal to hazard plus outrage. Technical people hate that formula, and there are some technical people in the room and you all talk to technical people all the time. My clients are mostly engineers, and they look at this and they say, “Oh, come on. That’s what happens when you let an English major write a formula.”
So I have, for the techies in the room or the techies that you live with and love, I have an alternative definition. I would suggest that risk is a function of hazard and outrage. If you don’t know what that means, that’s hazard plus outrage in ‘their code.’
This is a redefinition that makes a difference. I want to make sure everybody understands why. The way the problem is usually seen – remember the problem is this miserable, boring 0.2 correlation between whether people are going to get hurt and whether they’re going to get upset – the way that problem is usually seen is as a problem of public misperception. So my clients spend a lot of time complaining to each other that the public just doesn’t get, just doesn’t understand, just doesn’t perceive the risk correctly.
What this redefinition does is suggest something very different and much more symmetrical. What I’m arguing is that the experts, when they look at a risk, focus on the hazard and ignore the outrage. The experts, therefore, systematically overestimate the risk when the hazard is high and the outrage is low; and the experts systematically underestimate the risk when the hazard is low and the outrage is high, because all they’re doing is looking at the hazard.
The public – and by that I mean everybody except the experts, and even the experts when they go home at night (there is beautiful data about EPA risk assessment professionals considering exploratory surgery) – it turns out that outside their area of expertise they abandon all their risk assessment understanding and they think about exploratory surgery the same way the rest of us do.
So the experts, then, focus on the hazard and ignore the outrage. The public makes exactly the opposite mistake. The public focuses on the outrage and ignores the hazard. The public, therefore, overestimates the risk when the outrage is high and the hazard is low, and underestimates the risk when the outrage is low and the hazard is high.
What I’m arguing is that this 0.2 correlation I keep going on about, far from being the result of public misperception, is in fact the result of a definitional dispute. I’m arguing that 0.2 is the genuine correlation between hazard and outrage – two nearly independent variables that have in fact only one interesting thing in common: they’re both called risk by different groups of people.
Now I want to make sure that makes sense to everybody, because then I’m going to play with it. But you’ve got to get it first; the essence of what I’m saying is as follows: when the hazard is high and the outrage is low – give me an example of a high-hazard low-outrage risk that’s likely to kill people but not likely to upset people.
Female: Driving a car.
Driving a car is a good one.
When the hazard is high and the outrage is low, the experts will take the risk seriously, the public will shrug the risk off, and the experts will be tearing their hair out trying to figure out how to get this ‘stupid public’ to realize that this is really dangerous. When the hazard is low and the outrage is high, the experts will shrug the risk off, the public will take the risk seriously, and the experts will still be tearing their hair out, trying to figure out how to get this ‘stupid public’ to realize that this isn’t really serious.
When the hazard is high and the outrage is also high, the experts will take the risk seriously. The public will also take the risk seriously for completely different reasons; it’s a coincidence. As soon as they start talking to each other about it, they realize that they’re not talking about the same thing at all and that they actually disagree just as much as they do in the other two cases.
Finally, when the hazard and the outrage are both low, experts and public will both shrug the risk off, and they are unlikely to talk to each other at all. They don’t spend a lot of time talking about a risk that neither of them considers serious.
OK, I said a couple of minutes ago that the only real relationship between hazard and outrage is that they’re both called ‘risk’ by different groups of people. But there’s another relationship that goes to the core of where I want to take you in this hour. Let me try to explain that. When outrage is high, perceived hazard is almost always going to be high. When outrage is low, perceived hazard is going to be low.
Female1: Now what’s the difference – excuse me – between outrage and perceived hazard? Isn’t that the same?
No, outrage is real stuff. If you look at your handouts, you’ve got a list of the key components of outrage. They are things like, do I control it or do you control it, is it voluntary or coerced, is it familiar or is it exotic, do you lie a lot or can I trust you, do you answer my questions and answer my phone calls or are you unresponsive to my concerns, is it a familiar risk or an exotic risk, is it highly dreaded or not so highly dreaded? All of those are what outrage means. So outrage is not perceived hazard.
But outrage is highly correlated with perceived hazard. Does everybody see the difference between those two? Outrage is outrage. Perceived hazard is perceived hazard; but they are highly correlated with each other.
It turns out that the correlation between hazard and perceived hazard is quite low. I’ve already told you that the correlation between hazard and outrage is also quite low, but the correlation between outrage and perceived hazard is quite high. Now whenever you have a high correlation, the first thing you want to know if you’re any kind of scientist is, “What’s the direction of the causality?”
So when we look at the high correlation between outrage and hazard perception, the question we’re asking in a nutshell is this: do people get upset because they think something is dangerous, or do people think something is dangerous because they’re upset? That’s a very important question, because if you want to manage a system, you have to know what’s cause and what’s effect so that you won’t be in the embarrassing position of trying to influence the cause by manipulating the effect. That’s bad engineering, it’s bad science, and it’s bad communication too. So we really want to know what’s cause and what’s effect.
Because this matters so much, there has been a lot of research on it. The research always comes out the same way, and we do what social scientists do when we don’t like a finding: we study it again. So we have a huge literature and, in a nutshell, here’s what it tells us.
It’s a cycle – there are arrows in both directions. But the arrow from perceived hazard to outrage is very weak, and the arrow from outrage to perceived hazard is very strong. That is, for the most part, it is not true that people get upset because they think something is dangerous. It is much more true than people think that something is dangerous because they’re upset. It is similarly not true that people are calm because they think something is safe. It’s much truer that people think something is safe because they are calm. Outrage is the engine of hazard perception. Hazard perception is not the engine of outrage.
That begins to make sense of something that all of us have seen, I think. Certainly if you cover risk controversies, you see it all the time. You go to a public meeting. The meeting is about, say, the health effects of dimethylmeatloaf emissions from a factory. Dimethylmeatloaf is a well-known toxic compound. Standing in front of the room is some kind of technical expert, who is standing up there and says, “We’ve done a quantitative risk assessment on the situation, and the odds that anybody in this room will die as a result of our dimethylmeatloaf emissions are less than one in a million.”
And they’re booed. Someone stands up at the back of the room and says, “You’re lying! We’re all going to die of leukemia!” And there’s tumultuous applause. You’ve been in meetings like that? And you ask yourself, what does it take to applaud the idea that you’re going to die of leukemia? And the answer is outrage.
When people are outraged enough, they would rather believe they’re going to die of leukemia, than believe this company is not the Anti-Christ after all. Under the circumstances, we’re going to have a very difficult time convincing people that they’re not going to die of leukemia. They want to die of leukemia. Or to make the point more carefully, they want to believe they’re going to die of leukemia so they can hang on to their outrage.
Two things are true. One, you can’t persuade them with the data. It’s extremely hard to persuade them with the data. Two, if you persuade them with the data, it doesn’t do any good. Let’s assume that we’ve got a terrific speaker at this meeting, and that the speaker has terrific data – those are both rare – and over the course of a couple of hours, the speaker succeeds in convincing this particular audience that dimethylmeatloaf cannot cause leukemia. You give them a true/false test, and they get the question right. What has happened to their outrage during the period during which they were reluctantly becoming convinced that dimethylmeatloaf can’t cause leukemia? Did it go up or did it go down?
It’s a little like Charlie Brown and the football. You haven’t done anything about the outrage. All you’ve done is taken away the ammunition that supports the outrage. They’re just as outraged or, typically, more outraged, but they feel a certain kind of impotence attached to the outrage because they’re not allowed any longer to claim that dimethylmeatloaf causes leukemia.
So there’s this bewildered pause of about fifteen seconds and then somebody says, “Well, you know, but it causes birth defects!” And you’re right back where you started.
Now, this is not because people are stupid. It’s certainly not because they’re low-income or whatever ethnicity they are. It appears to be because they’re human. Let me prove that to you quickly.
Raise your hand if you’re married. Good, lots of married people. I would like you to assume that you and your spouse – or if you’re not married, you and whoever you hang out with – are having a quarrel about where to go out for dinner tonight. Shall we go out for Chinese food or shall we go out for Italian food? Let’s assume that for whatever reason, the quarrel has gotten a little out of hand, and each of you feels that the other is acting like a real jerk. A credible scenario so far?
Given this scenario, raise your hand if you’ve found that a good way to deal with this situation – a good way to save the evening – is to present to your spouse … data [background laughter]. Prove to your spouse that you’re right and your spouse is the one who’s a jerk. Then we’ll have a pleasant evening. Raise your hand if that works in your situation. No hands.
We have all learned that when we’re having a quarrel with somebody that we care deeply about, the quarrel is always about the relationship. The substantive issue is only ammunition, and the same two things are true that were true at our dimethylmeatloaf meeting. It will be hard to persuade your spouse that you’re right, but if you succeed in persuading your spouse that you’re right; if you succeed in selling the data to your spouse, and you don’t do anything at all about the outrage, you can absolutely guarantee that your spouse will pick a fight about some other issue very quickly.
So the bottom line here is managing risk perception – managing what I call hazard perception – is managing outrage. You don’t manage the hazard perception in order to manage the outrage, you manage the outrage in order to manage the hazard perception. Based on that, I want to talk about the kinds of risk communication.
What I want to do is graph hazard against outrage. You say graph and a roomful of journalists all go to sleep, but this’ll be a graph you can understand. We’re going to graph hazard against outrage, and I just want to point your attention to the interesting corners of this graph, and I want to start down here.
High hazard, low outrage. These are the risks that are very likely to kill you or hurt you but not very likely to upset you. This is the domain of what I call ‘precaution advocacy.’ The paradigm is, “Watch out!” The key task – for reasons that I think you can now understand – is to increase the outrage. You’re not trying to increase the outrage for the hell of it. You’re trying to increase the outrage because of the causal system that we’ve just described: if you can increase the outrage, you will increase the hazard perception; if you can increase the hazard perception, you will increase people’s inclination to take precautions or demand precautions or tolerate precautions.
Female2: What fits there? Babies in cars without baby seats?
Yeah, what fits there depends on what your values are. Greenpeace thinks genetically modified food fits there – I don’t – but they’re out there saying, “For God’s sake, do something about genetically modified foods!” The health departments are saying, “For God’s sake, do something about obesity!” This is your graph. If you are convinced that a particular risk is high in hazard and low in outrage, then your core task has got to be to increase the outrage in order to increase the hazard perception. That might mean making people more frightened. It might mean identifying a villain and making people more angry. There are a variety of strategies open to you for increasing outrage.
Is it possible to motivate precautions without increasing outrage? Yes, it is. In the same way that is possible to write a novel without using the letter e. It’s not easy, but you can do it. Should you belong to a religious cult that forbids you to mobilize outrage, all is not lost. You can find other ways of trying to get people to take precautions, but you have certainly abandoned by far the most powerful way. The most powerful way to get people to take precautions is to mobilize and increase outrage.
Now, let me talk for a minute about some of the tech specs for precaution advocacy. They all result from the fact that outrage is low. Another word for low outrage is apathy. Low outrage equals apathy: people are not interested, they’re not concerned, they’re not upset, they’re not angry, they’re not frightened. They’re apathetic. Several things are true. One of the things that’s true as a result of people being apathetic is you’re going to have to keep your message short. Apathetic people have short attention spans. Another thing that’s true is you’re going to have to work really hard to make your message interesting, because apathetic people are easily bored. If you’re a source, you’ve got to try to make it interesting to the reporter. If you’re a reporter, you’ve got to try to make it interesting to the editor. If you’re an editor, you’ve got to try to make it interesting for the reader or viewer. Those are all very daunting tasks, because apathetic people are not easily interested and they’re certainly not interested for long.
The third tech spec that is also important is to stay on message. If it’s going to have to be short, if you’ve only got an eight-second sound bite, and it’s got to be interesting because people are going to tune out pretty easily, you think really hard about what eight-second sound bite you want to use. You craft your message very carefully, you pick your words very carefully, and then you stick to them come hell or high water.
The only advantage of precaution advocacy is people are not hostile. They’re not interested enough to be hostile. So if you can grab their attention during those brief eight seconds, you can shove any crap you want down their throat. So picking what crap to shove down their throat is an essential skill. The name of that skill is public relations. It’s the field that many of you will go into when you retire from journalism.
This is important to notice. I think everybody in the room knows something about PR, and when I say, “Keep it short, make it interesting, stay on message,” that’s all PR 101.
That is important mostly in contrast to the opposite corner. Now we’re looking at risks that are high in outrage and low in hazard. People are very likely to get upset, and not very likely to get hurt. This is the venue of what I call ‘outrage management,’ and the arrow goes the other way.
Now your goal – if you’re doing outrage management – is to decrease the outrage. Again, not for the fun of it; you’re decreasing the outrage in order to decrease the hazard perception, in order to decrease people’s inclination to take precautions or demand precaution – usually demand that you take precautions – or tolerate precautions. It’s exactly the flip side of precaution advocacy. If the paradigm for precaution advocacy is, “Watch out!” the paradigm for outrage management is, “Calm down.” I hasten to add ‘calm down’ isn’t the message.
What happens to outraged people when you say, “Calm down”? Where does the outrage go? It goes up, right, because implicit in there is, “Calm down, you jerk.” People don’t respond well to that message. So you don’t actually say, “Calm down,” but that is your goal.
Now let’s look at our three tech specs for precaution advocacy and see how well they apply to the outrage management paradigm. First of all, keep it short. Do you have to keep it short in outrage management? Absolutely not. They come to a meeting and stay till midnight. Instead of an eight-second sound bite you have an eight-hour meeting. You wish you had an eight-second sound bite. You long for the good old days of apathetic audiences. But it’s a very different situation; no need to keep it short. Should you make it interesting? Of course not. Your entire goal is to make this issue as boring as you can possibly make it. The problem is not insufficient interest. They’re already interested. In fact, they’re obsessed. In outrage management, you very much want to diminish their interest.
Now that doesn’t mean you give a boring speech, because boring speeches boomerang. If you give a boring speech down here in precaution advocacy, they get bored and go away. Even if you give an interesting speech down here they get bored and go away. But if you give a boring speech to a roomful of outraged people, what do they do? They get angry. They think you’re trying to calm them, they think you’re trying to put them to sleep with PowerPoint slides. They don’t fall asleep, they get angry.
So you can’t afford to be boring, but it is your goal to make the issue boring; that is to say, to make the issue lower in outrage. Does this make sense?
I want to emphasize one thing about outrage management.
Oh, I didn’t mention the third tech spec. The third tech spec for precaution advocacy was ‘stay on message.’ Do you stay on message in outrage management? No. What’s the most important thing you do if you’re trying to calm people down?
I heard “Reduce them,” I heard “Change the subject.” Neither of those is right but [inaudible]. You listen! You listen. You know, outrage management is done largely with the ears. Precaution advocacy is done exclusively with the mouth. But outrage management involves a lot of listening, so if you’re doing outrage management – if you’re a good risk communicator – you go to a meeting, and the people at that meeting are not there to listen to you, they’re there to vent. They’re there to yell at you and you let them. You take notes. You practice your active listening skills: you take notes and you say “uh-huh” every twenty seconds, and you crease your brow. All those things.
And a very weird thing happens if you’re a good listener. One of the things that happens is people get calmer when they get listened to. I’m not saying the outrage disappears. It’s not magic, but they get calmer. The other thing that happens is they start wanting to hear from you. Outraged people are a little bit like teenagers. They’re ambivalent. If you want to talk to them, they want you to shut up and listen, but if you do shut up and listen, eventually they start wanting to hear from you. At around the fourth hour of an outraged meeting, if you handle the first four hours well, they will come to you and say, “Well?! What’s your answer? What have you got to say for yourself?” If you’re properly trained, the first time they do that you demur. You say, “Well, I think it’s premature and there are some people who haven’t spoken yet; I’m learning so much tonight.” Try not to let that sound sarcastic.
Male1: [Inaudible] step back a little here and talk a little bit about the components of outrage. In these situations it’s not one thing. You’re suggesting, for example, that people who don’t feel listened to are more outraged. That suggests that one of the problems is a lack of a sense of perceived sense of lack of control over circumstance. Sometimes it has to do with a sense of betrayal of what’s right – when a chemical company dumps something or whatever it is – regardless of how poisonous what they dumped actually was. What do you see as the components of outrage?
A list of what I consider to be the twelve most important components of outrage is in the handouts. It’s in the handouts – it’s a set of dichotomies: voluntary versus coerced, natural versus industrial, familiar versus exotic, and so on. These are not priority order. They’re in an order that was heuristic when I actually taught the whole list. These, I would argue, are the big twelve. There are longer lists available; but a risk that’s voluntary, natural, familiar, not memorable, not dreaded, and so on down that list is going to bore people even if it kills them. A risk that’s coerced, industrial, exotic, memorable, dreaded, and so on down that side, is going to upset people even if its benign.
Now, there are two purposes to a list like this. One is to sort of do your outrage due diligence, to predict outrage and know how much outrage you’re getting into when you undertake a task. The other is to do outrage management. It’s my conviction that outrage is manageable, that you can indeed reduce outrage. And you’re absolutely right: to implement this strategy with any kind of skill requires diagnosing which outrage components are most potent in this particular situation, and addressing them.
There is not a generic answer to which outrage components are most important across the board. Some show up more frequently than others. I think trust and control and responsiveness are probably more likely to be on top than the other nine, but I’ve worked issues where any of the twelve could be on top.
Female2: Is there a tipping point when outrage turns from useful emotion that can make people change their behavior and [inaudible]?
Yes. That takes me where I’m going to be tomorrow [talking about fear in crisis situations] rather than where I want to be now. But yeah, as long as you asked, let’s draw a dimension of fear (fear is not the only kind of outrage but it is certainly part of what I mean by outrage). Down at the bottom we have apathy, and a little bit higher we have interest, and a little bit higher we have concern; a little bit higher we have fear, and a little bit higher we have terror. A little bit higher in two different directions; higher in one way we have panic, higher in the other way we have denial. Panic is quite rare, denial is why panic is rare. When you’re about to panic, you trip a circuit breaker and go into denial instead.
As a matter of definition, the level on this dimension where the most precautionary activity takes place is fear. People who are fearful do the most. People who are merely concerned do less. People who are terrified also do less. So it is a U curve, if you graph this against action – against precaution taking – it’s a U curve and the inflection point on the U curve is fear. That’s not an observation, that’s a definition. We figure out where the inflection point is, and that’s what we call fear.
Male2: Don’t you also have a graph for anger, rage, hate? [inaudible] fueled by fear?
Some of them get more complicated than others but all the outrage emotions have a comparable graph. Anger, for example, also can go into denial. Misery is one that you didn’t mention, but that’s very important – certainly important for thinking about pandemics. If you put misery where fear is here; then empathy is probably where concern is, and depression is probably where terror is. You can play with labels, but there is indeed a dimension like this for all the dominant emotions.
I don’t want to go there if I can help it because I’m trying to go somewhere else.
Female3: I’m sorry. Someone was wondering about the connection between emotions and the facts? Would one variable be knowledge and ignorance?
No. Wouldn’t it be nice if it were so? But it’s not. The relationship between information and emotion is that strong emotion provokes biased information-seeking. The stronger your emotions, the more you will learn; but it’s not neutral learning. You’re learning in order to validate what you’re already feeling. People who don’t have strong emotions usually learn very little; people who have strong emotions learn a lot, but it’s biased. Those are your choices.
Female3: So I write a really factually sound article, and if you’re afraid you’re just going to dismiss it?
If you write a factually sound article, I will harvest it for things that support my attitude. If I think the issue is crap, I will find in your article the things that support my view that it’s crap. If I think the issue is important, I will find in your article the things that support my view that it’s important. Unless I’m a strong activist, in which case I do exactly the opposite. If I think the issue is important, I find the things in your article that suggest it’s crap and write you a nasty letter.
If you and I are redesigning the universe, we will create one in which information leads to attitude change leads to behavior, but that’s not the one God created.
Male3: Can’t you sometimes just dial down the terror? I’ve found just by the words you used – four years of writing about West Nile – I found that if you used the word ’outbreak’ instead of ’epidemic,’ it sort of cooled the temperature down a bit. People think an epidemic is biblical, and an outbreak is just a couple of cases around the block.
Yes. You’re sending signals, and precisely because people don’t have a technical vocabulary and people are pretty innumerate, the signals matter significantly more than the words and the numbers. The classic example is if you say a pandemic could kill as many as two to seven million people (everybody in this room knows that’s a pretty low estimate of a flu pandemic). But if you say it could kill as many as two to seven million people, people will kind of shrug off the two to seven, but they’ll focus on the ‘as many as’ as evidence that it’s a bad number. They’ll say, “Oh, shit. It could kill as many as two to seven million people!”
If, on the other hand, you said it would only kill two to seven million people, people use ‘only’ as their signal and say, “Oh, no biggie. It’s only two to seven million people.” So the number matters less than the signals you put around the number that tell people whether you’re trying to freak them out or you’re trying to calm them.
I want to say one more thing about outrage management, and that is this: if you’re in a situation where people are yelling at you – somebody is yelling at you; let’s say your spouse. They’re telling you all the things that are wrong with you. Typically, that indictment is going to be a mix of valid stuff, halfway valid stuff, and crap. We all agree? Some of what they’ve got to say is true, some has a germ of truth, and some of it is nonsense. If you are normal, as you listen to this harangue, and are preparing your response, what you normally focus on is...?
Audience: The crap.
The normal reaction when you get the floor is to say, “That’s crap!” Then you come up with two or three valid examples of crap – the things they said that are not true. You ignore the things they said that are true. When you do that, whose outrage are you managing? Your own. The key to outrage management, when an organization decides that it has an interest in calming down its stakeholders instead of itself, the key to outrage management is to ignore the crap and focus on the good stuff. Focus on the valid complaints.
So you can recognize the people who are doing outrage management because they do a lot of, “You’re right about that,” and, “Yeah, we screwed that up,” and, “Yeah, that is a problem.” The more acknowledging and apologizing they’re doing, the better. They should also move on to what they’re going to do about it, but in fact, my clients are much too inclined to talk about how they’re going to solve the problem, and much too disinclined to dwell on how bad the problem is.
Again, if you go to your personal relationships, you all sort of know that when you have done something wrong, apologizing is more important than your recovery plan. This is major in relationships between men and women. Men love to skip right through the apology and get to the recovery plan as soon as possible; it’s one of many kinds of prematurity problems we have. You’re all familiar with this right? “I did it, I’m sorry, let’s never talk about it again!” One sentence and it’s over. But it’s not, because outraged people need to yell at you before they can forgive you.
Bottom line on what’s most important here: outrage management and precaution advocacy are completely different skill sets. If you’re good at precaution advocacy, that does not necessarily mean you’re good at outrage management or vice versa.
They gave me the ten minute signal, so I’m going to skip some stuff.
Up here, we have crisis communication. High hazard, high outrage. Now be careful: lots of organizations refer to what I’m calling ‘outrage management’ as crisis communication, because it’s a crisis for the organization. It’s a reputational crisis for your corporation, and a profitability crisis, but I don’t consider it a crisis unless it’s a crisis for the audience. So crisis communication is when people are upset and they’re right to be upset. That’s a third paradigm.
Down here it’s “Watch out!” Up here it’s “Calm down.” Over here [crisis communication] it’s “We’ll get through this together.” And that is yet a third skill set. The things you do when you’re doing crisis communication are very different from the things you do in the other two corners.
Very quickly, because I haven’t got time, in the middle we have sort of the sweet spot where it’s easy. Down here is low-hazard, low-outrage risk communication. I have not found that to be a business opportunity. If you see a way to make money here, let me know [audience laughter].
This is the map of risk communication. Whenever you face a risk communication challenge, the first thing you do is ask yourself, “How high is the hazard, and how high is the outrage?” Then you add a time element: how high is the hazard likely to get? How high is the outrage likely to get? You want to pre-empt if you can. Based on your judgment about where the hazard is and where it’s going, where the outrage is and where it’s going, you know where you are on the map, and you know which toolkit to bring to the task.
You can be good at all three of these. PR people are normally good at precaution advocacy. Politicians are normally good at that too. Nobody is normally good at the other two. You have to learn the other two. The big problem with learning the other two is that you have to unlearn this one, because if you bring PR skills to bear in outrage management or crisis communication situations, you will screw them up badly, as Bill Clinton screwed up his communication about Monica Lewinsky and, much more importantly, as it seems to me, as President Bush screwed up his communication about the war in Iraq.
I think that Bush on Iraq and Clinton on Lewinsky were both stunning examples of people who had to do outrage management and later had to do crisis communication and were busy doing precaution advocacy instead. They were busy talking in sound bites. They were busy keeping it short, making it interesting, and staying on message when they needed to be deploying very different skills.
All right. Two more points and then I have to stop.
Point one: none of this – well, let me make the other point first. Where is pandemic communication on this map? It depends where you are in the pandemic, and where you are in the world. If there is a pandemic, particularly if there is a 1918-like rather than a 1968-like pandemic, we will all be doing crisis communication. That’s obvious.
For the most part now, those of us who think it’s serious are doing precaution advocacy. Those who think it isn’t serious – if you’ve read the writings of, say, Marc Siegel – are doing outrage management. Because that’s their judgment, they are busy trying to persuade the world that we have a real pandemic of obesity and a hypothetical pandemic of flu; get a grip!
So there are people who think it’s an opportunity for outrage management. I’m busy doing precaution advocacy. And when it happens, it’ll be crisis communication.
There are other complexities. It is already crisis communication if you’re talking to – if you’re writing on the Flu Wiki.
Do you all know about the Flu Wiki? Fluwikie.com – you have to spell it exactly or you get an ad. Flu Wiki is where flu-obsessives congregate – particularly where pandemic obsessives congregate. I don’t mean that as an insult; I’m one of them. But if you want to know the mood of the three or four percent of the country who are most worried about a pandemic, you read Flu Wiki. You should particularly read the forum part of Flu Wiki and, as you track it, you will get a very good sense of people who are already in a pandemic crisis even though they know they’re not in a pandemic yet.
Male4: Can you spell that again, please?
Fluwikie.com – if you go to .org, or spell it differently, or do anything different, you’ll have an opportunity to buy Tamiflu. So you’ve got to spell it right.
So if you’re talking to flu nuts – and there are a lot of us in the room – then it’s already crisis communication, because you’re talking to people who are upset and rightfully so. It’s also already crisis communication if you’re in Asia trying to persuade a farmer that it’s really a good idea to cull his flock, even though his flock is healthy and even though the risk to his health and his family’s health is trivial, and even though your main reason for wanting to cull his flock is in order to reduce the probability of a pandemic because it would be bad for people here in Cambridge. There you are in Thailand, saying to a farmer, “We’re going to kill all your birds, we’re going to destroy your livelihood, we’re going to put you and your family into poverty permanently; but it’s worth it because there will be less of the ‘great roulette table of Asia’ operating.” That may be good for public health, but it sure isn’t a good business plan for the Thai farmer. Are you with me?
So he is in crisis; he is also highly outraged. But it’s not outrage management, it’s crisis communication because he’s right to be highly outraged. You’re about to devastate his economic welfare because it’s better for the world, albeit much worse for him.
In short, where pandemics live on this chart depends on who you are and where you are and what you believe. Locating where pandemics are on this chart is fundamental to understanding pandemic risk communication.
Final point: none of this is what pandemic journalism is about. Somehow, I imagined I would save about twenty minutes to talk about journalism; instead, I’m already on borrowed time. I assume it’s obvious to the journalists in the room – obvious to everybody in the room – that reporters do not believe these arrows are their job. Reporters are not trying to increase the outrage, they are not trying to decrease the outrage. They are covering the outrage.
Reporters do vary their coverage depending on location on this map, in ways that are absolutely predictable. Coverage down here [precaution advocacy] is dutiful; it’s boring. There’s not much of it, because it’s not an interesting story. It’s hard to interest your readers in something that could kill them but doesn’t upset them. The coverage down there is very low in volume; it is barely investigated, and it is extraordinarily credulous. Any official source can tell you anything in this corner and you’ll just cover it. You write it off the press release.
As the risk gets more serious, we start moving up into this corner [crisis communication], or as the reporter gets more worried the reporter starts moving up into this corner, even though the editor hasn’t and even though, Lord knows, the audience hasn’t! But you’ve got your Tamiflu – you’re worried! The coverage changes. I can look at a news story and tell you whether the reporter’s got Tamiflu.
The coverage changes, and it changes in very predictable ways. Because now it’s a crisis, the coverage gets more extensive. Interestingly enough, the coverage becomes over-reassuring. I suspect part of what’s going on is the reporter is genuinely himself or herself worried and is trying to reassure him or herself by reassuring the reader. I think it’s a psychological phenomenon. There may also be an economic phenomenon. Terrified people are not good advertising audiences. It’s not good for business to terrify your audiences. But I don’t think reporters really care about business very much.
So I think the main thing that’s going on is the reporter’s individual psychology. In any case, it is extremely noticeable that sources continue to imagine the reporters are sensationalizing, but reporters stop sensationalizing when they start thinking it’s serious. They instead become very over-reassuring. The Three Mile Island coverage, for example, was profoundly reassuring. Reassuring paragraphs outnumbered alarming paragraphs four to one.
That’s because reporters were scared, and scared reporters write reassuring stories. Scared reporters also rely much more on official sources. At Three Mile Island, the anti-nuclear activists had enormous trouble – this should have been their moment, you know. My God, they’d been proved right and nobody wanted to quote them.
The same thing is happening now with flu, with respect to those reporters who are starting to take it seriously. They are starting to get very solemn, they are starting to get very official; there are obviously extremely good reporters who have a wonderful [inaudible] and have a rich range of official sources, but they’re still very reluctant to listen to [inaudible]. We all think that reporters listen too much to crazies, but as soon as you get worried, as soon as it becomes crisis communication, you listen only a little to crazies.
Of course, you all know the dynamic of outrage management. When the issue is not serious but people are upset, reporters have fun. High-outrage low-hazard stories are fun to write, they get a lot of attention; the editor likes them, the reader likes them. Nothing is really at stake, and what your critics call sensationalism and you call good journalism is most characteristic of coverage in this corner. Your use of sources becomes completely different.
You’re still an objective reporter, but what you do in this corner [low hazard, high outrage] – we’re hypothesizing by putting it in this corner that the hazard is actually low. What you do – you do several things. First of all you cover the outrage instead of the hazard. You cover people saying, “I’m scared shitless!” instead of people saying what the hazard is. You cover the outrage.
Secondly, to the extent that you cover the hazard, you cover opinions about the hazard instead of data about the hazard. We’ve done studies in which we wrote fifty-paragraph articles with all kinds of stuff. We gave them to different kinds of people, and we said, “This article is too long, get rid of half the paragraphs. Don’t just cut from the bottom, pick which paragraphs to get rid of.” Reporters invariably get rid of all the science. Well, that’s not fair. Reporters invariably get rid of nearly all the science. Editors invariably get rid of all the science. The public gets rid of most of the science, and the scientists get rid of anything that smacks of humanity.
So there are very different visions of what a good story is. If it’s a high-outrage low-hazard story, reporters are going to cover the outrage more than the hazard. Reporters are going to cover opinion about the hazard more than data about the hazard, and reporters are going to cover certain opinions more than others.
This is the last thing I’ll say.
Female5: Peter, where would the coverage of the flu shot shortage in 2004 be in there? Nobody said, “If you don’t get a flu shot this year, you’re going to be [inaudible].”
Absolutely. Coverage of the annual recommendation to get a flu shot is precaution advocacy and follows that pattern. Coverage of the shortage was outrage management and followed that pattern. The two got grafted onto each other. If you followed the coverage, you got the impression that waiting in line for a flu shot must be a much more serious risk than the flu.
Male4: Peter, where does precautionary or investigative reporting come in: Silent Spring or John Wilkes Collin and Mark Schlesteiner in the New York Times and Pinky Unann in 2003 writing about failures like [inaudible]. That’s neither outrage management nor precaution advocacy. That’s –
That is precaution advocacy. There is a niche in journalism; there have been times when it was a big niche and times when it was a small niche, but it never disappeared altogether, that says, “All right. You don’t have to absolutely write an Op-Ed. You can write an investigative series.” If you’re writing an investigative series, you’re allowed to invest in the arrow. That is, investigative reporters are allowed to try to increase public outrage.
It exists, though it does not exist in daily deadline journalism – or it almost doesn’t. But it always has existed. There have been periods in history in which it was very strong; this is not one of them, but it certainly hasn’t disappeared.
Let me draw you this picture and then I’ll shut up. Here’s a nine-point scale from completely safe [one] to incredibly dangerous [nine]. These were meant to be equal intervals. Leaving aside the investigative story, reporters do not care whether the real risk is two or five or eight. They judge that they’re not qualified to tell and they judge that it’s not their business to try to tell.
So what reporters want to do is essentially go on a scavenger hunt. What the general assignment reporter does especially is go on a scavenger hunt. Now they don’t cover one and nine because they’re too extreme. Those of you who think reporters cover the extremes all the time, you have not seen the nuts who come into newsrooms who tell you that SARS is from outer space. They actually covered that one. It was a slow day.
So reporters sort of ignore one and two, and they sort of ignore eight and nine as ‘too weird.’ They’re also not very interested in four, five, and six, because they’re boring. It is hard to get a good story out of “Further research is needed.” So most journalism is about three and seven. If it’s a minor story, three and seven get their own news release. Normally the story is launched by seven because risky is more newsworthy than safe. So somebody says it’s risky and you cover it; the next day, somebody says, “No, it isn’t!” Then you cover that too.
If it’s a bigger story, you get them into the same story in alternating paragraphs, and once again, seven is going to get more attention than three because risky is more interesting than safe. But seven and three will both get more attention than two or five or eight. Those will all sort of fall by the wayside, and you get a nice little ping-pong match between three and seven, which seven normally wins, because the scary side almost always does.
In terms of choice of sources, government is the preferred source because government is the swing vote. You go to government first. If government says seven, you go find an industry spokesperson to say three. If government says three, you go find an activist to say seven. And then you’ve got your story. You don’t really worry whether the truth is at two or five or eight. If you cared where the truth was, you’d be writing editorials. That’s a gross oversimplification.
I do realize – I was talking to Stephanie about this last night at dinner – anthropologists are never popular with the tribe. Like all professionals, journalists are profoundly ambivalent about their own norms. Anytime anybody stands up in front of a roomful of journalists and says, “You ought to care. You ought to make people realize how serious obesity is!” reporters can be counted on to say, “That’s not my job, that’s your job. I just cover it.” But if somebody stands up and says, “That’s not your job, that’s my job; you just cover it,” reporters tend to say, “Well, wait a minute. I’m a person too. I care [inaudible].”
So you can’t win. In risk communication this is called the seesaw. There are lots of seesaws in risk communication. Uncertainty is a seesaw. Low probability versus high magnitude is a risk seesaw. Journalists are on their very own seesaw, whether you’re supposed to be doing risk communication or you’re supposed to be doing journalism. If I ask you to do risk communication, you insist on journalism. If I stereotype you as mere journalists, you will insist that you do risk communication.
So I get to decide what side of the seesaw I put you on by getting on the other side. You see which one I chose.
[End of Recording]
Copyright © 2007 by Peter M. Sandman