Posted: May 3, 2017
Article Summary: This column is not an argument leading to a single conclusion, but rather a compilation of everything you (might) want to know about expert disagreement. Some of the points it makes: (1) Both claims of expert consensus and claims of expert disagreement should be viewed skeptically. (2) Even expert opinions are mostly secondhand; experts who haven’t read the key studies opine on what other experts say the studies concluded. (3) Though they often claim otherwise, technical experts deserve no special credibility on values and policy questions. (4) Experts are always biased by factors such as financial self-interest, ideology, peer pressure, and consistency. (5) When looking for an expert who will be credible to skeptics, pick one who leans the other way. (6) When experts can’t credibly claim consensus, they should acknowledge uncertainty rather than settle into a pattern of “dueling Ph.D.s.”

Expert Disagreement

When I started drafting this column in mid-March 2017, the U.S. media were full of expert analyses of the newly proposed Republican replacement for Obamacare. Not surprisingly, different experts focused on different aspects of the bill. Some cared enormously about the risk that millions of poor people could lose their health insurance, and cared hardly at all about the prospect of saving taxpayers billions of dollars. Others had exactly the opposite priorities.

But the experts didn’t just disagree about what’s important; they also disagreed about what’s true. When the experts who were most worried about poor people losing coverage turned their attention to budget savings, they thought the other side’s estimates of those savings were hugely exaggerated. When the experts who were most attracted to budget savings turned their attention to poor people losing coverage, they thought many fewer people would lose coverage than the other side imagined.

In other words, the disagreements of health insurance experts had a consistent pattern. The experts who thought the bill had big upsides tended to think its downsides were small. The experts who thought it had big downsides tended to think its upsides were small. There is no a priori reason why a mammoth change in health insurance policy shouldn’t have both big upsides and big downsides; in fact, that intuitively seems like the likeliest outcome. But it was surprisingly hard to find experts who claimed to believe the bill would both cost many poor people their coverage and save taxpayers a lot of money.

Moreover, the experts who predicted that lots of poor people would lose coverage tended to be onboard more generally with a progressive ideology that favors government expenditures to help poor people, while those who predicted sizable tax savings tended to be allied with a more conservative ideology that focuses on lower taxes and smaller government.

When it comes to public policy controversies, this is more the rule than the exception. Experts disagree. They disagree not just in their value judgments about what’s important, but also in their factual conclusions about what’s true. Their disagreements arrange themselves neatly into “sides” or schools of thought, with comparatively few experts who are partly on one side and partly on the other. And at least to an outsider, it looks an awful lot like the experts’ judgments about what’s important are greatly influencing their views on what’s true. To a great extent, their policy preferences and ideological biases seem to determine their factual opinions.

And when experts claim to agree, it’s not always clear that their consensus is real, or that it derives from their expertise. I was still tinkering with this column on April 21, 2017, the day of a worldwide but mostly U.S. and mostly anti-Trump “March for Science.” Timed to coincide with Earth Day, the march was focused largely though far from exclusively on the issue of climate change. It was grounded in the contention that the Trump administration is ignoring or denying the consensus of scientists not just on climate but on a broad range of issues.

On some of these issues – such as government funding for scientific research – the consensus was easily explained as special pleading. I think the marchers were generally right that science is a good investment of taxpayer money, but scientists are surely not the most credible, objective arbiters of that particular claim.

Even where the self-interest of scientists was not implicated, the March for Science struck me as a march for policy preferences that do indeed have a scientific component but also have a substantial values component, with regard to which scientists have no special expertise. And it struck me not as a march for the scientific process, which calls for tentativeness and open-mindedness in the face of debate (traits the marchers didn’t much demonstrate), but rather as a march for certain conclusions that most scientists were asserted to share and that most marchers surely shared.

Above all I was struck by the issue selectivity of the March for Science. A 2015 Pew Research Center study is just one of many that have looked at the differences between the opinions of scientists and the opinions of the general public. On some issues, the dominant opinion among U.S. scientists differs from the majority opinion of the U.S. general public (and the emerging policies of the Trump administration) in a direction that might be categorized as progressive or left-leaning – for example, that climate change is caused by human activity, that humans have evolved over time, and that vaccinations against childhood diseases should be required. But on other issues, scientists’ opinions are arguably to the right of the general public’s opinions. (I haven’t tried to figure out where the Trump administration stands on these issues.) The biggest gap in the Pew study concerned genetically modified foods, which 88% of scientists but only 37% of the general public consider safe to eat. Another big gap: 68% of scientists versus 28% of the general public think foods grown with pesticides are safe. And a third: 65% of scientists compared to 45% of the general public believe the U.S. should build more nuclear power plants. Yet pretty clearly the March for Science was not a march on behalf of GM foods, pesticides, and nukes.

The March for Science raises obvious questions about the reliability, validity, and meaning of claims about expert consensus. It also raises questions about the limits of expertise: when we should defer to the opinions (even consensus opinions) of experts and when we shouldn’t. I couldn’t resist discussing it briefly in this column. But including it here is a bit of a stretch. The march was always a march of progressive-leaning politically involved people against Trump policy positions they felt could be fairly categorized as anti-science or at least scientifically unsound. Most of the marchers and many of the march organizers weren’t scientists. They claimed, more or less accurately, to be on the same side as the majority of experts on the carefully selected issues on behalf of which they were marching – but they didn’t really claim to be experts themselves.

The debate over Obamacare versus Trumpcare is also in some ways an unfair example to launch my discussion of expert disagreement. Not everyone thinks economic modeling is a field of expertise at all. Even those who believe it is acknowledge that it is incredibly difficult to predict the future impacts of a hugely complicated economic innovation like a national health insurance system. And major segments of U.S. society have financial, political, ideological, or personal reasons for preferring one set of answers to another. This is a recipe for “dueling Ph.D.s” – experts whose disagreements tell you more about what side each expert is on (or paid by) than about how a proposed policy will actually turn out.

How often do expert disagreements resemble the battle over U.S. health insurance policy? My judgment after four decades of consulting on risk controversies: often.

Even more often than experts explicitly disagree on which facts are true, they cherry-pick which facts to showcase. From the universe of available data, each expert selects a subset of facts that support her viewpoint, grounded in her policy preferences and ideological biases. Consensus facts, if there are any, are framed by each expert so they appear to support that expert’s viewpoint. Notwithstanding the aphorism that “you are entitled to your own opinion, but you are not entitled to your own facts,” where possible experts prefer to stick to their own facts and ignore opponents’ facts. If they can’t, they try to make a case that opponents’ facts are less germane to the question under discussion, or that opponents are misinterpreting their own facts, or that the research demonstrating opponents’ facts has methodological flaws and shouldn’t be relied upon. Flat-out claiming that opponents’ facts aren’t true is a last resort … though it may have become more popular in recent months since “fake news” became a meme.

The most common pattern vis-à-vis controversial issues is for each expert to cobble together a coherent narrative of cherry-picked facts leading to her preferred conclusion – and stick closely to that narrative. I used to spend a fair amount of time grilling my clients’ experts. (It’s hard to give effective risk communication advice without knowing a fair bit about where the holes are in the client’s risk assessment claims.) I frequently asked them these two questions: “What are your opponents’ best arguments? And what are the kernels of truth in your opponents’ most commonly used arguments?” Surprisingly often my clients’ experts seemed to think these were unfair questions, questions they shouldn’t have to answer – even in the context of a confidential consulting arrangement.

This column is an effort to investigate some of the complexities of expert disagreement. By way of preparation, I have tried to read a fair sample of the expert literature on expert disagreement. Even so, much of what’s in the column consists of my own opinions, based in large part on my observations over 40+ years of risk communication consulting – interacting with lots of experts along the way.

Among the questions I want to explore are these (each appears below as a section heading):

If this long column is more than you want to read – which I fully realize is likelier than not – feel free to skip to my Summary section at the end.

Is expert disagreement simply to be expected, or does it have to mean that somebody is incompetent, dishonest, or biased?

Lots of nitty-gritty technical questions and lots of theoretical big-picture questions do indeed have a single consensus answer, an answer so well established that anyone who doubts it’s the right answer is obviously not an expert. A trivial example: The world really is (approximately) spherical, not flat.

Once in a while the consensus answer – the universal or nearly universal expert opinion – is shown to be false, precipitating a paradigm shift. Experts tend to resist the new evidence for a while. The dissidents may remain a minority for some time, even in the face of compelling proof that they’re right and the consensus is wrong. Sometimes the paradigm shift must wait until the Old Guard retires or dies.

But other times the paradigm shift happens quickly – you might even say surprisingly quickly, given that experts are people and people don’t like to be proved wrong. One familiar pattern: The real experts promptly take the new evidence onboard and change their minds; the broader field and experts in allied specialties lag behind, perpetuating the old paradigm; the dissidents and especially their early converts continue to feel like rebels, not quite noticing that they’ve already won.

Paradigm shifts are important, but they are relatively uncommon. Most questions on which there is expert consensus remain settled. Most of the time the expert consensus is the right answer, and the outliers who think they’re the vanguard of a paradigm shift are simply wrong. Quite often they are kooks whose “evidence” is far less convincing than they claim.

So it’s a problem when one expert’s opinion diverges from the consensus. If you don’t know that “your” expert is in the minority, you could be seriously misled. If you do know – and ideally your expert should tell you so – you still have to decide whether to go with the majority or trust the dissenter.

In my own field of risk communication, many of my opinions are part of an expert consensus I share with other risk communication professionals. But some are idiosyncratic. For example, I dissent from the widely held dictum that organizations should “speak with one voice” in a crisis or controversy. I think I detect a bit of a trend line toward my position, suggesting that maybe I’m part of the vanguard of a paradigm shift, not just a kook who’s simply wrong. But I’m obviously not the right person to make that call.

On other risk communication issues, there simply isn’t any settled expert consensus, nor is there a detectable trend line. The experts are divided and likely to remain divided for the foreseeable future. For example, many people in my field believe it’s usually wisest to correct misinformation promptly and aggressively, while many others recommend avoiding such corrections whenever possible on the grounds that repeating a falsehood may give it further currency. (I’m in a third camp, smaller than the other two, that favors acknowledging the kernel of truth that is usually embedded somewhere in the misinformation.)

Most fields, if not all, are like risk communication in this way: Plenty of questions exist on which there is considerable expert dissensus – questions that are unsettled and expected to remain so. These are the controversial questions within the field. And when a question is controversial, there are always experts to be found on every side. That’s part of what “controversial” means.

With regard to controversies, then, expert disagreement doesn’t have to mean that somebody is incompetent, dishonest, or biased. It simply means that the question isn’t settled. And the existence of unsettled questions is arguably a good thing, demonstrating that the field is still growing and the experts are still learning.

This is a crucial distinction I want to underline before moving on. When a question is thought to be settled, an expert who disputes the consensus is unsettling. Is the dissenter simply off-the-wall? Does the paradigm need to change? Even if the dissenter is mostly wrong, might the dissenting view be pointing the way to some aspect of the consensus that’s weaker than the mainstream realizes and needs to be reassessed?

But with regard to unsettled questions, expert disagreement is predictable, ordinary, and appropriate – and should not be considered a problem.

Surprisingly – to me, at least – the literature on expert disagreement has traditionally treated it as a problem across the board, as something to be explained and, if possible, eliminated. I am gobsmacked by the number of articles and blogs I have found with titles like this reddit page: “Given access to the same facts, how is it possible that there can be disagreement between experts in a discipline?” According to the traditional view, experts who have equal access to the available data and are properly skilled at assessing those data – “epistemic peers” in the language of the field – should reach the same conclusion. When they don’t, something is amiss.

(This connects naturally to the nearly universal preference for experts who “speak with one voice.” My disagreement with that dictum has undoubtedly sensitized me to the value of expert disagreement.)

Three explanations have traditionally been offered for expert disagreement, all of them hostile to experts who disagree: incompetence, venality, and ideology. In other words, when two experts disagree one or both of them must be incompetent, bought and paid for, or biased by ideological fervor.

That’s what the public thinks too. In a 2015 study, people were asked to react to expert disagreements on a wide range of issues. Those with low levels of education and those who didn’t feel they understood the issue particularly well tended to interpret expert disagreements as evidence of expert incompetence. Those who felt more knowledgeable usually attributed expert disagreements to bias, either financial or ideological. The most educated and “cognitively able” people in the sample also thought a lot of expert disagreement was due to bias – but this group and only this group said the biggest reason for expert disagreement was the “irreducible complexity and randomness of the topic area.” In other words, only the most educated and smartest people in the study thought maybe the experts disagreed because it was hard (or impossible) to figure out the right answer.

I certainly agree with the participants in this study that experts are sometimes incompetent, and oftentimes biased by money or ideology. But unlike most of the study participants, I think what expert disagreement means depends entirely on how settled or unsettled the question is. An expert who disagrees with an established consensus upsets the applecart. Expert disagreement about ongoing controversies, on the other hand, is what’s supposed to happen. When there’s not much evidence, or the evidence is mixed, or the question has multiple aspects that point in different directions, expert disagreement is natural and for the best.

But most people smell a rat when experts disagree.

How can we tell how much experts actually disagree?

Since most people see expert disagreement as evidence that something is amiss, it follows that an expert who wants to be maximally credible has reason to emphasize that her view is the majority view, even the consensus view, among epistemic peers. If virtually all experts believe that X is true, most non-experts will take the truth of X on faith, making the relatively safe bet that the X issue isn’t one that’s about to undergo a paradigm shift.

The more uneven a split in expert opinion, of course, the more sense it makes to go with the majority. But it’s worth remembering that you’re not going with the majority because you think truth is determined by vote. There’s nothing in the least scientific about majority rule. Every paradigm shift from Galileo to Einstein signaled a mistaken expert majority. Rather, you’re going with the majority because:

  • You know that paradigm shifts are relatively uncommon;
  • You know that paradigm shifts are especially unlikely when the expert consensus looks strong, when the minority view isn’t widespread and doesn’t seem to be gaining any ground; and
  • You know that experts have knowledge that helps them find truth more successfully than non-experts.

So if nearly all the experts have arrived at a particular conclusion, and there doesn’t seem to be a lot of debate among the experts about that conclusion, you know that it’s statistically likely to be the right conclusion … even if lots of non-experts don’t buy it.

Expert majority rule isn’t a sound basis for expert decisions; experts are supposed to assess the evidence, not the other experts. It’s a sound basis only for non-expert decisions – and it’s sound only insofar as the experts themselves aren’t relying on it. In other words, if experts are using majority rule to decide what they think, their reasoning is circular: Each of them believes X because the rest of them believe X – in which case we non-experts have precious little reason to believe X just because the experts do.

So a key criterion for deciding whether counting experts’ heads is a good decision-making protocol is whether the experts are independently assessing the evidence, not just counting each other’s heads.

A second, related criterion is also worth keeping in mind: whether the experts are open to countervailing evidence. To the extent that they’re not – to the extent that they have reached a conclusion that is now an article of faith and impervious to falsification – they’re no longer functioning as experts but rather as adherents to the faith. That does happen to experts, even scientific experts, especially when they’re under attack. Conclusions become beliefs and experts become believers. Among the “tells” that this may be happening: demonization of dissenting or even partially dissenting experts and over-reliance on the extent of consensus as a replacement for evidentiary arguments.

When you sense that a field of evidence-based expert knowledge has become instead an internally consistent, evidence-resistant, unfalsifiable system of beliefs, that doesn’t necessarily mean the expert consensus is wrong. It just means outsiders have less reason to rely on an expert head count. Adherents to a religion typically have high levels of consensus about the tenets of that religion, and the greatest consensus may be among the religion’s most thoroughly schooled leaders. The same is true of any ideology, or indeed of any belief system. The consensus doesn’t make the belief system true, nor, of course, does it make the belief system false.

Judging truth by the number of experts on each side is itself a bias, a bias in favor of traditionalism. In economic terms, majority opinion is a lagging indicator. It misses the vanguards of paradigm shifts to come. So instead of counting heads, it’s probably wiser to judge the trend line: Which positions are gaining expert adherents as younger, more flexible and more newly trained people join the expert ranks, and which positions are fading out as older experts retire or die (especially generalists and experts in allied fields, who haven’t necessarily kept up with new research)?

If there’s no trend line – if there’s a stable consensus among experts who are independently assessing the evidence and remain open to contrary evidence – then you’re on much more solid ground as a non-expert going with the majority than with some outlier expert who is probably wrong and possibly a kook.

How do we know whether a question is settled or not? The conventional answer is circular: We measure how unsettled the question is by how much expert disagreement we see. This isn’t a satisfactory answer. As I’ve emphasized already, ideally each individual expert would independently assess the evidence and decide what conclusions to draw. Then we could count heads (ideally using some kind of secret ballot) and see how many experts favored each competing conclusion. If that were the way reality worked, expert consensus would be the natural result when a question was settled, and expert disagreement would be the natural result when it was unsettled – but it would be the state of the evidence, not the head count, that defined how settled or unsettled the question was.

Because we measure how settled or unsettled a question is by how much expert disagreement we see, the extent of expert disagreement becomes itself a key piece of rhetorical ammunition. Many policy controversies are accompanied by meta-controversies over what’s really controversial and what’s settled.

In recent years this has been most vivid with regard to climate change. I don’t intend to focus this column on climate change. I want to keep my discussion of expert disagreement generic, with examples from many topics. But climate change is the obvious example of meta-controversy over expert disagreement. Perhaps the biggest debate about climate change is the debate over whether climate change is debatable.

The two dominant positions in the climate change controversy, in fact, are not Yes and No. They’re Yes and Maybe. One side says climate change is real, anthropogenic, and serious, full stop. The other side says there are too many unanswered questions to reach such firm conclusions. In an ideal world the debate over debatability would focus on the quality of the evidence: What do we know for sure, what looks probable but not certain, and what’s really up in the air?

In the real world, much of the debate focuses instead on the extent of expert disagreement. Climate change proponents assert that the denialists are manufacturing dissent and uncertainty where none really exists, drawing from the same playbook the tobacco industry used to cast doubt on the carcinogenicity of cigarettes. In 2008 the term “Scientific Certainty Argumentation Method” (SCAM) was coined to describe this strategy: denying scientific certainty in an effort to undermine regulatory action. Climate change skeptics (a more accurate descriptor than “denialists” for people who say Maybe rather than No) reply that the proponents (“alarmists”?) are manufacturing faux consensus, pressuring experts to keep mum about their doubts and sign on to the party line.

As far as I can tell, both indictments are partly correct. There is evidence of both manufactured dissensus and manufactured consensus. Expert disagreement about climate change science, especially the reliability and validity of climate change models, is almost certainly greater than proponents claim but less than skeptics claim.

This disagreement about expert disagreement leaves the rest of us in a bit of a pickle. We don’t know nearly enough to reach our own judgments about climate change science, so we have to rely on the experts. But the experts disagree. Or do they? They disagree about whether they disagree. Or do they? How much of the disagreement is manufactured? How much of the agreement is manufactured? And how far these questions have strayed from the substance of climate change science!

(Because climate change is such a hot-button issue, I feel I should mention here, parenthetically, my own decidedly non-expert opinion. I don’t have much faith in climate change models. It seems to me that there are too many models, and too much post-hoc tinkering with the models. We’re nowhere near the goal of one best model, picked in advance by expert consensus, that has stood the test of time and turned out to be a good climate predictor. I also don’t have much faith in the solutions recommended by climate change proponents. The commonsense solutions sound pretty trivial compared to what they say is the size of the problem, while the big solutions sound like long-shot gambles. Despite these reservations, I suspect the proponents are closer to right than the skeptics. And you don’t have to be confident that alarmists are right to think it makes sense to start taking steps – at least the smallish, commonsense steps – to mitigate the damage in case they’re right.)

Now let’s put aside climate change and other super-hot policy controversies and return to the generic question of what expert disagreement signifies.

The fact that most people see expert disagreement as a problem – as evidence that there’s something wrong with some or all of the experts – undoubtedly puts pressure on experts to agree. Worse, it puts pressure on entire fields to pretend that there’s more consensus than there actually is, to speak with one voice.

That’s just one of the reasons why data on the extent of expert disagreement shouldn’t be taken at face value. Here’s a more basic reason: Experts in a particular field or subfield typically form a subculture of their own. They know each other, go to the same professional meetings, read the same journals, etc. That easily leads to conformity, especially if the majority opinion among experts is under attack, whether from dissident fellow experts or from non-expert opponents.

As a result, uneven splits in expert opinion are probably less uneven than they seem. If 90 percent of experts are thought to believe X, many who don’t – especially the younger, untenured ones – will knuckle under and say they do. This pressure toward conformity occurs in faculty lounges and surveys, in grant applications and journal articles, and maybe even in courtroom testimony.

Usually the pressure to conform comes from professional peers. But sometimes it comes from a broader political movement with which the profession is allied. When Ebola broke out in West Africa in 2014, hundreds of health professionals at U.S. hospitals and medical schools decided to volunteer. In response, many of their institutions promulgated rules requiring returning Ebola volunteers to stay away from work for 21 days after leaving West Africa, a quarantine-like policy they said was intended to protect patients from possible infection. But when New Jersey’s Republican Governor Chris Christie imposed a statewide quarantine on returning Ebola volunteers, the Ebola quarantine issue became a national political football. The public health profession lined up almost unanimously behind the Democratic position that Ebola quarantines were scientifically indefensible. Not one of the medical schools and hospitals that had quasi-quarantined their own people dissented publicly from the profession’s anti-quarantine pronouncements.

Subtler pressures toward conformity may do even greater harm. Knowing that most of the top people in my field believe X, I don’t just hide my conviction that X is false, if I have such a conviction. The bigger problem is that I shrug off any doubts I might have about certain aspects of X. A methodological defect in a study that concludes X is less likely to strike me as fatal. An anomaly in a data set that points away from X is less likely to strike me as worth pursuing. And since I already “know” that X is true, because virtually all my colleagues say so, I’m a whole lot less likely to read the studies demonstrating that X is true carefully and critically – or, indeed, to read them at all.

But remember that there are contrary pressures as well, pressures toward dissensus. If the emerging expert consensus that X is true threatens the profitability of some industry, companies in that industry will invest heavily in studies aimed at undermining the consensus. The mere availability of funding indisputably affects what gets studied. The studies don’t have to be dishonest. Without acting unethically, researchers can consciously or unconsciously make methodological and analytic choices that favor one hypothesis over another. And of course a corporate research sponsor can publish the studies that call the X consensus into question and ditch the ones that turn out supporting the consensus. In my decades of consulting I saw very little evidence of fudged data or other outright research fraud on behalf of my clients. But I saw plenty of research aimed more at demonstrating the hypothesis a client favored than at testing that hypothesis critically.

And just as some experts are coerced or lulled into joining a majority they haven’t independently decided is correct, others find joy and success in staking out a contrarian position. Papers that merely confirm an expert consensus are at least as hard to get published as papers that dispute that consensus; the most publishable papers either add a detail or question a detail. A reputation for iconoclasm, moreover, has upsides as well as downsides. “Me too” isn’t exactly a quick road to a stellar reputation. If you really want lots of speaking invitations, consulting gigs, and other professional opportunities, you need to have a point of view that is identifiably yours, not just your field’s.

In short, the incentives to dissent are as real as the incentives to conform. Sometimes one set of incentives is greater, sometimes the other. Both affect the extent of expert disagreement. And both make it hard to judge how much “natural” expert disagreement exists with regard to a particular question.

Though expert disagreement is tough to measure reliably, it is nonetheless an important variable to try to assess. Sometimes there are only a few iconoclasts, with the vast majority adhering to the mainstream view. Sometimes there are only a few holdouts, with the vast majority having embraced the new view. Sometimes the split is closer to even.

Often what looks like a split within one expert field is actually a split between two different expert fields. The question under discussion is relevant to both fields, and their expert perspectives differ systematically. My risk communication consulting, for example, has often been complicated by divergent expert assessments of the danger posed by some industrial facility, product, or waste. I learned early that chemists and toxicologists tended to think the risk was lower than epidemiologists or environmental scientists.

Even within a single field, “expert” is a slippery concept. Most people would agree that not everybody with a college degree in biology is an expert biologist. But many would consider everybody with a Ph.D. in biology an expert biologist – even on questions quite distant from the particular expert’s research focus, perhaps even on questions the expert hasn’t actually studied since sophomore year. The more specialized and technical the question, the fewer true experts there are on that question. When we talk about experts, then, we may be talking about people “in the field” generally, or people in a particularly relevant subfield, or people in the subfield who have recently read summaries of the research most relevant to the specific issue under debate, or people who have recently read the studies themselves, or people who have immersed themselves in the studies (even the methodological appendices and supplementary data analyses), or people who actually did some of the studies themselves.

Experts with more detailed knowledge of a specific issue are likelier to have firm opinions on that issue – and if they actually did some of the key studies themselves, they may understandably resist alternative interpretations of their work. Detailed knowledge and opinionated intransigence tend to go hand-in-hand. I’ll return to this point later, when I address the loaded question of whether experts are objective.

When experts disagree, it’s worth paying attention to the experts’ level of expertise. Are the real specialists pretty much in agreement but the broader field is split? Is the broader field pretty much in agreement but the specialists are divided into divergent (or even feuding) camps? Are the specialists really confident of specific conclusions, while the broader field is more tentative? Does the broader field imagine the specialists are more confident than they actually are, because it has over-learned their conclusions and missed their doubts? Does the broader field have a consistent misimpression – over-generalized or out-of-date, perhaps – of what the specialists have learned? Have the specialists gone off the deep end together (specialists turned cultists, in effect), while the broader field is on more solid ground? All these patterns occur.

What do we mean when we say somebody is an expert? To what extent is individual expertise an illusion?

The “level of expertise” question matters in both directions:

  • If you’re a certified expert in a broad field – biology or law, say – you may know understandably little about a highly specialized question within that field. You probably haven’t read the recent research on that particular question; in fact, you may never have studied that particular question. But people who aren’t in the field at all may assume you’re an expert on everything in the field. And when more specialized experts are not around, you may assume so too.
  • If you’re a genuine expert in a highly specialized subfield – cellular biology of protozoa or tort law in medieval Europe – you may know surprisingly little about the broader field. You probably haven’t kept up on recent research outside your subfield; in fact, you may have forgotten much of what you once learned (perhaps in an introductory survey course) about the broader field of biology or law. But again outsiders may assume you’re an expert on other subfields too, and so may you.

Here’s a key thing to remember about experts, maybe even the key thing: Most people the public thinks of as experts on a particular question haven’t actually done any of the significant research on that question. They may not even have read the significant research in order to reach a firsthand, solitary judgment about what’s true and what’s important.

So what makes them experts? Maybe they have read abstracts or summaries of some of the key studies, or articles in professional publications that summarized whole clusters of studies. Maybe they haven’t – but have picked up the gist from casual conversations with colleagues. They’re secondhand experts, not firsthand experts. That is, they’re not actually experts on the question itself; rather, they’re experts on what the experts on the question itself have concluded. And if there are disagreements among the experts on the question itself, then they’re experts on which experts they trust.

Take a question that’s fairly settled among experts but still controversial to the broader public – vaccine safety, for example. The vast majority of genuine experts on vaccine safety are convinced that getting vaccinated against vaccine-preventable diseases is safe – by which they mean that getting vaccinated is way, way safer than not getting vaccinated, assuming you’re at risk of being exposed to the disease a particular vaccine was formulated to prevent. So few experts disagree about vaccine safety that the majority would deny that the outliers are experts at all; even if they have read all the right papers, they’re nonetheless considered ideologues, kooks, or worse. Meanwhile, the typical local health department official or pediatrician has not read all the right papers – but nonetheless “knows,” rightly, that vaccines are safe. What she actually knows is that nearly all the real experts think vaccines are safe.

Most local health department officials and pediatricians know far less about the details of vaccine safety controversies than an outlier expert (okay, an outlier “expert”) knows. But the outlier expert has reached conclusions about those details that mainstream experts consider not just mistaken but disproven. Local health department officials and pediatricians know what the mainstream opinion is; they know they should ignore the outlier expert’s opinion. That’s what makes them experts – sort of.

This process works fine for questions that are genuinely settled among experts.

On those relatively infrequent occasions when the expert consensus is wrong, of course, the process backfires. Secondhand experts who know what the expert consensus is without knowing much about the evidence frequently resist the paradigm shift that needs to occur. Quite often they simply don’t notice that the old paradigm is weakening. So they are typically the last holdouts; even after most of the real specialists have reluctantly changed their minds, the generalists still “know” what they learned years ago.

The process is also problematic on those not-so-infrequent occasions when there is no true expert consensus. A secondhand expert is likely to interpret as consensus the school of thought that she has been exposed to and inculcated in. For example, the current conventional wisdom among tobacco risk experts in the United States is that electronic cigarettes (e-cigs) are roughly as dangerous as real cigarettes, and should be regulated roughly as strictly. The current conventional wisdom among experts in the United Kingdom, on the other hand, is that e-cigs are much safer than real cigarettes, and should therefore be regulated more laxly to encourage smokers to make the switch. In the U.S., at least (I don’t know about the U.K.), most physicians and public health officials don’t realize that the issue is unsettled. They think the dominant school of thought in their part of the world is the expert consensus. That is, generalists in the U.S. think they “know” that e-cigs do more harm than good because that’s the position that the majority of U.S. specialists have staked out in what is actually an ongoing expert disagreement.

Knowledge – including expert knowledge – is a lot more communal than we normally realize. Each of us knows shockingly little firsthand. Most of what we think we know is actually other people’s knowledge that we take on faith. A March 2017 op-ed in the New York Times, written by two cognitive scientists, put it this way:

You know that the earth revolves around the sun. But can you rehearse the astronomical observations and calculations that led to that conclusion? You know that smoking causes cancer. But can you articulate what smoke does to our cells, how cancers form and why some kinds of smoke are more dangerous than others? We’re guessing no. Most of what you “know” – most of what anyone knows – about any topic is a placeholder for information stored elsewhere, in a long-forgotten textbook or in some expert’s head.

One key point in this op-ed was that we believe falsehoods for pretty much the same reason we believe truths: because sources we trust told us they’re true. In this sense, the false beliefs are no more irrational than the true ones. To return to vaccine safety: Anti-vax parents “know” vaccines are dangerous in exactly the same way pro-vax parents “know” they’re safe – because they trust the experts who say so. How do I “know” the pro-vax parents are (mostly) right and the anti-vax parents are (mostly) wrong? Because I (mostly) trust the experts who say so. Unless you’re an actual expert, where you stand on vaccine safety depends far more on whom you trust than on what you know.

The authors’ other key point was that we all lose track of this distinction between firsthand and secondhand knowledge, absorbing conclusions from sources we trust and imagining that we “know” why those conclusions are valid. The op-ed referenced a series of four studies demonstrating this point. The researchers fabricated nonexistent scientific phenomena (e.g. rocks that glow). Some respondents were told that scientists don’t understand the phenomenon yet; others were told that it was scientifically understood already, though no explanation was provided. Then respondents were asked how well they understood the phenomenon. Respondents in the second group rated their own understanding higher than those in the first group. If you’re told the experts understand something, in short, you imagine that you understand it too.

We do this even when we are aware that a question is controversial. The op-ed gave this example:

Recently … there was a vociferous outcry when President Trump and Congress rolled back regulations on the dumping of mining waste in waterways. This may be bad policy, but most people don’t have sufficient expertise to draw that conclusion because evaluating the policy is complicated. Environmental policy is about balancing costs and benefits. In this case, you need to know something about what mining waste does to waterways and in what quantities these effects occur, how much economic activity depends on being able to dump freely, how a decrease in mining activity would be made up for from other energy sources and how environmentally damaging those are, and on and on.

We suspect that most of those people expressing outrage lacked the detailed knowledge necessary to assess the policy. We also suspect that many in Congress who voted for the rollback were equally in the dark. But people seemed pretty confident.

On question X, some people believe A and others believe B. We know very little about X, and we have neither the time nor the expertise nor the inclination to delve into the debate over X. While we may lack what it takes to come up with a firsthand opinion about X, we are highly skilled at figuring out what sort of person is on the A side of X and what sort is on the B side. And we know which sort of person is our sort of person. Are you a Democrat or a Republican? Do you generally think U.S. environmental regulations are too lax or too strict? What do you suppose most of your friends would say if asked what they think? If you know the answers to these three questions, you pretty much know where you stand on the recent rollback of the regs governing mine waste dumping in waterways. No need to study up on the technical merits of the rollback.

What’s fascinating isn’t just that we all take this shortcut again and again, using affiliation and ideology as stand-ins for substantive opinions. What’s fascinating is how easily we forget that that’s what we do. We actually think we have a serious, substantive opinion for or against the Trump administration’s mine waste dumping regulatory rollback.

My wife and colleague Jody Lanard likes to recall how quickly her college classmates decided they were against the war in Vietnam. Jody ended up against the war too, but it took her a fair amount of time and effort to figure it out. Her classmates got there quicker. Instead of deciding what they believed, they mostly decided whose side they were on, a much easier task.

Here’s a thought experiment that can sometimes help people realize how secondhand their opinions really are. Take a scientific belief you’re pretty confident about, and imagine yourself in a discussion with a handful of experts whose views are different from your own non-expert view. How will you fare? Are you likely to convince them you’re right? Or if it’s a debate before a neutral audience, are you likely to win the audience over to your side? Odds are your answers to these questions are “No, of course not.” You know you’d be outclassed. You know that actual experts who disagree with you can make a better case than you can. Then how is it possible that you continue to think you’re right? The obvious answer: You feel entitled to disagree with experts who can out-argue you because you’re confident there are experts at least as good on your side. You just don’t happen to be one of them. In other words, your opinion is grounded not in your own expertise, but in other people’s expertise. Your opinion is secondhand.

Experts are just like the rest of us in all this. Expert opinion, too, is mostly secondhand.

The exception is the handful of highly specialized experts whose lives are largely devoted to the specific question about which they are now offering their expert opinions. Some of their expertise is genuinely firsthand: They did their own studies. And even their secondhand expertise is less secondhand than that of most experts: They read their colleagues’ studies carefully, critically, and knowledgeably.

Firsthand expertise is surely more knowledgeable than secondhand expertise. But is it more objective or trustworthy? That’s where I’m going. But first I need to address the ways experts – whether firsthand or secondhand – opine beyond their expertise.

What are the limits of expertise? In particular, how often do technical experts conflate expert opinions on technical questions with “expert” opinions on values or policy questions outside their expertise?

Experts are not the most reliable judges of the limits of their own expertise. When talking to other professionals, experts tend to define their expertise quite narrowly, often deferring to another expert in the room whose expertise is more on-target, demurring that the question “isn’t really my field.” The same experts typically define their expertise much more broadly with a lay audience. As long as the audience knows even less about the question than they do, they often feel qualified to opine freely – in newspaper op-eds, for example.

Of course everybody is entitled to a non-expert opinion. The problem is that experts often offer “expert” opinions on topics well beyond their expertise.

I include myself in this indictment. Largely under the tutelage of my wife and colleague Jody Lanard M.D., I have acquired a fair amount of knowledge about a few areas of public health, most notably vaccination and emerging infectious diseases. When I write about these topics, I don’t always remember that I have learned just enough to get it wrong (sometimes) while sounding like I know what I’m talking about. And I don’t always remember to warn my audience that I’m an expert in risk communication, not public health.

On the other hand, with Jody’s help I do try hard to get the technical details right. Knowing how far outside my field I am, I try extra-hard. But I find endless technical misstatements in the writing of public health professionals about vaccination and emerging infectious diseases. They’re far enough from their actual expertise to make mistakes, but not far enough to feel they’d better check before they write.

(I have to add that some fields have formidable reputations that keep outsiders from imagining they’re experts. And some don’t. Nobody but a nuclear physicist dares to opine on nuclear physics. Virtually everybody feels entitled to an opinion on risk communication.)

This is a generic problem: Too often experts feel qualified to encroach on nearby technical fields they haven’t actually mastered, as if knowing more than their audience were sufficient.

A related and even bigger problem: Too often experts assume that their technical expertise gives them nontechnical expertise as well.

The distinction between technical expertise and policy expertise is fundamental to any discussion of expert disagreement. When technical experts disagree, it’s always worthwhile to ask yourself three questions – in this order:

1. Is the disagreement about a technical question or something else – a values or policy question?
2. Even if the disagreement is about a technical question, do the experts also disagree on related values and policy questions?
3. If they do, is their technical disagreement leading to their policy disagreement, or is it the other way round: Are their divergent technical opinions grounded in divergent values or policy judgments?

The third question is very difficult to answer with confidence, but it is nonetheless important to ask. When a technical expert’s technical opinions lead logically to certain policy positions, the expert’s policy positions deserve a kind of “quasi-expert” credibility. You’re still entitled to question the expert’s policy views even if you accept her technical opinions, but to do so you’d want to find some flaw in the expert’s reasoning. But often a technical expert’s “reasoning” flows in the other direction: The expert’s technical opinions are grounded in values or in policy preferences that aren’t part of the expert’s expertise. In that case you want to bear in mind that the expert’s policy views are not expert opinions at all – and even the expert’s technical opinions aren’t purely technical, since they’re affected or even determined by her values and policy preferences.

Of course technical expertise is a necessary condition for wise policy decisions; you can’t properly decide what to do about a problem if you don’t understand the problem. But it’s not a sufficient condition. The experts who understand the problem best aren’t necessarily the ones best qualified to decide what to do about it.

Policy decisions typically involve both technical and nontechnical questions. The classic example in the risk field:

  • “How safe is X?” is a question about technical facts, so subject matter experts deserve special credibility (especially if they all agree).
  • “How safe is safe enough?” is a question about values. All the relevant stakeholders are entitled to their opinions, and subject matter experts deserve no special credibility (even if they all agree).
  • “Should we do X?” is a question about policy, grounded partly in the answers to the previous two questions. Subject matter experts deserve special credibility vis-à-vis the “fact” aspects of the question, but not vis-à-vis its “values” aspects.

Consider the riskiness of a particular technology – a nuclear power plant, for example. In a seminal 1972 essay on “Science and Trans-Science,” Alvin M. Weinberg used this example:

  • What would happen if every control rod in a nuclear reactor failed at the same time? Weinberg pointed out that this is “a strictly scientific question which may be decided by the methods of science; and, in the case mentioned, the scientific facts are indisputable.” Unless other safety mechanisms intervened, there would be a catastrophe. “Thus to the experts public discussion of this strictly scientific issue could only cause confusion, since science already gives an unequivocal answer.”
  • Could this scenario actually happen? Weinberg said this question is “really unanswerable.” Every expert agrees the probability of simultaneous failure of all the control rods is “extremely small,” but “some will insist that the event is incredible, others that it is not.” This is what Weinberg labeled a “trans-scientific” question. It’s not just that experts disagree; experts can disagree on scientific questions too. The point is that there’s no “scientific” way to resolve the disagreement. “Here public discussion helps to remind us that science can say little about the matter, and that its resolution requires non-scientific mechanisms.”
  • Should we take this scenario seriously as a disadvantage of nuclear power? This is closely related to the previous question, and many would call it a trans-scientific question too. But as I read Weinberg, he limited trans-science to questions that take the form of factual, scientific questions but are actually unanswerable. This, on the other hand, is a values question, not scientific or even trans-scientific. It’s a variant on the age-old values question, how safe is safe enough? Experts can help us guesstimate the probability of a catastrophic nuclear power plant failure, even if they can’t reliably assess that probability. But they have zero special wisdom to offer on whether that level of probability, given that level of uncertainty, ought to be considered acceptable or unacceptable for that sort of catastrophe.
  • Should we support or oppose nuclear power? This is the basic policy question. It is the bottom line of lots of scientific questions, lots of trans-scientific questions, and lots of values questions. There are experts on both sides of this bottom-line question, of course. They deploy their expertise to justify their side – and they imagine or pretend that their expertise proves their side. But in fact there is no “expert” answer to this question. There’s not even much reason to believe the experts are on the side they’re on because of their expertise, rather than because of other factors: ideology, financial interest, group membership, etc.

In a 2003 article on “Dilemmas in Emergency Communication Policy,” I addressed the distinction between technical and nontechnical expertise, using “a weird chicken pox case that might or might not be smallpox” as my case in point. Then I segued to a second example, universal smallpox vaccination:

Experts are needed to assess many questions [about the weird chicken pox case]: the probability that it is smallpox; the magnitude of the ensuing epidemic if it is and no quarantine is called; the damage to be expected from the quarantine itself. But the underlying question of when to err on the side of caution and when to avoid overprotectiveness is a values question, especially when the answers to the technical questions are so uncertain. So why shouldn’t it be a public/political question?

Think about universal smallpox vaccination in these terms. The risk posed by a nationwide vaccination program is a technical question. So is the risk posed by failing to have such a program. Maybe it’s even a “technical” question how likely terrorists are to possess a usable smallpox weapon, though the expertise required has more to do with intelligence-gathering than with medicine, and the error bar around any assessment of this probability is huge. But deciding whether or not these various expert assessments justify a mass vaccination program (that is, deciding whether to endure the high-probability moderate-consequence risks of vaccinating or the low-probability huge-consequence risks of not vaccinating) sounds like a values decision to me – the sort of decision democracies leave to the political process.

Government regulatory agencies almost invariably conflate their scientific/technical judgments with their trans-science and values judgments – claiming their policy preferences flow directly from their technical expertise. Here’s a lovely example from my 2001 website column on “Sound Science”:

A government agency client recently brought me an interesting problem. The agency had been charged with setting a cleanup standard for copper around a century-old smelter. It started with existing data – or at least existing standards in other jurisdictions – about how much copper inside the body constituted a health threat. Then it did some work on how much copper was likely to be absorbed by residents of the neighborhood in question. Much to the agency’s relief, it found that the likely uptake was way below the hazardous level. Knowing that even a very stringent body burden standard would lead to virtually no cleanup requirement, it articulated just such a stringent standard.

Then came the problem. There had been a clerical error in transcribing the absorption data; the numbers were off by several orders of magnitude. And so it turned out that implementing the standard the agency had announced would require digging up half of the town. My client’s problem wasn’t whether to admit the error; the agency had already done that. The problem was how to disclose that now that the action implications of its new standard had been recalculated, the agency was abandoning that standard and substituting a much less conservative one.

Of course there is nothing unreasonable about deciding that it doesn’t make sense to excavate whole neighborhoods for the sake of a highly conservative health standard. But when the standard was first announced, nobody said it was based on practical criteria like how little it would cost to implement. The agency had simply said it was the right standard to protect people’s health. Suddenly the agency needed to say it was no longer the right standard, now that the cost of implementing it was clear.

I don’t know yet how much community outrage the revised standard will provoke. Maybe not too much – after all, people don’t really like seeing their gardens destroyed and their neighborhoods disrupted. If the outrage does get out of hand, the agency may well decide that outrage trumps cost and choose to stick with the original standard. No doubt that decision too will be explained as a scientific judgment – in the invariable alliteration, “sound science.”

Policies are always compounded of technical and nontechnical factors. But policymakers usually imply their decisions are purely technical, especially in a technically relevant field like risk management.

For another one of my favorite examples of this, see my 2014 Guestbook entry on “What to say when a chemical that’s outlawed in some countries is legal in yours.” I argued that risk regulators make use of three “buckets” in their policymaking: an “obviously needs to be regulated” bucket, an “obviously okay to leave unregulated” bucket, and a “tough call” bucket in the middle. Science helps regulators decide which bucket a particular risk belongs in. But once a risk is classified as a tough call, other factors control whether it is regulated strictly or laxly: activist fervor, public outrage, industry lobbying, politics, even random chance. The middle bucket is the one that generates the lion’s share of controversy, of course. Yet regulators seldom acknowledge that the middle bucket even exists. They prefer to pretend that every risk can be dichotomized as either serious or negligible based on science alone.

I have four closely related complaints about how experts address (or, rather, fail to address) the distinction between technical expertise and nontechnical opinion:

  • Experts elide from technical expertise to nontechnical opinion, mixing the two without making the distinction.
  • Experts pretend or imagine that their nontechnical opinions – their trans-scientific judgments, value judgments, and policy judgments – are grounded firmly in their technical expertise. Sometimes they explicitly make that claim; other times they merely imply it and let the audience assume that their technical expertise extends to their nontechnical opinions.
  • Experts pretend or imagine that their technical opinions are not grounded in any way, shape, or form in anything nontechnical. Sociologists of science know that values, policy preferences, ideology, affiliation, etc. all have some degree of impact on a technical expert’s technical opinions – an impact that experts seldom explicitly acknowledge and often explicitly deny.
  • Experts treat their nontechnical disagreements with other experts as if they were technical disagreements. They defend their values and policy preferences as “sound science,” and attack competing values and policy preferences as unsound science.

In my 2001 website column on “Sound Science,” I said this about the fourth complaint:

Of course unsound science does exist, and deserves to be exposed. There are marginal “experts” at both tails of the normal distribution, ready to claim either that vanishingly low concentrations of dimethylmeatloaf are the probable cause of all the cancers in the neighborhood or that terrifyingly high concentrations are probably good for you. Some so-called scientists are crackpots; others fall prey to the temptation of ideology (on the alarmist side) or money (on the reassuring side). But most scientists – even those whose views are influenced by ideology or money – have views that are within the range of scientific acceptability. In other words, they may be right.

Most risk controversies, moreover, are chiefly about values and interests, not science. Veterans of these sorts of controversies may recall the 1989 battle over Alar, a chemical that was sprayed on apples to hold them on the tree longer. When studies surfaced suggesting that Alar might have adverse health consequences, the U.S. Environmental Protection Agency launched a slow legal process to begin phasing it out of the food supply. The Natural Resources Defense Council, which had long advocated faster regulatory action on a wide range of pesticide-related issues, chose Alar as its poster child for regulatory reform – not because it was the most hazardous agricultural chemical around, but because it was a surefire media winner: Children consume most of the nation’s apples and apple juice. So EPA and NRDC got into a huge battle over how urgent it was to get the Alar off the apples.

NRDC and EPA did interpret the science differently. EPA’s estimate of the health risk of Alar was about an order of magnitude lower than NRDC’s estimate. Or maybe it was two orders of magnitude. I don’t remember, and it didn’t matter. I do remember asking an NRDC spokeswoman if the group would abandon its crusade to move faster on Alar if it discovered EPA was right on the numbers after all. No, she said. Then I asked an EPA spokesman if the agency would speed up to NRDC’s schedule if it accepted NRDC’s numbers. No, he said. Though the two organizations genuinely disagreed about the science, the disagreement was more the result of a policy difference than the cause of it. What they really disagreed about was how bad a risk needs to be to kick it out of the “routine” regulatory category and call it an emergency. If the Alar risk had been as bad as NRDC thought, EPA would still have considered it a routine problem; if it had been as mild as EPA thought, NRDC would still have considered it an emergency. Not to mention EPA’s stake in defending its past decisions, and NRDC’s stake in dramatizing the case for regulatory reform. Or perhaps EPA’s desire to go easy on the apple and chemical industries. Or perhaps NRDC’s need for a good fundraising issue. Lots of things were going on in this battle, and a scientific disagreement was far from the most important for either side.

I have focused here on technical experts who conflate their technical expertise with their nontechnical, non-expert opinions on questions of values and policy. But there’s another extremely common way technical experts go beyond their expertise: They apply their expertise to a specific situation without actually having mastered the facts of the situation. Experts don’t usually get their own fields wrong. But when trying to apply their expertise to a specific situation, they quite often get the facts of the situation wrong.

I know how vulnerable I am to this error. I pride myself on my ability to shoot from the hip as a risk communication consultant. A client or a correspondent on my website Guestbook asks me how I think thus-and-such a situation should be handled. I know next-to-nothing about the specifics of the situation. For me, the situation is a widget. Using my toolkit of generic risk communication principles and strategies as a template, I quickly come up with a list of recommendations.

But insofar as I have misunderstood the situation, my recommendations may be off-target.

In give-and-take with a client my misunderstanding is easily corrected. The client notices what I got wrong and sets me straight. It’s tougher in the Guestbook – which is why so many of my Guestbook responses are peppered with weasel phrases like “…if I understand what you’re saying….”

Factual errors are especially likely when experts volunteer their opinions on current events in which they themselves are not directly involved. Virtually any day’s collection of newspaper op-ed columns offers new examples. But I’ll settle for just one example, a classic one.

In April 2009, a devastating earthquake hit the Italian town of L’Aquila. In the days before the big quake, swarms of small quakes had frightened residents, leading many to sleep in their fields instead of their beds. To calm the residents, the local government brought in a committee of earthquake experts, who met for about an hour. After the nonlocal experts had left town, a local official told the media – falsely – that the experts had concluded that the small swarms were relieving energy from the earthquake fault, making a big quake less likely, not more. None of the experts publicly corrected this inaccurate over-reassurance.

The residents were indeed reassured. A few nights later when the big one came, 297 people were killed, mostly locals asleep in their beds. Many presumably would have lived if they had stayed out in the fields.

Six experts who had attended the meeting were charged with involuntary manslaughter, convicted, and sentenced to six-year prison terms. (The convictions of five of the six were later overturned.) The rationale for the indictments and convictions was that the six experts (and the official, who was also charged) had participated in “inaccurate, incomplete and contradictory” statements about the tremors, statements that were “falsely reassuring.” The judge said they had provided “an assessment of the risks that was incomplete, inept, unsuitable, and criminally mistaken.”

Both the prosecutor and the judge explicitly said that the experts were not charged or convicted for failing to predict the earthquake. As risk communication consultant David Ropeik made clear in a superb September 2011 article, they were accused of what amounted to criminally bad risk communication. That’s an unusual and debatable rationale for a manslaughter conviction. My wife and colleague Jody Lanard and I commented at the time that risk communication “sins” like overconfident over-reassurance are extremely common. “They are culpable, though we very much doubt they’re manslaughter.”

But it was simply false to allege that the experts were being sent to prison just because current science cannot predict earthquakes.

The world of scientists rose up as one to condemn the manslaughter charges – and, as one, the world of scientists claimed that the defendants were charged with failing to predict an earthquake. Alan Leshner, then-president of the American Association for the Advancement of Science, wrote an open letter to the president of Italy protesting the charges and explaining that there was no way the scientists could have predicted the quake.

According to Ropeik’s article, 5,000 scientists signed on to the letter. Apparently not one of those scientists bothered to learn what the actual charges were. They heard from a prominent American scientist (Leshner) that the defendants were charged with failing to predict an earthquake, and that was enough for them.

In this lengthy section I have discussed three ways experts routinely go beyond their expertise:

  • They offer “expert” opinions on questions close enough to their actual expertise that they feel entitled to be considered experts, but not close enough that they really understand the details.
  • They offer “expert” opinions that conflate values questions and policy questions with the technical questions about which they have real expertise. (This is the biggie, in my judgment.)
  • They offer “expert” opinions on how their subject matter expertise applies to specific current events, without first doing sufficient investigation of the facts of the situation on which they’re commenting.

Okay, but sometimes experts offer expert opinions that are well within their expertise. Then can we trust what they tell us? That’s where I want to go next.

Is there any such thing as an objective expert?

In my view the fact that experts disagree doesn’t necessarily mean that any of them is incompetent. If a scientific question is controversial and still unresolved, competent experts can disagree about the right answer. And if it’s a trans-scientific question or a values question or a policy question, then it is scientifically unanswerable – and of course competent experts can disagree about the right answer.

But I don’t feel the same way about bias. For the sake of simplicity, assume that all experts have equal access to the same data and equal ability to assess the data. So their disagreements can’t be attributed to differences in their knowledge or understanding. What then do we attribute their disagreements to?

In social science, “bias” is the name for any factor that systematically influences a distribution, making it diverge from randomness. In other words, the distribution of opinions among experts with equal knowledge is either random or it’s biased. Or it’s both, sort of. Consider my risk communication opinions. These opinions are undoubtedly influenced by where I went to college and graduate school. There was no risk communication bias in my process of deciding where to go to school, nor, obviously, in the schools’ process of deciding whether to admit me. Nonetheless, decades later my expert opinions about risk communication are surely biased to some extent by those decisions. In many fields – economics, for example – you can pretty reliably categorize experts into schools of thought based on where they happened to go to school.
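If it helps to see that statistical sense of “bias” concretely, here is a minimal sketch with made-up numbers (the schools, offsets, and noise level are purely hypothetical, not drawn from any real data): two groups of equally knowledgeable experts scatter randomly around different centers, and the gap between those centers – the systematic, non-random part – is the bias.

```python
# A purely illustrative sketch with hypothetical numbers: two "schools" of
# equally informed experts estimate the same risk. Everyone sees the same
# evidence (same random noise), but each school adds a small systematic
# offset -- that offset is "bias" in the statistical sense.
import random

random.seed(1)
TRUE_RISK = 5.0            # the right answer on an imaginary 10-point scale
NOISE = 0.8                # equal "knowledge": same random error for everyone
OFFSETS = {"School A": -0.7, "School B": +0.7}   # systematic, non-random influences

def expert_opinion(school):
    return TRUE_RISK + OFFSETS[school] + random.gauss(0, NOISE)

for school in OFFSETS:
    opinions = [expert_opinion(school) for _ in range(1000)]
    mean = sum(opinions) / len(opinions)
    print(f"{school}: mean opinion ≈ {mean:.2f} (true value {TRUE_RISK})")

# Within each school the opinions scatter randomly; between the schools the
# centers diverge. The scatter is noise; the gap is bias -- a systematic
# departure from randomness.
```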

Of course experts, at least firsthand experts, do assess the evidence in the process of reaching the conclusions they reach. But even when an expert has independently assessed the evidence, other factors are always at work.

Financial self-interest is the bias that gets the most attention, and the one that arouses the most hostility. Experts quite properly sell access to their opinions, but they’re not supposed to sell the opinions themselves. Nonetheless, there is ample evidence that experts are indeed influenced by their own financial self-interest, even when they genuinely think they’re not. Doctors, for example, famously imagine they’re immune to the blandishments of pharmaceutical salespeople, despite dozens of studies to the contrary.

Ideology is a second source of expert bias. It gets less attention and arouses less hostility than financial self-interest. This always infuriated my corporate clients. When environmental activists did a study, the media and the public tended to assume the study results were solid and unbiased. But when companies did a study, the results were widely dismissed as self-serving. I totally get it (though my clients usually didn’t) that the bias in a company-sponsored study presumably leads to risk underestimation, while the bias in an activist-sponsored study presumably leads to risk overestimation. Overestimating risk is safer than underestimating risk; being too cautious is a smaller problem than not being cautious enough. So the activists’ bias is a less dangerous bias. And any ideological bias is at least principled, grounded in something more than self-interest. So an ideological bias is more altruistic than a financial bias. But it’s still a bias.

I’m especially interested in sources of expert bias that usually fly under the radar. Friendship and peer pressure, for example, rarely get as much attention as money, or even as ideology. As I’ve already discussed, experts’ opinions are mostly secondhand, just like everybody else’s opinions. And just like everybody else’s opinions, experts’ opinions are greatly influenced by affiliation – that is, by the opinions of their friends and colleagues. (See for example this discussion of conflict-of-interest issues affecting experts who advised the World Health Organization on how to handle the 2009–10 swine flu pandemic.)

A very powerful source of bias that’s almost never discussed is the pressure to stay consistent with whatever point of view or school of thought an expert’s reputation is grounded in. The contention that outrage determines hazard perception, for example, was a linchpin of my career; it sent my children to college and will send my grandchildren to college. Other risk communication experts are free to decide that I’ve been wrong all along about the relationship between hazard and outrage. I, on the other hand, have a strong reputational incentive not to change my mind.

Part-consciously and part-unconsciously, all experts resist reaching conclusions that diverge from the conclusions they have reached in the past. Evidence that their prior conclusions were wrong tends to get missed. Even evidence that some particular case is different tends to get missed.

Reputational consistency may be the second-biggest bias in expert opinions. The biggest is simple consistency, irrespective of reputation. This is so fundamental it hardly ever gets called a bias at all. Experts are not blank slates. They do not come to a new task without baggage. The way they approach the new task will be a lot like the way they have approached similar tasks in the past. The conclusions they draw this time will be a lot like the conclusions they drew the last few hundred times. Experts may disagree fervently with what a different expert has to say, but they are profoundly unlikely to disagree fervently with what they themselves said last week.

It’s worth pondering why experts give pretty much the same answer week after week. They’re not starting fresh each time and just happening to come out in much the same place. They’re not reconsidering their generic opinions every time they’re asked to give an opinion on something specific. They’re building on what they have thought, said, and done before.

Is consistency a bias and therefore a threat to objectivity? It depends on what we mean by bias and objectivity. But surely consistency is a departure from neutrality. A blank slate is neutral. An expert’s mind is filled with prior intellectual, emotional, and experiential content that necessarily colors how that expert approaches a new task.

I’m not going to attempt a complete list of factors that influence the opinions of experts. My point is simply that these factors exist. I’ll stipulate that most experts are honorable people, trying to apply their expertise to the evidence at hand – that is, trying to decide what they believe based on what they know. But a great deal of what they believe is influenced (biased) by other factors. That’s not a criticism. It’s an incontrovertible fact. (At least that’s my expert opinion.) Expert opinions are inescapably influenced by financial self-interest, ideology, friendship, peer pressure, reputational consistency, simple consistency, and a panoply of other factors.

For years I have advised clients that there is no such thing as a truly neutral expert. Neutrality is easy when you’re ignorant. I know virtually nothing about Bolivian foreign policy toward Venezuela. I still bring a lifetime of accumulated knowledge, feelings, experiences, opinions, values, and biases to the topic. These preexisting factors would come into play as I climbed the learning curve about Bolivian-Venezuelan relations. But especially at the start, their influence would be minor. An expert on Bolivian-Venezuelan relations, on the other hand, has lots of relevant knowledge, feelings and experiences, leading to firmly held opinions, values, and biases. Compared to me, the expert is much better informed and much less neutral.

On the topic of how to do effective risk communication, on the other hand, the Bolivian-Venezuelan relations expert would be ignorant and neutral. I am a well-informed and far-from-neutral expert on that topic.

I’m not claiming that experts are dishonest. Rather, I am claiming that expertise is intrinsically not neutral. By the time you know enough to be considered an expert, you have opinions, values, and biases. You may be associated with a particular approach or school of thought within your field, which you diligently bring to bear with regard to any specific situation you are called upon to assess.

At the very least, you have already staked out your preferred methodologies; you know which aspects of a problem you tend to consider of paramount importance and which you tend to treat as secondary. And an expert’s choice of methods inevitably affects what that expert sees and what she misses. As the aphorism has it, if you’re a carpenter whose preferred tool is a hammer, everything looks like a nail.

Even if you have no allegiances or constituencies to support, no financial or ideological biases, you have an approach. You may be completely neutral vis-à-vis the situation you’re being asked to assess, but you are nothing like neutral vis-à-vis your views on how to assess that situation. Presumably you do your honest best to call them as you see them – but you see them through the lens of your prior knowledge, feelings, and experiences; your opinions, values, and biases; your approach as an expert.

How should organizations pick experts?

So if you want to hire someone to opine on a controversial topic, whom should you hire?

If you’re neutral and looking for an expert who’s also neutral, you’re basically out of luck, since neutral experts don’t exist. If you’re neutral, in fact, you’re especially vulnerable to the biases of the expert you hire, because you’re probably pretty ignorant about what sorts of biases are most likely to affect how different experts approach the question you want answered.

In my risk communication consulting, for example, I divided my clients into two categories: the ones who particularly sought me out because they’d heard something they liked about my approach; and the ones who just wanted a generic risk communication consultant and happened to wind up with me. I often tried to brief clients in the second category on how my advice might differ from what they’d have heard from a different consultant. As a rule they didn’t pay much attention to my explanations of these differences. They wanted risk communication advice; they were getting risk communication advice. Why complicate the transaction, they seemed to feel, by exploring the ways in which the advice they were getting might be idiosyncratic?

More often than not, organizations in the market for expert opinions aren’t neutral. Assume you have a stake in the outcome of some controversy. For whatever reason (financial, ideological, etc.), you want X to be true, not Y. You’re hiring someone to tell you – and perhaps others – which one she thinks is true. You have seven choices:

The choices that favor X, your side:

1.
Hire an expert whose preexisting views make her virtually certain to conclude that you’re right, that X is the truth and not Y, independent of the facts of the case.
2.
Hire an expert whose preexisting views mean she will probably decide close cases in favor of X over Y, but will come down (however reluctantly) on the Y side if the facts clearly favor Y.

The choices that favor Y, the other side:

3.
Hire an expert whose preexisting views make her virtually certain to conclude that you’re wrong, that Y is the truth and not X, independent of the facts of the case.
4.
Hire an expert whose preexisting views mean she will probably decide close cases in favor of Y over X, but will come down (however reluctantly) on the X side if the facts clearly favor X.

The choices that are in the middle:

5.
Hire an expert whose conclusions are for sale, who is virtually certain to shrug off her preexisting views and conclude that whoever hired her is right, independent of the facts of the case.
6.
Hire an expert whose conclusions are partly for sale, who will probably decide close cases in favor of whoever hired her, but will come down (however reluctantly) on the other side if the facts clearly favor the other side.
7.
Hire a neutral but ignorant (neutral because ignorant) non-expert, who is influenced neither by her preexisting views nor by the preferences of her client.

So whom should you hire? The answer depends partly on why you’re looking for an outside opinion in the first place. Are you signing up expert witnesses to testify in your behalf at a regulatory proceeding? Are you searching for confidential advice before making a decision whether to cancel a project or move forward on it? Are you trying to find someone critics will consider credible, maybe even someone you can hire to facilitate a community advisory panel or to give confidential advice to your critics through a technical assistance grant?

For all these purposes, my clients understandably felt safest opting for an expert in Group #1 or #5, or at least #2 or #6 – an expert whose conclusions were likely to work in my client’s favor. Group #1 is experts who already believe deeply in your position, true believers. Group #5 is experts who are willing to support your position for a price, mercenaries. Corporations mostly deploy expert mercenaries (though a high percentage of them also believe in the cause, or perhaps come to believe in it to resolve their own cognitive dissonance). Activist groups usually can’t afford mercenaries, so they look for true believers, experts who are willing to volunteer their services out of conviction.

On the whole, I think true believers diverge more than mercenaries from the gold standard of total objectivity. It’s not that ideology is necessarily a stronger bias than profit; it’s more that people who are sure they’re on the virtuous side of a controversy feel better about omitting or distorting a few measly facts than people who are in it for the money. And activist exaggeration is less risky than corporate exaggeration; the media and the public are less likely to notice and less likely to object.

Most experts of both types insist on telling the truth as they see it. But only a few insist on telling the whole truth as they see it. And of course “the truth as they see it” isn’t the same as The Truth.

Both true believers and mercenaries vary in how one-sided they are willing to let themselves get. That’s the difference between #1 and #2, between #3 and #4, and between #5 and #6. But to one extent or another, all experts are biased, never perfectly objective. If it’s objectivity you want, hire a consultant in Group #7: neutral but ignorant.

I routinely advised clients to choose their experts from Group #4: predisposed to be on the other side, but willing to concede the rightness of my client’s position when that’s what the evidence showed. “If the facts are clear,” I would argue, “and you’re confident that any honest expert will conclude that you’re right, pick an honest expert whose predilections run against you. When that sort of expert says you’re right, it’s an admission against interest, which makes it far more credible than the words of an expert who was in your camp from the get-go.”

The distinction between an expert who’s obviously on your side and an expert who reluctantly acknowledges that you’re right this time may not matter much vis-à-vis a low-controversy question. If people don’t care one way or the other, they’ll probably accept the word of any expert. But when the audience cares, when we move from “public” relations to “stakeholder” relations, expert credibility becomes a crucial variable. And the most credible expert to a skeptical or hostile stakeholder is the expert who will scrutinize your evidence critically, and if it’s as good as you think it is, will give you an unenthusiastic okay, obviously wishing she didn’t think you were right. “This is a wonderful company and everything it does is safe” is a worthless endorsement. “This company has a terrible record, but what it’s doing this time is safe” is worth its weight in gold.

When I couldn’t talk a client into an expert from Group #4, I sometimes compromised on #2: an expert who leans your way but not too reliably. But when the goal is an expert whom skeptics and maybe even opponents will find credible, #4 is best.

And #1 is worst. Whatever the field, an expert who always comes down on the same side, independent of the facts in the case at hand, has zero credibility in the minds of people who are leaning the other way. (At least an expert in Group #5 isn’t always on the same side; it takes a while to notice that her position depends purely on who hired her this time.) When a research organization says that a particular government agency should be privatized, ask whether it has ever concluded that some other government agency shouldn’t be privatized. When a sexual harassment counselor says that a particular complaint is justified, ask how often she has encountered a complaint she thought unjustified. And if you want meaningful support for your position that your factory’s emissions are harmless, get it from an expert who has a record of finding some emissions harmful.

I should add that there’s an additional group of experts you probably can’t hire: experts who don’t want to get enmeshed in controversy. They have a point. Forty years of research, starting with Rae Goodell’s 1977 book The Visible Scientists, has shown that experts lose stature with their peers when they let themselves get caught up in public controversies. The search for an expert who is well-informed but remains open-minded and comparatively neutral, free of strong previously stated opinions, is near-futile in the first place. The search for someone who meets these specifications and is now willing to go public on one side of a controversy is doomed.

How should expert disagreement affect confidence? How does it affect confidence?

Logically, the existence of epistemic peers with different opinions should diminish an expert’s confidence in her own opinion – and the more disagreement the expert encounters, the more tentative she should feel.

There’s a considerable philosophical literature on this issue, the epistemology of disagreement. The mainstream philosophical position – the majority view of experts on expert disagreement – is that disagreement should weaken confidence. But this view, known as “conciliationism,” does have some opponents, philosophers who argue that there are conditions under which it makes sense for an expert’s confidence to be undiminished by other experts’ disagreement. (You have to wonder whether this opposition has at all undermined the confidence of the conciliationists, as conciliationism would require.)

Despite the philosophical near-consensus that expert disagreement should diminish expert confidence, psychologists know that disagreement frequently has the opposite effect. Especially when there’s an audience (or a judge) to be won over, opposition tends to sharpen and harden people’s opinions. And experts are just people.

In the face of controversy, most experts steadfastly ignore the other side’s arguments and just assert their own more aggressively. But some study up on the other side’s arguments and rehearse how to rebut them most effectively. One sign of a hot controversy, in fact, is the proliferation of how-to articles on rebutting the other side. Climate change, vaccination, and the existence of God are examples of issues where both sides have published extensive rebuttal manuals. Another sign: debate among mainstream supporters over whether the minority view “deserves” to be rebutted or should be ignored instead, lest the rebuttal give it more attention and more dignity than is warranted.

Becoming more tentative is a less common expert response to expert opposition. It happens; it’s a niche in the ecosystem, and all niches are filled. There are studies showing that scientists – climate scientists, for example – sometimes tone down their conclusions or at least their rhetoric in response to contrarians. But it’s not a popular niche. Experts under attack are likelier to step up their rhetoric than to tone it down. Even experts who eventually change their minds tend to sound pretty sure of their prior opinions right up to the moment when they suddenly switch sides.

If the question isn’t especially loaded, if the expert isn’t especially invested in her opinion, if the expert who disagrees has new data, and if the alternative opinion is expressed courteously and respectfully, then the response might be, “Hmmm, you could be right, I may have to reconsider….” But conditions are rarely that favorable. Once experts are divided into warring camps on a hotly contentious issue, further debate leads only to further polarization.

A more complicated question is how expert disagreement affects the confidence of a non-expert audience.

Let’s start with the well-established and sensible finding that expert confidence inspires audience confidence. That’s obviously true when all the experts agree (or there’s only one expert around) and the question isn’t especially controversial in the first place. If an expert tells you a particular conclusion is the firm consensus of her field, all things being equal you believe her. If she tells you the field considers the conclusion to be still tentative and uncertain, you believe that instead.

Of course you’d rather the conclusion were firm. An expert consensus that X is true is obviously preferable to an expert consensus that X may be true but we’re not sure. Expert uncertainty is a drag.

But expert uncertainty is far less upsetting than expert disagreement. Here’s a summary of the relevant research (I’ve removed the links to the studies cited):

Perhaps the most inhibiting type of uncertainty arises from conflicts or apparent disagreements among scientists. Smithson (1999) demonstrated that conflicting estimates from experts generate more severe doubts in participants’ minds than agreed but imprecise estimates. Conflicting estimates also tend to decrease trust in the experts. Cabantous (2007) replicated these findings with a sample of insurers, who assigned higher premiums to risks for which the risk information was conflicting than to risks where that information was consensual but uncertain (see also Cabantous et al., 2011). Any appearance of expert disagreement in public debate is therefore likely to undermine people's perception of the underlying science, even if an issue is considered consensual within the scientific community.

The implications here are worth underlining. Expert certainty inspires more audience confidence than expert uncertainty, even consensus expert uncertainty (the experts all agree that they’re not sure). But expert uncertainty inspires more audience confidence than expert disagreement (the experts are all sure but they don’t agree).

Imagine a 10-point risk assessment scale, where 1 equals very safe and 10 equals very dangerous. Here are four possibilities, arrayed from most confidence-inspiring to least confidence-inspiring:

1.
Unanimous certainty – the experts all say “exactly 4 for sure” or they all say “exactly 7 for sure.”
2.
Unanimous uncertainty – the experts all say “probably around 4” or they all say “probably around 7.”
3.
Disputed uncertainty – some experts say “probably around 4” while other experts say “probably around 7.”
4.
Disputed certainty – some experts say “exactly 4 for sure” while other experts say “exactly 7 for sure.”

If you can avoid expert disagreement, expert confidence inspires audience confidence. But if you have to endure expert disagreement, it’s better to acknowledge it and sound tentative because of it than to stick to one confident answer while another expert sounds equally confident about a completely different answer.

As a consultant, I often urged clients who were trying to calm outraged stakeholders to convert expert disagreement into mere uncertainty – and thereby diminish the outrage.

They could do that unilaterally. Compare these two scenarios:

Scenario One: Expert Disagreement
Activists confidently say the risk posed by the company’s dimethylmeatloaf emissions is pretty high, 7 on a 10-point scale. Company officials just as confidently say the risk is pretty low, 4 on the scale.

Scenario Two: Uncertainty
Activists confidently say the risk posed by the company’s dimethylmeatloaf emissions is pretty high, 7 on a 10-point scale. Company officials say the risk is hard to measure. They think it’s around 4, but it’s possible the activists are right and it’s as high as 7. For sure it’s somewhere in that range, 4 to 7.

Scenario Two is far from ideal. A risk that the experts say could be anywhere between 4 and 7 arouses considerable outrage. But Scenario One is worse. (Worse from the company’s point of view. It’s better from the activists’ point of view.) A risk that some experts confidently put at 4 and other experts confidently put at 7 arouses a whole lot more outrage.

That’s if the “real answer” isn’t knowable. If the answer will soon be known, audience confidence in the experts who got it right will go up, while confidence in the experts who got it wrong will go down – all the more so (in both cases) if the experts were confident. People who routinely make predictions that are going to be proven true or false soon, such as weather forecasters, can’t afford overconfidence.
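To see why in miniature, here is a sketch with made-up numbers, using the Brier score – a standard accuracy measure for probability forecasts, not anything taken from this column – to show how a confidently wrong prediction is punished much more heavily than a hedged one once the answers come in.

```python
# Hypothetical example: ten days of rain forecasts, with rain actually falling
# on 7 of the 10 days. The Brier score (squared error between the forecast
# probability and the 0/1 outcome) rewards calibration over bravado.
def brier(forecast_prob, outcome):
    return (forecast_prob - outcome) ** 2

outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]       # 1 = it rained, 0 = it didn't

overconfident = [1.0] * 10    # "It will definitely rain" every day
calibrated    = [0.7] * 10    # "70% chance of rain" every day

for label, forecasts in [("overconfident", overconfident), ("calibrated", calibrated)]:
    score = sum(brier(p, o) for p, o in zip(forecasts, outcomes)) / len(outcomes)
    print(f"{label}: mean Brier score = {score:.2f} (lower is better)")

# overconfident ≈ 0.30, calibrated ≈ 0.21: once outcomes are known, the hedged
# forecaster comes out ahead -- which is why forecasters whose predictions are
# quickly checked can't afford overconfidence.
```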

In sum: The two paths to audience confidence before the answer is known or if the answer is permanently unknowable are expert consensus and acknowledged uncertainty. An unchallenged expert is more convincing if she sounds confident than if she sounds uncertain. But if you can’t credibly claim consensus – you can’t wrestle the dissenters into line or rule them out of the discussion or get them dismissed as denialists – then you’re best off acknowledging uncertainty.

And if you’ve chosen the route of acknowledging uncertainty, you probably need to acknowledge it emphatically. Precisely because the public prefers expert certainty, people miss subtle or quiet acknowledgments of expert uncertainty. See my 2011 column with Jody Lanard on “Explaining and Proclaiming Uncertainty,” which details an example of a food poisoning outbreak in Germany. As the column title suggests, loudly “proclaiming” uncertainty works better than quietly acknowledging it.

All this is compatible with research dating back to the 1940s on one-sided versus two-sided arguments. In a nutshell: When an audience is uninterested and uninformed, and likely to remain uninterested and uninformed, one-sided arguments are more persuasive than two-sided arguments. But when an audience is aware of information that supports the other side, or is likely to acquire such information later, then two-sided arguments are more persuasive.

On the handful of occasions when clients have wanted me to testify as an expert witness (usually on what impact the client’s risk communication would have had on the audience), I’ve tried to talk them into letting me sound tentative rather than confident. The jury or hearing examiner would be paying close and even skeptical attention, I pointed out. And the other side might well have risk communication experts of its own. So I’d wind up more convincing if I acknowledged the other side’s solid arguments. If I think my client is 70% right, not 100% right, that’s what I should say. If I’m 70% sure of my position, not 100% sure, that’s what I should say.

Quite often I ended up talking myself out of the gig. The conventional wisdom among most trial attorneys is that expert witnesses should be encouraged to be as confidently one-sided as they are willing to be. If the other side’s attorney asks the right question on cross-examination, okay, maybe the witness will feel obligated to admit a weakness in her case (depending on which of my seven groups she falls into). But at least on direct examination, the conventional wisdom says, expert witnesses should make the strongest, most one-sided case they can for their side.

The weight of the evidence says this strategy is unwise. It results in dueling Ph.D.s on the witness stand, maximizing the jury’s or hearing examiner’s skepticism.

Years ago I read a study that captured perfectly what I’m saying about expert disagreement versus expert uncertainty. I haven’t been able to find the study, so you shouldn’t entirely trust my summary. But here’s how I summarized it in my 1993 book, Responding to Community Outrage:

A study at Carnegie Mellon University used EMFs from power transmission lines as a case in point. One group read a hypothetical news story in which all the experts quoted said power transmission lines were pretty dangerous, say 7 on an imaginary 10-point scale. The second group read a story in which half the experts said the risk was 7 and half the experts were much less alarmed and put the risk at, say, 3 on the scale. The second story frightened people more than the first. If you think about that for a minute, it’s not so strange. If the experts all agree, they probably will work together to solve the problem; if they disagree, there is going to be a deadlock and no action. Moreover, if all the experts say the risk is 7, it probably is 7. But if half the experts say it is 7 and half the experts say it is 3, obviously they don’t know what they’re doing, and it might be 14.

Everything I’ve said so far about expert uncertainty versus expert disagreement assumes that the audience wants to know the answer, and therefore likes consensus expert certainty the most and expert disagreement the least, with consensus uncertainty in the middle.

But sometimes an audience has the opposite priority. Sometimes an audience prefers uncertainty to certainty … and even prefers expert disagreement to uncertainty.

One obvious example is an audience that’s fervently on one side or the other. The worst outcome for an advocate is an expert consensus that the other side is right. It’s better for the cause if the experts aren’t sure, and better still if they’re at each other’s throats. Consensus expert certainty adds value only if it favors your position.

Here’s a more complex example. Suppose the question at hand is whether X is a carcinogen. And consider two different sorts of X: dimethylmeatloaf emitted by a factory in your neighborhood versus caffeine in the coffee you love to drink.

The possibility that a nearby factory may be emitting a carcinogen is an obvious reason to be outraged. People in the neighborhood naturally want to know whether dimethylmeatloaf is carcinogenic, how carcinogenic it is, how much of it the factory emits, how much cancer risk results from that amount of dimethylmeatloaf, etc. Expert certainty about the answers to these questions would be best. Uncertain answers would arouse more outrage than certain answers (even scary ones). And the most outrage would result from expert disagreement – from experts claiming to be certain about radically different answers. “How dare you emit this stuff when you don’t even know whether it’s dangerous or not!”

The carcinogenicity of dimethylmeatloaf (like the risk of transmission line electromagnetic fields) is exactly the sort of issue where consensus expert certainty suits the audience best, disputed certainty is worst, and acknowledged uncertainty is in the middle.

Now think about the carcinogenicity of coffee. For some coffee-drinkers, and especially for non-drinkers, this is the same sort of issue as factory dimethylmeatloaf carcinogenicity. They want to know the answers, so expert uncertainty is bad and expert disagreement is worse. But for most people, especially coffee-drinkers, the possible risks of coffee arouse a lot less outrage than the possible risks of factory emissions. Unlike factory risks, coffee risks are voluntary; they’re familiar; they’re natural; etc. Above all, coffee-drinking has benefits, so whatever risks it entails are experienced as fair. Coffee drinkers like their coffee and would hate to learn that they shouldn’t be drinking it. In these and other ways, coffee is unlike dimethylmeatloaf.

As a result of these differences, plenty of coffee drinkers – though not all – wind up with reversed priorities. If there’s expert consensus that coffee is carcinogenic, we’ll all have to decide what to do: accept the risk or cut back on our coffee consumption or even eliminate coffee from our lives altogether. If the experts aren’t sure, on the other hand, we can probably talk ourselves into suspending judgment (and continuing to enjoy our coffee) until they make up their minds. And if the experts disagree, we can ally ourselves with the ones who deny there’s any risk. We can even get irritated at the ones who say the risk is real: “How dare you try to scare us into quitting when you don’t even know whether it’s dangerous or not!”

If I’m outraged about factory dimethylmeatloaf emissions, expert disagreement exacerbates my outrage. If I’m not outraged about caffeine in coffee – in fact, I’m outraged at the warnings – expert disagreement justifies and sustains my decision not to worry … and not to quit.

That’s why the tobacco industry promoted uncertainty and expert disagreement about cigarettes – so smokers would decide they didn’t have to quit till the experts were sure. And it’s why the fossil fuel industry continues to promote uncertainty and expert disagreement about climate change – so those of us who like our petroleum-based lifestyles can feel okay about not reducing our carbon footprints. As we discussed earlier, any interest group that opposes government policies to reduce greenhouse gas emissions has a strong incentive to emphasize, foster, or even fabricate expert disagreement about climate change. (An interest group pushing greenhouse gas regulation, on the other hand, has an incentive to emphasize, foster, or even fabricate expert consensus.)

Despite these contrary examples, the more usual case is when the audience seeks certainty, resists uncertainty, and absolutely hates expert disagreement.

Summary

This is a difficult column to summarize. It’s not an argument leading to a single conclusion, but rather a compilation of everything you (might) want to know about expert disagreement. But I can at least list what I see as the nine major takeaways:

1.
Both the literature on expert disagreement and the views of the general public suggest that expert disagreement is a signal that at least one of the disagreeing experts must be either incompetent or biased by financial self-interest or ideology. Expert incompetence does happen and expert bias is inevitable. Nonetheless, expert disagreement signals that something is amiss only when it disrupts an expert consensus – in which case either the dissenter is wrong or the consensus needs to be reconsidered and a paradigm shift may be in the offing. Expert disagreement about ongoing controversies is predictable, ordinary, and appropriate, and should not be considered a problem.
2.
Because non-experts are so often unable to assess evidence on their own, they necessarily rely heavily on expert opinion. So judging the extent of expert disagreement becomes important. But relying on a “head count” of the number of experts on each side of a question is a sensible way to proceed only insofar as it’s not what the experts themselves are doing – that is, only insofar as the experts are independently assessing the evidence rather than simply joining in the majority opinion of their colleagues; and only insofar as the experts remain open to disconfirming evidence rather than codifying the majority view into dogma. In addition, head counts miss emerging paradigm shifts, so it is often wiser to try to assess the trend line: whether a competing view is gaining expert adherents.
3.
Because non-experts rely heavily on expert majority opinion, advocates on all sides of all controversies have an incentive to misrepresent the actual distribution of expert opinion. If expert opinion is split, there’s an advantage to looking like your side is the majority. If your side really is the majority, there’s an advantage to overstating the extent of the expert consensus. And if your side is the minority, there’s an advantage to overstating the extent of expert disagreement. For individual experts, meanwhile, there are incentives (such as peer pressure) to side with the majority and hide or even suppress any doubts. But there are also countervailing incentives to dissent (such as a desire to forge an individual reputation). Moreover, the concept “expert” is itself more complicated than it may seem. Experts in different fields may have quite different opinions on a question that crosses disciplinary boundaries. Or highly specialized experts may see the question differently than generalists. All these factors make it difficult for non-experts to conduct their expert head count.
4.
Most people the public thinks of as experts have little firsthand knowledge of the question on which their expert opinion is sought. They may not have read the key studies bearing on the question, or even summaries of the studies. What makes these experts “experts” is that they know what the real experts (the handful of specialists who actually did the key studies or at least have reviewed them carefully) think about the question. Expert knowledge – like everybody’s knowledge – is mostly secondhand. Secondhand expertise is sufficient for settled questions. But it is problematic when a paradigm shift is emerging; secondhand experts typically get stuck in the old paradigm even after the real experts have reluctantly changed their minds. And secondhand expertise is problematic when there is no true expert consensus; secondhand experts are likely to interpret as consensus the school of thought that they have been exposed to and inculcated in.
5.
Experts are not the most reliable judges of the limits of their own expertise. They exceed those limits in at least three ways:

  • They offer “expert” opinions on questions close enough to their actual expertise that they feel entitled to be considered experts, but not close enough that they really understand the details.
  • They offer “expert” opinions that conflate values questions and policy questions with the technical questions about which they have real expertise.
  • They offer “expert” opinions on how their subject matter expertise applies to specific current events, without first doing sufficient investigation of the facts of the situation on which they’re commenting.
6.
The fact that experts so often conflate technical questions with values and policy questions is especially problematic. Experts elide from technical expertise to nontechnical opinion, mixing the two without making the distinction. They pretend or imagine that their nontechnical opinions are grounded firmly in their technical expertise. They pretend or imagine that their technical opinions are not grounded, even a little, in anything nontechnical – in values, policy preferences, ideology, affiliation, etc. And they treat their nontechnical disagreements with other experts as if they were technical disagreements, defending their values and policy preferences as “sound science” and attacking competing values and policy preferences as unsound science.
7.
Even when an expert has independently assessed the evidence regarding a question, other factors are always at work, inevitably affecting what conclusions that particular expert reaches. Calling these factors “biases” isn’t a criticism; it simply reflects the inescapable truth that experts are not blank slates. By the time you know enough to be considered an expert, you view every new question through the lens of your prior knowledge, feelings, and experiences; your opinions, values, and biases; your approach as an expert. At the very least, you have already staked out your preferred methodologies. You may be completely neutral vis-à-vis the new situation you’re being asked to assess, but you are nothing like neutral vis-à-vis your views on how to assess that situation. Some of the factors that bias expert opinions are widely seen as biases, such as financial self-interest and ideology. Others typically fly under the radar: friendship, peer pressure, reputational consistency, simple consistency, etc.
8.
Since there is no such thing as a perfectly neutral, objective, unbiased expert, organizations seeking expert advice should choose which expert to hire with that expert’s bias in mind. If you’re neutral yourself, knowing the bias of “your” expert can help you interpret the advice you get. If you have a bias of your own, you have six basic options:

1.
Choose an expert who’s reliably on your side regardless of the evidence.
2.
Choose an expert who leans to your side unless the evidence clearly favors the other side.
3.
Choose an expert who’s reliably on the other side regardless of the evidence.
4.
Choose an expert who leans to the other side unless the evidence clearly favors your side.
5.
Choose an expert who reliably supports whichever side is paying the bill.
6.
Choose an expert who leans to the side that’s paying the bill unless the evidence clearly favors the other side.
Corporations usually opt for #5 or at least #6; they hire a mercenary. Activist groups usually opt for #1 or at least #2; they hire a true believer. When you’re confident you’re right and you need an expert who will be credible to skeptical or even hostile stakeholders, the wisest choice is #4: Hire an expert who will reluctantly say you’re right if that’s what the evidence shows.
9.
Logically, the existence of expert disagreement should diminish all experts’ confidence in their opinions. But disagreement frequently has the opposite effect: Opposition tends to sharpen and harden experts’ opinions. The effect of expert disagreement on audience confidence is more complicated. As a rule, expert certainty inspires more audience confidence than expert uncertainty, even consensus expert uncertainty (the experts all agree that they’re not sure). But expert uncertainty inspires more audience confidence than expert disagreement (the experts are all sure but they don’t agree). An unchallenged expert is more convincing if she sounds confident than if she sounds uncertain. But if the audience is also hearing from the other side, then you’re best off acknowledging uncertainty – better yet, proclaiming uncertainty. But sometimes an audience has the opposite priority. If you love your morning coffee, for example, you have reason to resist an expert’s claim that caffeine is carcinogenic; you’re happier if the spoilsport expert is uncertain, and happier still if other experts disagree, thereby justifying your preference not to quit.

Copyright © 2017 by Peter M. Sandman
