Post-Truth and the Rhetoric of “Following the Science”

ABSTRACT Populists are often cast as deniers of rationality and creators of a climate of "post-truth" who value tribe over truth and the rigors of science. Their critics claim the authority of rationality and empirical facts. Yet the critics no less than the populists enable an environment of spurious claims and defective argumentation. This is especially true in the realm of science. An important case study is the account of scientific trust offered by a leading public intellectual and historian of science, Naomi Oreskes, and the misapplication of that theory during the coronavirus pandemic.

Populists, it is said, do not respect facts or experts and have a bad habit of spreading disinformation. Some critics charge that the central epistemic doctrine of a wide swath of citizens is a "generalized defiance towards established knowledge" (Nichols 2017, xiii).
Elites such as credentialed experts, journalists, and politicians have long sought to undermine populist credibility (Frank 2020). Contemporary critiques of populists often depict democracy as plagued by rising disinformation and conspiratorial thinking. In this narrative, a golden age of democracy and epistemology, when citizens purportedly accepted truth and shared facts, has been overtaken by a herd mentality, in which "creedal conflict and political conflict [have] become indistinguishable" (Rauch 2021).
This account is widespread, but is it true? In part, this is an empirical claim that can be, and has been, empirically challenged. Scholars have cast increasing doubt on whether our age is uniquely defined by misinformation and conspiratorial belief (Hannon ; Nyhan ; Uscinski et al. ). If these doubts are warranted, they would complicate calls for a return to what Rauch calls the "reality-based community" or what Applebaum (2020) describes as "a world where we can say what we think with confidence, where rational debate is possible, where knowledge and expertise are respected, where borders can be crossed with ease." Its empirical status aside, a more central worry about the narrative of a golden age of democratic trust in expertise and "facts," since banished by populists, is that it places the critics of populists in a position of epistemic privilege that may obscure their own epistemic failings. The hypothesis we will examine is that elites who criticize populists may be just as responsible for the low quality of civic discourse as the populists they castigate. The rhetorical moves made in their critiques may contribute to the undermining of democratic discourse and the search for truth. By rejecting the possibility that populists disagree in good faith, these critiques rest instead on questionable assumptions about the nefarious motives behind dissent. To those on the receiving end of these tactics, they may appear as little more than attempts to avoid engaging in substantive argument. We shall argue that political discourse is disabled by the narrative that we are in a "post-truth" moment. When good-faith disagreement is mislabeled as "misinformation," it suppresses the political and intellectual processes that exist precisely to work through genuine disagreement.
In what follows we examine the epistemology of populists' critics in the context of scientific claims. Policy debates commonly appeal to claims of expertise, and politicians and civil servants frequently employ experts to bolster claims about truth and knowledge in order to underwrite recommendations for public policy initiatives. The expert shores up the authority of the claim to knowledge, licensing a particular policy agenda. When populists express doubts about many of the claims politicians and their experts make, their doubts are roundly dismissed with catchphrases like "follow the science." During the pandemic, there was widespread fretting about skepticism, dissent, and "denialism." Two scholars even called it "the first post-truth pandemic" (Parmet and Paul 2020). Many turned for enlightenment and support to Merchants of Doubt (2010) by historians of science Naomi Oreskes and Erik Conway, which argues that scientists and corporations were complicit in falsely sowing doubt about nearly universally accepted research on climate change and tobacco. For reasons we will discuss, this research seems inapt when applied to a novel pandemic. Yet its applicability was often assumed: the disparaging label "merchants of doubt" was applied aggressively to scientists who raised questions about the origin of Covid, the efficacy of particular treatments, and such interventions as school closures and lockdowns. Hundreds of the works citing Merchants discuss Covid-19, according to Google Scholar. This is not a harmless instance of name-calling. The conventional wisdom on a variety of issues flip-flopped over the course of the pandemic, as might well be expected, and some of the ideas that were once suppressed are now seen not only as valid but as highly plausible or even correct.
We will argue that the application of disparaging labels like "merchants of doubt" to disagreement stems, in part, from a questionable philosophy of science, illustrated in another work by Naomi Oreskes, which develops an account of scientific trust grounded in the role played by consensus and community. Oreskes' work shares much in common with other elite-centric theories of epistemology, like Jonathan Rauch's Constitution of Knowledge (2021), which hold that truth is best produced by communities of credentialed experts. These accounts allow for scientific consensus only by excluding particular views as out-of-bounds.
We focus on Oreskes' work because it is both influential and emblematic of a widespread attitude towards scientific and populist dissent. We first describe her account of the nature of scientific practice. We then ask whether her theory, taken on its own terms, can be successfully deployed to delegitimate dissenting scientists (and non-scientists). We conclude that it cannot. First, her theory rests on gatekeeping institutions, like credentials and peer review; but many of the scientists she proposes to exclude were properly credentialed and published peer-reviewed scholarship. Second, according to her theory, truth is established by the scientific community, but what is the boundary of that community? In principle, ordinary citizens who claim to engage in science-that is, who take the same data as scientific experts and, employing the same or similar techniques of analysis with a high degree of data literacy, come to vastly different conclusions about their significance-are part of the community of inquirers. What really seems to demarcate science, for these theorists, are the underlying priors they hold. If one's conception of "science" is a function of one's priors, then "follow the science" sounds like a dogmatic insistence that "they should follow the science I think is right."

Trust and Consensus in Science
The coronavirus pandemic brought the notion of "scientific consensus" into sharp focus. Many policymakers, as well as some scientists, claimed throughout the pandemic that they were simply "following the science" or speaking "on behalf" of science, and demanded that others "defer to experts." Some prominent scientists noted, correctly, that there was widespread disagreement within the scientific community, both about particular scientific dimensions of Covid and about the appropriate response. But others, many of whom were influential in media reporting on Covid, insisted that acknowledging any rift represented a false and opportunistic narrative propagated by a small number of deniers who stood well apart from the mainstream (Buranyi ).
Many of these commentators implicitly or explicitly drew upon the work of historian of science Naomi Oreskes, with hundreds of articles about Covid citing Merchants of Doubt. A representative example, an opinion essay in the British Medical Journal's online opinion blog, was entitled "Covid-19 and the new merchants of doubt." Its authors, Gavin Yamey and David Gorski (2021), argued that scientists with dissenting opinions on pandemic responses are like global warming denialists, "using strategies straight out of the climate denial playbook." Much of their piece is given over to proposed links between prominent scientists and conservative causes. Ironically, some of their key factual allegations were ultimately retracted in a correction because they were inaccurate. But more importantly, they did not consider the possibility that these scientists came to their views in good faith, represent a wide swath of credentialed infectious disease scientists, and might have had a point. Instead, they declared (without demonstrating) that these scientists' views were a product of a "well-funded sophisticated science denialist campaign based on ideological and corporate interests." Merchants of Doubt was largely a qualitative case study, focused on telling the story of big tobacco and big oil's attempt to influence scientific debate. Its relevance to coronavirus is doubtful. There was no coherent industry or lobbying group that would have consistently benefited from the (heterogeneous) views of dissenting scientists. There is no evidence that dissenting scientists were influenced by economic incentives. And, unlike tobacco and global warming, Covid represented a novel threat that was poorly understood, where consensus could not plausibly have yet been reached. Indeed, that was the basic claim of one well-regarded scientist, John Ioannidis, who dissented from some of the conventional wisdom around the pandemic.
In an op-ed authored in the first weeks of Covid's global spread, he argued that data, and good data, would need to be collected and deliberately evaluated on a continuing basis if we were to develop a reasoned response (Ioannidis 2020). Although his op-ed was widely mischaracterized and pilloried, this is an uncontroversial tenet within the field of evidence-based medicine.
The philosophical core of Oreskes' work is made clearer in her later book, Why Trust Science? Central to her work is the notion of consensus. To be able to identify merchants of doubt, one must first establish that there is a consensus that they are attempting to undermine. Oreskes herself is well known for her empirical study of the high degree of consensus within the field of global warming. She analyzed all 928 peer-reviewed papers published between 1993 and 2003 that contain the keywords "climate change," finding, strikingly, that none of the papers disagreed with the consensus position (Oreskes 2004). An analysis of peer-reviewed literature on how to respond to coronavirus would have looked rather different in 2020, when the disease was novel. Consider, for instance, the continued debates, with knowledgeable scientists and intelligence agencies lining up on both sides, about whether the virus was more likely to have leaked from a lab or to have natural origins (Harrison and Sachs ; ODNI ). Or consider another contentious subject, masking. A meta-analysis by the Cochrane Library, one of the most respected arbiters of the state of medical consensus, concluded based on "moderate-certainty evidence" that "wearing masks in the community probably makes little or no difference to the outcome of [flu or covid-like] illness compared to not wearing masks" (Jefferson et al. 2023). Yet others, many of whom had spent much of the pandemic promoting masking, were entirely unconvinced by the Cochrane analysis (Tufekci 2023). The Covid-related issues with the most consensus appear to be the ones that never generated much argument to begin with.

In Why Trust Science?, Oreskes argues that science is a community of inquirers. From the eighteenth century through today, the history of science has shifted its focus from the method of the individual scientist to the community of scientific inquirers, whose trustworthiness is said to come primarily from social consensus.
Karl Popper (-) sowed the seeds of a radical change in the perception of the scientific enterprise. Popper denied the empiricist claim that induction was co-extensive with scientific method. Knowledge, he maintained, was not grounded in experience because all observation is "theory laden." He drew attention to the importance of individual attitude in the conduct of science, arguing that experience is always understood from the point of view of a theory. But his focus remained the individual scientist. Perhaps owing to his aversion to Marxism, Popper paid no attention to the collective nature of the scientific enterprise.
A focus on that collective nature would mark the next big turn in the story of the development of scientific method and practice. The work of Thomas Kuhn stands out here: attracting consensus is the means by which scientists achieve stability when their practice moves from normal science to what Kuhn identified as "revolutionary science." In any scientific practice, anomalies appear. The moment when a paradigm breaks down is particularly fraught, as competing theories attempt to resolve these anomalies. Those theories are often incapable of inter-theoretical comparison-a problem Kuhn termed "methodological incommensurability." Comments such as these attracted the label "relativist." Kuhn consistently denied the charge, but there is no denying its plausibility. When you aver that scientists "practice in different worlds," it is a short path to the conclusion that truth is a function of one's framework, and to rejecting the idea that theories are validated relative to a mind-independent reality. Kuhn finished off empiricism with the notion of a paradigm, thereby reconfiguring the entire notion of a "scientific fact." The most fundamental aspect of the change was the shift from the perceiving subject (the generator of hypotheses and validator of truths) to the "community of inquirers." Further developments in this line of argument, not surprisingly, increasingly drew on the field of sociology.
The end result, to which Oreskes subscribes, is the contemporary view that truth is the product of a scientific community. The community has norms which govern the assessment of truth claims. The community decides what is true and false. This account of scientific practice bears almost no relation to the empiricism it displaced. Moreover, this is the dominant view of scientific practice in the field of philosophy of science.
On the specific topic of scientific trust, she asks: if scientists are "just people doing work, like plumbers or nurses or electricians . . . then what is the basis for trust in science?" Her answer is two-fold: "1) its sustained engagement with the world and 2) its social character" (Oreskes , ). The reason we trust plumbers, electricians, and nurses is that they are "trained and licensed." Tradespeople are trained and licensed and, as such, they are "experts." In a few telling sentences, Oreskes draws the connection between expertise and trust. She writes: "It is in the nature of expertise that we trust experts to do the jobs for which they are trained and we are not. Without this trust in experts, society would come to a standstill. Scientists are our designated experts for studying the world. Therefore, to the extent that we should trust anyone to tell us about the world, we should trust scientists" (ibid., ).
It is true that we trust scientists, and that they are credentialed. Those credentials are a proxy for expertise. But Oreskes seems to be running together social and normative epistemology: the question of why and how we do, in fact, trust scientists at any given point; and the question of whether that trust is valid or misplaced. Consider: whence comes validation of scientific claims? Oreskes locates the process of validation in the "social practices and procedures of adjudication designed to ensure . . . that the process of review and correction are sufficiently robust as to lead to empirically verifiable results" (ibid., ). Specifically, Oreskes focuses on two academic processes: peer review and tenure. Tenure, she states, "is effectively the academic version of licensing" (ibid.). Thus the ground of intersubjective evaluation in science is social-and, for Oreskes, this social epistemology doubles as a normative one. In her view, since "the crucial element of these practices is their social and institutional character," this "ensure[s] that the judgments and opinions of no one person dominate and therefore that the value preferences and biases of no one person are controlling" (ibid., -). This may indeed be the case, but whether it is sufficient grounds for trust is another matter.
In responses published alongside Oreskes' Why Trust Science?, several critics noted the many failures of these academic processes-such as the replication crisis in science. Depending on how systematic one views these problems to be (see, most worrisomely, Ioannidis 2005), they may represent serious challenges to Oreskes' account of trustworthiness. A second problem (which we shall elaborate on soon) is that the same argument for trusting scientific orthodoxy also justifies trusting critics of that orthodoxy-so long as we allow that they, too, constitute a community of inquirers.
We are now in a position to examine Oreskes' response to dissenting scientists in the pandemic. Why Trust Science? was originally published in 2019, with the pandemic a year away. In 2020, Oreskes wrote a preface for the paperback version, which began with this paragraph: "COVID-19. Rarely does the world offer proof of an academic argument, and even more rarely in a single word or term. But there it is. COVID-19 has shown us in the starkest terms-life and death-what happens when we don't trust science and defy the advice of experts" (Oreskes 2021, ix). Oreskes proceeded to identify a set of countries, including Vietnam, South Korea, and New Zealand, that had done well in contrast to the "disaster" manifest in the response in the United States. Attempting to identify what made the winners successful, she claims the answer is simple: "they did so by trusting science" (ibid., xi). The "science" in question consisted of recommendations by "public health experts [who] immediately made recommendations about how to minimize disease spread," especially hand washing, social distancing, testing and tracing, and lockdowns (ibid., xi).
After Oreskes wrote her preface, many of the measures she lauded were abandoned because they were ineffective; and debate continues about what measures were and were not helpful. To be sure, Oreskes did not have the benefit of hindsight. But that is our point. She claimed to have found "proof of an academic argument," but the evidence for that was incomplete at this early stage of the pandemic. This may be, in part, because of the nature of the pandemic, but it may also be a product of the very nature of the scientific process-the evidence may, in principle, always turn out to be incomplete at almost any point in the process. Even scientific consensus can be (and has been) overturned. Even if we do trust the scientific orthodoxy at any given point, that does not mean we should: trust may turn out to be misplaced, and thus we may find out that we were overtrusting. It may be impossible to know whether our trust is well-founded.
This suggests that claims to "follow the science" may be a bit too quick, especially in the context of a complex society-wide response to a novel pandemic, where far more than scientific facts are at issue. Oreskes' claims in the preface were characteristic of the many scientists who suggested that doubt represented a problem, not a virtue, in the context of the pandemic. That conventional wisdom was summed up nicely in an article in the Washington Post: "Doubt is a cardinal virtue in the sciences, which advance through skeptics' willingness to question the experts. But it can be disastrous in public health, where lives depend on people's willingness to trust those same experts" (Jamison ).
The denigration of doubt was commonplace in the pandemic, as the hundreds of citations to Oreskes' work in essays about Covid attest. One scientist who was quoted widely as a pandemic expert-despite repeated questions about his credentials and accuracy-described alarmism as a moral obligation (Hu ). Another prominent scientist spoke in his defense, describing the pandemic as an "all hands on deck" situation, and thus not the time to complain about a fellow academic's "style or their tone" (ibid.). Major newspaper reporters publicly urged that we avoid "insistence on [unnecessary] quantification and detail," because science is "not about evidence and detail per se." Entire subjects were suggested to be marked off from discussion-even when conducted in a nuanced way-because they are "supremely unhelpful given the monumental Covid denialism we are all facing." Others were less delicate, resorting to name-calling and ad hominem insults when scientists did not share their views. In many instances, these ad hominem attacks did not engage with the substance of the arguments proffered by dissenting scientists. Instead, they impugned motives, labeled them "Trumpian" (even when they emanated from the left), and raised dubious questions about shadowy financial motives.

Who Speaks in the Name of Science?
Critiques of lockdowns and other non-pharmaceutical interventions during the pandemic fell largely into two categories. The first, which we focus on in other work, rightly interrogated the proper domain of science and the types of questions that science alone can answer. That is, even were the science established, almost any significant policy involved extensive non-scientific dimensions: social and economic tradeoffs and costs that could not properly be deemed the domain of any one field of expertise. Consider the extended school closures in the United States, which had evident non-epidemiological implications (Russell and Patterson ). Put simply, "scientific" claims to necessity in many areas of policy were value judgments. Yet since they were presented only as objective factual claims, the value-laden aspect of these judgments went unrecognized.
The second category of critique focused on the science itself. A purely scientific question, for example, might involve identifying the likely differences in effect between natural immunity and immunity produced by a vaccine. Pure science is what Oreskes claims to focus on in her preface, so that will be the focus of our analysis here. Our goal is not to evaluate the substantive truth of the competing claims of the scientific orthodoxy and its dissenters. Rather, the issue is the basis for demarcating dissent as inside or outside the social boundaries of science. Oreskes provides no criterion for making such judgments. This is, in part, what makes her preface feel dated a mere two years later: it did not anticipate the likelihood that certain claims written six months into a novel pandemic might be premature.
Oreskes' account of science is inherently ambiguous. On the one hand, it treats science as a community of inquirers, which presumes a degree of plurality and dissensus. On the other hand, it treats consensus as the ground of scientific authority and thus appears to exclude the possibility of dissensus-or at least of radical dissensus. This ambiguity creates inherent instability and, in some cases, an inability to safeguard dissensus. As we discuss soon, the pandemic presents a particularly easy case for observing that scientific consensus had not (and has not) yet congealed around many aspects of the interventions. There was-and remains-dissent within mainstream science about the optimal pandemic response, including from scientists who were selected under the academic licensing procedures Oreskes describes, and who were operating well within their "license" (field of expertise) in arguing over core facts about Covid.
Later, we shall turn to another indicative flaw in Oreskes' preface: her disdain for what might pejoratively be called "armchair" scientists-those who in her view were not the right "kind" of scientists, or were not scientists at all. But the philosophy of science itself-and in particular the version of scientific trust she develops-sets the stage for these dissenters to make their cases against the claims of orthodox science. Once one is of the view that the "community" of inquirers is the basis for knowledge claims, the appeal to "reality" is thrown over. Those armchair interlopers might well be "wrong" in many important senses-but the community-based theory of science that Oreskes expounds cannot provide grounds for demonstrating that they are wrong.
Our first critique of Oreskes, then, is that her framework is applied only selectively by many in science, guided by one's priors about the data and facts rather than by the data and facts themselves, and that its criteria of credentials and competence lack rigor. Our second critique is that Oreskes' framework is problematic because it does not, in fact, provide a usable means of distinguishing trustworthy from untrustworthy scientific communities.

The Exclusion of Dissenting Scientists
As we detail in this section, Oreskes' preface dismisses some scientists out of hand based on who disseminated their ideas. In other words, their ideas are deemed guilty by association, which makes it easy to avoid engaging with their substantive claims. In the face of what might strike some as legitimate disagreement, she implies that the disagreement is illegitimate because it stems, she claims, from certain scientists being "anti-experts" (Oreskes 2021, xvi). But it is not at all clear what criteria yield this inference. In fact, the scientists whom she describes as hostile to expertise are experts by her own criteria of legitimate expertise. They hold the highest social "indicia" in their field-that is, they are tenured at top institutions in the fields of virology or epidemiology, have published extensively in peer-reviewed journals, and are widely cited. There is, then, something missing in Oreskes' work: either something else must mark these specific claims about the pandemic as outside the bounds of expertise, or some other unspecified criterion is operating when Oreskes decides who is an expert. Her central claim-that science is formed by socially constructed consensus, tested in the context of academic institutions that serve as conduits for that consensus-would appear insufficient to sustain the substantive claims she makes in her preface.
Before we get to the details of Oreskes' charge, let us first recap the underlying debate. We will oversimplify by dividing the complex patchwork of scientific opinion into two camps. To be clear, many scientists subscribed to neither "camp," believing that both sides made excessive claims about the strength of their evidence given overwhelming uncertainty, and would have called for more nuanced policy options. Some were appalled at the very notion of "science by petition"-the idea that scientists should sign on to public letters in support of a particular claim (Hardwicke and Ioannidis ). Still, many scientists did sign on to two explicitly dueling statements that circulated in 2020. The first, the Great Barrington Declaration, written by scientists at Oxford, Stanford, and Harvard, questioned lockdowns and called for "focused protection" of the elderly and for fewer restrictions on those who were likely to be less susceptible to Covid, allowing natural immunity to build in that group (Gupta, Bhattacharya, and Kulldorff 2020). The second, the John Snow Memorandum, published in the Lancet, argued in favor of lockdowns, past and present, and insisted that pandemic management must not "rely upon immunity from natural infections." It described the Great Barrington proposal as "a dangerous fallacy unsupported by scientific evidence," and claimed that any disagreement was a "distraction" (Alwan et al. 2020). The title of the memorandum is telling: "Scientific Consensus on the Covid-19 Pandemic: We Need to Act Now." This is the logical denouement of Oreskes' theory: if there is a scientific consensus, we must "follow the science" and act, closing the door on further discussion.
The very presence of a competing declaration favoring the theory of herd immunity indicates, however, a lack of scientific consensus on this issue. The peer-reviewed literature exhibited a range of views about the appropriateness of lockdowns and other non-pharmaceutical interventions. One British Medical Journal headline, published in late 2020, summarized that debate: "Experts divide into two camps of action: shielding versus blanket policies" (Wise 2020). None of the letters submitted to the journal in response challenged that claim-except for a letter that argued there were, in fact, not two but three warring camps.
The Great Barrington Declaration was authored by three scientists, all with expertise in epidemiology and infectious diseases. It was open to public sign-on, yielding signatures from thousands of scientists and medical practitioners, but it also rested on the authority of its initial credentialed signatories. According to Oreskes' theory of science, these authors and signers should legitimately be considered part of the community of inquirers-and thus classified as experts. Yet Oreskes (2021, xvi) describes them as "anti-experts" who "muddy the intellectual waters around Covid-19." Her argument for this, however, neglects their substantive claims and instead treats their theory as illegitimate through guilt by association.
Her primary evidence is that the declaration was written and disseminated following a meeting hosted by the American Institute for Economic Research (AIER), which "is, as its name suggests, an economic institute with no recognizable claim to biological or medical expertise" (Oreskes 2021, xvii). The AIER claims to have had little role in the statement other than hosting a meeting (Magness and Harrigan ). But even if it had greater involvement, the AIER made no claim to biological or medical expertise; rather, it rested any authority on the credentials of the scientists who wrote and signed the declaration, and on the argument itself. Oreskes disdains the AIER because of its "political agenda" in favor of free markets, which she says "may or may not be good things, but they are not matters of science." Assuming arguendo that only science was implicated in this particular debate over lockdowns, to disregard the declaration because it was written at or disseminated by AIER is a purely ad hominem response. To be sure, information about authorship can warrant higher levels of skepticism and scrutiny, as when a corporate drug manufacturer has paid a scientist to author a study of an expensive new pharmaceutical product. But even in that extreme case-which is not implicated here regardless-we do not get to dismiss the data out of hand. Rather, engagement with research results is the coin of the realm.
Most problematic for Oreskes' line of attack, the three lead authors of the statement had separately disseminated these and related views long before the meeting, and apparently before they had even met or had any contact with AIER (Aschoff ; Lourenço et al. ; Sood et al. ). In fact, their views on lockdowns stem fairly clearly from their past research and their particular scientific perspectives. There is no evidence that they came to these views because of financial incentives, which is what Oreskes seems to imply. Instead, it is likely that they accepted AIER's offer in an attempt to get more traction for their views (quite possibly a naïve attempt, as it allowed people to write them off based on the association). In addition, the Great Barrington Declaration was simply one statement, hardly the only or even the primary critique of lockdowns. Discrediting the source of the declaration therefore does not discredit more general criticisms of lockdowns or other aspects of the pandemic response. Many who did not join or support the declaration were critical of much of the orthodox policy response, including lockdowns, school closures, and other restrictions (Pelling and Phelps ). And even today, questions remain unanswered about the most central tension underlying the dueling petitions-the relationship between vaccination and natural immunity (Block ; Pugh et al. ).
Focused as she is on spurious accusations about motivation, Oreskes engages substance only in passing, asserting that the Great Barrington approach was a "euphemism for allowing people to sicken and die" and that "if the United States had undertaken that approach, more than  million people would likely have become ill, with the potential for  million more deaths" (Oreskes , xvii). Neither point is backed up with citations, though the latter may refer to widely disseminated, and later widely critiqued, modeling by British researchers at Imperial College London in April  (Ferguson et al. ). Setting aside critiques of the model's methodology, its numbers were based on crude data available in April , months before the Great Barrington Declaration was written, and modeled a hypothetical scenario, which the researchers described as deeply unlikely, where there was no mitigation whatsoever-including no voluntary changes in individual behavior (ibid.). If this is what Oreskes is referencing, it is irrelevant to the substantive dispute at hand.

Science by Petition
Public, signed declarations of dueling opinions hardly seem an apt way to conduct an important scientific debate. Although science is built on vigorous disagreement, the publication of competing petitions feels more like an adulterated, performative version of science that values a public relations strategy over scientific nuance. This kind of performance is a likely, if unfortunate, consequence of how many journalists and scientists have come to interpret Oreskes' view of science, community, and consensus. If winning a policy debate can be accomplished by proving consensus, the most vocal factions-who may not reflect the majority, and who may not preserve nuance-seek to prove that they represent a consensus. Scientific legitimacy then becomes a quantitative matter: tallying up the number of adherents on one side compared to another, or looking at the sheer volume of vocal public statements. This is hardly a model for scientific inquiry.
In the case of Covid, paradoxically, the position that Oreskes finds illegitimate should be, by her criterion of credentialism, more trustworthy than its competitor. While the John Snow memorandum was certainly popular on social media, the Great Barrington Declaration counted slightly more established scientists among its signatories, based on a statistical analysis of citation counts in relevant fields (Ioannidis ). Ioannidis concludes that social media popularity led many journalists to falsely conclude that the John Snow memorandum reflected the consensus narrative, although the citation data about the authors (however imperfect and problematic) would suggest the reverse.
We might further use the criteria Oreskes provides for legitimately distrusting science to see whether they are applicable to the Covid debate. She identifies two reasons why one might legitimately be skeptical of science. The first is if the process is inadequate in some way: "If there is evidence that a community is not open or is dominated by a small clique or even a few aggressive individuals-or if we have evidence (and not just allegations) that some voices are being suppressed-this may be grounds for warranted skepticism" (Oreskes , ). The second is if there is a conflict of interest. This is why, she argues, we should not trust the tobacco industry's studies on smoking, energy companies on global warming, or soda companies on diabetes. The reason is the same for all: "the goals of profit-making can collide with the goals of critical scrutiny of knowledge claims" (ibid., ).
We agree that it is a cardinal virtue of science that it is skeptical of anyone with an agenda. Applying these criteria for doubting the legitimacy of science to the case of the pandemic yields some surprising conclusions, however. First, the orthodoxy that became "science" during Covid can be said to have formed an exclusive clique.  There were open attempts to suppress dissent within the scientific community. Those most vocal in this suppression were scientists who belonged to the group that Oreskes endorses as the sole claimants to expertise. That suppression was fostered at the highest levels of the scientific community and of science funders (Arora ; Brownlee and Lenzer ; Harrigan and Magness ; Harrison and Sachs ). The tamping down of debate began as early as April , just weeks into the U.S. response to a novel pandemic, and thus very early in the production of relevant research (Prasad and Flier ). This would suggest that, by Oreskes' standard, the first criterion for trusting in science-that its process is inclusive and not cliquish-was not satisfied.
Second, there was little reason to think that the profit motive-or an "industry bias"-explains the fault lines between proponents of the John Snow memorandum and the Great Barrington Declaration. It was often simply assumed that any divergence from the orthodoxy could only be explained by some kind of financial motivation. Yamey and Gorski () noted that one of the Declaration's authors, Gupta, had received a research grant from a foundation named after Georg von Opel, a conservative party donor. It was simply assumed that being affiliated with a conservative foundation turns scientists into mouthpieces for "billionaires aligned with industry" (Yamey and Gorski ). This is a controversial assumption, which they do not interrogate. In any case, Gupta's arguments had been presented before she received the grant, which suggests that she received the grant because the donor approved of her opinions, rather than that her opinions were motivated by a desire to satisfy the grant requirements.
The kind of assumption at work here is prevalent in our culture. For example, the editor-in-chief of Science recently declared that a doctor who dissented from a particular piece of scientific consensus, and who offered to discuss his disagreement, represented a "move . . . from page  of the anti-science playbook" that "undermines trust in science," just as in the tobacco and global warming cases (Thorp ). The issue is not whether that particular disagreement was well founded, but whether it is legitimate to describe as "anti-science" the act of a scientist attempting to provoke debate. Oreskes and others seemingly neglect the difficulty of drawing a line between those who are legitimately sparking such debate and those who are not.  This is not to say that such a line cannot be drawn, but rather that this issue is often "resolved" simply by appealing to one's personal priors. There is little attempt to articulate criteria for making such a distinction. Instead, the implicit criterion for legitimacy becomes "agrees with what I think is true." This, however, makes disagreement illegitimate by fiat, and it insulates "what I think is true" from objections by treating it as an inviolable truth, rather than recognizing it as a fallible opinion.
This implicit criterion of legitimacy not only obviates the possibility of genuine disagreement, but, in doing so, leads to increasingly dismissive attitudes towards the other. This helps explain the rise in hostility as the pandemic unfolded. Referring to all voters who did not vote for Biden, the same editor of Science referred to above proclaimed them hopeless because "science was on the ballot and this means that a significant portion of America doesn't want science. . . . Science is now something for a subset of America" (Florko ). It is telling that a hostility to science was the only possible reason he could deduce for the votes against Biden. He appears to rule out, from the start, the possibility that there could be legitimate skepticism about whether certain policies were scientifically mandated by facts about Covid, much less does he appear to consider other voting priorities. Instead of interrogating the epistemology of the other side, it is assumed that, by virtue of being the other side, it cannot have an epistemology.
This puts the cart before the horse. Since this orientation neglects the possibility that substantive epistemic issues may underlie the disagreement, the epistemology of disagreement is turned into a search to uncover the "real" motivational cause, that is, some hidden conflict of interest. This dynamic is manifest in the case of Ioannidis, one of the most cited medical researchers working today, who drew flak for his early views on the pandemic. His views were generally misrepresented by such critics. But bracketing that issue, they were the result of a long publishing trajectory devoted to prioritizing the use of data in medicine and to skepticism of interventions that lack evidence. He has long expressed concern about premature adoption of interventions, which might end up causing harm before new evidence ultimately reverses their practice (Prasad, Cifu, and Ioannidis ; Prasad and Cifu ). This approach to evidence, and the conclusions Ioannidis associates with it, are not necessarily correct and have thus been subject to disagreement (e.g., Greenhalgh ). But they do appear to be consistent across his career, and thus provide some evidence that his views in the Covid debate were offered in good faith. Yet when the founder of JetBlue Airways granted Stanford (Ioannidis's home institution) a $, grant (which Ioannidis did not personally receive), this was seen as explaining-and thus illegitimating-Ioannidis's opinions about Covid (Lee ; Brownlee and Lenzer ). This claim is implausible, but more importantly it drew attention away from the intellectual underpinnings of the disagreement. This is perhaps the biggest casualty of guilt by association: it obscures the real origins of debates, which often involve conflicting interpretations, and conflicting values that lie beneath those interpretations. Those conflicts are worthy of public discussion. Indeed, they are why we need science in the first place.
"The facts" are not obvious but must be interpreted, and they are interpreted through theories and ideologies. Most of the fault lines in the Covid wars were not fundamentally about facts, but about their interpretation, and the values that underlie medical responses. These methodological and theoretical fissures pre-existed the pandemic (Fuller ). On the scientific side, there is substantial-and we think genuine-disagreement about the proper role of evidence, and the ease as well as ethics of generating more evidence through randomized controlled trials. On the policy side, there are real disagreements about how to weigh different types of costs and benefits, and how to reason under situations of uncertainty. A thoughtful and reasonable scientific discussion requires exploring these issues, rather than tabling them, as occurs when skeptics of the orthodoxy are treated as "denying" science.

The Exclusion of Armchair Scientists
Let us shift from discussing the fault lines within the central corridors of the expert community and turn to some "armchair" scientists-members of the public who inhabited social media and did not necessarily have traditional scientific credentials, but attempted to engage with data in a quantitatively sophisticated manner-who were decidedly skeptical of governmental policy.  We will focus on one example of such skepticism, suggesting that, according to Oreskes' and other influential accounts of science, this should be categorized as science, despite its contestable results.
Five professors at MIT and Wellesley College recently investigated how coronavirus skeptics employed social media to create an alternate account of pandemic data visualizations (Lee et al. ). The "counter-visualizations" manifested many of the tropes found in traditional scientific discourse. According to the researchers (who did not endorse the skeptics' conclusion), the skeptics "use rhetorics of scientific rigor to oppose public health measures." The researchers further concluded that the anti-mask communities they investigated were "more sophisticated in their understanding of how scientific knowledge is socially constructed than their ideological adversaries" (ibid., ). This study suggests that disagreement among sophisticated users of science cannot be resolved by simply appealing to "facts" and calling for one's adversary to "follow the science." The categories of "denier" and "anti-masker" thus vastly simplify the way skeptics handled the publicly available data on the pandemic.
We can assess the dissent from conventional expert orthodoxy by examining the disagreement in greater detail. Using qualitative and quantitative methods, Lee et al. reach some surprising conclusions. The first, quantitative component of the study consists of an analysis of over a half million tweets together with over , images processed through a computer vision model. The investigators found that "antimask groups on Twitter often create polished counter-visualizations that would not be out of place in scientific papers, health department reports, and publications like the Financial Times" (ibid., ).
The qualitative dimension of the project consisted of a six-month observational study of anti-mask groups on Facebook, conducted in . This embedded observation produced "an interactional view of how these groups leverage the language of scientific rigor-being critical about data sources, explicitly stating analytical limitations of specific models, and more-in order to support ending public health restrictions despite the consensus of the scientific establishment" (ibid., ). Following the lead of the renowned anthropologist Clifford Geertz, the researchers engaged in "deep hanging out" in the Facebook communities where participants discussed, debated, and mapped the data they took from public sources. What is of interest to us are the conclusions the researchers reached with respect to the participants in the Facebook groups.
In the "discourse analysis" of the Facebook anti-maskers, the researchers concluded that "anti-maskers are prolific producers and consumers of data visualizations, and that the graphs that they employ are similar to those found in orthodox narratives about the pandemic" (ibid., ). But, as with so much in the pandemic, it is not the data as such that drives their discussions. It is rather their interpretations. For example, there is the question of which criterion matters most in setting policy: reducing cases or reducing deaths. Not surprisingly, the anti-maskers believe it is deaths that matter more, and not just death with Covid but death by Covid. On this point, they claim, municipalities skew the data by counting any death in which the patient had Covid at the time of death.
How good are the anti-maskers in making their case? The researchers-no friends of this group-conclude that the group exhibits "expertise." For example, "data literacy is a quintessential criterion for membership within the community they have created" (ibid., ). Data literacy, anti-bias, and "intellectual self reliance" are the hallmarks of this group (ibid., ). When their work is compared to that of mainstream scientists, the researchers argue that the anti-maskers "skillfully manipulate data to undermine mainstream science" (ibid., ). The researchers also argue that these anti-maskers are skilled in the rhetoric of scientific practices. For example, the skeptics "point to Thomas Kuhn's The Structure of Scientific Revolutions to show how their anomalous evidence-once dismissed by the scientific establishment-will pave the way to a new paradigm" (ibid., ). The anti-maskers have shown their deft skill at employing the language of recent philosophy of science. Science is a process, with scientific knowledge produced by a "community." For anti-maskers, "increased doubt, not consensus, is the marker of scientific certitude" (ibid., ). Of course, purveyors of the conventional wisdom tend to dismiss the work of anti-maskers as lacking scientific literacy. This, the study authors maintain, is simply wrongheaded, for "if anything, anti-mask science has extended the traditional tools of data analysis by taking up the theoretical mantle of recent critical studies of visualization" (ibid., ).
Anti-maskers have proven themselves to be skilled at data analysis and visualization. Rather than denying science, they produce sophisticated skeptical counter-narratives which bear the hallmarks of accepted scientific practice. But there is something deeper in the anti-mask psyche, something that better explains their wholesale willingness to reject many claims made by "orthodox" and credentialed experts. Described by the researchers as an "epistemological rift," the anti-maskers "espouse a vision of science that is radically egalitarian and individualist" (ibid., ). Preaching naïve scientific realism to anti-maskers simply won't cut it. They have read their Kuhn and they have mastered the mechanics of data visualization to such an extent that they think of themselves as forming their own scientific community with their own paradigm. In short, they play the game of scientific rhetoric as well as conventional scientists.
Nevertheless, while recognizing the sophisticated use of scientific methods, Lee et al. do not, ultimately, think the anti-maskers are on a par with conventional scientists because the epistemological rift between them is too great-that is, their worldviews are too divergent. The anti-maskers are skeptical of the claims of public officials and are not intimidated when credentialed experts reject claims they have come to through their own examination of this data. In this vein, we cannot dismiss the claims of anti-maskers as anti-science, since they are following the science-their science. They have mastered the very same techniques as government experts and have built their own edifice. Perhaps the substantive gulf between anti-maskers and their critics is too wide to be bridged-that is, they may have such divergent ways of interpreting the same data that they may never ultimately agree on what the data mean. But this suggests that a theory (such as Oreskes') that treats this disagreement as stemming from a rejection of science misrepresents the nature of the disagreement.

* * *
Richard Feynman once defined science as "the belief in the ignorance of experts," in sharp contrast to today's tropes of "following the science" and "deferring to experts." These mantras are, of course, meaningless: public policy nearly always depends on tough tradeoffs that can be informed but not decided by science. Calls for faith in expertise have reached fever pitch today, just as the claim that we live in a "post-truth" environment is becoming more widespread. The case study we have provided here suggests that that claim is overstated, at least with respect to Covid. This is not to say that denial of scientific claims is entirely mythical, nor to deny that there are bad-faith actors who misuse doubt. But the claim of "post-truth" should be investigated domain by domain, and generalizing from one domain to another has little justification. Climate change denial may rest on rather different processes than dissent about lockdowns.
Meritocracy, and its concomitant celebration of credentialism, has falsely "attribute[d] political disagreement to a simple refusal to face facts or accept science," when in fact "political debate is often about how to identify and characterize the facts relevant to the controversy in question" (Sandel ). The resultant logic-that experts have all the answers, while shadowy cabals try to undermine them by spreading doubt, and that all dissent is due to those cabals-is unfortunately infectious. Populists, in our view, are not necessarily global skeptics when they dismiss conventional experts. Rather, they tend to bristle at what they think of as an unjustified use of expertise, to be more leery of treating credentials as an unambiguous proxy for true expertise, and to worry that value judgments and policy choices masquerade as facts. All this suggests that one does not need to agree with the conclusions reached by populists and skeptics to examine the substance of their argument and their mode of argumentation. When this is done, it turns out that deeper issues come to the fore: about the nature of empirical data; when it constitutes evidence for or against a given policy position; how to weigh it against competing data; and how to balance competing interests and values.

NOTES

. The definition of populism is widely contested, a point we explore elsewhere.
Here, we take the term populist to refer primarily to citizens who reject the legitimacy of certain groups of experts, rather than to politicians who attempt to appeal to populist voters. In this account, populism is a "thin" ideology (Mudde ) that can have either left-leaning or right-leaning valences. By this definition, there are populist supporters of both Bernie Sanders and Donald Trump; not all supporters of either Trump or Sanders are populists; and whether either Sanders or Trump is a "true" populist is beside the point.

. In other work we discuss the academic pedigree of the term "elite." In our usage, as in its conventional, sociological usage, it refers to high-status individuals, like credentialed experts, journalists, and politicians. Like any social group, they "develop in-group social organizations, and share a common lifestyle, while at the same time excluding people they do not see as similar to themselves" (Domhoff , ). In our usage, elites tend to valorize credentials, give priority to expertise within policymaking, and disdain lay participation in complex political decisions. Naturally, not all elites subscribe to that worldview, but we are unaware of a more precise term with a widely understood meaning.

. In , a group of researchers put together a "Delphi study," in which they attempted to obtain consensus across a large number of experts (, in this instance). The process, because it values high rates of agreement, tends to result in extreme vagueness as statements are refined to garner more support. For example,  percent agreed or strongly agreed with the claim "The COVID- pandemic continues to reveal vulnerabilities in the global supply-chain framework for essential public health supplies" (Lazarus et al. ). Whilst an important issue, supply-chain vulnerabilities were hardly an issue fraught with claims of denialism and misinformation in .

. Tweet by Financial Times reporter John Burn-Murdoch (August , ).

. Tweet by New York Times reporter Apoorva Mandavilli (September , ).

. See, for example, a Twitter thread by Yale public health professor Gregg Gonsalves (Sept. , ), leveling "shame" on Jacobin Magazine for publishing an interview containing Kulldorff's "drivel," a "very bad take" that he terms "practically Trumpian."

. Oreskes (, xvii) briefly acknowledges that pandemics "do, of course, involve economic matters," but then asserts that debates over herd immunity and lockdowns were pure "public health" issues.

. Brownlee and Lenzer () supply a more detailed account of the "science wars" between these two views and of the attempts to suppress debate.

. Elsewhere, Oreskes makes clear that they are not good things. Towards the conclusion of Merchants of Doubt, Oreskes and Conway (, ) state that "the most serious critique of the central tenet of free market fundamentalism is simply that it is wrong, factually" because "markets do fail."

. Consider Fauci's now-famous statement that to disagree with him was to attack science (Sullivan ).

. Although it is tempting to say this is a misapplication of Merchants, the final chapter of Merchants employs similar guilt by (unproven) association, for instance suggesting that scientists have low credibility because they were "defended in the Financial Times, the Wall Street Journal, and the Economist." Later in that chapter, they draw on the authority of George Soros, without explaining why he and his associated think tanks should not be subject to the very same scrutiny as the free-market think tanks that they dismiss out of hand (Oreskes and Conway , -). This is a common problem in the growing scholarly literature on intellectual influences: it focuses its scrutiny solely on conservative think tanks, without considering whether comparable effects are possible from liberal funders of academic research, nor whether funding is the best explanation for the influence of the ideas under study.

. A similar line-drawing problem is pervasive, yet underexamined, in related areas, like the contemporary practice of "fact-checking" (Uscinski and Butler ) and in discussions of what constitutes a conspiracy theory (Uscinski and Enders ).

. Some assume most skepticism was the result of conspiracy theories that arose surrounding Covid. However, contrary to the conventional wisdom in the media, people were less likely to believe medical misinformation than other types of conspiratorial views about Covid; i.e., "dangerous health misinformation is more difficult to believe than abstract ideas about the nefarious intentions of governmental and political actors" (Enders et al. ).

. We note that the researchers do not use the word "manipulate" in a pejorative sense. Data is "manipulated" when it is used to make visualizations and other representations.