Early in 2020, a deadly new virus emerged and suddenly seemed to threaten the world with a pandemic, like the plagues of times gone by. What exactly were the risks? How severe might the disease become? What were the appropriate precautions? What were the potential solutions? Reports in the media varied. Sometimes they even conflicted. With life and death possibly at stake, how would an average citizen know which claims were trustworthy? The COVID-19 crisis has dramatically underscored the need for functional scientific literacy.

Now that the initial shock has passed, we are well positioned to reflect on recent history. What information seemed persuasive but was misleading or untrue? What apparently unlikely claims later turned out to be reliable? What can we learn from experience about how to assess any particular scientific claim?

The customary wisdom – this month's Sacred Bovine – is that we should judge the arguments ourselves. With hucksters and ideologues everywhere, isn't it best to think for yourself? Namely, if we can equip someone to reason scientifically, do we not help them achieve intellectual independence? According to this view, argumentation is central to all science education (see Allchin & Zemplén, forthcoming).

Yet a conundrum emerges in pursuing this strategy. To fully assess an argument, you need the evidence. However, “cherry-picked” data or biased samples can be misleading. To know whether you have enough relevant information, you have to be an expert already. To interpret a statistical analysis, you first need to know if the appropriate statistical model was used. That requires expertise as well. To assess experimental results, you need to know if the methods were sound – for example, if all the appropriate controls were included. And that, too, requires an expert's background knowledge. A “simple” assessment of an argument seems to involve an extraordinary level of expertise. But, of course, that very deficit is why the nonexpert seeks an answer in the first place. That's the conundrum: can you assess an argument on its own merits without also possessing all the expertise needed to make it?

Indeed, the theme of expertise vs. argument reappears in the most prominent questions about the coronavirus. Consider a few examples – organized here in the form of an inquiry lesson, using history as data instead of students' own laboratory results, with questions to pose for student discussion.

Dilemmas in Interpreting Scientific Claims

When the pandemic first threatened, shoppers soon emptied store shelves of hand sanitizer and face masks. No one needed much science to spur people's desire to protect themselves from possible harm. In that environment of fear, many websites (and a televangelist) offered products promising protection and cures. “VitalSilver,” a colloidal silver solution. Elderberry tincture. Boneset tea. Oregano oil. Antiviral essential oil aromatherapy. Frankincense. Were any of them effective? How would you know?

One did not have to wait long for an answer. None of these treatments was approved by the U.S. Food and Drug Administration (FDA), and the government quickly stepped in (Brewster, 2020). In Britain, the market was flooded with face masks of purported N95 protective status. They were labeled with known brand names, logos, and certifications. However, many were counterfeits (Daragahi, 2020). In India, Rwanda, Kenya, and elsewhere, people were caught selling fake hand sanitizer. All false claims. And all outright fraud. Easily exposed, perhaps. But these “simple” cases indicate that science con-artists and bogus claims are everywhere (Sacred Bovines, Nov., 2012; Oct., 2018). In assessing scientific claims, honesty matters as much as the content of the argument.

From a perspective of scientific literacy, fraud is not so easily dismissed as one might imagine. It poses a critical epistemic problem. Namely: How do you detect fraud?

Fraud does not announce itself. It is not part of the argument. One needs to attend to the context, not just the content of the claim. Who is the speaker? Why are they making the claim? Is there a conflict of interest? Attention must shift from directly assessing what is claimed to analyzing who makes the claim, and why. That involves evidence, too, but of a very different kind. It turns out that such contextual evidence is just as important for assessing other scientific claims.

Another element of context revealed by cases of fraud is the psychological status of the recipient. Why – or when – do we trust others? How does trust in reliable information differ from other forms of trust – trust in moral guidance, say, or personal loyalty?

Many factors contribute to our sense of trust. We tend to believe those who speak with confidence and self-assurance (whether what they say is ultimately true or not). Emotions matter, too. Fear, or a desire to believe in a certain outcome, can distort the judgment of otherwise reasonable people. In addition, we tend to trust friends and allies. Or those who share our beliefs, our sense of identity, or a common enemy. We empathize with those who suffer innocently. Our wariness is quieted and our confidence lifted by assurances of credibility (even if they are lies), by appearance, by familiar contexts, or by the appearance of a consensus. All these increase our susceptibility to fraud and science con-artists (Sacred Bovines, Nov., 2012; Oct., 2018).

Science advocates often remark how science is founded on skepticism – that is, viewing claims by others with a measure of doubt. But perhaps, given the emotions just noted, we should equally focus the skeptical attitude on ourselves. Our own psychological vulnerabilities may strongly shape what “arguments” or “evidence” we accept as adequate. We should examine our own motives critically.

Cases of fraud, even if infrequent, thus provide an important lesson. The social and psychological context of any scientific claim is critically important to consider, especially in a social setting.

Early in the history of COVID-19, it seemed that there might be no need to worry about a global pandemic at all. Many civic leaders characterized the virus as nothing more serious than the seasonal flu, and they assured the public that things were “totally under control.” Some called it a hoax. In mid-February, after dozens of cases had been detected in the U.S., the president told governors, “I think it's going to work out fine. I think when we get into April, in the warmer weather, that has a very negative effect on that, and that type of a virus.” In late February, he commented that of 15 cases, “within a couple of days is going to be down to close to zero.” In Brazil, even as late as early May (when most states had already been under lockdown for weeks), the president portrayed media claims about the threat as irresponsibly overstated. Typically, we expect government officials to monitor and heed scientific advice. Was it prudent to believe their claims here? In retrospect, we can clearly see the answer. The messages minimizing the risks were tragically misguided. The pandemic did indeed become very serious. But could anyone really have known that in advance? From a historically situated perspective, what would have been an appropriate basis for belief? Expertise or argument?

In retrospect, we can see that trust in the experts was warranted. Assess the people, not the arguments or whatever evidence you are given. Experts, even with very simple data about transmission and travel, can build models to anticipate how a disease will likely spread. So, as early as the first week of January, 2020, the World Health Organization (WHO) voiced alarm, and by the end of the month it had formally declared a “public health emergency of international concern.” (That deceptively modest phrase was the technical label for a truly significant threat, such as the episodes of Ebola, SARS, and MERS in recent years, each with hundreds of deaths.) But many elected public officials seemed to disagree. Whose view should one accept? Who is really qualified to know? What is expertise?

Of course, as noted above, we may be strongly inclined to believe those who agree with us politically. But this does not qualify those individuals scientifically. Rather, the trust one needs is epistemic. One needs people who understand disease transmission and epidemiology. In this case, experts from the U.S. Centers for Disease Control and Prevention (CDC), along with other recognized authorities from around the world, concurred with WHO. Again, as the history now confirms, they were the trustworthy voices. The moral? Expertise matters (Oreskes, 2019).

Consider, then, new model projections announced on March 22, indicating that deaths in Italy were peaking and that coronavirus cases in the U.S. would dwindle much sooner than most health experts had forecast (Guzman, 2020). Promising news! Were the claims credible? On what basis?

In this case, they came from Michael Levitt, a Nobel laureate at Stanford University. His credentials certainly seemed to reflect expertise. Weeks later, however, deaths in Italy were still climbing, and cases in the U.S. showed no signs of abating, even as governors extended sheltering policies well into May. Unfortunately, perhaps, Levitt did not have the relevant expertise. His Nobel Prize was in chemistry. He was recognized for modeling the molecular structure of proteins and nucleic acids, not for modeling pandemics. Levitt's original comments to the Los Angeles Times revealed, perhaps, a telltale bias: “What we need is to control the panic.” In the grand scheme, he urged, “we're going to be fine.” Levitt's models were never formally published, nor endorsed by health experts. One needs not just any expertise, but the relevant expertise (Oreskes, 2019).

What about cures? One rural doctor reported remarkable results using a mixture of hydroxychloroquine (or HCQ, an antimalarial drug), azithromycin (an antibiotic), and zinc sulfate (Roose & Rosenberg, 2020). It was an inspirational story of scientific discovery: a modest local physician, coping with a sudden barrage of COVID-19 cases, tries an obscurely reported cure and finds that it seems to work miraculously on all his patients. That was the narrative that attracted the attention of the U.S. president, who on March 20 touted it during a nationally televised news briefing. While acknowledging the drug's unproven status, he said, “I feel good about it. And we're going to see. You're going to see soon enough.” Anthony Fauci, one of the nation's top officials on infectious diseases (with decades of experience), then observed that the evidence for the cure was “anecdotal” at best, and cautioned against unwarranted hope or action. Several news commentators took Fauci to task for challenging the president's authority. Despite Fauci's remarks, the president continued to repeat his claims for the next few weeks, portraying HCQ as potentially “one of the biggest game changers in the history of medicine” (Crowley et al., 2020; Reuters, 2020). Meanwhile, other medical researchers echoed Fauci's skeptical posture. Again: What should one believe? Miracle cure or tantalizing hype? Argument or expertise?

One might imagine, as a purported ideal, that an ambitious citizen would investigate and assess all the evidence on her own. This assumes, of course, that such a person could interpret all the subtleties of clinical trials. But that level of medical expertise and background is beyond even most well-educated consumers. For example, if you found one study reported in a French journal, would you appreciate its limitations, based on a meager sample size of 20? Also, there was no control group to ensure that any observed effect was due to the drug rather than to the normal course of the disease in those patients (Sacred Bovines, May, 2020). Soon, other doctors in Paris could not replicate the results. Indeed, concerns surfaced about how this paper was reviewed and whether it met customary standards for publication (Retraction Watch, 2020). What about the New York doctor's study? Well, he reported his results in a video addressed directly to the president, later posted on YouTube. There were no formal records documenting the course of treatment. And again, no controls. Without systematic evidence, can one justify bold claims? Still, several clinical studies were promptly begun to address the question. In early April, however, a study with 81 patients in Manaus, Brazil, was halted when fatal heart complications developed among many patients (Thomas & Sheikh, 2020). Although the FDA had approved HCQ to treat other conditions, it proved unsafe at the doses recommended for COVID-19. Meanwhile, the career professionals at the CDC removed comments about the prospective use of the drug from its website, restating its earlier position that “there are no drugs or other therapeutics approved by the US Food and Drug Administration to prevent or treat COVID-19” (Reuters, 2020). “Oversold false hope” seems to be the historical judgment on hydroxychloroquine.

Of course, most ordinary people would not have access to the resources, nor devote the time, to analyze HCQ so thoroughly. Nor would such effort really be needed. As in the cases above, a claim's credibility is most directly and effectively established by expertise. Yet despite their professional status, we should not have trusted those two doctors. Why not?

Reliable medical knowledge is not established by one person or a few, or by a few loose studies. Experts must also agree. The local doctor may have been an expert on practicing medicine. But he proved not to be an expert in medical research. The French doctor who led the now disputed study seems to have largely retreated from professional discourse, promoting his cause on YouTube instead (Sayare, 2020). The consensus of experts is essential (Oreskes, 2019). In this case, the collective of informed, qualified experts never endorsed HCQ for COVID-19. As the history now bears out, that expert consensus was a sufficient basis for belief.

Other contested claims about COVID-19 have entered public discourse. For example, the question arose whether one should wear a face mask in public. Some said it was a necessary precaution to limit the rate of spreading the disease. At the same time, a national leader said, “You can do it. You don't have to do it. I am choosing not to do it. … It's only a recommendation, it's voluntary.” So, how important was it? For another example, everyone understood from the beginning the ultimate importance of a vaccine, but when would it be available? In February 2020, some politicians promised that it would be available “soon”: “we're very close.” Others, citing historical experience, estimated 12–18 months. A big difference for expectations and planning. Finally, given the enormous economic hardship, when would it be safe for everyone to go back to work, to reopen restaurants, to resume school, sporting events, and concerts? Should we expect a “second wave”? Thinking historically, what would have been trustworthy criteria at the time to distinguish reliable from unreliable claims? Expertise or argument?

All these cases have generally been answered in the long run by what the consensus of relevant experts indicated originally. For example, a month after dismissing the need for face masks, the White House was requiring them for all its workers (Associated Press, 2020a). Expertise proved the reliable benchmark. Not confident claims or attractive arguments. Not selective “evidence” framed by wishful thinking. Not emotional appeals. (To consider ongoing cases of “fake news” about COVID-19, see weekly summaries and fact-checking by the Associated Press [2020b].)

Neglect of expertise has exacted a sometimes tragic toll. A pastor who publicly disparaged the idea of social distancing, as recommended by experts, later died of COVID-19. College students who congregated on beaches during spring break, like Mardi Gras revelers a few weeks earlier, returned to their homes – heedless of what experts were saying – to infect family, friends, and neighbors. A university in Virginia that welcomed students back from spring break soon developed a growing number of cases, which then spread into the surrounding community. Retrospect can offer painful, sobering lessons.

Expert Consensus & Conspiracy

Society's debt to experts is never more evident than during occasions like the COVID-19 pandemic, when science helps guide decisions that affect the lives of millions. Yet, ironically, crises are also when the rejection of expertise proliferates. Conspiracy theories emerge and thrive (Alba, 2020; Associated Press, 2020b; Fisher, 2020; van Prooijen, 2018; Sorkin, 2020). People become confident in their own ideas, which eclipse those of available experts. Individuals find solace among others who share – and thus seem to validate – their ill-founded ideas. How do these cognitive tendencies further shape the lessons on trusting experts?

For example, consider the claim that COVID-19 was caused by the new 5G telecommunication network (van Prooijen, 2020; Sorkin, 2020). The 5G technology was being introduced to China just as the pandemic erupted in Wuhan. Maps elsewhere showed a correlation between confirmed coronavirus cases and recent installations of 5G wireless service. Perhaps that was the real cause? If one searches the internet, one will easily find earlier claims that cell phones cause brain cancer (the basis for an Italian Supreme Court decision) and that high-voltage power lines cause leukemia. Confirmation? But could anything explain the connection? Well, 5G could be causing the disease directly (but political leaders, eager to protect corporate profits and avoid liability, falsely blame a virus). Or 5G might be compromising immune systems, making an ordinary virus more deadly. So: plausible explanation, confirmatory evidence, agreement with other theories – all standard principles of scientific justification. Are these criteria sufficient for believing the 5G theory of COVID-19? Why or why not? How should one decide this case?

There are two alternative ways to unravel this puzzle. The first follows the principle (again, this month's Sacred Bovine) that we each apply scientific reasoning on our own. So, someone who is well schooled in science will recall that correlation is not causation. Even if the map relationship did exist, it did not indicate that the 5G networks were the culprit. Densely populated areas, where cell towers are more frequent, are also precisely where one would naturally expect an infectious disease to spread. The maps were not proof. A shard of evidence is not a complete body of evidence (Sacred Bovines, Sept., 2012). Nor does merely fitting with other cases provide validation. If one checks those earlier cases about radiation and disease – if one invests the effort to dig deeper – one will find them all discredited. Finally, plausibility is not proof. Our minds are predisposed to “connect the dots.” But sometimes those patterns are illusory. The initial perceptions are speculations only. The pattern may not hold consistently. One must test the idea systematically, especially with unlikely instances. Overall, then, fragments of scientific thinking are not scientific thinking. Completeness is essential. When all the checks were done, the 5G theory failed. That was the consensus of the scientific experts.

But this was not always the conclusion of individuals. Ironically, the go-it-alone method is also highly susceptible to emotion, ideology, prior perspectives, and social dynamics. First, when one actively seeks other, similar cases, standards of evidence tend to be eased. Confirmation bias yields distorted conclusions that still appear “objective” (Sacred Bovines, Aug., 2010). Second, under emotional distress, plausibility can seem adequate for proof. Any explanation, validated or not, can generate the comforting feeling of “understanding.” Emotion trumps the need for evidence. Similarly, when one is disposed to a preferred conclusion – perhaps someone fears technology or has had unpleasant experiences with the telecommunications industry – correlation may feel sufficient to demonstrate a purported link. Ultimately, without the appropriate motivations for rigor, one does not bother checking for errors. That is why experts, who cross-check each other in a critical community, are so important. The ideal of the individual, independent scientific reasoner is quite fragile in practice (Oreskes, 2019; Zimring, 2019).

The second pathway for the typical citizen to assess the 5G theory is ultimately easier and sounder. As noted in the cases above: trust the consensus of relevant experts. Let others with more skill and greater scope of knowledge do all the hard work for you. In this case, finding recognized experts, even in the form of a reputable fact-checking organization, was not difficult. Yet many people, including Hollywood celebrities and talk show hosts, believed and endorsed the erroneous 5G theory. In England and other European countries, dozens of cell towers were vandalized. Why? What might be some possible reasons that shaped the thinking of those who rejected the science?

Trust in science, at this point, seems to depend on other deep emotions (van Prooijen, 2018). Psychologically, everyone wants to feel master of their own destiny. A sense of agency, security, or control. It is a powerful motivator. By comparison, deferring to expertise is not always easy. It requires a degree of humility. The situation may require one to yield one's heartfelt desires to someone else's intellectual authority. Accordingly, some people may entertain an erroneous theory that calms their sense of fear. As noted above, uncertainty increases emotional susceptibility to fraud or persuasive falsehoods. One may embrace ideas that, while misleading, may nonetheless provide a feeling of autonomy, liberty, or stability. The COVID-19 pandemic, with its emotional chaos, seems to have helped fuel many unscientific beliefs.

When alternative ideas are not endorsed by experts, individuals may seek validation elsewhere. They may turn to others who share their emotional uneasiness or worldview. Mutual validation will create its own sense of authority. Of course, when one consults only like-minded individuals, the apparent consensus is a false consensus. Still, agreement can establish a powerful social bond. It produces a positive sense of group acceptance, of “we” or identity. Shared endorsement within a social network can thereby psychologically displace the authority of qualified experts. The networks thrive by rejecting science. Unfounded theories can thereby gain traction and substitute for the available scientific expertise. Namely, based on emotions of autonomy, sociality, and political allegiance, trust can shift, and erroneous claims can flourish.

And so on for other theories. During a pandemic – just when respect for expertise is most needed – false theories can circulate widely and seem attractive (Lynas, 2020; Sorkin, 2020). Did American soldiers release the virus in Wuhan, as the Chinese alleged? Did China deliberately make a bioweapon, as some American politicians contended? Was the whole scare a deliberate plan (or a partisan hoax) to influence elections (Alba, 2020)? Could drinking tonic water or diluted bleach cure the disease? All these false claims may have seemed attractive from some perspective. What ways of thinking would help you escape being persuaded by these false theories? What strategies would you use to dissuade someone who found these theories compelling?

A major goal in education has typically been intellectual independence for all. Namely, if everyone can evaluate arguments and evidence themselves, then shouldn't science triumph? But, ironically, such an attitude fosters an illusion of competence that displaces deference to expertise. Specialized knowledge is hard to come by. Culturally, we distribute the intellectual work among multiple fields of expertise. Ultimately, we depend on experts, whether scientists or doctors or lawyers or plumbers or electricians or tax accountants. That means that the educators' hopes for developing intellectual autonomy may be misplaced (Norris, 1995, 1997). Science students may need to learn, instead, to exercise intellectual humility and to recognize when to respect expert consensus.

Beyond Pandemics

Does brooding about recent history and possible errors in judgment truly matter? Once decisions are past, isn't this all just finger-pointing? Another exercise in the shameful politics of blame? Is it any more worthwhile than the empty rhetoric of “Monday-morning quarterbacking”?

We need only consider other significant cases where science has been (and still is) sometimes rejected: vaccination safety, climate change, and human evolution, among others. For example, climate change is not, as naysayers have tried to convince the public, a hoax, a scam, or a fraud (Sacred Bovines, April, 2015). The science is as sound as that used to envision the global coronavirus pandemic long before it actually emerged. Maybe the role of modeling seems clearer now? If the COVID-19 pandemic teaches us anything, it might be that we can no longer afford the luxury of disregarding the experts. And this newfound trust might begin with the science of climate change, whose consequences, if not addressed, will likely be even more devastating than COVID-19.

Argument or expertise? The verdict of recent COVID-19 history seems to be this: Rely on the experts, not one's own fragile personal assessment of the evidence. Of course, that leaves open the potentially problematic question of who is a scientist and who is an expert. And how do you know that? Yet another conundrum (one discussed in Allchin, 2012, 2020).

References

Alba, D. (2020). Virus conspiracists elevate a new champion. New York Times, May 9.
Allchin, D. (2012). Skepticism and the architecture of trust. American Biology Teacher, 74, 358–362.
Allchin, D. (2020). The credibility game. American Biology Teacher, forthcoming.
Allchin, D. & Zemplén, G. (forthcoming). Finding the place of argumentation in science education. Science Education.
Associated Press (2020a). Expertise proved the reliable benchmark. New York Times, May 11. https://www.nytimes.com/aponline/2020/05/11/us/politics/ap-us-virus-outbreak-trump.html.
Associated Press (2020b). Not Real News, April 10: A week of false news around the coronavirus. https://apnews.com/NotRealNews.
Brewster, T. (2020). Coronavirus ‘cure’ claims get FTC warning, so maybe don't drink silver. Forbes, March 9. https://www.forbes.com/sites/thomasbrewster/2020/03/09/teas-essential-oils-and-drinking-silver-ftc-warns-about-dubious-coronavirus-cures/#792d7be31cba.
Crowley, M., Thomas, K. & Haberman, M. (2020). Trump again pushes drug, never mind expert opinion. New York Times, April 5, A17.
Daragahi, B. (2020). ‘Total disregard for people's lives’: hundreds of thousands of fake masks flooding markets as coronavirus depletes world supplies. Independent, March 25. https://www.independent.co.uk/news/health/coronavirus-face-mask-fake-turkey-medical-supply-shortage-covid-19-a9423426.html.
Fisher, M. (2020). Why coronavirus conspiracy theories flourish. And why it matters. New York Times, April 8.
Guzman, J. (2020). Nobel laureate predicts US will have much faster coronavirus recovery than expected. The Hill, March 25. https://thehill.com/changing-america/well-being/prevention-cures/489415-nobel-laureate-predicts-us-will-experience.
Lynas, M. (2020). COVID: Top 10 current conspiracy theories. Ithaca, NY: Cornell Alliance for Science. https://allianceforscience.cornell.edu/blog/2020/04/covid-top-10-current-conspiracy-theories.
Norris, S.P. (1995). Learning to live with scientific expertise: toward a theory of intellectual communalism for guiding science teaching. Science Education, 79, 201–217.
Norris, S.P. (1997). Intellectual independence for nonscientists and other content-transcendent goals of science education. Science Education, 81, 239–258.
Oreskes, N. (2019). Why Trust Science? Princeton, NJ: Princeton University Press.
van Prooijen, J.-W. (2018). The Psychology of Conspiracy Theories. London: Routledge.
van Prooijen, J.-W. (2020). COVID-19, conspiracy theories, and 5G networks. Psychology Today, April 10. https://www.psychologytoday.com/us/blog/morality-and-suspicion/202004/covid-19-conspiracy-theories-and-5g-networks.
Retraction Watch (2020). Elsevier investigating hydroxychloroquine-COVID-19 paper. https://retractionwatch.com/2020/04/12/elsevier-investigating-hydroxychloroquine-covid-19-paper/.
Reuters (2020). CDC removes unusual guidance to doctors about drug favored by Trump. New York Times, April 7. https://www.nytimes.com/reuters/2020/04/07/us/07reuters-health-coronavirus-usa-cdcguidance.html.
Roose, K. & Rosenberg, M. (2020). Touting cure brings ‘simple country doctor’ cheers, and doubts. New York Times, April 3, A1.
Sayare, S. (2020). He was a science star. Then he promoted a questionable cure for Covid-19. New York Times, May 12. https://www.nytimes.com/2020/05/12/magazine/didier-raoult-hydroxychloroquine.html.
Sorkin, A.D. (2020). The dangerous coronavirus conspiracy theories targeting 5G technology, Bill Gates, and a world of fear. The New Yorker, April 24. https://www.newyorker.com/news/daily-comment/the-dangerous-coronavirus-conspiracy-theories-targeting-5g-technology-bill-gates-and-a-world-of-fear.
Thomas, K. & Sheikh, K. (2020). Small chloroquine study halted over risk of fatal heart complications. New York Times, April 12. https://nyti.ms/2RyHKiL.
Zimring, J.C. (2019). What Science Is and How It Really Works. New York, NY: Cambridge University Press.