“Why…in so vast an Ocean of Books by which the minds of studious men are troubled and fatigued, through which very foolish productions the world and unreasoning men are intoxicated, and puffed up…why should I, I say, add aught further to this so-perturbed republick of letters…”1 As he prepared to avail himself of the printing press, William Gilbert, author of a landmark book on magnetism in 1600, gave voice to the frustrations of many natural philosophers of his era. The press was among the greatest technological achievements in Europe’s history, but despite its promise, Gilbert felt not enthusiasm but a decided sense of dread. His reluctance was echoed by some of the most influential natural philosophers of the day—Robert Boyle, Isaac Newton, Christiaan Huygens, René Descartes—all of whom expressed disdain at the prospect of utilizing the printing press for their works. But theirs is not the story we often tell about print or about science. It is, however, a story we should very much keep in mind as we navigate the new technologies related to AI.

Around the year 1455, in the town of Mainz, Germany, a printing press using moveable type made its first impressions. Within fifty years, one thousand presses operated in towns across Europe, producing over eight million volumes, a staggering increase from the manuscript era. By the end of the sixteenth century, that number reached 150 million volumes. It was a technological triumph, one historians have lauded as a catalyst for such massive social shifts as the Protestant Reformation and Scientific Revolution. But reception of this new technology was hardly straightforward. Contrary to what we might expect, the scientific community had deep misgivings about the press and often viewed its products—broadsides, pamphlets, and books—with suspicion. Central to their concern were issues of control, which manifested in several ways. Printed materials were available to a much wider swath of society, and the guardrails that had historically kept knowledge production in the hands of a few were compromised. As information proliferated, consensus around knowledge production—what was true and how that truth was affirmed—was disrupted.

If that were not alarming enough, and many felt the democratization of knowledge production was indeed alarming, there was the matter of print technology introducing errors. Works issued from a print shop were quite frequently adulterated versions of the author’s original manuscript, mistakes having been reproduced by the hundreds or thousands if the pressmen were careless. Mathematician and astronomer Regiomontanus voiced concern in 1494 about “infecting posterity with erroneous copies of books,” a liability that threatened the entire scientific enterprise.2 Compounding this was the proliferation of printed materials that, in the opinion of many natural philosophers, made it nearly impossible for a scholar to sift through and find works of value. In 1638, mathematician John Pell proposed a system whereby “men might be informed, in that multitude of books, with which the world is now pestered, what the names are of those books that tend to this study only.”3 Pell’s taxonomy notwithstanding, natural philosophers bemoaned the undiscriminating nature of the press and called into question the information it produced, as if the technology itself issued reams of uneven scholarship.

It is in this perception of technological agency that the question became less one of material culture and more one of epistemology. The printing press was not simply a tool but a vehicle for knowledge production that directly reflected the historical moment in which it operated. And as historians of print have shown, products of the press were not intrinsically trustworthy. When they were in fact trusted, it was only the result of hard work on the part of natural philosophers to reassure readers that the content was valid. There was no epistemic authority that attached to printed work a priori; that status had to be earned. Like all technologies, the press reflected the capacities and fallibilities of the culture that produced it.

These reflections on print are interesting in their own right, but we should give them particular attention today in light of advances in AI technology. Surveys on AI’s use across scientific disciplines reflect a high level of anxiety about the technology. Iris van Rooij, a computational cognitive scientist from Nijmegen, has asked why academics would even engage with AI, which could lead researchers to “lose their ability to articulate their own thoughts.”4 Even if most scientists do not go as far as van Rooij, they remain uneasy about the potential for this new technology to upend their fields. According to a 2023 survey in Nature, almost 70 percent of scientists are worried about the proliferation of misleading information produced by AI.5 Nearly the same number worry about the introduction of errors that can then yield false—and therefore unreproducible—findings. Such worries led the journal Science to update its editorial policies in early 2023, stipulating that figures, images, or graphics created by AI are prohibited. More critically, the new policy specifies that artificial intelligence tools cannot be authors.

The question of authorship lies at the center of AI debates because of the deeper issues—the epistemic issues—that the technology raises. We trust certain authors and the replicability of their work; we are not so ready to invest that faith in machine learning. Scientists worry that uninformed use of AI will yield mistakes and, more problematically, unreproducible findings, not to mention content that cannot be directly tied to a clear source. Journals increasingly need editors who know both the science and machine learning in order to effectively referee papers. The very definition of an author is being called into question, and consequently, so is the notion of trust.

All of these concerns are uncannily similar to the dilemmas early modern natural philosophers faced: worry over the novelty of the technology, distrust of the errors it generated, lack of consensus among users as to how the technology should be leveraged, and significant tension among knowledge producers, reproducers, and consumers. For early modern natural philosophers, the authority they held prior to print had to be reconstructed through the printed artifact. For today’s scientists, the authority of AI-generated knowledge is similarly in question. Both then and now, scientists confronted a new layer of knowledge production that required verification to ensure epistemic authority. Only when scientists negotiate new mechanisms of control, and reach some consensus on how authority is ascribed to AI-generated content, will anxieties around the technology subside.

As it stands, when ChatGPT was asked—for the purposes of this essay—whether epistemic trust in it should exist a priori, it responded with a ten-step guide to developing faith in AI as a producer of knowledge. One of these steps involved accepting AI’s limitations, an admonition one would have heard in almost any early modern print shop. But most pointedly, ChatGPT suggested that trust in it can grow only over time, as users accrue confidence in its reliability. It is the echo of a refrain from centuries ago, when, on the cusp of an explosive period in science’s history, scholars approached the press with justifiable hesitation. Only with time did they come to appreciate its unbridled potential.

1. William Gilbert, On the Magnet and Magnetic Bodies, and on That Great Magnet the Earth (London: Petrus Short, 1600), preface.

2. Adam Mosley, Bearing the Heavens: Tycho Brahe and the Astronomical Community of the Late Sixteenth Century (Cambridge: Cambridge University Press, 2007), 149.

3. John Pell, “The summe of what I have heretofore written or spoken to you…” (London, 1638), preface.

4. Chris Stokel-Walker and Richard Van Noorden, “What ChatGPT and Generative AI Mean for Science,” Nature 614 (February 2023): 214–16, https://doi.org/10.1038/d41586-023-00340-6.

5. Richard Van Noorden and Jeffrey M. Perkel, “AI and Science: What 1,600 Researchers Think,” Nature 621 (September 2023): 672, https://doi.org/10.1038/d41586-023-02980-0.