September 2024

In November 2022, the first version of ChatGPT was released. In less than two years, large language models (LLMs) have become nearly ubiquitous online and in academia. Our students experiment with various LLMs to generate responses to essay prompts, check for grammar and spelling errors, and summarize long reading assignments. Our colleagues use AI to condense meeting minutes, write emails, or standardize footnotes. Our institutions grapple with possible responses to the use of AI and what it means for higher education. Our journals offer rapidly shifting guidelines about AI use in submissions. Some of us wring our hands at the intrusion of yet another technology that can be abused or used to cheat the system; some of us delight in experimenting with each new iteration of ChatGPT; some of us waver, wonder, and wish for more time to assess and understand.

This ambivalence and ubiquity led us to solicit essays on AI and scholarly publishing for a November special section of “Essays and Reviews.” The colleagues who responded weigh in on what that relationship has been, is now, and might become. The essays in this special section reflect the nuances of the relationship between AI and scholarly publishing, offering a variety of perspectives and insights.

Alex Csiszar explores issues of authorship and attribution raised by LLMs understood as literature search technologies. Nicole Howard turns to the fraught beginnings of the printing press in early modern Europe and the ways that its history of authorship and trust may help us navigate the relationship between AI and authorial trust. Samuel Moore suggests that we work to slow the rapid development of unaccountable AI technologies by collectivizing knowledge production. Jennifer Robertson recommends a two-pronged approach to understanding AI and robotics: a longue durée perspective and rapid ethnographic fieldwork. Damien Williams dismantles the very notion of purported AI and GPT “objectivity,” exhorting us to acknowledge the biases inherent in these technologies, and suggests we use this moment to examine what we value most as scholars.

Finally, Joseph Martin, chair of the editorial board here at HSNS, writes about how the anxieties surrounding AI reveal certain assumptions about human intelligence more generally. As you will see in Martin’s essay, Historical Studies in the Natural Sciences does not yet have a stated policy on AI or LLMs; for now, we ask that any use of these tools be acknowledged, and the editorial board is actively discussing the need for a broader or more formal policy. We suspect that scholarly journals across disciplines are having similar conversations. Our hope is that the essays in this section inspire readers to reflect on their own experiences with AI and scholarly publishing, and that the historical, anthropological, and philosophical lenses our authors bring challenge and deepen our engagement with the issues these tools raise.