Since the beginning of this age of “generative AI,” members of fields and disciplines ranging across commerce, food service, medicine, and education have sought to find at least one use case that fits what these generative pretrained transformers (GPTs) and large language models (LLMs) were supposed to be able to do. Within the realms of higher education and academic publishing, in particular, scholars have worked to understand the implications of this rapidly spreading technology and to keep up with its proliferation, often with less than desirable results. Some have begun implementing GPT-based plagiarism checkers, others have told students to use GPTs to complete their assignments, and still others have sought to train students to become “prompt engineers.” At the same time, some administrators dream of suites of GPT-integrated tools that might allow them to connect knowledge of a student’s academic and social lives with indicators of their mental and physical well-being.

Look at most universities today and you will find at least three, often conflicting, stances on what students should be allowed or expected to do, and in what context. What you are less likely to find is any kind of strategic communication about what students should consider “okay” and “not okay” across the board, a gap that stems from the lack of robust discussion among faculty and administrators regarding what these technologies can and cannot do, let alone what they might mean for academia. And the unfortunate fact of the matter is that, while both students and established scholars are in danger of ending up on the wrong side of ill-defined rules, norms, and expectations, the seeming eagerness of many to hand over knowledge-certifying authority to GPT-based systems will lead, and in fact has led, to a great deal more harm for marginalized individuals and groups, as well as damage to the production of knowledge itself.

As I noted in my article “Bias Optimizers,” researchers have demonstrated that GPT plagiarism checkers tend to misidentify text written by neurodivergent individuals.1 Since then, we have seen increasing evidence of prejudicial bias within these systems, which are intended to use pretrained transformer models to identify whether a human student has used a pretrained transformer model to complete their work. Research findings also show that those systems tend to generate far more false positives on writing by non-native English speakers.2 From there, begin to consider the differences in how expectations and perceptions of gender and race can affect people’s communication styles—and how these lived experiences are the least likely to be properly accounted for in either the training data or the weighting architectures of GPTs and other “AI” tools.3

To highlight the importance of these intersections, consider an incident that scholar Rua M. Williams discussed on social media. Williams recounted how the quality of the writing and scholarship of a non-native English-speaking student with whom they were working suddenly and catastrophically diminished. When Williams reached out to the student out of deep concern, it turned out the student had recently begun using ChatGPT to rewrite their work to “sound more white.”4

Neurodivergent individuals, non-native English speakers, women, and racially minoritized individuals tend to “mask” or seek to “pass” in their communications, whether spoken or written. From the outside, this process often appears as if someone is overthinking or being overly formal or precious—even “stilted” or “robotic”—with their word choices. In written form, these word choices and patterns of phrasing get flagged as “anomalous” by GPT checkers and are thus assessed as more likely not to be “human.” And they are flagged not only by the automated systems but by other humans as well.5 The way this comes to pass is honestly fairly simple. We have all been sold GPTs and other LLM-based systems as though they provide factual, truthful answers, rather than doing the statistical language generator equivalent of what philosopher Harry Frankfurt termed “bullshitting.”6 This happens because technologies that rely on code and math garner veneers of “objectivity,” which in turn gives people a way to wash their hands of any responsibility for what those very human technologies do; the repeated habit of and inclination toward this process is what Meredith Broussard terms “technochauvinism.”7 This happens not only with GPTs but also with “AI,” algorithms, and code more broadly.
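To make the mechanism concrete, here is a minimal sketch of how a perplexity-based checker of this kind operates, written in Python against the openly available GPT-2 model from the Hugging Face Transformers library. The model choice, the threshold value, and the function names are assumptions made for illustration, not any vendor’s actual implementation; commercial checkers layer further signals on top of this basic logic.

# A minimal sketch of a perplexity-based "GPT detector" (illustrative only).
# Assumptions: GPT-2 as the scoring model and an arbitrary threshold; real
# detectors are proprietary and combine perplexity with other signals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text`; lower means more predictable."""
    encoded = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # The model's loss is the average negative log-likelihood per token.
        loss = model(encoded.input_ids, labels=encoded.input_ids).loss
    return float(torch.exp(loss))

THRESHOLD = 60.0  # hypothetical cutoff chosen for illustration, not a published value

def flagged_as_machine_written(text: str) -> bool:
    # Low perplexity (highly predictable phrasing) is treated as evidence of
    # machine authorship -- the same register that masked, careful, or
    # deliberately "standardized" human writing tends to occupy.
    return perplexity(text) < THRESHOLD

The study of non-native English writers cited above traces the measured disparity to essentially this property: prose written in a constrained, careful register scores as highly predictable, and detectors built on this logic therefore flag it disproportionately as machine-generated.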

Put plainly, we really want the shiny, mathy, “objective” answer, and we are very inclined to believe that technology will give it to us. People think GPTs tell the truth, and so people think GPT-based checkers tell the truth, and so people think they can trust and learn from GPT checkers how to tell when something is written by a GPT rather than a human being. And not one of these thoughts that people think is correct. But the academy’s and the general public’s willingness to trust computers and “AI” over other humans, even and especially when computers and “AI” are making decisions about humans, is currently hurting, and will continue to hurt, a lot of already marginalized people—unless we work hard to fix it. In case you find yourself willing to downplay this, filing it under the header of “Well, it’s a small portion of scholars or potential experts; what’s the real harm?,” let me assure you that these harms will eventually be borne by all of us; the marginalized and minoritized are just the ones who experience them first.

Thus, it is in atmospheres and tendencies such as these, and in incidents like the one described and experienced by Rua Williams, that we can observe the contributing factors that have led an increasing number of scholars to use GPT tools to translate, revise, or wholly write the articles they submit to peer review. Scholars under pressure to publish, perform, or otherwise live up to the perceived expectations of their discipline are willing, at the very least, to run their hard-won knowledge through systems that do not and, as currently built, cannot truly understand what they are being asked to process. The sociotechnical paradigms surrounding these new technologies are often situated in a hypothetical space in which the technology is perceived as neutral. At “best,” this results in scholars trying to “get out ahead” of the technology, seeking ways to integrate it into their work or field of research, even where it does not fit.8 At worst, this high-pressure adversarial relationship may lead scholars to use this technology to write papers or generate “research” out of whole cloth.

These technological systems might provide some benefit at some point—the ability to help researchers and writers hypothesize based on carefully curated datasets, for instance. But the current massive power and water demands of these tools, as well as the unethical provenance of most of their training data, should—at the very least—give us serious pause. Nor is this to deny that, while some scholars are seeking tools to deal with their justifiable stress and panic in the face of ever-increasing professional demands and shifting sociotechnical pressures, others are certainly searching for the most expedient, low-effort way through their classes and workload. While the unequally borne pressures of the academy and the existence of scammers, grifters, and epistemic hucksters are hardly breaking news that arose only in the age of “generative AI,” the materialities of this age do change the scope and proliferation of these problems in ways that will require the academy to rethink its peer review processes and expectations.

At base, underneath all of the considerations and concerns about whether and how academic professionals should use “AI” are the fundamental questions of all scholarship: What skills and knowledge do we seek to cultivate in and through our research? What do we believe actually ought to be the purpose of our work and contributions? And how do we help ourselves and each other to remember that these are different questions, the answers to which have very different but importantly connected implications?

If we as scholars can truly internalize that a major part of our job is about helping the public, the academy, and ourselves understand that we have real and meaningful stakes in scholarship, beyond publication, promotion, and tenure, then we can remember to remain critical and careful about the origins, capabilities, and failures of “AI” and any other new technology that comes our way. If we cannot, then it will not matter what new technologies we engage with in our research or our teaching, because apparently the only thing that will concern us is publication count itself—not what it is supposed to represent.

1. Damien Patrick Williams, “Bias Optimizers,” American Scientist 111, no. 4 (2023): 204–7, https://doi.org/10.1511/2023.111.4.204

2. Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu, and James Zou, “GPT Detectors Are Biased against Non-native English Writers,” Patterns 4, no. 7 (2023), https://doi.org/10.1016/j.patter.2023.100779

3. As I note elsewhere, I place “AI” in scare quotes for two reasons: first, because it is an evocative symbol without a concrete referent, allowing it to hide more than it reveals about whatever we label with it (cf. Emily Tucker, “Artifice and Intelligence,” Center on Privacy & Technology at Georgetown Law Blog, Medium, March 8, 2022); second, because I want to specifically, intentionally, and critically trouble both that obfuscation and our assumptions around the notions of “artificiality” and “intelligence.”

6. Damien Patrick Williams, “On Bullshit Engines: The Socioethical and Epistemic Status of GPTs and Other ‘AI,’” Forum in Ethics, Law, and Society Lecture Series, hosted by the Sonoma State University Philosophy Department’s Center for Ethics, Law and Society, October 16, 2023, http://youtu.be/9DpM_TXq2ws

7. Meredith Broussard, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (Cambridge, MA: MIT Press, 2023), https://doi.org/10.7551/mitpress/14234.001.0001

8. Emanuel Maiberg, “Scientific Journals Are Publishing Papers with AI-Generated Text,” 404 Media, March 18, 2024, www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text