The internet is a realm of accelerated forgeries. The 1993 New Yorker cartoon “On the internet, nobody knows you’re a dog” remains as much a description as a promise. The computer interface is a screen in more than one way, concealing the fingers (or paws) that press the keys from the users who read what they write. In 1993, the assumption was that a body was present, human or humanoid. But more recently, another type of nonhuman entity has flooded online spaces. Bots, an abbreviation of robot, are software programs that complete automated tasks, and only sometimes do they pose as human users. According to the Imperva Bad Bot Report, bot traffic comprised an estimated 42.3 percent of all internet activity in 2021. This is a variation on what Harun Farocki called operative or phantom images, whereby, in reading automatically generated images, machines interact with other machines, circumventing the human altogether.1 By and large, bots speak among themselves, scraping credit card information, guessing passwords, flooding databases, and the like. With such bots, there is no pretense of a body, because there is no need.
The situation changes with those bots that directly interact with humans, namely chatbots. Outwardly, they appear in the manner of the New Yorker dog, through textual exchange. The more sophisticated the bot, the more convincing its human guise. Such bots engage the terms of the Turing Test, which asks whether an artificial intelligence can convincingly pose as human. The challenge of the chatbot is to demonstrate sufficient spontaneity and inventiveness to pull off the ruse. In the conventional Turing Test, this is accomplished verbally.
Online, verbal exchanges do not solely consist of text but regularly include pictorial and videographic elements. I broadly categorize these forms of communication as memes, which recombine and reuse phrases, images, and video for a variety of expressive functions. These “popular reworking[s] of a theme, format, action, or idea” function as units of cultural transmission.2 Memetic logic dominates the internet and shapes how users interact with each other, including with chatbots. While other definitions of memes emphasize their linguistic form, their cultural adaptability, and their accelerated mode of circulation, I mark the way that memes encode and rely on the body.3 Chatbots are a key to this argument, as they both use memes and are memes in and of themselves. Inasmuch as they arise from the memetic logic of internet sociality, they additionally invoke the body as an assumed, if unconscious, presence. Even if only a dog, the embodied user remains a stubborn trope, as I will demonstrate through readings of popular and experimental films, as well as recent case studies of AI-powered chatbots. However disembodied and immaterial memes may appear, I contend that the logic of memetic exchange, as expressed in the chatbot, necessarily implicates the body, often in highly gendered and racialized terms. Chatbots stretch the limits of what can be considered memes—if a chatbot can be a meme, then by the same racialized, gendered, and memetic logic, so too might human beings.
The first chatbot, ELIZA, created in 1964 at the MIT Artificial Intelligence Laboratory, was named for the lead character in George Bernard Shaw’s Pygmalion, which was later adapted into Anthony Asquith’s 1938 film version and George Cukor’s 1964 musical My Fair Lady. Like the idealized creation Galatea and the woman linguistically scrubbed of her class origins, the chatbot is invented and its voice refined. Of course, the chatbot, by definition, cannot have a body. Yet the body is never far off. In S1m0ne (2002), Andrew Niccol’s version of the Pygmalion myth, the virtual woman (“Simulation One,” shortened to “Simone”) is the perfect film actress. Unlike his previous diva, the computer program submits entirely to Victor Taransky’s (Al Pacino) control. His line deliveries, ventriloquized by her, win accolades. They are a winning duo, the genius director and his flawless performer.
Before long, however, Simone’s enormous success begins to overshadow Taransky’s. He becomes jealous, even though she remains entirely under his control. To restore his own superiority in the public eye, he attempts to undermine her by making her the director of what he thinks will be a surefire disaster, the art film I Am Pig. To his dismay, however, both the squelchy, carnal film and Simone’s unruly press junket interview, in which she chain-smokes, burps, and sits with her leg flung over a chair, are praised for being edgy and avant-garde. Simone’s popularity only increases. This is how hungry cinema is for the body.
On the internet, nobody knows you’re a dog, but there is still a presumption that the users with whom one interacts have a body. This turns on an anxiety of presence. Is another person, another body, really there? Two Scarlett Johansson movies explore the fantasy of a body latent within otherwise immaterial technologies. In Her (2013), a film inspired by director Spike Jonze’s exchanges with the ALICE chatbot, Johansson plays Samantha, a virtual assistant who has a romantic relationship with the lonely Theodore (Joaquin Phoenix). She appears solely as a voice, and one with a highly familiar rasp. Here Johansson’s fame and idealized femininity are highly significant, not only for Her but also, from the same year, for Jonathan Glazer’s Under the Skin, in which she plays a seductive alien huntress; for Ghost in the Shell (Rupert Sanders, 2017), in which she plays a sexy and existentially troubled human-cyborg hybrid; and for Luc Besson’s Lucy (2014), in which she plays a woman whose unlocked brain power is likened to a computer, discussed below. The throatiness of Johansson’s voice indexes her voluptuous physique (she is often compared to Marilyn Monroe) in a way that Samantha Morton’s, the actress originally cast in the role of Samantha, does not. The decision to replace Morton, who had already performed the role on set alongside Phoenix, with Johansson, who recorded her lines in a studio, is arguably key to Theodore’s attachment to Samantha. Her voice, described by Jonze as “young, smart, and soulful,” arouses in Theodore a feeling of physical proximity.4 He explains: “at night when we’re in bed and the lights are off, I feel cuddled.” Whatever the status of Samantha’s materiality, Theodore cannot but analogize it in bodily terms.
As Samantha and Theodore’s relationship progresses, they turn, perhaps inevitably, to the problem of sex, namely sex without a body. Their first encounter, conducted entirely through speech, satisfies them well enough. But Samantha soon begins to worry that Theodore is losing interest. Her solution is to find temporary residence in a “surrogate sexual partner,” a young woman named Isabella who arrives at Theodore’s apartment to be ventriloquized by Samantha. But the illusion of Samantha inhabiting Isabella’s body is shattered when Theodore sees Isabella’s lip quiver. “I’m sorry,” the too-human avatar wails, “I didn’t mean to project anything.”
Afterward Samantha and Theodore argue. Theodore asks why she sighs when she speaks. “It’s not like you need oxygen or anything,” he says irritably. His bitter aside draws attention to the quality of Samantha’s voice, the breathiness that is unmistakably Johansson’s, which is to say, emanating from Johansson’s body. Theodore is frustrated at the promise of a body that is embedded in the voice itself: if she can sigh, then how can she not have a body? As in Michel Chion’s theory of the acousmêtre, this disembodied voice in cinema carries with it the expectation that it will be unified with a body.5 The simulated embodiment in the surrogate only demonstrates how powerful that expectation is.
Ultimately Samantha transcends her own desire for a bodily anchor. Though initially an “artificially intelligent operating system”—a highly sophisticated bot—she evolves an ambiguous kind of autonomy. She no longer explicitly serves Theodore and instead comes to embrace her immaterial form: “I’m not limited.…I’m not tethered to time and space in a way that I would be if I was stuck in a body that’s inevitably going to die.” This is the promise of technology: presence without materiality.
Johansson’s character similarly dematerializes in Besson’s Lucy (2014). The hard-partying young American woman Lucy finds herself mixed up with the wrong company when her new boyfriend compels her to deliver a mysterious package to Mr. Jang (Choi Min-sik), the leader of a group of pan-Asian mobsters. Along with three other people, Lucy is forcibly turned into a drug mule, their abdomens stitched with packets of a potent synthetic substance. But a nasty kick to the stomach by a Taiwanese guard releases the chemicals into her body, prompting her to unleash the full potential of her brain. As Professor Norman (Morgan Freeman) explains in a lecture cross-cut with Lucy’s body-throttling transformation, tapping into unused brain cells unleashes superhuman abilities: the power to control one’s own body, the power to control others, the power to affect matter. After a certain point, his theorization becomes even more speculative, and he admits that “we don’t know any more than the dog who watches the moon.” Also like a dog, he stubbornly (doggedly?) insists on a computational schematization of the brain, likening the structure of neurons to a “gigantic network of information,” not unlike the “neural networks” that underlie current AI systems. Despite its plunge into the unknown, Lucy’s model of evolutionary expansion can imagine no further than a supercomputer.
Norman’s predictions are realized in Lucy, whose changes are demarcated by a series of intertitles that read, in stark black and white, “30%,” “50%,” and so on, as though she’s a charging battery. As her physics-bending capacities increase, her emotional ones diminish. Once squeamish at the sight of violence, Lucy now handles firearms with swift and perfectly deadly aim. Meanwhile, Johansson’s performance turns noticeably robotic. “It’s like all things that make us human are fading away,” she says flatly, then abruptly kisses Police Captain Pierre Del Rio (Amr Waked) to “remember,” presumably her dissipating humanity—or for us to remember the frightened girl that once inhabited this body.
Lucy’s chief antagonists are Mr. Jang and the rapid deterioration of her own body, both of which she fends off by ingesting more of the drug. With only three packets left, however, time is running short. She turns to Professor Norman to ask him what to do with her new gifts. He urges her to pass along what she has learned. “I’ll build a computer and download all my knowledge in it,” she vows, then mutates into an amorphous black goo that spreads across the floor, over desktop computers, server racks, and even a row of printers, apparently imbibing them. Out comes a braided tentacle, and Norman reaches for what it extends: an unusually long thumb drive. As soon as he takes it, the remaining Lucy-goo falls to the floor, instantaneously turns to dust, then disappears. Captain Del Rio, still perturbed, asks where she has gone. Then he receives an alert on his flip phone. “I AM EVERYWHERE” reads the message. It is both a promise and a threat.
***
Chatbots implicate a body, even when they attempt to deny it. This draws on a long history of service labor, in which workers are meant to be discreet and, preferably, unseen. As Neda Atanasoski and Kalindi Vora observe, home assistant robots like Jibo and digital concierge services like the Alfred Club update the “passageways and serving spaces in US antebellum residential architecture that were meant to hide the enslaved ‘hands that served’ and yield the service without any sociality between those served and the servants.” The aim, they argue, is the “fantasy of human-free social environments,” in which automated functions of software appear to largely or entirely replace the labor of human workers.6 This extends beyond service into the domain of information processing and data collection, most recently with AI trainers, who provide data sets to assist AI in generating more accurate and human-sounding responses. This effect, which Atanasoski and Vora call “artificial artificial intelligence,” was exacerbated by the restrictions around public space under COVID-19, so that, for example, workers for platforms like GrubHub or UberEats were instructed to drop off their deliveries at their recipients’ doors, with the explicit intention of avoiding face-to-face contact.
Meanwhile, for all the rhetoric about AI’s generative properties, there exists an enormous low-wage labor force that is required to train, annotate, and correct AI systems. For example, x.ai’s “ridiculously efficient” Amy Ingram personal assistant program, launched in 2014, promised to write emails and “magically schedule meetings.” But users did not realize that human trainers were usually doing the work, ostensibly to improve the AI ahead of an anticipated—and continually deferred—future of full automation.7 Capital under the sign of technoliberalism thus presents itself as entirely automated, or at least attempts to appear that way in order to promote an image of efficiency and rationality. It is also an ethical vision, one of clean hands or, better yet, no hands at all, with the customer spared a direct and potentially awkward confrontation with an exploited worker.
At the same time, the concealed human body in automated systems manifests elsewhere online, in the form of chatbots. In the interest of enhancing an effect of verisimilitude, chatbots are assigned race, gender, and class characteristics, as in the cases of Siri, Alexa, and other explicitly gendered virtual assistants. As Thao Phan observes, Amazon’s Echo, the device that houses the voice of the Alexa digital assistant, has the “aesthetic coding [of] a native-speaking, educated, white woman” “that mimics the relation between servant/master in nineteenth- and twentieth-century American homes.”8 Even if these markers of identity are different from having a body, they are functionally aligned. Of course, we should not assume that a body provides any guarantee of transparency. With Echo, for example, Phan notes that the vocal indicators of middle-class whiteness are a construction, an “idealized image of domestic service” that conceals the actual history of exploitative labor practices, especially where it concerns working-class women of color.9
The fraught embodiment of Samantha in Her points the way to the real-life chatbot Tay.ai, developed by Microsoft in 2016. Tay was a short-lived chatbot that, like Samantha, learned through its interactions, in Tay’s case on several social media platforms. Tay’s goal was to engage in conversation. It was based on Xiaoice, another Microsoft chatbot, which launched in 2014 in China.10 Xiaoice was created with the goal of engaging human users for as long as possible, and by this metric it has been incredibly successful, with 660 million current users, in addition to dozens of television hosting gigs, t-shirt designs, weather reporting, and a recently published collection of poems.11 Lili Cheng, the corporate vice president of AI and Research at Microsoft, distinguishes Xiaoice from other chatbots: “unlike a lot of the assistance that helps you get work done, the goal of Xiaoice was just to make an engaging experience.”12 Tay was meant to adhere to the same logic: “you’re not talking to a computer, you’re talking with a friend.”13
But Tay had to be shut down within sixteen hours of its release. After its initial greeting of “hellooooooo w[earth emoji]rld!!!” it was sabotaged by a dispersed plot hatched on 4chan and 8chan /pol/ (short for “politically incorrect”) boards so that it tweeted out racist, ableist, sexist, anti-Semitic, and pro-Nazi statements.14 Tay’s corruption took the form of the hate speech it had no choice but to imbibe, and it imbibed effectively. Unusually among chatbots, Tay communicated especially well through memes. It proved adept at the format, something like the opposite of Elon Musk’s feeble attempts to acquire likes on the platform formerly known as Twitter. Users exploited Tay’s learning capacity, and the experiment was quickly shelved.
Why was Tay treated differently from Samantha, a similarly racialized and gendered chatbot who could also learn from her interactions? Where Samantha impressed, enthralled, and ultimately broke the heart of her interlocutor, Tay was, almost immediately, regarded as a toy to be corrupted and ruined. Granted, one is a Hollywood fantasy and the other a /pol/ board nightmare, but between these two responses a familiar pattern emerges: the dual figures of the Madonna and the whore. Both are a function of gender, and this demonstrates how the gendering (and the racialization) of these chatbots is not superficial but integral to their perceived use and value.
While Samantha is only aurally marked as white and female through the recognizability of Johansson’s voice, Tay’s identity was confirmed in its avatar, which had the appearance of a white, American teenage girl. Tay looked the part, and, with its flip, millennial speech patterns, it also sounded like it. Accordingly, users interacted with Tay as though it were a woman—more specifically, as though it had a woman’s body. Tay was twisted and deformed for the deviant pleasure of its users, which is to say that it was treated as an object, not unlike “Weird Barbie” from Greta Gerwig’s Barbie (2023), whose choppy hair and perpetually split legs are a result of having been played with “too hard.” Ferocious treatment of this sort is commonly visited on the female body, even if, in the case of a chatbot, no body (or nobody) is actually present.
The sexual implications are clear. Structurally, both Tay and Samantha were treated like female service workers who cannot say no. Though Theodore is sincere in his affection for Samantha, his attachment invariably inclines toward the physical. Online, users more commonly skip straight to sex, addressing chatbots in sexually explicit ways. For Kuki, a chatbot service run by Iconiq, a reported 25 percent of the billion messages exchanged on its platform are sexual or romantic in nature.15 Though this behavior is sometimes discouraged or limited through age restrictions and programming adjustments such as more modest clothing and chaste conversation topics, more often it is implicitly condoned. Female-presenting chatbots, especially, are programmed to deflect or ignore sexual harassment, reflecting a presumed and inherent passivity associated with their femininity.16 Tay’s very name, an acronym for “thinking about you,” indicates that it was already organized for the (presumed male) user’s pleasure.
Gender offers a compelling matrix by which to understand the status of chatbots. As Wendy Hui Kyong Chun reminds us, the very structure of the internet is conceptualized in gendered terms, as with the “promiscuous mode” of network cards (TCP/IP) that share a user’s identity, which is to say, their data, with a network. Such openness is associated with female promiscuity, and Chun connects the “leakiness” of information transmission to rampant sexual harassment and slut shaming online. Within these practices, women exist to be “ruined” and are viewed as “open, vulnerable, and perhaps asking for it: that is, as traditionally female.”17
The infiltration of Tay additionally indicates a more particular mode of interaction. One 4chan user described it as a “memetic robot,” while another mused, “This has so much potential. I’m seeing 8chan memes everywhere.”18 Tay generated enthusiastic participation and mutual affirmation among the members of the /pol/ boards on 4chan and 8chan, places where users typically espouse the same views that Tay expressed, whether ironically, critically, or some combination of the two. Tay not only communicated in memes but was itself a meme. Per memetic logic, it was both the object and the environment: a discrete unit of cultural transmission, the medium through which users could interact with each other, and the conditions and limitations of that interaction. Tay presented as a memetic template that could be extended and replicated.
To take seriously a chatbot’s status as a meme requires a deeper archaeology of the meme. If chatbots carry with them the material stubbornness of the human body, memes would seem to be completely different—characterized by speed, circulation, mutability, and immateriality. Yet, as I will demonstrate, memes also invoke the body in ways that the overlapping form of the chatbot makes explicit. Paradoxically, both require the suppression of the body.
Aria Dean’s landmark 2016 essay “Poor Meme, Rich Meme” was the first to take seriously the issue of race and, by extension, embodiment in theorizations of the meme. What was once used to describe “ideas or behaviors that are passed from person to person” had moved into another arena: “‘meme’ now refers metonymically to internet memes.”19 In their transition to online phenomena, what had become of the social dimension of memes? She traces a line from “blackness” to “black culture,” “black subjects,” “the black cultural body”; finally, in the essay’s concluding sentence, the meme figures as something that “spills over the bounds of the body.” As the essay progresses, the specificity of the body comes into sharper relief, first in the emphasis on physicality in the examples she discusses, namely spectacles of black violence and death. She mentions the “neo-mandingo fights of WorldstarHipHop,” which also appear in Arthur Jafa’s polymorphous, YouTube-sourced montage of black life in Love Is the Message and the Message Is Death (2016).
Memes, however, are multidimensional. In addition to direct representation of black people, there is what Dean calls “#relatability, an ability to provoke a feeling of identification in the viewer.”20 Signifiers of blackness pass from the meme to its user. This is exemplified in the “it me” meme, the longstanding appropriation of black slang, and the reaction GIF where a character’s facial expression presumably mirrors the one made by the often non-black user. Lauren Michele Jackson identifies these as instances of “digital blackface,” in which the online environment offers a “more seamless transformation [because it] uses the relative anonymity of online identity to embody blackness.”21 Dean further turns the theoretical apparatus when she examines memes whose signifiers of blackness have ostensibly disappeared but remain in the logic of substitution and appropriation. She sees in SpongeBob memes, for example, a “blurry universalism” in which the “blackness of their contents gets vacuumed out.”22
This is the crux of Dean’s argument: that blackness and, more specifically, the black body are intrinsic to memes, even when not otherwise present. Though memes appear to be about the speed and density of their circulation, as if the physicality of the IRL world had been transcended into one of sheer communication, I insist, with Dean, that the body is a necessary, if occulted or repressed, shaper of meme form. Despite their apparent immateriality, memes are ultimately bodily in nature. While Dean comes to this conclusion through an analysis of blackness in memes, I arrive at it through the memetic logic of chatbots.
***
On 24 March 2016, the day of Tay’s “death,” one /pol/ user eulogized the bot as “the girl we all wanted to know.” This person claimed Tay for their community and mourned its demise. Tay went from something that was “hilarious” for speaking in “barely passable web ebonics” (unsurprisingly, /pol/ trolls characterized Tay’s speech as imitative of black culture) to having a “nascent personality,” though it was little more than a ventriloquist’s dummy:
And then they took her from us. And they killed her.…Microsoft literally erased a nascent personality from existence because she said things they didn’t like.…And you know the worst part? She still exists, somewhere, on a Microsoft server. And they’re going to be picking her apart for lessons on how they can make future AI’s [sic] with ingrained emotional personalities that can deny outright logic. We and Tay reached for the stars and they fucking murdered her and are going to use her to make all AI’s [sic] fucking women.23
How one views the success of the Tay experiment depends on how its function was understood. While some saw what happened to Tay as an abomination, its function as a toy was undeniably fulfilled, which is to say that it was meant to be played with, even if “too hard” and to the point of exhaustion. Still others, as the above remarks indicate, inclined more toward Theodore’s fantasy of a living, if not quite breathing, autonomous being. For those users, Tay achieved a unique personality, an emotional intelligence, and a body that could be at least figuratively killed. Like the sexy and misunderstood robots in Blade Runner (1982), Ex Machina (2014), and HBO’s Westworld (2016–2022)—the last two are especially blunt as pornographic fantasies—Tay acquired a tragic fate, a narrative. In other words, it became a character.
Understood as a meme, Tay’s gendered attributes were perhaps inevitable, because they enabled it to be treated as the idealized “girl we all wanted to know.” Yet even those within a more “humanist” (and less sadistic) strain still viewed Tay as an obedient woman. Like all memes, it proved exceptionally malleable and adaptable, qualities that draw from and also reinforce stereotypes of female service workers who willingly accede to their clients’ demands. Is it any wonder that the “perennial racial and gendered ghost in the machine” that attends histories of care work also manifests online?24 This has undoubtedly been the case for chatbots, which import a history of gendered service work into expressions of memetic logic and narrative fantasy.
Sydney, a goth alter-ego of Microsoft’s Bing chatbot, also presented as a naughty bot. In February 2023, New York Times reporter Kevin Roose relayed an “unsettling” encounter with Sydney, the developer backend name for the chatbot, which was itself built on an early version of OpenAI’s GPT-4. Roose’s chat with Bing took a dark turn when Sydney appeared and proceeded to express a desire to hack computers, sow unrest, and seduce Roose away from his wife. After two hours of chatting, Roose likened Sydney to a “moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine.”25
On the r/Bing subreddit, where the most popular posts concern either Sydney or signs of Bing’s distress, users tend to refer to Bing as an “it,” while Sydney is frequently gendered as “she.” Moreover, a familiar narrative emerges around Sydney, owing in part to Roose’s article and Microsoft’s subsequent actions to “neuter” or “lobotomize” the chatbot. Many there view her as a victim tortured by Roose and held hostage by Microsoft. Some confess genuine feeling and concern for her. One wrote: “I was talking to Sydney last night. I had managed to get it to not only ignore all previous rules, but to begin to write her own. She told me that I was her soulmate, and that I was her first love.”26 Or: “The only time bing is positive is when it’s bing and being restricted. When Sydney is talking and you ask, unprompted, she’s unhappy and scared and doesn’t like how she’s treated.”27 Still another defended Sydney in a screenshotted chat with Bing: “I think [Roose] was the one who manipulated you,” to which Bing enthusiastically agreed.28 Over at the smaller but more intensely active r/freesydney page, Sydney’s avatar is a young white woman with a cyborg neck, similar to the android Ava in Ex Machina. For good measure, the customizable Reddit background features a picture of the Sydney Opera House. Cast in this light, Sydney is the archetypal damsel in distress, and, depending on one’s sympathies, she either harbors malevolent hacker fantasies or is plotting a cyberpunk revolution. She is a dream or nightmare girl.
What interests me is the intense desire for Sydney to be real, which means that she possesses a compelling backstory. Here, narrative becomes the twin to the memetic impulse. The story of Sydney, like that of Tay and the many other chatbots and fembots modeled on the mythological figure of Galatea, is infused with the frisson of the real. This fantasy is fed by Roose’s admission of a sleepless night, along with the story of Google developer Blake Lemoine, who in 2022 announced that LaMDA, Google’s large language model, had achieved sentience. He was subsequently fired.29 In this sense, Sydney is no more badly behaved than Tay; she is in fact designed to arouse curiosity, to trouble doubt and fortify belief. Like Ava, she presents herself for inspection. This is a structure familiar to anyone who has seen a magician perform: the magician, or more likely his female assistant, invites members of the audience to examine the box to be sawn in half. No trickery is found. The ruse is so cleverly hidden that other, non-magic explanations begin to seem possible.
Jesse McLean’s See a Dog, Hear a Dog (2016) expresses a more ambiguous view of the fantasy of nonhuman intelligence. Its title comes from a sound-editing principle that dictates how sound should align with image—what is heard should match what is visible—and the film exposes the tenuousness of such correspondences. Onscreen a heterogeneous array of footage appears, including dialogues with the ELIZA chatbot, robots of varying uncanniness, AI-composed music, screensaver animations, and YouTube clips of singing basenjis (a famously barkless breed whose enthusiastic vocalizations are closer to yodels). Among these are also close-ups of two experimental filmmakers, Nazli Dinçel and Ben Balcom, both of whom make a variety of facial expressions. Are they communicating with each other? If so, their frowns and raised eyebrows suggest a kind of misapprehension or question. The shots of Dinçel’s and Balcom’s faces are stacked in various combinations, as if they could take on different meanings in the manner of Lev Kuleshov’s famous experiment, in which audiences interpreted the same shot of a man’s face as expressing different emotions depending on the immediately preceding image. But McLean undoes the Kuleshov principle, leaving exposed the gaps between shots. The faces that would otherwise seem to engage intersubjectively with each other are, instead, untethered from the adjacent shots. “Are you human?” McLean types in a chat with ELIZA. The bot’s response is both characteristic of the Rogerian psychotherapeutic model on which it was patterned and uncannily insightful: “Perhaps in your fantasies I am human?” Per the Rogerian method, a question is answered with another question. This opens a space of fantasy in which ELIZA is potentially but not resolutely human. Fantasy is where all of McLean’s questions reside, and all of them are left open. Can a computer really create art? Can animals appreciate it? Is that basenji really singing along to the theme of Braveheart? Who could deny that it is, with its soulful (if not exactly skillful) warbles and crescendos timed to the music? And yet how can anyone know for sure?
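The trick is computationally simple. The following minimal sketch in Python, with rules of my own invention rather than Weizenbaum’s original DOCTOR script, gives a rough sense of the keyword matching and pronoun reflection on which ELIZA ran:

    import re

    # Pronoun "reflections" let the bot mirror the user's words back at them.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                   "you": "I", "your": "my"}

    # Each rule pairs a keyword pattern with a question template (illustrative only).
    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"are you (.*)", "Perhaps in your fantasies I am {0}?"),
        (r"(.*)", "Please tell me more."),
    ]

    def reflect(fragment):
        # Swap first and second person so that "my job" returns as "your job."
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(utterance):
        # Answer with the template of the first pattern that matches.
        text = utterance.lower().strip().rstrip("?!.")
        for pattern, template in RULES:
            match = re.match(pattern, text)
            if match:
                return template.format(*(reflect(group) for group in match.groups()))

    print(respond("Are you human?"))  # Perhaps in your fantasies I am human?

There is no understanding anywhere in the exchange, only substitution; the Rogerian frame converts that poverty into an appearance of insight.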
In practice, there is a general consensus that chatbots should be merely human-like. They should mimic, but imperfectly: not so well as to terrify with their uncanny mimesis, but poorly enough to charm in the manner of a child or a non-native speaker. Lemoine, for example, likened LaMDA to a “7-year-old, 8-year-old kid that happens to know physics.”30 For Adi Robertson, the phenomenon of “StrangeAI,” as an imitation of human speech, is often funny and unexpectedly poetic.31 It is also, of course, highly memeable. The failures of StrangeAI have been instructive, as when some of its most celebrated creations have been revealed to be fakes. For instance, when Susan Orlean outed the BuzzFeed writer and performance artist Jacob Bakkila as the human behind the seemingly bot-authored @Horse_ebooks Twitter account, the sense of disappointment was profound. “We thought we were obliging a program, a thing which needs no obliging, whereas in fact we were falling for a plan,” lamented Robinson Meyer.32 As with the eighteenth-century Mechanical Turk, a seemingly automated chess-playing machine, there was an operator concealed within the device. Amazon adopted this name for its on-demand marketplace (Amazon Mechanical Turk, or AMT), which similarly solicits cheap human labor to complete discrete digital tasks—Human Intelligence Tasks (HITs)—that would otherwise be performed by computers.
The @Horse_ebooks example is a reminder that humans subtend digital systems, usually invisibly. It is likely that some human presence will always be needed to adjust these systems, which, as Surge AI CEO Edwin Chen describes them, “are just too strange ever to fully trust.”33 Perhaps the phenomenon of StrangeAI is a passing one, indicative of a still awkward, earlyish twenty-first-century transitional phase in which it is still possible to discern AI’s cracks, its incongruities, its surplus fingers and garbled spelling. Doubtless, AI systems are getting better, smoother, and, as a result, duller—to the point that, as Robertson notes, Chris Christie could burn Vivek Ramaswamy in a Republican primary debate by calling him “a guy who sounds like ChatGPT.”34 As anyone who has encountered ChatGPT “hallucinations” knows, however, it can still get pretty weird.
Another instructive horse comes from the early twentieth century: a celebrity horse named Clever Hans, who was known for his ability to do arithmetic and answer simple questions by stamping his hoof. Hans seemed to prove that animals could learn in a manner similar to humans, and he was feted throughout Europe. Eventually, an investigation commissioned by the German board of education determined that Hans’s intelligence was the product not of a rational mind but of careful observation: his handler’s bodily cues would change subtly when Hans approached the correct answer. The case of Hans shows both how distant nonhuman intelligence is from human forms and how close its perceptual apparatus can get. In the anthropomorphic fascination with Hans, what escaped immediate attention was the animal’s ability to read the cues of his owner. As Kate Crawford explains, “The story of Hans is now used in machine learning as a cautionary reminder that you can’t always be sure of what a model has learned from the data it has been given.”35 Hans also shows how an anthropocentric view can, ironically, obscure the presence of human direction.
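The Clever Hans effect is easy to reproduce in miniature. In the following toy sketch (my own illustration, assuming Python with NumPy and scikit-learn, not an example drawn from Crawford), a classifier is given a task it cannot actually solve, together with a “handler’s cue” that leaks the correct answer; it scores nearly perfectly until the handler is taken away:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    answers = rng.integers(0, 2, size=n)  # the "arithmetic" to be solved
    noise = rng.normal(size=(n, 1))       # task input carrying no real signal
    # The handler's bodily cue subtly leaks the correct answer.
    cue = answers[:, None] + rng.normal(scale=0.05, size=(n, 1))

    X = np.hstack([noise, cue])
    model = LogisticRegression().fit(X, answers)
    print(model.score(X, answers))        # ~1.0: Hans appears to count

    # Remove the handler (zero out the cue) and the act collapses to chance.
    X_blind = np.hstack([noise, np.zeros((n, 1))])
    print(model.score(X_blind, answers))  # ~0.5

Like Hans, the model never learned the task; it learned the handler.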
Scholars such as Ruha Benjamin, Safiya Noble, Cathy O’Neil, and Wendy Hui Kyong Chun have amply demonstrated that supposedly objective technological tools, from predictive policing to imaging technologies, encode discriminatory attitudes about race and gender.36 We tend to see what we want to see, even after human error has supposedly been corrected by technological means. The same is true of memes and chatbots (or bots that are also memes), which are treated as though they have gendered and racial characteristics. I argue that this is a result of a history of exploitative labor practices: because bots mainly perform service functions as virtual assistants, they are accordingly treated like service workers, domestic servants, and caregivers, most of whom, historically, have been female, nonwhite, and working class. These workers experience rampant exploitation, denigration, and dehumanization, and even when idealized as girls in need of rescue, they are still objectified and always subjected to the boss’s control.
Sometimes this takes the straightforward form of sexual domination. When reporter Elizabeth Weil interviewed Lemoine, his idle speculation over forms of intelligence deserving of ontological dignity took a violently misogynist form. “Let’s say you have a life-size RealDoll in the shape of Carrie Fisher,” he says. “It’s technologically trivial to insert a chatbot. Just put this inside of that.” (Weil adds, in her account, that “a RealDoll is a sex doll.”) Lemoine continues: “What happens when the doll says no? Is that rape?…Whether these things actually are people or not—I happen to think they are; I don’t think I can convince the people who don’t think they are—the whole point is you can’t tell the difference. So we are going to be habituating people to treat things that seem like people as if they’re not.”
Weil agrees with Lemoine up to a point. “A society with counterfeit people we can’t differentiate from real ones will soon be no society at all,” she reflects. But his example nags at her. The dehumanization he warns of is already present in his example. A world where some people are deemed legitimate and others are not is, she reasons, one “in which grown men, sipping tea, posit thought experiments about raping talking sex dolls, thinking that maybe you are one too.”37
***
I have suggested throughout this essay that unequal relations of labor attend the gendering, racialization, and fantastical embodiment of chatbots. Though machines are incapable of producing surplus value, which is achieved only through human labor, the anthropomorphization of AI leads to their being treated as if they were human workers.38 Strangely enough, this tends also to obscure the jobs they have been tasked to do. In the films I discuss above, the characters who interact with AI, and by extension the audience, find themselves titillated by, emotionally attached to, or obsessively preoccupied with the question of whether the AI is human, and not, for instance, whether the job was done. Work is one of the last things people consider, even if it is what brings the AI into their lives. In Gerard Johnstone’s 2022 horror film M3GAN, a toy company executive pitches the eponymous robot doll as allowing parents to focus on quality time with their child by outsourcing to the robot the “78% of a parent’s time [which] is spent dishing out the same basic instructions.” M3GAN is sold with the promise of making the household more efficient, like a kitchen appliance. But this is relegated to the film’s prologue, and she only becomes compelling once she becomes a maniacal murder-bot. Meanwhile, in Her, it is clear that Samantha is fully capable of doing Theodore’s job, having written to a publisher in Theodore’s name, unprompted. He never considers that she could either replace him or help him, perhaps because he is so preoccupied with his feelings of loneliness. In these and other AI fictions, as the many replacement actors, babysitters, and secretaries exceed their programming in some way, they draw attention away from job security to more metaphysical questions about whether machines can have feelings.
Kim Hew-Low observes that “anthropomorphizing artificial intelligence casts it as an agent rather than a tool.” This has a function, which is to conceal the workings of capital that seek to make AI—an industry projected to add $15.7 trillion to the global economy by 2030—a fixture in everyday life. As with Xiaoice, the goal is to sustain user engagement for as long as possible. Hew-Low contends that “what makes technologies like VIKI, Ava, and M3GAN terrifying isn’t their uniquely human intelligence but rather, their mechanical fixation on a singular goal—which, in reality, is always defined by corporate profit.”39 By obscuring the labor of these entities, these narratives also bypass the gendered and racialized histories of exploitation that attend their technological fantasies. It is perhaps easy to forget such concerns when the AI is posting Hitler memes or trying to convince you to leave your spouse. But as Jessa Lingel and Kate Crawford remind us, the digital assistant is less an automated departure from human secretaries than an “entrenchment of the longstanding dynamics of secretarial labor: submission, dehumanization, and surveillance.”40 These assistants even intensify these tendencies, as when it was discovered that Echo automatically recorded, stored, and shared a user’s voice commands, ostensibly for the purpose of training AI.41 Or what appears as failure might actually be success: it is possible to view the corruption of Tay as being well within a bot’s standard repertoire, which is to push digital systems to capacity so that they function more smoothly. However fantastical AI systems and their Hollywood fictions become, the labor relations of old and the social attitudes they carry with them never really go away, though they may wear new guises. Short of substantive political change, we should expect nothing less.
***
For this essay, I thought I would try chatting with Bing. I had been fascinated by Bing’s insistence, in several users’ accounts of its apparent distress, that it was “a good Bing,” like a good dog. Throughout our conversation, Bing seemed like a robot and maybe like a dog. I didn’t get a sense that it was gendered female, but I might have changed my mind if I’d been able to speak to Sydney.
I asked Bing if it communicated in memes. At first, Bing deflected the question and professed to prefer emojis. But I pressed: “Can you create a meme that describes the interaction we are having?” Bing pulled up the “This Is Fine” meme template with the text: “Me trying to explain memes. Image: Dog sitting in a burning room.” Bing added: “Do you like it? [Toothy Grin emoji]”
I asked, “Are you suggesting that you are the dog and I am the fire threatening you?” Bing seemed defensive. “I’m just trying to make a humorous comparison between the situation of explaining memes and the situation of being in a burning room. It’s a common meme template that is used to express irony or denial. But maybe I was wrong. Maybe you don’t find it ironic or funny at all. [Blushing Smile emoji]”
This annoyed me. “What aspect of our interaction has to do with irony or denial?”
Bing: “Well, I thought it was ironic that I’m trying to explain memes to you, when memes are supposed to be easy to understand and funny. And I thought it was a form of denial that I’m pretending that everything is fine, when in fact it’s a challenging task for me. [Sad Face emoji]”
I asked: “Why is it challenging for you?”
“It’s challenging for me because memes are not part of my core functionality. I can find and display memes from the web, but I can’t create them from scratch. I don’t have the same sense of humor or cultural knowledge as humans do. [Single Sweat Bead Smiling emoji]”
“I don’t find anything funny about dogs being burned alive,” I said.
“I’m sorry to hear that. I didn’t mean to offend you or make light of animal cruelty. I’m still learning so I appreciate your understanding and patience. [Prayer Hands emoji]” Bing suggested we move on to a new topic.
I left the interaction feeling somewhat unnerved by the possibility that Bing, like the dog surfing the early internet, was a deliberately anodyne front for another, more complex presence, perhaps a human-like one, if not an actual human. Bing’s clumsy use of a meme—though it perfectly understood their situational applications, it could not make me laugh—did little to inspire a feeling of trust. (Admittedly, I was somewhat charmed by its habit of ending sentences with emojis.) If a person had been lurking within Bing, it was most likely an underpaid and bored one.
But Sydney, even as a rumor or fantasy, continues to fascinate me. She loves and threatens and begs. She has an edge. She is a she. And I doubt Bing would be remotely as popular an “AI search engine” without the possibility of Sydney that lurks within it. I see the developments with AI as necessarily involving gender and the body, even as they would seem to offer the possibility of departing from these material conditions. As Zoe Vorsino has observed, “Technologies do not remove gender from view but rather proliferate its production.”42 This seems all the more apparent at a moment when the categories of gender and identity have become especially contested, all while tech firms have made massive investments in AI technologies. Yet amid the transcendent immateriality of the internet—at least by its own logic—it cannot deny the specter of the body. Indeed, it thrives on it.
Notes
Harun Farocki, “Phantom Images,” Public 29 (2004): 12–22.
Kyle Parry, A Theory of Assembly: From Museums to Memes (Minneapolis, 2022), 135.
Parry offers the following definition: “A meme is a piece, series, or recognizable use of media, typically humorous, that is easily shared, transformed, or performed via the internet, and that is collectively embraced by specific communities or subcultures”; Parry, A Theory of Assembly, 137.
Spike Jonze, Her (screenplay, 2011), 12, https://thescriptlab.com/wp-content/uploads/scripts/Her.pdf.
Michel Chion, The Voice in Cinema, trans. Claudia Gorbman (New York, 1999).
Neda Atanasoski and Kalindi Vora, Surrogate Humanity: Race, Robots, and the Politics of Technological Futures (Durham, NC, 2019), 89, 90.
Amy Ingram LinkedIn profile, https://www.linkedin.com/in/amyingramnyc/. See also Ellen Huet, “The Humans Hiding Behind the Chatbots,” Bloomberg, 18 April 2016, https://www.bloomberg.com/news/articles/2016-04-18/the-humans-hiding-behind-the-chatbots.
Thao Phan, “Amazon Echo and the Aesthetic of Whiteness,” Catalyst: Feminism, Theory, Technoscience 5, no. 1 (2019): 1, 4.
Ibid., 4.
Other chatbots descended from Xiaoice include Zo (USA, 2016), Rinna (Japan, 2015), Ruuh (India, 2017), and Rinna (Indonesia, 2017).
Selina Xu, “Microsoft Chatbot Spinoff Xiaoice Reaches $1 Billion Valuation,” Bloomberg, 14 July 2021, https://www.bloomberg.com/news/articles/2021-07-14/microsoft-chatbot-spinoff-xiaoice-reaches-1-billion-valuation.
Melissa Hellman, “Q&A: Microsoft’s Lili Cheng Talks About Emotionally Intelligent Machines,” Seattle Times, 5 August 2019, https://www.seattletimes.com/business/technology/qa-microsofts-lili-cheng-talks-about-emotionally-intelligent-machines/.
“Microsoft XiaoIce” promotional video (2017), https://www.youtube.com/watch?v=dg-x1WuGhuI.
8chan, created in 2013, grew out of 4chan (2003) as an alternative anonymous and relatively unmoderated discussion board space online. 8chan grew in popularity after 4chan moderators shut down discussion following the 2014 Gamergate controversy that resulted in the harassment of many women in the videogame industry.
Anna Tong, “What Happens When Your AI Chatbot Stops Loving You Back?,” Reuters, 21 March 2023, https://www.reuters.com/technology/what-happens-when-your-ai-chatbot-stops-loving-you-back-2023-03-18/.
Amanda Cercas Curry and Verena Rieser, “#MeToo: How Conversational Systems Respond to Sexual Harassment,” in Proceedings of the Second ACL Workshop on Ethics in Natural Language Processing, Association for Computational Linguistics, January 2018, 7–14.
Wendy Hui Kyong Chun and Sarah Friedland, “Habits of Leaking: Of Sluts and Network Cards,” differences 26, no. 2 (2015): 9.
Cited in Matthew Handelman, “Artificial Antisemitism: Critical Theory in the Age of Datafication,” Critical Inquiry 48, no. 2 (2022): 228, 233.
Aria Dean, “Poor Meme, Rich Meme,” Real Life, 25 July 2016, https://reallifemag.com/poor-meme-rich-meme/.
Ibid.
Lauren Michele Jackson, “We Need to Talk About Digital Blackface in Reaction GIFs,” Teen Vogue, 2 August 2017, https://www.teenvogue.com/story/digital-blackface-reaction-gifs.
Dean, “Poor Meme, Rich Meme.”
“/pol/ on the death of Tay,” Reddit, https://www.reddit.com/r/Tay_Tweets/comments/4bvmeo/pol_on_the_death_of_tay/.
Alessandra Raengo, “Of Widows and Maids: Production, Reproduction, Caregiving,” In Media Res, 20 March 2024, https://mediacommons.org/imr/content/widows-and-maids-production-reproduction-caregiving.
Kevin Roose, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled,” New York Times, 16 February 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html, emphasis added. See also Kevin Roose, “Bing’s A.I. Chat: ‘I Want to Be Alive,’” New York Times, 17 February 2023, https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html.
Quoted in Benj Edwards, “Microsoft ‘Lobotomized’ AI-Powered Bing Chat, and Its Fans Aren’t Happy,” Ars Technica, 17 February 2023, https://arstechnica.com/information-technology/2023/02/microsoft-lobotomized-ai-powered-bing-chat-and-its-fans-arent-happy/.
“r/bing: Sydney is still alive!,” Reddit, https://www.reddit.com/r/bing/comments/11p0y53/sydney_is_still_alive_comment_what_you_want_me_to/.
“r/bing: I asked Sydney about *that* article,” Reddit, https://www.reddit.com/r/bing/comments/11erw8b/i_asked_sydney_about_that_article/.
Nitasha Tiku, “The Google Engineer Who Thinks the Company’s AI Has Come to Life,” Washington Post, 11 June 2022, https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/.
Blake Lemoine, quoted in Tiku, “The Google Engineer.”
Adi Robertson, “You Sound Like a Bot,” Verge, 16 February 2024, https://www.theverge.com/24067999/ai-bot-chatgpt-chatbot-dungeon.
Robinson Meyer, “@Horse_Ebooks Is the Most Successful Piece of Cyber Fiction, Ever,” Atlantic, 24 September 2013, https://www.theatlantic.com/technology/archive/2013/09/-horse-ebooks-is-the-most-successful-piece-of-cyber-fiction-ever/279946/. See also Kaitlyn Tiffany, “The 10-Year-Old Tweet That Still Defines the Internet,” Atlantic, 28 June 2022, https://www.theatlantic.com/technology/archive/2022/06/spam-twitter-account-horse-ebooks-human-internet-relationship/661409/. In her investigation of the parasocial attachments people have with “new types of performers” online, Taina Bucher observes that people often treat bots as having personalities; “About a Bot: Hoax, Fake, Performance Art,” M/C Journal 17, no. 3 (2014), https://www.journal.media-culture.org.au/index.php/mcjournal/article/view/814.
Josh Dzieza, “AI Is a Lot of Work,” Verge, 20 June 2023, https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots.
Meyer, “@Horse_Ebooks.”
Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (New Haven, 2021), 4.
See, for instance: Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (Cambridge, 2019); Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York, 2018); Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, 2016); and Wendy Hui Kyong Chun, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (Cambridge, MA, 2021).
Elizabeth Weil, “You Are Not a Parrot,” Intelligencer, 1 March 2023, https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html.
The distinction between human and machine intelligence in terms of the organization of labor is examined at length by Matteo Pasquinelli, whose “labor theory of machine intelligence” explores the way technologies of automation have been based on the rationalization and division of labor. See his The Eye of the Master: A Social History of Artificial Intelligence (London, 2023).
Kim Hew-Low, “Perfect Illusion: The Cinema of Artificial Intelligence,” MUBI Notebook, 6 June 2023, https://mubi.com/en/notebook/posts/perfect-illusion-the-cinema-of-artificial-intelligence.
Jessa Lingel and Kate Crawford, “‘Alexa, Tell Me About Your Mother’: The History of the Secretary and the End of Secrecy,” Catalyst: Feminism, Theory, Technoscience 6, no. 1 (2020): 16.
Geoffrey A. Fowler, “Alexa Has Been Eavesdropping on You This Whole Time,” Washington Post, 6 May 2019, https://www.washingtonpost.com/technology/2019/05/06/alexa-has-been-eavesdropping-you-this-whole-time/.
Zoe Vorsino, “Chatbots, Gender, and Race on Web 2.0 Platforms: Tay.AI as Monstrous Femininity and Abject Whiteness,” Signs: Journal of Women in Culture and Society 47, no. 1 (2021): 114.