This essay explores the theme of controlling technology, specifically the development and deployment of artificial intelligence and of generative models such as large language models. Drawing parallels with earlier attempts to control industrialization, it reflects on the challenge of maintaining control in the face of technological advances and emphasizes that control is multifaceted, extending beyond technical functioning to societal, ethical, and environmental dimensions. It examines the evolving landscape of artificial intelligence, the competition among major corporations, and the societal impact of technologies like ChatGPT, and it questions the effectiveness of safety measures for rapidly advancing technologies by analyzing the illusion of control and its consequences. It then addresses anthropomorphism, the blurred lines between human and machine agency, and the potential loss of control to the capabilities of artificial intelligence. Finally, it contrasts the transcendental framework on which the modern world relies with the enchanted universe of our ancestors, asking how technological advances erode our transcendental bearings and how our relationship with digital entities might be redefined. The essay concludes that embracing uncertainty and reimagining control as a coevolutionary process may pave the way for living with the digital Others in an uncertain and collectively imagined future.

“On the highway towards human-level AI, LLMs are merely an off-ramp.” This dismissive comment by Yann LeCun about large language models (LLMs), the basis of generative AI such as ChatGPT, displays the confidence of a Turing Award winner who now works as chief scientist at Meta, one of the major corporate players in the fierce competition that has recently erupted. Inadvertently, it summarizes the dilemma of control inherent in the development and deployment of any technology. We believe we know where the highway leads and that accidents will happen on the way. The metaphor promises speed, efficiency, and a clear sense of the destination. But preventing accidents may require more than an off-ramp. During modernity, highways were built with ever more lanes, swallowing ever more land, seemingly without end. Now we build digital highways. The challenge inherent in any technology is how to retain control. I will argue that digital technologies require an extended definition of control and, given their cognitive and emotional impact, special measures to guard against illusions of control.

The development of AI continues to be accompanied by techno-enthusiasm as well as by doubts and dystopian visions. Reaching human-level AI may turn out not to be as straightforward as constructing a highway. Whether the forthcoming technological advances are under control is an open question, in the narrower techno-scientific sense but also regarding the off-ramps and other safety features that need to be built into their design. The history of AI demonstrates that the pursuit of preset goals can be elusive. Initial attempts relied on logic and formal symbols and eventually led to an “AI winter,” a dead end reached before the advent of neural networks and the “unreasonable effectiveness” of machine learning (ML), which lets algorithms (self-)train on enormous amounts of data. As for the inbuilt safety features: we are still trying to stem the tide of hate speech freely circulating through social media and to design algorithms that do not simply replicate and diffuse the discriminatory bias inherent in the data on which their predictions rest. Our track record in dealing with criteria such as trustworthiness, fairness, responsibility, and transparency is glaringly poor. Not to mention the impact on people’s lives and jobs, on our understanding of the world and of ourselves. We rightly expect that the dominant technology of the twenty-first century, whose speed of development dwarfs everything we know from the past, will somehow be able to control the fallout.

Control of technology serves more than one function. It is an integral part of the design, construction, and operation that make technology “work.” As smooth and efficient functioning can never be taken for granted, control implies foreseeing and preventing what can go wrong. Errors are inbuilt, and accidents happen. The fault may lie in the design or in a lack of proper maintenance and repair. The interfaces between technology and humans are multiple and often unpredictable. Now we must add the impact on the natural environment, as digital infrastructures consume large amounts of energy and depend on rare minerals often located in conflict zones.

Control is inherent in every technology; otherwise, it will not function. This includes the gamut of safety valves and other protective features. The problem is that we can never be sure whether these controls will be sufficient to ward off harm or to prevent failure or developments in an undesirable direction. The processes underlying creeping errors can remain invisible for a long time before they lead to collapse. As the effectiveness and the affordances of a technology increase, control expands as well. Beyond the immediate technical functioning, it needs to account for what can go wrong, which increasingly encompasses the foreseeable, and possibly also the unforeseeable, consequences. Control must adapt in line with the dynamics of change it is expected to manage.

The road from controlling the technical functioning of the machine, making sure “it works,” to control over the effects it has on those serving it, the workers, and beyond has been a long one. During industrialization and under pressure from the labor movement to which the dismal working conditions gave rise, the focus understandably shifted to the health and safety of workers. The profit of factory owners should not come at the expense of workers’ lives and well-being. After many conflicts, workers’ demands were heeded, and their dire conditions improved. In many European countries, a state-sponsored welfare system was established with insurance and compensation for the millions of workers whose lives and health were at risk. Gradually, safety features became a central part of the extension of control, designed into the functioning of the machines and the environment in which they operated.

By now, at least in most highly industrialized countries, safety features in products and production processes, regulation, and standards have become the norm, and such features continue to proliferate. They extend beyond manufacturing into approved consumer products and their use. Backed by legislation and bureaucracy, certification of products and safety measures have become mandatory, enshrined in obligatory checklists, safety drills, extra protective gear, and risk-reducing infrastructures. Whether it relates to the safety of cars and traffic, keeping medication out of reach of children, or safeguarding nuclear power plants, control over industrial products and processes to guarantee their safety has become paramount. The approval of new drugs and medical treatments takes years of randomized clinical trials to assure the public that harm is avoided and side effects will be known.

Thus, the control of technology has multiple, nested layers and continues to pervade our technological civilization. Control is expected to increase productivity and efficiency as well as to guarantee safety. Maintenance and repair, recycling, and the disposal of waste have become indispensable for protecting the environment, with the ambitious goal of a circular economy on the horizon. But control also has a dark side. It exerts power by installing constraints on things and processes, while prescribing how to interact with them, which can easily transform into control over others and the rights they have. The widespread fear of digital surveillance and its abuse by governments is a forceful reminder of the power of control exerted through technology. It can be visible, like surveillance cameras in public places, or more surreptitious, following our digital traces, legitimized as being “only” for our safety.

It is difficult to pinpoint the exact locus of control. Control has been installed by humans, and the technological devices are operated and owned by humans, following their instructions and goals. The agents of control are the large corporations with their concentration of economic and political power. They are the state, represented by its institutions, but also each of us as we conduct our daily lives and our relationships with each other. Control flows through the multiple links that constitute a socioeconomic and technological system. It changes form and purpose, and with it changes the answer to the question quis custodiet ipsos custodes? (who oversees the overseers?). This is why control of technology and by technology makes it difficult to install regimes of accountability and responsibility. For a long time, efficiency had absolute priority. Only now do we begin to realize that we will have to invest more in resilience.

Where there is control, there is also its shadow—the illusion of being in control. Humans were always at risk of being overwhelmed by their senses and biases—by the wish to believe what they wanted to believe, even when contrary facts stared them in the face. The causes for such illusions are many. They range from the overconfidence that disproportionately affects political and economic leaders to the gullibility reserved for simpler minds. Illusions are nurtured by the cognitive biases we all have, but individual biases are reinforced by social and economic circumstances, by information and misinformation, and by the institutions and cultures into which we are socialized. Illusions of being in control are put to the extreme test in war, when each side is convinced that it will win, with technology on its side.

One peculiar feature of the illusion of control is its blind spot. Those who are in its grip fail to notice their condition until a clash with reality forces them to do so. The history of humanity is full of stories of human hubris, of excessive self-confidence, originally in defiance of the gods and in modern times in defiance of the unintended consequences of human action. Technology makes it all the easier as it provides an intermediary shield, raising the question of whether the digital technologies that invade our lives will enwrap us even more in the illusion of being in control. Or will they have the contrary effect—that they and the powers behind them will control us?

When Blake Lemoine, a software engineer at Google, told the Washington Post in June 2022 that he had become convinced that LaMDA, a generative AI specializing in dialogue of which a limited version was opened to the public in August 2022, was “sentient,” he caused a stir. Google was quick to dismiss him on the grounds of having violated the company’s confidentiality rules. His professional colleagues were more outspoken but equally swift in declaring that he was wrong. They were unanimous in proclaiming that no AI had attained (as yet) anything like being “sentient,” let alone some form of “consciousness.” The public was reassured that artificial general intelligence (AGI), although high on the research and innovation agenda, was far in the future, and so was “singularity,” the point in time when machines would overtake human cognitive capabilities. Yet behind the scenes, the race between Google, Microsoft, and a growing number of start-ups staffed by their former employees continued to take the convergence of ML and LLMs a decisive step forward and to release a new generation of generative AI models to the public.

The sacking of Lemoine and the reasons behind it were soon overtaken by the excitement caused by the release of ChatGPT, the generative AI developed by OpenAI and financed by Microsoft. It rapidly went mainstream, raising fears about students deploying it to write essays and about what it would mean for journalists if articles could be written with amazing speed on almost any topic. Others worried that LLMs would have exhausted the high-quality texts publicly available on the Internet by 2026 and that this might entail a downhill ride toward literary mediocrity (Andersen 2023). As a remedy, the generation of synthetic data is already underway. But the capabilities of generative AI do not end there. In addition to writing almost any text, these models produce images following the prompts of the user or compose music in whatever style is wanted. The pecuniary consequences for artists are obvious, and claims for their copyrights are already being fought out in the courts, as the lawsuits against Meta currently lodged in San Francisco show (e.g., the class action led by Chabon, Hwang, Klam, et al.; Kadrey, Silverman, Goldman v. Meta Platforms).

This is only the beginning. Google reacted by releasing its own version, Bard, a dialogical generative AI that promptly upped the stakes of everything that can go wrong. More foreseeable and unforeseeable consequences are likely to follow with the rapid diffusion and adoption of the digital products soon to inundate the market. DeepMind plans to bring to market a new generation of PAs, personalized assistants, designed to guide you in your decisions and in how to lead your life. Behind the excitement and bafflement, anxieties concerning the most fundamental questions about the relationship between humans and the technologies they have created return with insistent urgency: how can humans keep control of the machines they have created, and how liable are they to fall into the illusion that from now on the bots, or those operating them, are in control?

The incident involving the former software engineer at Google is a tale about the illusion of not being in control. An illusion is a cognitive state that is out of sync with reality. If we are in thrall to an illusion, we are convinced that what we see, hear, and believe accords with reality. Only after a clash with reality does the beholder realize that it was an illusion. In Lemoine’s case, his professional peers declared as much on his behalf. Obviously, this raises questions about the role of scientific and professional expertise, underlining the necessity of a commonly accepted framework of reference. Once scientific authority is no longer accepted as the arbiter of a shared and commonly accepted reality, we risk falling into a state of anomie, consisting of “personalized realities” that obliterate common ground.

These tendencies manifest themselves in the free circulation of fake news and deliberate misinformation through social media, which has reached an unprecedented level and threatens to undermine our shared understanding of the world. Since the Enlightenment, the shared assumption has been that science stands for an approximation of Truth. Science is “organized skepticism,” which means that scientific claims are critically evaluated in accordance with specified rules of argumentation and empirical validation. In liberal democracies, the regulative idea of Truth has served us well, but it remains to be seen how it can stand up to being delegitimized. Once an accepted frame of reference erodes and is replaced by a “Googled” or “felt” Truth or by the infamous “alternative facts,” liberal democracies and the place of science within them are at risk.

The recent encounters with generative AI have also exposed our vulnerability to anthropomorphism, to seeing the systems in which AI is embedded as more humanlike than they really are. Since AI has become extremely adept at mimicking human language and other cognitive abilities, including scientific and artistic creativity, the line becomes ever thinner between the “natural” tendency to anthropomorphize, expressed in the language we use in our dealings with technology, and the belief that the technological artifact is indeed an entity that “knows,” “understands,” and “thinks.” The unreflective use of such words, which is relatively harmless when they refer to familiar technologies that we have incorporated into our world and hence have under control, can transform into a dangerously compelling illusion of being in the presence of a thinking creature like ourselves (Shanahan 2023).

This brings us closer to the moment that Alan Turing proposed as the test of a genuine artificial intelligence, namely when it becomes impossible to tell whether one is conversing with a real person or with a machine, a threshold now extended to distinguishing the image of a real person from a composite artificial face. However, the rapid advances in facial recognition and language processing have led to disputes over Turing’s criterion and even to declarations that it is obsolete. Everything we know about the construction and functioning of these artificial systems tells us that they are very different from human understanding and our mental and cognitive capabilities. Being led to believe that a bot is a human agent may therefore be more a sign of human gullibility than testimony to the presumed “intelligence” of the machine, which, in any case, is not the same as human intelligence.

Despite the many caveats reminding us that generative AIs are only mathematical models, our anthropomorphic tendencies have a profound effect on how we relate to them. These models capture the statistical distribution of tokens in the vast public corpus of human-generated text, telling us which words are most likely to follow the sequence of words in the question we ask (Shanahan 2023). And yet they continue to amaze us with their speed and versatility, switching tone and genre in their answers according to our questions. We also tend to be more lenient in tolerating errors committed by a machine than errors committed by humans when we believe the machine to be more “objective.” This is another bewildering inconsistency in how we learn to live with the digital Others that are so clever at imitating and pretending to be like us.

Control is about power and domination. The illusion of control confuses what or who exerts power over whom or what, and how this happens. The deep-seated propensity to anthropomorphize a technology by treating it as if it were human is a confusion about agency, identities, and relationships. From experience, we know that none of these is unambiguous. They may change. Our perception and our knowledge of the world we share with others, and what we assume to be mutual understanding, are continuously challenged and in need of being reconfirmed. We may also collude with the machine, despite knowing that doing so is not in our interest and may even harm us. This happens when we hand over data about the most intimate aspects of our lives to Big Tech in return for their convenient services. We are cognizant that algorithms have been designed to boost engagement, yet we remain in an addictive relationship. All addicts live in the illusion that they can quit at will. If we mistakenly believe the AI to be “human,” we give up control over who we are.

The power of technology has permitted us to do things that otherwise would be unthinkable. It has enabled the human species to transcend some of its biological limitations, and the temptations of further enhancement know no limits. At the same time, it has revealed our biological limitations and our deep and intricate interconnectedness with other living organisms and the natural world around and within us. The flip side is the power technology has over us. It forces us to behave in certain ways, from observing traffic lights to obeying when facing a gun. Erroneously, we think that technologies are neutral and autonomous. Yet they all have goals designed into their functions. They follow instructions, sophisticated as these might be. Whether technology is used in beneficial ways or to suppress other human beings, that use is never about technology alone. Human agents have transferred agency to the machines that carry out functions to attain precisely specified goals. And human agents have interests, be it making a profit or advancing scientific understanding. Nowhere are the effects more profound and transformative than in our dealings with AI.

We are thus facing a range of complexities that fail to be captured by superficial references to human-machine interaction or by well-intentioned attempts to create an ethical, responsible, fair, beneficial AI aligned with human values. The efforts to transfer and incorporate such properties into digital machines are sometimes compared to the task of educating children: we want them to grow up and become responsible members of society. This is a laudable task, but it reinforces the goal of making the machines more humanlike, not only in the level of their intelligence but also in their moral and ethical principles. Before jumping to transhumanistic and premature conclusions, it might be worthwhile to reflect on how to achieve a more profound cultural change, the practice of a digital humanism (Werthner et al. 2019).

Marshall Sahlins, a towering figure in cultural anthropology, has left a posthumously published tribute to a world he calls the Enchanted Universe (Sahlins 2022). “Most of Humanity” lived in a world surrounded by meta-persons or spiritual beings. These were gods of various standings, ancestors, souls of plants and animals, and others who were immanent in human existence and, for better and worse, determined human fate. They were not “outside,” but together with human persons formed one big society of cosmic proportions. In this Enchanted Universe, humans were in a dependent but also in an interdependent position. The meta-human powers were present in every facet of human experience and in everything that humans did. They were the decisive agents in human existence and the undisputed sources of success, or lack of it; they were involved in hunting or political ambitions; in repairing a canoe or cultivating a garden; in giving birth or waging war. Interdependence was manifest in the continual ritual invocation of spirit beings through numerous cultural practices. Everything was the material expression of their potency, and nothing could be undertaken without evoking the powers of the meta-humans.

A major transformation took place some 2,500 years ago during what Karl Jaspers called the “Axial Age” (Joas and Bellah 2012). Its timing, geographic reach, and the concept itself continue to be debated, but there is agreement that the immanent social order of the Enchanted Universe dissolved and gave way to a transcendental superstructure. The immanentist assumption that the capacity to achieve any objective depends on the intervention and approval of supernatural forces was replaced by that of “another world,” separate from humans and constituting its own reality outside and above them, a transcendental world that we recognize as the objective reality in which we live today. Researchers working with Seshat, a large databank of past societies, find that the rise of social complexity in early societies correlates with what they call the advent of moralizing, punishing gods (Turchin 2023). The transcendental realm is at the root of the monotheistic religions and the fundament of modern societies with their differentiated spheres of “politics,” “religion,” “economy,” and “science.” It paved the way for modernity and the belief in the linearity of progress.

Seen through the transcendental lens, we “moderns” are convinced that our ancestors “only believed” in the Enchanted Universe, while “in reality” we “know” better (Latour 1993). In other words, their Universe was a perpetual, collective illusion. Sahlins refutes this interpretation. “We share the same existential predicaments,” he writes, “as those who solve the problem by knowing the world as so many powerful others of their kind, with whom they might negotiate their fate.” The common predicament is human finitude. Just like our ancestors, we are not the authors of our life and death, as we depend on a world that is not of our making.

And yet more and more is of our making, beginning with the enormous impact humans have had on the natural environment during the short period now called the Anthropocene. The world we inhabit is ever more a human-made world, dramatically changed through human intervention. It is populated by sensors, satellites, and space telescopes that bring information about what happened in the universe millions of years ago into the present. “Welcome to the mirror world,” I wrote in my book, referring to the digital world in the making (Nowotny 2021). Tiny robots are used to deliver medication to the parts of the body where it is most effective. We have begun to edit genes and to develop vaccines that target tumor cells. With the help of AI, brain waves can be transferred to a computer that transforms them into speech. We continue to create numerous artificial entities, nonhuman digital Others, with whom we share power and with whom we negotiate to gain or retain control. We seem to have reached what Giambattista Vico adumbrated in his New Science (1725), namely, that verum (the true) and factum (the made) are interchangeable: we only understand what we have made. The true and the made are reciprocal, each entailing the other.

I am not suggesting that with the end of modernity, characterized as the Weberian disenchantment of the world, we are about to create a new, digital reenchantment. The transhumanistic movement and long-termism1 are in my view only another flight of fantasy and wishful thinking, an attempt to escape human finitude and death. Yet the transcendental bearings on which the modern world relies are undergoing a long-term process of erosion. Are we creating a suprahuman force, this time in a secular vein, or are we challenged to find novel ways of living with the digital Others created by us? We do not fully understand Vico’s factum, the machines we have created, in the details of how they work, let alone in the effects they exert on us, their creators. We transfer agency to them when we begin to “believe” that everything predictive algorithms tell us must come true, forgetting about probabilities and that the data are extrapolations from the past. At the heart of our trust in AI lies a paradox: we leverage AI to increase our control over the future and over uncertainty, while at the same time the performativity of AI, the power it has to make us act in the ways it predicts, reduces our agency over the future (Nowotny 2021).

In the Enchanted Universe in which most of humanity lived, everything that was done happened with and through the meta-persons who decided the fate of humans. If we believe that predictive algorithms “know us better than we know ourselves” and that they “know” the future, do we not risk returning to a deterministic worldview in which human destiny has been preset by some higher power? Most of humanity presumably experienced the enchanted world they lived in with a mixture of constant anxiety and awe, to which they responded with sacrifices and rituals. In contrast, our digital enchantment seems rather bland, although we are promised an ever more exciting and fulfilling virtual world. It is dominated by the monopolistic power of large international corporations that provide us with cheap entertainment and an overload of data designed to make us crave more of what they offer. Although we partly see through the virtual illusions they create, we remain under their spell.

The pandemic was the most recent clash with reality, shattering the illusion of many, including our governments, that we were as much in control as we had thought. Modernity generated hubris of all kinds, among them those described in Seeing Like a State (Scott 1998). It boosted the conviction of being able to control everything, if not in the present, then in a brighter future to which the single-minded vision of linear progress, backed by planning and continuous economic growth, would lead. Today, the realization has set in that despite the many benefits modernization has brought, it has also moved humanity closer to an environmental abyss and that the promise of a better life for all has failed many people. Inequalities have been on the rise within Western countries, and the global North-South divide has hardly shrunk.

Our liberal capitalist system has, as Martin Wolf poignantly writes, produced many angry people (Wolf 2023). Social media reinforce already existing tendencies toward further polarization in our societies, and emotions like anger and hate are easily harnessed by populists and nationalists for their own purposes. We have gravely underestimated the role that imagination plays in politics and have failed to realize the extent to which any vision or ideal of a political regime, including liberal democracy, depends on imagination and the necessity of fiction (Ezrahi 2012).

Maybe the time has come to restore space for imagination as the positive side of illusion. If unchecked, both can run wild. The history of modern science is filled with attempts to rein in the imagination and to put the brakes of empirical verification on the senses and human passions. Objectivity in science is an ongoing story of keeping the temptations of an unrestrained imagination at bay while leaving space for it as a vital source of human creativity (Daston and Galison 2010). Imagination plays an important role not only in science and the arts but also in the ways in which we conceptualize and perceive the future. As I have shown elsewhere, until a few decades ago the future was seen as a huge projection screen, filled with collective imaginaries. Some were dystopias, mirroring the grievances and fears people held at the time. Others drew inspiration from science fiction and were filled with wondrous gadgets like flying cars or the amazing things computers would do. The future was seen as an exciting period ahead, and, for the most part, it seemed desirable.

Today, this future has disappeared. As science-fiction writer William Gibson observed long ago, “The future is already here; it’s just not evenly distributed.” It arrives with every new digital advance and does so more quickly and more overpoweringly than expected. The present thus becomes overloaded with data from the past and filled with data collected “live,” a continuous emotional and informational overload that fills every minute of the time we are awake and continues to monitor our physiological functions while we are asleep. We live in a present that has become densely compressed, as it has to absorb the digital future that continues to invade our lives (Nowotny 2020).

Digital devices have not, as promised, led to a decrease in our workload; quite the contrary. We are too busy and captivated by downloading apps to have any time left to imagine a future that is rapidly dissolving in a digital haze. We are at risk of losing our capacity to imagine a desirable future, let alone the drive to shape it. Yet another illusion lurks behind every “next gen” digital product: the illusion that we are not in control, infused with the belief that no alternatives exist to the advent of the SuperIntelligence in the making. We are still in the grip of another modern dichotomy, that there is either full control or none, and in urgent need of the will and the capability to imagine that it could be otherwise.

Yet, if there is any lesson to be drawn from the history of attempts to control the technology humans have created, it points in the opposite direction. Humans have held many illusions about their ability to control their aggrandized visions, only to be pushed back by the forces of Nature, which still holds the upper hand, as signaled by the complexities of coping with climate change. Despite the sobering background of human hubris, including some of the most horrendous consequences of the illusion of being in control, we must avoid the illusion of having no control. Our ancestors from the Enchanted Universe would have told us that by practicing the proper rituals to invoke the goodwill of the spirits, they succeeded precisely because the power of the spirits had been transferred to them, empowering their activities.

Obviously, to gain control over the digital Others requires more than rituals and sacrifices. It begins with rethinking the concept of control and reinventing forms of control that include care and responsibility. We have embarked on a coevolutionary trajectory between humans and digital machines. If efficiency alone remains the overriding goal, we will be outpaced and overwhelmed by the machines very soon. If we pursue other goals, like building resilience into the system and innovating sustainably, the chances of keeping ahead are much greater. However, such goals must be embedded in the collective imagination, driven by the desire to reappropriate a future that is open, even if it remains uncertain. Embracing uncertainty will not restore us to being in control, but hopefully it will enable us to learn to live with the digital Others in a common world yet to be made.


Helga Nowotny is Professor emerita of Science and Technology Studies, ETH Zurich; in 2006 she became a Founding Member of the European Research Council and served as its President from March 2010 until December 2013. Helga Nowotny received a doctorate in law at the University of Vienna and a PhD in sociology at Columbia University, New York. She has held teaching and research positions at the Institute of Advanced Study in Vienna, King’s College, Cambridge, UK, the University of Bielefeld, the Wissenschaftskolleg zu Berlin, the École des Hautes Études en Sciences Sociales in Paris, and Collegium Budapest IAS, and was Professor of STS at the University of Vienna before moving to ETH Zurich. She continues to be actively engaged in research and innovation policy at the national, European, and international levels. She was Vice-President of the Lindau Nobel Laureate Meetings and Visiting Professor at NTU, Singapore. Currently, she is a member of the Board of the Falling Walls Foundation, Berlin, a member of the Austrian Council for Sciences, Technology, and Innovation, and Chair of the Scientific Advisory Board of the Complexity Science Hub Vienna. Together with Saadi Lahlou she directs a research project on societal transition in the domain of food, funded by the NOMIS Foundation. Helga Nowotny has published widely in the field of Science and Technology Studies, on social time, curiosity, and innovation. Her latest book, In AI We Trust, was published in 2021. She has received numerous awards, including the rarely awarded Gold Medal of the Academia Europaea, the Leibniz-Medaille of the Berlin-Brandenburgische Akademie der Wissenschaften, and the British Academy President’s Medal. She is an honorary member of several European Academies of Science and holds more than ten honorary doctorates, including from the University of Oxford and the Weizmann Institute of Science, Israel.

1. Long-termism is an aspect of “effective altruism,” a philosophical and social movement that gives priority to improving the long-term future of humanity. Critics claim that by focusing predominantly on “existential risk,” it favors eugenics and neglects today’s foremost problems.

Andersen, Ross. 2023. “What Happens When AI Has Read Everything?” The Atlantic, January 18, 2023.
Daston, Lorraine, and Peter Galison. 2010. Objectivity. Princeton: Princeton University Press.
Ezrahi, Yaron. 2012. Imagined Democracies: Necessary Political Fictions. Cambridge: Cambridge University Press. https://doi.org/10.1017/CBO9781139198769.
Joas, Hans, and Robert Bellah. 2012. The Axial Age and Its Consequences. Cambridge, Mass.: Harvard University Press.
Latour, Bruno. 1993. We Have Never Been Modern. Cambridge, Mass.: Harvard University Press.
Nowotny, Helga. 2020. Life in the Digital Time Machine. The Wittrock Lecture Book Series, No. II. Uppsala: Swedish Collegium for Advanced Study.
———. 2021. In AI We Trust: Power, Illusion and Control of Predictive Algorithms. Cambridge, UK: Polity Press.
Sahlins, Marshall. 2022. The New Science of the Enchanted Universe: An Anthropology of Most of Humanity. Princeton: Princeton University Press. https://doi.org/10.1515/9780691238166.
Scott, James C. 1998. Seeing Like A State: How Certain Schemes to Improve the Human Condition Have Failed. New Haven: Yale University Press.
Shanahan, Murray. 2023. “Talking About Large Language Models.” arXiv: 2212.03551v4. https://arxiv.org/abs/2212.03551v4.
Turchin, Peter. 2023. “The Evolution of Moralizing Supernatural Punishment: Empirical Patterns.” In Seshat History of Moralizing Religion, edited by Larson et al. Forthcoming.
Werthner, Hannes, et al. 2019. “Vienna Manifesto on Digital Humanism.” https://dighum.ec.tuwien.ac.at/dighum-manifesto/.
Wolf, Martin. 2023. The Crisis of Democratic Capitalism. London: Penguin Books.
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.