Regulation is a means societies use to create the stability, public goods, and infrastructure they need to thrive securely. This policy brief is intended both to document and to address claims of a new AI cold war: a binary competition between the United States and China that is too important for other powers to either ignore or truly participate in directly, beyond taking sides. We argue that while some of the claims of this narrative are based at least in part on genuine security concerns and important unknowns, evidence for its extreme binary nature is lacking. This absence of factual evidence is concerning, because related geopolitical tensions may be used to interfere with regulation of AI and agencies associated with its development. Here we first document and then analyze the extremely bipolar picture prominent policymakers and political commentators have recently been painting of the AI technological situation, portraying China and the United States as the only two global powers. We then examine the plausibility of these claims using two measures: internationally registered AI patents and the market capitalization of the companies that hold them. These two measures, while each somewhat arbitrary and imperfect, are often deployed in the context of the binary narrative and can therefore be seen as conservative choices in that they should favor exactly the “champions” of that narrative. In fact, these measures do not produce bipolar results: Chinese capacity has been exaggerated and that of other global regions understated. These findings call into question the motivation behind the documented claims, though they also further illuminate the uncertainty concerning digital technology security. We recommend that all parties engage in contributing to a safe, secure, and transparent regulatory landscape.
Prominent policymakers and political commentators are increasingly lending their voices to a new but flawed narrative. The narrative asserts that a cold war between the United States and China over artificial intelligence (AI) leaves Europe no reasonable option but exclusive engagement with the United States to secure Europe’s continued liberty. The authors of such claims are addressing the highest transnational policy circles such as the Council of Europe (CoE) and the Global Partnership for Artificial Intelligence (GPAI); the claims are also repeated in international media such as POLITICO and the Economist. While AI does have significant military applications (Sisson et al. 2020), these are generally explicitly excluded from such discussions (GPAI 2020), which focus instead on data, privacy, surveillance, market power, and innovation. Nevertheless, warlike language is invoked. Europe is accused of “philosophizing on the ethics of AI” rather than participating in “the third world war which had already begun,” a war of technological and economic competence (see full quote from Laurent Alexandre below).
Given the pressing importance of digital governance policy concerns, it is, of course, essential to hear a diversity of opinions. However, in our view, this particular narrative is gaining more prominence than it merits without further examination or evidence. Our concern is that the AI cold war narrative is often invoked in discussions of digital market regulation. New, almost purely digital sectors of the economy have been generated in recent decades, such as web search, software cloud services, online shopping, and social media. These sectors have to date seen very little regulation, and in part because of the nature of digital transmission—inexpensive and high fidelity over great distances—market consolidation has been swift in each.
Since the largely successful implementation by the European Union of its General Data Protection Regulation, or GDPR (European Union 2016), Europe has been widely seen as a champion of not only human rights but also market regulation more broadly in this new digital age. The GDPR demonstrates that an organized bloc with adequate market strength can exercise control over even externally headquartered transnational digital commerce taking place within its borders, affecting the lives of its residents. The EU has now proposed drafts of substantial new regulatory legislation, including the Digital Services Act (DSA) and the Digital Markets Act (DMA), both announced in December 2020, and the AI regulation, announced in April 2021. These announcements were widely anticipated; the expansion of the narrative we document below took place in the final months before the December announcements, and is ongoing as of June 2021.
The final overall legislative packages are expected to include measures that significantly affect the way large digital companies do business in the European Union. This may well include measures to ensure a more equitable revenue redistribution from technology companies to the countries from which the data underlying their wealth is derived—a goal also of the Organisation for Economic Co-operation and Development (OECD), a more global organization of rich member states (Economist 2020). Against this background, could at least some of the claims postulating an AI cold war be proposed or amplified with the intention of disrupting new regulation?
In the present article, we seek to first document the arguments we have been hearing that give rise to our concern. We then turn to examine the plausibility of the narrative itself. We do so using two measures: internationally registered AI patents and the market capitalization of the companies that hold them. These two measures, while both imperfect, can be seen as conservative in that they should favor exactly the “champions” that are the focus of the AI cold war narrative. Our research finds no evidence of a strongly bipolar AI world. We therefore suggest policy aimed at increasing transparency around claims such as these, and through such transparency, achieving greater security globally.
Background: Narrative Proposals and Political Context
The appropriate regulation of AI is of growing international concern, as indicated by the founding of GPAI as well as similar efforts by the G20 (Jelinek, Wallach, and Kerimi 2020), the European Union, and the OECD. Yet at hand is often not the question of how to best regulate AI but whether such technology should be regulated at all. Calls for enhanced oversight stem from growing convictions that AI, social media, and information communication technology (ICT) more generally may be causing destabilization, disrupting everything from individual well-being to the viability of liberal democracy (European Commission 2020). AI is widely recognized as being of immense economic value and as providing many other goods, including the potential for improved transparency, equity, and governance more broadly (Misuraca and Viscusi 2020). Yet the digital revolution is also seen to be facilitating social ills, including amplifying prejudice and hate and perpetuating the colonial order (Ali 2019).
Still, as we document below, there are many claims that call into question the necessity of any regulation for AI at all. Regulation is the means by which a society (or any other agency; see Tyson and Novak 2001) coordinates and sustains itself into the future. In fact, all contemporary commerce takes place in some type of explicit regulatory environment. Given this, calling into question the necessity of any regulation seems disingenuous and so may indicate a disruptive strategy. Interests that view themselves as potentially constrained by proposed new regulation may believe they could benefit from narratives that might ultimately minimize their regulatory burden, and might therefore disseminate such narratives themselves even at the expense of the public interest. Indeed, such strategies on the part of one corporation have been revealed in a leak (Espinoza 2020).
The claim is often made that due to the importance of the sector and its security consequences, corporations producing AI—particularly in the United States—should either be entirely free from regulation or be governed only when they themselves recognize and appeal for arbitration on difficult policy matters.1 Regulating AI, it is claimed, could tip the balance of power in favor of China and therefore damage the protection of fundamental rights globally. Attempts at regulation might also have a consequence of excluding the regulating region—anticipated to be the European Union—from true economic or geopolitical power. Until recently, such claims have largely been made off the record, but they have been publicly alluded to previously (Thompson and Bremmer 2018; Tan 2020), including by prominent figures such as Ulrik Vestergaard Knudsen, deputy secretary-general of the OECD (Knudsen 2020).
Consider this example of the narrative,2 in which a prominent expert witness, Mr. Laurent Alexandre,
expressed his astonishment at the fact that Europe still had not understood what was happening in relation with AI. Europe continued to philosophize on the ethics of AI rather than to be concerned about participating in the third world war which had already begun. It was a technological war and the two main protagonists were China and the US. Europe was a technological castrate: we had emasculated ourselves…with the GDPR… Europe had to stop being naïve and childish, and move on from philosophy to industrial battles, otherwise we were going to be leaving ruin to our children. Philosophy was not going to feed our children. … If Europe was not autonomous and sovereign, it would become a technological colony and would not be able to defend its ethical and political values. In the 21st century, being a technological colony meant to be simply a colony. We needed to look at the size of the giants in the field of AI and the total absence of any European AI platform, in fact, the total absence in Europe [of] any databases…. The day that we became definitively a technological colony, we would have in Europe a dictatorship and no longer a democracy. If we wish to maintain the democracy to which we were all so attached, we needed to put an end to this technological suicide.
We are not in agreement with Mr. Alexandre on a number of points: in the world as it is presently, the European Union is the second-largest economy. Philosophy and the arts more generally are only one part of that economic strength. The European Union does have a strong digital as well as manufacturing economy, as we illustrate below.
China is typically portrayed as the greater evil in this bipolar narrative, although versions of this narrative do also exist in Chinese media reversing the attribution.3 We concur that China’s use of technology against some of its minority communities and cultures—notably, at present, the Uyghurs, who have seen over 10 percent of their population interned and 65 percent of their religious sites demolished (Raza 2019; Ruser et al. 2020)—is both terrifying and abhorrent. More generally, the stated Chinese aim and capacity to use AI to track and exclude those who behave in even minor ways designated antisocial would seem at odds with the basic human rights of freedom of thought and opinion. We are now seeing such power and intent expressed also outside China’s borders, with state-linked disinformation campaigns taking place on social media platforms and Wikipedia—public goods constructed largely in other nations (Walker, Kalathil, and Ludwig 2020).
However, when it comes to protecting the public interest of all its citizens and their human rights equally, the United States also has a mixed record. As of 2020 the United States is the country with the largest proportion of its citizens incarcerated in the world (Statista 2020). The United States suffers substantial disparities in both income and life expectancy determined by protected characteristics such as ethnicity (Wrigley-Field 2020; Case and Deaton 2015). For example, non-Hispanic Americans of African descent make up 38 percent of the US prison population, over three times more than would be expected by their 12 percent share of the country’s overall population (Statista 2020). American “surveillance capitalism” is also seen as a threat to liberty both within and beyond US borders (Zuboff 2015). Social mobility in the United States has dropped well below that in the European Union, bringing many social ills (Corak 2013). The United States has historically failed to invest in some basic human rights for which it is signatory in international law, such as universal health care (United Nations General Assembly 1948). Such failings in a leading democracy are generally considered to be a consequence of too little rather than too much regulation.
Even so, Chinese advances in and deployment of AI are perceived as so threatening or menacing compared to use of AI in other parts of the world, including the United States, that fully backing the United States and its model of AI innovation is being portrayed by some as the only alternative for other democratic—and even nondemocratic—regimes globally (Economist 2020). Of course, a single narrative can be deployed for a range of different motivations by different commentators, or sometimes even by the same commentators. We have no doubt that many embracing the binary stance have real and justified security concerns. However, to the extent that the narrative encourages an alignment of global AI regulation with US rules, it could result in a relatively lower regulatory burden on American corporations for their activities abroad. In fact, in the context of GPAI, at least one voice has even argued that the European Union should contribute to increasing the already considerable positive regulation US corporations currently receive in terms of governmental support and subsidies.4 Similarly, former Finnish prime minister Cai-Göran Alexander Stubb, now director and professor at the European University Institute,5 speaking at a publicly webcast event (Maydell et al. 2020) in late October 2020 on a panel entitled “The Age of Artificial Intelligence and Disruptive Technologies: Reimagining Regulation and Society,” said, “the Confucius model is quite different from the fairly individualistic model that we have in Europe, so we should stop pretending that they [China] are going to adopt our system. That’s why I think our best bet is to work closely with the Americans, not against them” (minute 23:34–23:49).6
Again, we wish to emphasize that it is important to the functioning of policy organizations such as the Council of Europe and the GPAI that all viewpoints are heard, including and perhaps especially the narratives of those in power. We are not criticizing the fact that these viewpoints are being raised, and we appreciate that such debates are occurring in public. Nevertheless, if such a narrative is gaining influence, its veracity needs to be examined.
We first became concerned that this narrative might be based on misinformation because of a frequently circulated figure type we had seen at policy conferences in the past few years. Figure 1 is an example of this figure that we have chosen as typical—it was gathered from Twitter, in a tweet stating that it had been shown to twenty Dutch ambassadors (Schäfer 2020). Although the title of the slide says “US—EU—China,” the slide’s graphic shows all Asian corporations clustered as if forming a single entity, making that entity look similar in scale to the strength of the United States. The distorting label “China” was introduced after the original research behind the graphic (Schmidt 2020; cf. supplement). Yet even the original, undistorted figures create an artificial contrast between more and less politically harmonized continents. Further, by considering only two hundred companies with no clear criteria for inclusion (“platform company” is not well defined), the figure displayed here may skew its results toward perception of power rather than objective attributes of power. A similar figure has been recently published by the Economist (2020), which, while correctly attributing China separately, still does not explain its criteria for inclusion and focuses on market capitalization. Such figures might overweight companies that are household names, disregarding, for example, powerful and innovative companies operating business-to-business. They may also underestimate the importance and overall economic power of regions with a relatively less concentrated corporate sector. Reduced concentration can be the result of regulatory regimes that favor large numbers of small- to medium-sized corporations over hyperpowerful individual actors—for example, by enacting and enforcing antitrust legislation.
In Search of Objective Data concerning AI Dominance
Given the important alternative hypothesis that good regulation can, in fact, strengthen both security and economies, we set out to determine whether objective evidence supported the claim that we are in extraordinary times that might justify the disruption of such order. Well-functioning governments ordinarily tend to promote social order, facilitating both physical security and prosperity, creating stability that further facilitates both innovation and industry. The documented anticorrelation between inequality and social mobility is just one example of this: adequate redistribution promotes better access to the best employees (Jäntti and Jenkins 2015). Given this, we might expect overall economic strength and resilience to derive not necessarily from small numbers of large corporations but rather from larger numbers of smaller companies in an economic zone regulated for greater equity. Indeed, we have reason to believe that the European Union should be far more comparable to China and the United States than figure 1 indicates, given that economically, these three regions account for similar proportions of the global GDP—in 2019, 22 percent (the European Union, including the United Kingdom), 16 percent (China), and 25 percent (the United States) (International Monetary Fund 2019). We anticipate that any strong contemporary economy must also have a strong digital economy, which includes AI and its associated tools for productivity and efficiency. Therefore we hypothesize that the European Union has a strong digital economy that figure 1 is not capturing.
We set out to find, therefore, an objective illustration of the relative strength of not three but four global regions: the two postulated cold war combatants (the United States and China), the postulated regulatory region (the European Union, now minus the United Kingdom), and, fourth, the rest of the world as a comparator. We chose to use two measures, each known to be imperfect: market capitalization, for congruence with figure 1, and patenting, as a measure of innovation. Both are imperfect measures because neither necessarily measures purely underlying corporate strength. Rather, either can also reflect strategic and commercial decisions. For example, many corporations consider patenting to have more risks than benefits, since the process of patenting requires disclosing intellectual property (IP), and the capacity to defend IP successfully in court depends in part on the financial capacity of a firm. Even powerful companies with abundant financial means, such as Apple, sometimes prefer to maintain corporate secrets rather than rely on the courts for defense.7 The number of patents therefore provides only a weak indication of the quality and quantity of IP in terms of its contribution to innovation, but it is at least an objective and well-established measure.
Market capitalization (MC) is similarly affected by strategic decisions, including the choice of the jurisdiction where a company decides to issue securities. A high MC largely reflects the large size of a company and the capability to derive profits from turnover, but this is not the only measure of a firm’s true capacity to achieve either long-term or short-term goals (Pistor 2019). Although market capitalization might be viewed as a pure expression of the alleged ultimate aim of a corporation—maximizing its value to shareholders—in fact, MC also reflects a number of other factors. These start with the very strategic decision concerning whether to take a company public, and carry on through the whims and bubbles of investment fashion. Strategy, in the case of MC, is not only corporate but is also a question of national regulatory context, as is illustrated in our results. Large MC companies wield significant economic and political power, both nationally and transnationally. While many countries worry about the political and societal impact of such power, some governments may see value in having one or more economic “champions” capable of, for example, deploying financial assets for economic acquisitions that further market dominance (Motta 2004). The 2019 initial public offering by Saudi Arabia of Aramco, a company built around the country’s national oil wealth, is a particularly interesting example. This comes in the context of a high-profile effort to diversify the Saudi economy, particularly into AI (Agence France Presse 2020). Notice also that within the continent of Europe, three of the largest companies by market capitalization that we identify as having at least two AI patents are all in one small country, falling outside the European Economic Area (EEA)—Switzerland (see figures 2 and 3 and accompanying data in the supplement).
Market capitalization may fail to evidence subtleties of human capital or brand, or strength in specific markets. It certainly does not always indicate that the products of a company are individually superior compared to those of competitors. Achieving a large MC allows a company to invest resources not only in its primary product areas but also in expanding its market range, and potentially in influencing government decisions (Motta 2004). Financial resources proportionate to the overall wealth of the shareholders reflected in the MC can be used to exert pressure over government decisions through, for example, corporate decisions to fire or hire large numbers of employees in specific constituencies. For these reasons, some corporations will take strategic decisions to invest disproportionately and perhaps riskily in areas likely to increase this particular metric (Kumar and Shah 2009). Depending on the regulatory context, MC growth can also have an accelerating effect. Strategies geared specifically toward gaining political influence may result in regulatory capture that, in turn, may benefit a company or a sector through favorable regulation, subsidies, and bespoke tax treatment, ultimately further increasing its MC (McCarty, Poole, and Rosenthal 2016). As with IP, MC does come with costs some companies choose not to pay—not least the loss of autonomy to shareholders that comes with a public listing, as well as the further transparency required by the accompanying regulatory disclosure.
Although both our measures are therefore imperfect, they do indisputably indicate strength and potential power, both nationally and internationally. Specifically, a higher MC allows a company to raise proportionately larger funds from financial markets and to deploy these for expanding into different geographic and product markets through aggressive commercial strategies and pricing, through acquisitions, and in the long term through large research and development expenditure potentially resulting in process and product innovation. Patents are frequently used not only to defend IP but also for bartering between companies and otherwise establishing market power (Feldman and Lemley 2018; Jeon and Lefouili 2018). The two measures, then, may be viewed as conservative in that they certainly display some power, though equally certainly not all of it. They also directly address some of the concerns voiced in the quotes above, and provide comparability to the figures previously referenced.
Figures 2 and 3 display the outcome of our research. We consider here all corporations that registered at least two patents over the calendar year 2019 with the World Intellectual Property Organization (WIPO 2019) in the category G06N (IPC) dedicated to “Computer systems based on specific computational models,” which includes many but not all of the AI technology patents. On this basis, we are able to draw up one version of an objective list of companies that are innovating in AI—again, conservatively. This is, of course, a subset of all patents registered that might be considered AI, for any particular definition of that term. The subset might therefore be seen as arbitrary, but first, we could (with consultation) find no better match for the term AI in the WIPO ontology, and second, if the cold war is as pervasive as has been implied, we should be able to capture it with even somewhat arbitrary subsets of data. As explained earlier, we measure the significance of the companies in two ways: by their number of patents and by their resources as indicated by their market capitalization. By using these two measures together as (logged) axes on a graph, we create an indicative illustration through which to examine the narrative of a dominating China or an excluded Europe (figure 2), as well as a simple bar chart for direct comparison (figure 3). For comparability with figure 1, we also illustrate absolute market capitalization via bubble size in figure 2. Private corporations that meet the patent inclusion criteria are represented by stars; their size and location on the x axis are not meaningful since, being unlisted, they have no market capitalization. (An illustrative plotting sketch follows below.) We use color to illustrate four global regions:
the United States
China
the EEA—the region implementing the General Data Protection Regulation (GDPR), thus excluding, for example, Switzerland and the United Kingdom
the rest of the world.
We include “the rest of the world” as a comparator category for a number of reasons, including completeness and scale, but primarily to assess the accuracy of the binary cold war narrative. Importantly, the strategy by which the EEA regulates the transnational digital market with respect to its use of its citizens’ private data could in principle be applied by any sufficiently large market, and any such market could in principle be agilely defined. The GDPR coerces compliance on citizens’ rights only because the markets it regulates are so commercially attractive. The European Union coordinates the national implementations of the GDPR, but other large markets—notably, China and the United States—could and indeed do similarly enforce their will on international commerce. Geographic contiguity is not, however, essential to such an effort; what is required are plausible enforcement methods by the bloc against its own members to avoid defection. In fact, in analyses such as figure 1, where radically antithetical governments like those of South Korea, Japan, and China are bundled together without explanation, assuming geographic contiguity can smack of ethnocentrism.
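For readers who wish to reproduce a figure of this kind, the following is a minimal plotting sketch in Python. It is not the published analysis code (that is available in Malikova and Bryson 2020); the file name ai_patents_2019.csv and the columns region, patents, market_cap_usd, and is_listed are our own illustrative assumptions about how such a data set might be organized.

# Sketch of a figure-2-style plot: patents versus market capitalization on
# log-log axes, bubble area proportional to market capitalization, one color
# per region, and stars for unlisted patent holders. All names are illustrative.
import pandas as pd
import matplotlib.pyplot as plt

REGION_COLORS = {"US": "tab:blue", "China": "tab:red",
                 "EEA": "tab:green", "Rest of world": "tab:gray"}

df = pd.read_csv("ai_patents_2019.csv")  # hypothetical file: one row per parent company

fig, ax = plt.subplots(figsize=(8, 6))
listed = df[df["is_listed"]]       # assumes a boolean column
private = df[~df["is_listed"]]

for region, color in REGION_COLORS.items():
    sub = listed[listed["region"] == region]
    ax.scatter(sub["market_cap_usd"], sub["patents"],
               s=sub["market_cap_usd"] / 1e9,  # bubble area scaled by MC (scale is arbitrary)
               c=color, alpha=0.6, label=region)

# Unlisted patent holders have no market capitalization; place them as stars
# at a nominal x position so they stay visible without implying an MC value.
ax.scatter([listed["market_cap_usd"].min()] * len(private), private["patents"],
           marker="*", c="black", s=80, label="privately held")

ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("Market capitalization, USD (log scale)")
ax.set_ylabel("WIPO G06N patents registered in 2019 (log scale)")
ax.legend()
plt.tight_layout()
plt.savefig("figure2_sketch.png", dpi=300)

The nominal x position for unlisted companies is one convention among several; the essential point is only that such companies carry patent counts but offer no market value on which to place them.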
Discussion
To reiterate, we fully acknowledge that both intellectual property and market capitalization are imperfect tools for measuring either economic or technological strength. In fact, we believe our figure 2 contributes substantial evidence of this, showing that both MC and patenting reflect strategic decisions taken both by individual companies and by individual regulators. Nevertheless, we believe our figures firmly illustrate the lack of evidence for the binary AI cold war narrative, at least as it is being promulgated against further EU regulation. In terms of these two commonly cited measures, the narrative misstates the strength of all parties. In terms of IP, the European Union is comparable to China; in terms of both MC and IP, the European Union and China together are dwarfed by the fourth category, “the rest of the world,” which contains a number of countries that, like the United States, seem to facilitate large MC strategies more than either the European Union or China. Nevertheless, all three of these regions taken together are still dwarfed again by the United States, in terms of both MC and IP. We can discern here no reason to believe that US corporations require the assistance—or should, indeed, fear the regulation—of the European Union.
We do not mean through our analysis to imply that all actors presently describing a new or an AI cold war are solely, partially, or intentionally aiming to disrupt European or other legislation or regulation. Security is, of course, of the utmost concern, and the dynamics of information are such that many could be entirely earnest in the concerns they voice, even if others partially or entirely aim such claims to disrupt (Lazer et al. 2018). We certainly have not proven that there is no new cold war. There are many ways in which a nation or a region can be insecure. We would by all means encourage every region to focus, for example, on measures such as mutually assured cybersecurity, particularly given progress in quantum computing, which may be expected to alter the cybersecurity (and AI) landscape if it proves economically and ecologically tractable.
Indeed, some dangerous forms of power communicated through all digital artifacts—whether or not they are labeled as “intelligent”—are invisible to our two measures. Cybersecurity is one such. Strikingly, several countries well known for their cyberoffensive capacities do not appear in any of the figures shown here, nor in any similar figure that we have been able to find. No digital technology can be considered reliable without secured communications, and weak links anywhere on a network can jeopardize all so networked. The invisibility of cyberoffensive national powers in our own figure 2 again reinforces the primary point of this article by indicating a weakness in the narrative that China is outside the global or Western order. Chinese corporations are at least present in the WIPO database.
Cybersecurity is not the only security consideration. Where markets are global, some government-injected levels of redundancy in the supply chain—such as we have already seen for commercial airlines and global positioning systems—may be advisable to ensure resilience and to limit corruption. Lax regulation leading to extreme wealth inequality is not only associated with a lack of investment in essential infrastructure but also correlated with violent political upheaval that eventually benefits no country or region (Atkinson 2015). Large transnational corporations now provide essential communication infrastructure for the world, and like other shared resources (including the Earth itself) need coordinated transnational policy, negotiated by treaty.
Governance is not only about governments. Much as the European Union and hopefully soon other global transnational unions have been able to negotiate with powerful external entities, we could also envision coalitions of small- and medium-sized corporations complementing and sometimes challenging the political voice of the tech giants. We can even imagine coalitions of transnational corporations enforcing requirements of good governance on nations that desire the economic benefits of their services (Dixit 2016).
Without better evidence of a real technological threat, we believe that all parties should be working to develop their own innovative industries and should be seeking equity with respect to (for example) revenues from transnational trade, including in data. Regulation is necessary to any economy and society, and good regulation can strengthen all sectors, and through them global security. Given the urgent problems facing our planet as a whole, we invite all parties to reconsider the AI cold war rhetoric and to take a data-led approach to honing regulation to benefit resilient, diverse markets and societies globally.
Methods
We used the publicly available WIPO database (WIPO 2019) to construct a list of corporations that registered at least two patents during the calendar year 2019 in the category G06N of the IPC classification dedicated to “Computer systems based on specific computational models,” which includes many but not all of the AI technology patents. For each corporation, we then looked up its September 10, 2020, market capitalization (if the corporation was publicly listed) and its country of registration using Bloomberg. Where companies were subsidiaries (e.g., DeepMind holds patents but is a subsidiary of Google’s Alphabet), both MC and patents were attributed to the controlling corporation and its country. We checked our figures by having them independently replicated by volunteer academic researchers (master’s degree students; see acknowledgments). Nations were attributed to the four global regions by hand and checked by both authors. The data compiled and the code for producing the figures are available online (Malikova and Bryson 2020).
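As a rough illustration of the aggregation just described, the sketch below (in Python) counts 2019 G06N registrations per applicant, rolls subsidiaries up to their controlling corporations, applies the two-patent threshold, and attaches market capitalization. It is a simplified stand-in for the published pipeline (Malikova and Bryson 2020); the file names wipo_g06n_2019.csv and company_lookup.csv and their columns are hypothetical.

# Sketch of the methods: count G06N patents per applicant, attribute
# subsidiaries to their parents (e.g., DeepMind -> Alphabet), keep parents
# with at least two 2019 patents, and attach September 10, 2020, market caps.
# File and column names are illustrative, not those of the published data set.
import pandas as pd

patents = pd.read_csv("wipo_g06n_2019.csv")   # one row per 2019 G06N patent, column: applicant
lookup = pd.read_csv("company_lookup.csv")    # applicant, parent, region, market_cap_usd, is_listed

# Roll subsidiaries up to their controlling corporations.
merged = patents.merge(lookup, on="applicant", how="left")
counts = (merged.groupby(["parent", "region"])
                .size()
                .reset_index(name="patents"))

# Inclusion criterion: at least two G06N patents registered in 2019.
counts = counts[counts["patents"] >= 2]

# Attach market capitalization (September 10, 2020) for listed parents.
caps = lookup.drop_duplicates("parent")[["parent", "market_cap_usd", "is_listed"]]
result = counts.merge(caps, on="parent", how="left")

# Regional totals by company count, patents, and summed market capitalization.
summary = result.groupby("region").agg(
    companies=("parent", "count"),
    patents=("patents", "sum"),
    market_cap_usd=("market_cap_usd", "sum"),
)
print(summary.sort_values("patents", ascending=False))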
Competing Interests
The authors have no competing interests to declare.
Acknowledgments
Helena Malikova is a member of the Chief Economist Team of the Directorate General for Competition, European Commission. The content of this article does not reflect the official opinion of the European Commission. Responsibility for the information and views expressed in the article lies entirely with the authors. Writing primarily by Bryson, data analysis primarily by Malikova, though both done in coordination. Data analysis is also available as a stand-alone data set publication (Malikova and Bryson 2020). Thanks to Vincent Jerald Ramos and Jean Pierre Salendres for replicating the data gathering and analysis technique described in the methods. Thanks also to Jia Kate Yang for research on Chinese media and James Kanter for comments. Thanks to Holger Schmidt for helping us identify the origins and distortions of the bubble meme in figure 1.
Author Biographies
Joanna J. Bryson is an expert in intelligence broadly, including AI, AI policy, and AI ethics. Her original academic focus was the natural sciences, using artificial intelligence for scientific simulations of natural cognitive systems. During her PhD, she first observed the confusion generated by anthropomorphized AI, leading to her first ethics publication, “Just Another Artifact,” in 1998. In 2010 her work in AI ethics was first recognized by a policy body when she was invited to participate in the UK research councils’ Robot Ethics retreat, where she was a key author of the UK’s (EPSRC/AHRC) “Principles of Robotics,” the world’s first national-level AI ethics soft law. Her present research focuses on the impact of technology on economies and human cooperation, transparency for and through AI systems, interference in democratic regulation, the future of labor, society, and digital governance more broadly. She consults frequently on policy, including to the EU/EP/EC, OSCE, OECD, Red Cross, Chatham House, CoE, IEEE, WEF, and UN as well as national government agencies and NGOs in Switzerland, the United States, the United Kingdom, Canada, and Germany. She currently co-chairs the AI Governance Committee of the Global Partnership of AI, to which she was nominated as an expert by Germany. She holds two degrees each in psychology (BA, Chicago, and MPhil, Edinburgh) and AI (MSc, Edinburgh, and PhD, MIT). From 2002 through 2019 she was computer science faculty at the University of Bath, where she founded and led their AI research group; she has also held postdoctoral, sabbatical, or visiting positions at Harvard, in psychology; Oxford, in anthropology; Nottingham and Mannheim, in social science research; the Konrad Lorenz Institute for Evolution and Cognition Research; and the Princeton Center for Information Technology Policy. She has been the professor of ethics and technology at Hertie School, Berlin, since February 2020.
Helena Malikova works for the European Commission on competition policy. She started her career in investment banking for Société Générale and Crédit Suisse, before soon moving into public service. She was the case manager of the investigation under European State aid rules into Apple that resulted in a EUR 13 billion claim for unpaid taxes in Ireland. Malikova is currently running a financial data analysis project at the Directorate General for Competition to enhance understanding of the corporate strategies of companies in the platform economy. One key focus of the data analysis is an assessment of the consequences for European consumers of increased corporate market power. She holds a master’s in European economics from the College of Europe in Brugge, and in 2016 was awarded the EU Fellowship at UC Berkeley.
Footnotes
1. The US company Microsoft has so appealed—once, in 2018 (Smith 2018). Facebook has very recently made similar but less specific calls.
2. From the minutes of the September 25, 2020, testimony to the Parliamentary Assembly of the Council of Europe’s Culture Committee, revised and declassified December 2020.
3. In fact, the term “AI cold war” or at least its widespread use may be due to Kai-Fu Lee, though he was protesting against the metaphor. For example, in Lee (2019, 31), “An AI arms race would be a grave mistake. The AI boom is more akin to the spread of electricity in the early Industrial Revolution than nuclear weapons during the Cold War.”
4. Personal communication to JB, who is one of the expert members of GPAI.
5. Mr. Stubb also recently ran for the nomination of the largest party in the European Parliament to be the president of the European Commission.
6. Mr. Stubb does go on to acknowledge that the European Union has to continue to work with China, nevertheless.
7. Apple’s dual capacities in hardware and software may reduce the external exposure of its IP and therefore its reliance on external enforcement.