The rapid shift to online teaching and learning during the COVID-19 pandemic has accelerated the penetration of an algorithmic worldview into education systems around the world. Promoted by a burgeoning educational technology industry, platforms that use algorithms to structure and monitor teaching and learning have been presented as technical solutions to systemic problems. But they have also created new problems and reinforced existing inequities, stirring up public and political backlashes. Beyond its immediate effects during the pandemic in 2020, the expanded use of algorithm-driven learning management systems backed by major corporations has far-reaching implications for the future of global education.

Millions of schools and college campuses around the world closed during the first waves of the COVID-19 pandemic in the spring of 2020. With over a billion students affected, education systems rapidly adopted digital technologies to enable emergency remote teaching and learning. In the months that followed, governments, nongovernmental organizations, and commercial enterprises alike made concerted efforts to “pivot online.” For the education technology industry, the crisis created a global laboratory to test a novel form of schooling. Separated from physical classrooms and campuses, students would experience education almost entirely through digital media.

Advocacy organizations such as Privacy International have warned that the online pivot could exacerbate commercial exploitation, surveillance, automated decision-making, and manipulation in education. Others, however, see the pandemic as a historic opportunity for massive technological experimentation in the forms and functions of education, with potentially long-term transformative effects on every continent.

The anticipated benefits of increased online education include greater access to quality schooling for underserved populations, as well as innovations in curricula and pedagogy, all helping with “upskilling” students for a high-tech future. The Organization for Economic Cooperation and Development (OECD) released a “strategic foresight” report envisioning post-COVID education as increasingly personalized by digital technology, outsourced to private providers, experimental in organizational form, and taking place “anywhere” through the “power of the machine.”

The exploitation of the pandemic as a laboratory for reimagining education is a result of four intersecting trends: the emergence of a technology-centered experimental worldview in education systems around the world; the development of education data science, learning analytics, and artificial intelligence (AI); the expansion of a commercial education-technology (edtech) industry that has embedded data science methods in education systems; and the growth of an investment sector to support the development of edtech into the post-pandemic future.

Tying these trends together are algorithms, which have attained major cultural, economic, and political importance in contemporary societies, owing to their capacity to process huge volumes of data, produce insights, assist in decision-making, and automate tasks. A look at the historical precedents of the shift to online learning during the pandemic reveals the central role that these lines of code have been positioned to play in future education systems, and the risks of enabling them to shape the course of students’ lives.

Grading on a Curve?

During the pandemic in 2020, hundreds of thousands of British school students became unwitting subjects in a vast government experiment with algorithms. After schools closed across the United Kingdom, cabinet ministers and other officials decided to cancel the annual high-stakes examinations that determine whether graduating high school students qualify for places in higher education. Teachers’ estimates based on students’ past performance would replace exam grades.

There was one problem. Ministers and their agencies assumed that teachers would be overly generous in their scoring, leading to national grade inflation and inconsistency with the distributions from previous years. So exam regulators adopted statistical algorithms (the Alternative Certification Model in Scotland and the Direct Centre-Level Performance approach in the rest of the country) to “moderate” the grades.

When the results were released in August (first in Scotland and a week later in England, Wales, and Northern Ireland), tens of thousands of students learned that their grades were lower than the assessments given by their teachers. Students with high teacher-predicted results in large schools with historically low performance (mostly located in disadvantaged parts of the country) were disproportionately downgraded by the algorithms, compared with students in smaller, high-performing schools serving more affluent students.

The outcome was a huge political scandal and public outrage. The algorithmic models and their developers were accused by the media of unfairly determining students’ life opportunities, reproducing entrenched patterns of class-based inequality, and stifling social mobility. Scottish First Minister Nicola Sturgeon announced a U-turn and publicly apologized to students, acknowledging that her regional government had put too much trust in an algorithm. When the downgraded results were released in England, student protesters chanted anti-algorithm slogans outside the Department for Education in London. After teacher-estimated grades were reinstated in Wales, Northern Ireland, and England, Prime Minister Boris Johnson sought to evade responsibility for the fiasco by blaming a “mutant algorithm.”

Algorithms reflect the social contexts in which they are produced.

The backlash should have been anticipated. A month earlier, the use of predictive grading for the International Baccalaureate qualification, accepted for university admissions by 5,000 schools in 150 countries, had resulted in student protests, legal action, and widespread condemnation. These events revealed the extent to which algorithms have become influential in education—and their potential to have a life-changing impact on a huge number of young people.

Over the previous two decades, education had become an arena for experimentation with algorithms in the UK and beyond, concurrently with the emergence of data science as an academic field, a growing commercial sector, and a policymaking resource. Although algorithms have been a central focus of computer science since the 1950s, their role was amplified by the rapid expansion of commercial data processing and the new discipline of data science at the beginning of the twenty-first century. Google’s search engine algorithms became keys to locating and accessing information. The algorithms developed by online services like YouTube, Amazon, Netflix, and Spotify to provide users with personalized product recommendations based on their browsing or purchasing histories have reshaped media consumption. Facebook’s ranking algorithm and microtargeted advertising influence people’s social interactions and access to political content.

By the time of the COVID-19 pandemic, private sector algorithms were playing hugely powerful and controversial roles in societies and individual lives. Public sector institutions started applying data science–based techniques and software in policy areas including health, justice, and social welfare. In 2020, governments in countries around the world relied on these techniques in expedited efforts to create contact-tracing apps to map the spread of the coronavirus.

The turn to algorithms as seemingly efficient and effective responses to public policy questions is a form of technological solutionism. This algorithmic worldview assumes that quantitative data analysis can provide accurate, objective, precise solutions to highly complex societal problems. But this assumption, stemming from nineteenth-century statistics and natural sciences, obscures the fact that algorithms reflect the social contexts in which they are produced.

Algorithms are always created by specific social actors, whether commercial organizations or political bodies, to accomplish designated tasks by processing data sets. Thus, algorithms cannot be said to be neutral. Many decisions go into determining how any single algorithm will operate, which data it will process, and what results it will produce.

Every algorithm is the product of many human practices and organizational priorities. In some cases, algorithms also express political commitments. For instance, the Direct Centre-Level Performance algorithm was shaped by political and regulatory decisions to discount teacher-estimated results in a way that reflected and reinforced preexisting social, economic, and demographic inequalities. It was driven by long-standing political concerns about grade inflation. Civil servants and technical experts from the government’s examinations and qualifications agencies built the algorithm to intervene accordingly.

Technically, it performed as designed, producing a national grade distribution consistent with previous years. But its effects also reflected the long history of political efforts to regulate the distribution of academic success. The grade-standardization algorithm was the embodiment in mathematics and code of the powerful algorithmic worldview that student “achievement” can be objectively measured and ranked, while obscuring the socioeconomic and demographic factors that structure educational outcomes.
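
The mechanics can be illustrated with a short sketch. The code below is not the Direct Centre-Level Performance model itself, whose full specification was considerably more elaborate; it assumes a simplified rank-order standardization in which each school’s 2020 grades are forced to fit that school’s historical grade distribution. Even this toy version shows why a student ranked highly by teachers in a historically low-performing centre could be downgraded regardless of individual merit.

```python
# Minimal illustrative sketch of rank-order grade standardization.
# NOT the actual Direct Centre-Level Performance model: the grade scale,
# matching rule, and all names here are simplified assumptions.

def standardize_centre(students, historical_distribution):
    """Assign grades by rank, forcing a centre's 2020 results to follow
    the share of each grade the centre achieved in previous years.

    students: list of (name, teacher_rank), rank 1 = strongest.
    historical_distribution: dict grade -> proportion, e.g. {"A": 0.05, ...}
    """
    ordered = sorted(students, key=lambda s: s[1])  # best-ranked first
    n = len(ordered)
    results = {}
    i = 0
    cumulative = 0.0
    for grade, share in historical_distribution.items():
        cumulative += share
        limit = round(cumulative * n)  # students allowed this grade or better
        while i < limit and i < n:
            results[ordered[i][0]] = grade
            i += 1
    # any rounding remainder falls into the lowest grade
    lowest = list(historical_distribution)[-1]
    while i < n:
        results[ordered[i][0]] = lowest
        i += 1
    return results

# A large centre with historically weak results: even students their
# teachers ranked highly are capped by the centre's past distribution.
centre_history = {"A": 0.05, "B": 0.15, "C": 0.40, "D": 0.30, "U": 0.10}
students = [(f"student_{k}", k) for k in range(1, 21)]  # 20 candidates
print(standardize_centre(students, centre_history))
```

In a school of this size with that history, only one candidate can receive the top grade, however strong the teachers’ estimates were.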

Data-Driven Visions

In the decade before the pandemic, the algorithmic worldview spread across education systems worldwide, in step with the expansion of new fields such as education data science, learning analytics, and artificial intelligence. At the outset of the pandemic, charismatic proselytizers talked of unprecedented opportunities for massive “natural experiments” using data science and analytics to assess online learning and compare it with the outcomes of in-person instructional methods. The OECD proclaimed that the pandemic was an opportunity for envisioning new models of education powered by big data and artificial intelligence, reflecting the organization’s previous calls to modernize education systems with digital technology.

As part of a longer-term global trend toward data-driven education, the rise of test-based school accountability and teacher performance measurement systems in the 1990s led to the creation of vast information infrastructures for processing and reporting school data. The development of standardized real-time indicators of teaching and learning outcomes became increasingly desirable and feasible in the early 2000s. Education managers used this data to measure progress toward institutional performance targets and improvement goals.

Starting around 2005, specialists in advanced statistics and data analytics began utilizing this student data for systematic quantitative analysis of academic progress and outcomes at increasingly individualized levels of granularity. Learning analytics became the most prominent expression of education data science. New ventures emerged from early investments in virtual learning and online course delivery at elite US universities, including the Massachusetts Institute of Technology and Stanford.

These ventures were backed by corporations and philanthropic organizations dedicated to educational technologies, such as Pearson Education and the Gates Foundation. An international professional association, the Society for Learning Analytics Research, was established, along with research centers and labs worldwide. Learning analytics grew into a movement, drawing the interest of researchers and education administrators alike.

Learning analytics involves examining traces of students’ activity, engagement, and participation captured on digital education platforms, and then improving outcomes by adapting pedagogies and curriculum materials to better match the needs of each individual. This approach has been promoted as “personalized learning,” the pedagogic embodiment of an algorithmic worldview in education. It is based on the idea that each individual student’s performance can be measured and predicted intimately, in “real time.”
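
In practice, this means reducing clickstream traces to scores and automated recommendations. The sketch below is a deliberately simplified illustration: the field names, weights, and thresholds are hypothetical, and commercial platforms train predictive models on large historical datasets rather than hand-setting parameters, but the underlying logic of converting engagement traces into “personalized” interventions is similar.

```python
# Sketch of a learning-analytics pipeline: platform "trace" data (logins,
# video views, quiz scores) is reduced to a predicted risk score and a
# content recommendation. All fields, weights, and thresholds are
# hypothetical; real systems fit such models to historical data.

from dataclasses import dataclass

@dataclass
class Trace:
    student_id: str
    logins_per_week: float
    minutes_on_videos: float
    quiz_average: float      # 0.0 - 1.0
    forum_posts: int

def risk_score(t: Trace) -> float:
    """Combine engagement signals into a 0-1 'at risk' score (higher = riskier)."""
    engagement = (
        0.4 * min(t.logins_per_week / 5, 1.0)
        + 0.3 * min(t.minutes_on_videos / 120, 1.0)
        + 0.2 * t.quiz_average
        + 0.1 * min(t.forum_posts / 3, 1.0)
    )
    return round(1.0 - engagement, 2)

def recommend(t: Trace) -> str:
    """Turn the score into the kind of automated nudge platforms issue."""
    score = risk_score(t)
    if score > 0.6:
        return f"{t.student_id}: flag for tutor contact (risk {score})"
    if score > 0.3:
        return f"{t.student_id}: assign revision module (risk {score})"
    return f"{t.student_id}: advance to next unit (risk {score})"

for trace in [
    Trace("s001", logins_per_week=1, minutes_on_videos=20, quiz_average=0.4, forum_posts=0),
    Trace("s002", logins_per_week=6, minutes_on_videos=150, quiz_average=0.9, forum_posts=4),
]:
    print(recommend(trace))
```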

The algorithmic approach to personalized learning soon escaped the confines of the academic fields of learning analytics and education data science. The Gates Foundation invested heavily in personalized learning programs, research, and advocacy. Microsoft likewise framed its educational software services as providing personalized learning support. Pearson pivoted to prioritize “digital-first” education product development and analytics expertise. In 2019, the company launched an AI-based personalized learning assistant and a “Global Learning Platform” modeled on Netflix and Amazon. New education technology industries emerged beyond Europe and North America, particularly in India and China, as well as in African and Latin American nations.

In these ways, even before the COVID-19 pandemic struck, the algorithmic worldview had already become encoded in the technical systems used daily by schools and colleges around the globe. By the time classrooms and campuses closed to in-person instruction during the pandemic, data science, learning analytics, and AI had consolidated into a family of algorithmic technologies and methodologies that were ideally situated to analyze and organize students’ online learning. New markets were opening for technology companies and products that could scale up these techniques across entire education systems. Edtech companies based in China and India grew considerably in reach and market value, driven by geopolitical strategies to embed AI in education as a way of bolstering future technological innovation, productivity, and national economic advantage.

The edtech industry had already adopted technologies such as learning management systems (LMS) for the administration, assessment, and delivery of educational courses or programs. These systems were used globally and had amassed huge data sets. LMS companies began launching proprietary data analytics services and algorithms to process this data.

In 2019, Instructure, the company behind Canvas, one of the most widely used LMS platforms in both higher education and schools, announced plans to develop predictive algorithms and analytics for more personalized learning recommendations and feedback. In December 2019, Instructure disclosed that it would be acquired for $2 billion by a private equity firm.

Meanwhile, a decade of development of online learning technologies—highlighted by industry and media hype over Massive Open Online Courses (MOOCs)—led to the consolidation of a global Online Program Management (OPM) industry. OPM companies provided platforms for universities to offer online degrees (typically for a 60 percent cut of the fees), along with proprietary analytics to monitor the learning behaviors of millions of enrolled students. The education market consulting firm HolonIQ estimates that the market value of OPMs, MOOCs, and similar public–private online learning partnerships will reach $15 billion by 2025, with a big boost from the COVID-19 pandemic.

British student protesters chanted anti-algorithm slogans.

Viral Surge

The onset of the pandemic in early 2020 catalyzed surging global adoption of educational technologies for learning management and online classes. As school and campus closures affected more than one billion students, according to UNESCO estimates, teachers and learners around the world were forced to adapt to edtech as the default medium of education. Edtech companies began offering their products free of charge or at heavily discounted prices during the emergency period, as governments turned to them to provide digital services for schools and families.

Adding to the logistical and moral complexity were factors such as parent and teacher protests over school safety, campus outbreaks, and digital inequalities. National governments and intergovernmental organizations such as UNESCO and the World Bank turned to some of the biggest global companies, including Google and Microsoft, to solve these problems. As short-term measures, such interventions were necessary. But they also encouraged technological solutionist thinking about how to improve education over longer time spans, condensing the complex structural challenges facing education systems into definable problems to be addressed with technical codes and algorithms, despite the thinness of evidence demonstrating benefits from edtech for teaching or learning. Mirroring private sector education reforms in other emergency contexts, such as New Orleans after Hurricane Katrina, immediate emergency relief for COVID-affected school systems and universities was translated into large-scale, experimental technology–based reconstruction programs.

Google and Microsoft rapidly scaled up their products to facilitate delivery of distance education. By early April, Google reported that active users of its Classroom platform for online learning had doubled, to 100 million worldwide, in a single month. By the summer, in collaboration with UNESCO and the International Society for Technology in Education, Google had repackaged its educational offerings as The Anywhere School, promoting it as a platform for teaching and learning whether on campus or at home. It also began providing enhanced data analytics, reflecting a growing belief across the primary, secondary, and higher education sectors that the ongoing disruption caused by the pandemic would require increased monitoring of student engagement and participation.

Public–private partnerships focused on educational technology and online learning proliferated on a worldwide scale during the pandemic. UNESCO launched a Global Education Coalition to support the rapid rollout and scaling up of edtech with a massive multisector partnership of 140 members. It included international organizations like the OECD and the World Bank, technology and communications companies from Google, Facebook, and Zoom to China’s Huawei and Tencent, and a range of both commercial and nonprofit edtech providers like Blackboard, Coursera, and edX.

The coalition highlighted the growing prominence of international and private sector organizations in education, as well as the idea that policy problems should be addressed through multisector public–private technology partnerships. These arrangements were framed both as a necessary emergency response to school and campus closures, and as a model for long-term educational reform and transformation. They were part of an ongoing reconfiguration of education as a globalized sector open to private sector involvement.

By the time the pandemic took hold, edtech had become a central focus of this growing sector. Just months earlier, in late 2019, the OECD’s fourth annual Global Education Industry Summit had focused on the theme “Learning in the Data Age,” bringing together education ministers and other government officials with industry leaders to consider the potential of adopting learning analytics, big data, and artificial intelligence in national education systems. The summit marked the convergence of these technological trends with the commercial edtech sector, international policy-influencing organizations, and the leaders of national education systems.

This convergence was the context for the emergency turn to edtech in early 2020, and the enthusiasm of many prominent education industry actors for the idea that the pandemic represented a historic opportunity to experiment with digital transformation. Google’s Anywhere School exemplified this vision: it existed primarily in cloud computing servers, making it amenable to algorithmic analysis on a scale that would be impossible for education in physical rather than online settings.

Eager Investors

For education and technology companies in the business of algorithm-driven data services and platforms, COVID-19 was a major market opportunity. The global education industry treated school closures and distance education as an ideal laboratory for enhancing algorithmic interventions in education systems at multiple scales—from national education systems down to institutions and even to the individual, in the form of adaptive, AI-based personalized learning platforms.

Over the previous decade, enthusiastic market forecasts for the edtech sector had prompted huge growth in venture capital investment, especially in companies whose products boasted data analytics or AI capacity. During the pandemic, edtech forecasts spiked again. HolonIQ predicted that the crisis would stimulate long-term increases in edtech spending and investment, estimating in July 2020 that the edtech market would be worth more than $400 billion by 2025. By October 2020, the two most valuable edtech companies in the world were Byju’s, an Indian online learning provider valued at $11 billion, and the Chinese online tutoring and AI company Yuanfudao, which had received over $3 billion in investment during the year to that point, taking its total market value to $15.5 billion.

In the United States, the high-tech and data-intensive vision of education was advanced by the investing and philanthropic vehicles of wealthy technology entrepreneurs, including Microsoft founder Bill Gates, Facebook founder Mark Zuckerberg, and Eric Schmidt, former chief executive and chairman of Google. In June 2020, New York Governor Andrew Cuomo enlisted Gates and Schmidt, and their respective charitable initiatives, to help “reimagine” education based on their visions of a data-driven world.

Later in the summer, Schmidt Futures launched a competition with the Gates Foundation and the Chan Zuckerberg Initiative, with prizes of funding for edtech innovations that promised to “accelerate the recovery from pandemic learning loss and advance the field of learning engineering.” A core focus of both the Schmidt and Zuckerberg foundations, learning engineering involves a combination of data science, analytics, and AI with psychometrics, social psychology, and cognitive brain science. Schmidt Futures also partnered with hedge fund giant Citadel on the competition and named an expert panel to judge it, including venture capitalists and officials from nonprofit groups. The post-pandemic future of education, Schmidt Futures suggested, would depend on combining algorithmic learning engineering applications and sources of private capital.

Meanwhile, new financial instruments were emerging to capitalize on edtech growth. In July 2020, the South Korean investment firm Mirae Asset launched the Global X Education Exchange Traded Fund (ETF) on the NASDAQ stock exchange to facilitate investment in edtech company stocks. It was followed later in September by the Education Tech and Digital Learning ETF, launched on the London, Berlin, and Milan stock exchanges by a partnership between Rize, a London-based asset management firm, and HolonIQ.

Like other types of index funds, ETFs allocate investor capital across a range of companies in a given category. At launch, both the Global X and Rize funds featured a portfolio of approximately 30 high-value Chinese, Indian, American, and British education and technology companies, most of which offered online learning and tutoring platforms that could capitalize on school and college closures. Both Global X and Rize emphasized that their funds would support companies in a position to use digital technologies to transform education through the algorithmic personalization of learning.

These investment vehicles were another manifestation of how the pandemic had been framed as an opportunity to experiment and demonstrate the transformative potential of edtech beyond the emergency, shaping future education systems on a mass scale. As the OECD had envisioned, and as Google had already realized, future education could take place “anywhere,” powered by proprietary algorithms and funded by private capital. Classrooms and campuses could effectively be transferred to commercial cloud networks.

The pandemic presented a historic opportunity for massive technological experimentation.

Trust at Risk

An algorithmic worldview now permeates education systems and is encoded into the digital platforms that proliferated during school and college closures in the pandemic. COVID-19 has been treated as an experimental opportunity to scale up the use of algorithmic technologies, generate fresh forms of capital investment, and grow market share—while presenting a model vision for the future of the education sector itself.

These intertwined developments have begun to shift authority in the education sphere to new players and new algorithmic devices and technologies. Private technology companies have become closely involved in setting transformational agendas based on the perceived objectivity and precision of algorithms and data science. They are backed by high-profile philanthropists and investors, as well as by international organizations, all of which can direct substantial funding toward algorithmic edtech models and influence the direction of education policy.

In the algorithmic worldview, software and data science applications have become the default solution to education systems viewed as faulty and in need of fixing. The analytics, data, and AI systems developed by global technology companies and edtech businesses have become experimental engines of algorithmic education—and school systems have become their laboratories for new digital forms of teaching and learning in the post-COVID future.

Yet these experiments may yield only short-term change, weak results, or even rejection by students, educators, and wider publics. The path to post-pandemic transformation of education systems will not be smooth, as indicated by public resistance to plans to involve Bill Gates in “reimagining education” in New York, and by widespread media coverage of the risk that monitoring students with remote algorithmic systems during school closures could become a form of surveillance. The disputes in the UK over grade standardization have further demonstrated the potential of algorithms to stir public distrust when they are used to make high-stakes decisions in education.

Such questions over responsibility and accountability point to the political risk of placing trust in algorithms, as well as the danger that they may worsen existing patterns of inequality and unfairness in education systems. Although promoters of algorithmic solutions foresee a transformational future of data-intensive teaching and learning, the UK test results scandal shows how public trust in education authorities can break down when opaque, privately controlled algorithms are perceived as determining students’ life prospects. Resistance is likely to be particularly acute if experimental algorithmic education is seen as profiting companies and investors at the expense of students’ social mobility and equality of opportunity.