There is a need for more nuanced and theoretically grounded analysis of the socio-political consequences of methodological reforms proposed by the open research movement. This paper contributes by utilising the theory of academic capitalism and considering how open research reforms may interact with the priorities and practices of the capitalist university. Three manifestations of academic capitalism are considered: the development of a highly competitive job market for researchers based on metricized performance, the increase in administration resulting from university systems of compliance, and the reorganization of academic labour along principles of “post-academic science”. The ways in which open research reforms both oppose and align with these manifestations are then considered, to explore the relationships between specific reforms and academic capitalist praxis. Overall, it is concluded that open research advocates must engage more closely with the potential of reforms to negatively affect academic labour conditions, which may bring them into conflict with either university management or those who uphold the traditional principles of an ‘all round’ academic role.

The term ‘open research’ often refers to a grassroots movement of researchers calling for reform of standard methodological practices and reporting, which currently lack transparency and enable the proliferation of epistemically unreliable findings. Recently, Uygun-Tunç et al. (2021) critically reviewed claims by scholars including Mirowski (2018), Callard (2022) and Peterson & Panofsky (2020, 2021) that open research is a politically neoliberal project. Uygun-Tunç et al.’s conclusion was that this premise is inherently flawed: open research is not a unitary entity but comprises a diverse set of suggestions for reform, and there is no direct link between methodological reforms and political ideology. Instead, there are complex, heterogeneous relationships between specific methodological practices, the axiological position from which their benefits are considered, the policies and regulations which incentivise their use, and the political and ideological positions with which such policies are connected.

However, given the potential impact of open research reforms on the existing research ecosystem and labour conditions of researchers, a critical examination of the socio-political consequences of such reforms is a timely and important endeavour (Uygun-Tunç et al., 2021). The present paper discusses these issues via two novel contributions. First, I focus specifically on the role of universities in the research ecosystem, including the internal policies and strategies they may develop to support and/or incentivise open research practices. Second, I use the theory of academic capitalism (Slaughter & Rhoades, 2009) to provide a more specific, historically situated definition of neoliberalism as applied to universities and ‘the academy’ in the context of the scholarly landscape. This landscape is described using the Open Scholarship Framework (Knowledge Exchange, 2017), which arranges actors into three levels: the labour to produce knowledge is done at the ‘micro’ level by individual researchers, who are employed and coordinated by ‘meso’ level actors including universities, scholarly societies, publishers, and platform providers; meso-level actors are in turn directed and regulated by ‘macro’ level actors, namely national and multinational research funders and governments. This is a dynamic system in which the priorities, actions, and influence of actors at all levels are in flux, often in bidirectional ways.

My analysis will consider how the reforms proposed by the open research movement may be practically implemented by universities. It will also consider how the use of specific open research practices may oppose and/or disrupt academic capitalist praxis, how policies employed by universities to promote open research can align with and facilitate managerialism and neoliberal principles, and the potential impact of this on researchers in the future.

‘Academic Capitalism’, ‘Neoliberalism’, and ‘Managerialism’ are terms that have been used interchangeably in some contexts, and to describe distinct phenomena in others (Tight, 2019). Informed primarily by Münch (2014), Tight (2019), Slaughter & Rhoades (2009), Shepherd (2008), Jessop (2018), and Klikauer (2013), in this work I will define Academic Capitalism as the tendency for universities as individual actors to operate in competition with each other in markets, competing over both economic and social capital. This tendency is manifested in their priorities, activities, internal organisation, and management. Such markets are the result of Neoliberalism, a strand of political economic philosophy which uses policy, governance, and legislation to create the conditions for unregulated, tariff-free markets in whatever area it is applied to, promoting competition through the competitive allocation of resources (e.g., funding). Finally, Managerialism is a primary way that academic capitalism is implemented in universities, through the use of professional managers and private sector business ideology to make decisions about the way that the university is run. These definitions will limit the scope of the paper primarily to the activities of the university, and the ‘academy’ of scholars within, rather than the influence of capital on other areas of the scholarly ecosystem. The terms “researchers”, “scholars”, “academics” and “faculty” will be used interchangeably to describe those employed by universities at least in part to conduct scholarly research.

Academic capitalism affects all arenas in which universities engage in competition with each other, including research, which has become a key aspect of most universities’ purpose and public identity (Brink, 2018). Informal competition between universities in the arena of research has always existed: comparing achievements and reputation is not a novel development as a result of neoliberalism. However, academic capitalism changes the nature of the competition, including the rules and rewards, and who is responsible for the governance of the research process (Münch, 2014). In order to properly assess these changes and their impact, a brief comparative view of the ‘traditional’ research governance model is warranted.

In the ‘traditional’ model of research governance, it is academics who are primarily responsible for organising the endeavour of collaborative knowledge production. They are employed as autonomous scholars, but participate in the ‘academic game’, a social institution whose rules structure the production and distribution of knowledge (Münch, 2014). In this game, a researcher’s contribution to knowledge is offered to the broader academic community in a ‘gift exchange’, receiving recognition and prestige in return. Whilst the ‘gift’ metaphor implies taboos against counting or quantitatively valuing contributions, novel or high-quality pieces of research that significantly advance knowledge are well-received and prestige is allocated accordingly. As such, the game is competitive, but this competition channels a researcher’s innate curiosity to produce the Mertonian paradigm (Merton, 1973): a description of how four communal norms of communism, universalism, disinterestedness, and organised scepticism provide a moral framework for researchers to abide by whilst navigating an individual reward system to progress knowledge. Communism dictates that knowledge produced by the game is a ‘public good’, to be shared with other researchers and made for the benefit of wider society rather than private interests. Universalism means value should be assessed on the quality of the research itself, rather than the reputation of the researcher. Disinterestedness stipulates that the purpose of conducting research should be to advance knowledge rather than to serve private interests, and organised scepticism states that all claims and ‘gifts’ should be subjected to critical evaluation. Practically, a researcher’s participation in the game shapes the daily routine of research and scholarship that maintains the system, such as teaching, supervising PhD students, conducting peer reviews, editing journals, and developing and sharing resources.

The academic game is largely self-regulating, meaning that academics are responsible for governing, organising, and administrating the process of research. This responsibility is formalised through organisations such as scientific associations, and other actors defer to academics to determine the research agenda, research funding and evaluation, and disciplinary ethics. Academics control what should be researched, by allocating funding to ideas that are seen to build upon existing knowledge in the discipline. In addition, they judge the quality and relevance of research by evaluating a study’s contribution to knowledge (through the process of peer review, but also continuously through repeated citations) and allocate prestige accordingly (e.g., through prizes, awards, invited talks, etc.). This trusteeship of the governance of research is an important component of ‘academic freedom’, and the protection of knowledge generation (and dissemination in the form of teaching) from reprisal by legislation or private interests (Reichman, 2019).

In the traditional model of research governance, the university plays an important, yet passive, role. It exists primarily as an organisational unit, and an interface between the academy and wider society. The university provides scholars with resources (both physical and digital) to conduct research and acts as a platform to support them to pursue their scholarly activities. It provides academics the freedom to concentrate on scholarship by remunerating them with a reasonable salary and overseeing the broader financial necessities for research, including journal subscriptions, modern technology, and recruiting students. In this context, scholars are seen as appointees rather than employees of the university, in that they have professional autonomy, and are not “answerable” to the university’s management (Finkin & Post, 2011). The university also fulfils a social role of fostering a local academic community, providing a base of collective security from which academics can exercise their academic freedom, and insulating the faculty so that it remains autonomous from the rest of society (Brink, 2018). Finally, the university’s reputation provides scholars with the credibility to participate in the academic game. In this model, whilst the university has senior management who make financial decisions and take on leadership roles, these positions are filled primarily by academics out of obligation or by dint of seniority or experience, rather than as a separate career (Kirkpatrick, 2016).

The competitive aspect of the traditional model is worth emphasizing, given that terms like academic capitalism and neoliberalism have often been used as “universal scapegoats” (Tight, 2019). That competition has always been integral to academia is an element sometimes ignored when Mertonian norms are invoked in the defence of a cooperative, harmonious research endeavour. Accompanying Merton’s norms were Mitroff’s (1974) four “counter-norms”, which he argued more accurately described academics’ behaviour: solitariness (treating research as an individual rather than communal endeavour), particularism (preferential treatment of some researchers’ work over others’), self-interestedness (conducting research for personal gain) and organized dogmatism (biased defence of preferred findings and theories). Mulkay (1976) argued that the Mertonian norms are more accurately described as “vocabularies of justification” used by researchers to defend the traditional research governance model against external interference (e.g., from academic capitalism), rather than inherent values.

Whilst the traditional model may not be a fully historically accurate description of the system, it still serves as a useful comparison to the model of research governance under academic capitalist logic, which is increasingly found in modern universities. The fundamental difference between the models is that the unwritten codes of the gift-based ‘academic game’ are supplanted as the dominant paradigm by the more powerful force of market logic in structuring the rules of the competition of knowledge production (Münch, 2014). The traditional competition between researchers is constrained by the taboos, social ties, and mutual obligations of the gift economy, whose intangible code of conduct is what generates the trust between participants required to sustain the system (Godbout & Caillé, 1998). Treating research as a gift creates social obligations for academics to continue producing it as part of their role in society and as participants in the academic game, but precludes the existence of any specific expectations of its production or receipt. Therefore, the question of how to motivate, incentivise, or measure its production is inapplicable. In contrast, the shift to a market economy creates the explicit expectation of appropriate and timely recompense: the concept of ‘obligation’ is obsolete from a market perspective (Fisher, 2009). Treating research as a commodity creates, at a meso-level, the expectation that universities produce a competitive amount and quality of research to justify their existence and continued public funding (Collini, 2012). At a micro-level, researchers are expected to produce competitive amounts and quality of research to justify their continued employment in such a role.

This change from gift economy to market economy is historical yet ongoing, and the result of the accumulation of various neoliberal policies from macro-level actors across the last half century. Chief among these are governmental policies designed to change both the function of universities and how they are primarily funded, with an emphasis on competitive, conditional funding based on their contribution to the economy (including educational and research outcomes; Collini, 2012), an application of “business ontology” (Fisher, 2009). Two of the most influential proximal drivers of the research market, which record the observable results of the competition, are: (1) the competitive allocation of research funding via national research performance evaluation exercises (e.g., the Research Excellence Framework ‘REF’ in the UK) and (2) the quantification of research performance in university rankings developed by media organisations and private consultancy firms (e.g., the Times Higher Education (THE) World Rankings and QS World University Rankings). Within these markets, the university competes over both the inputs (research funding) as well as the outputs (publications and research impact) of the research process. The shift from gift to commodity has profound philosophical and epistemological consequences for conducting research (Oancea, 2019), but also produces changes in research governance and the practical organisation of the production of research within universities.

The university’s newfound position as a participant in a research market means it takes on an active rather than passive role (Münch, 2014). It adapts to this role by copying the behaviours of other capitalist actors: by adopting the business practices of managerialism that have been used to successfully navigate and compete in markets in other industries. The premise of managerialism is that differences between organizations are not as important as similarities, and so following ‘tried-and-tested’ ways of organising and incentivising a workforce can be used to improve the performance of any organization (Muller, 2018).

The university’s role as a capitalist actor fundamentally changes the relationship between the institution and its academics. Instead of being a background platform to support professionally autonomous researchers whilst they engage in their independent academic game, the university now aims to strategically manage and “steer” its academics to ensure that the research they are doing also serves the university’s aims of succeeding in its own competition in the research market (Rees, 2015). Such logic comes from the managerial ‘principal-agent theory’, which dictates that when the motivations of an interested principal (the university) and its employed agents (the faculty) diverge, the agents must be coaxed into following the principal’s priorities via a combination of monitoring and reward (Muller, 2018). In this model, researchers are no longer ‘appointees’ of the university who are intrinsically motivated and obligated to advance knowledge, but instead are expected to be mindful of and work towards the priorities and mission of the university as employees of a competitive business, demonstrating ‘performance’ in support of the university’s extrinsically motivated goals. To facilitate this, the university seeks to take over the administration and governance of research from academics. The capitalist university’s position in a broader “knowledge economy” (Powell & Snellman, 2004) also means the university may de-prioritise a default ethos of sharing knowledge as a public good, instead taking an entrepreneurial perspective on its researchers’ work, developing it into intellectual property where possible.

Whilst the market system is clearly a new powerful force in the knowledge ecosystem, it does not replace the traditional academic game. Instead, the capitalist system manipulates the participants in the academic game and distorts the norms of the traditional model to serve its own purposes. There are limits to neoliberal intervention in non-economic systems and on human behaviour, and despite all the resources universities put into competing in the market, “success” in this arena often actually has a limited effect (at least in the short term) on changing the status quo of historical status or prestige (Marginson, 2013). However, the reversal of academic capitalist influence is also unlikely: many academics are now “metric-natives” whose only experience of research governance is one in which research is treated as a quantified commodity, and the marketized competition to produce research is part of their academic habitus (Oancea, 2019). Without comprehensive changes to the neoliberal ethos at the macro-level, there is little incentive for universities to hand power back to academics in the governance of research (and even if there were, it seems farfetched that they would). As such, the lens of the academic capitalist university is crucial for understanding the way that research governance is organised both presently and in the future.

The neoliberal ethos of a market-like environment for research at the institutional level is a political (and therefore societal) decision, based on choices about the funding (and therefore valuing) of research. However, such neoliberal policies do not typically impact researchers directly, but indirectly through the decisions and actions of the university. These indirect effects, or ‘manifestations’, represent observable, yet not inevitable, consequences of academic capitalism. The following section will cover three common manifestations: the quantification of the research performance of academics, the growth of administration and bureaucracy, and the reorganisation of academic labour. Local expressions of these manifestations may vary considerably, as there is not a ‘unity of object’ when considering a university’s form or organisation, and the pressures for change from neoliberal policy are not universal across the sector (Nedeva et al., 2014). The extent to which such principles manifest will depend on the size, shape, and mission of each university. In particular, smaller or specialist universities which have not historically competed with larger institutions may retain more traditional academic organizational structures and not (yet) be as influenced by market logic. Nevertheless, the descriptions of these manifestations will be at least partly recognizable to most academics.

Quantification of Research Performance Through Metrics

The most visible manifestation of academic capitalism is the university’s assessment of its researchers’ work by metrics: quantitative measures to assess the quality of research and (by extension) researcher performance. This includes both peer-led quantitative assessments of research quality, as well as automatic bibliometrics which record data about research outputs. Bibliometric assessment is often achieved through the university’s use of research monitoring systems, examples of which include Elsevier’s PURE, Jisc’s CORE, and Digital Science’s Symplectic. Academics are mandated to engage with such systems and populate them with information about research outputs, from which bibliometric data is obtained in conjunction with scholarly databases (Lim, 2019). The system providers then sell the tools to analyse this data to universities and policy makers as strategic aids, but also as performance management systems. These ‘data-analytics’ services are an increasingly important part of the providers’ future strategies and revenue streams (Chen & Chan, 2021), and the fact that engagement with the systems improves their value and utility (by increasing the raw materials) predicts their future expansion and embeddedness in universities. This cycle helps form an “invisible infrastructure dominance”, enabling the systems to exert an increasingly powerful influence on university decision-making (Chen & Chan, 2021). The systems are adaptable and can record and quantify other types of academic work, such as peer reviews, consultancy, and in-progress research work (Lim, 2019). Often, these systems populate information on a researcher’s personal university webpage, providing an instant public record of performance assessment – not only a traditional list of publications but also their bibliometric performance.

Universities employ quantitative measures not only as strategic tools to understand their overall performance in the markets in which they are assessed, but also in a neoliberal way to incentivise the competitive performance of individual academics. Therefore, the assessment of researchers via metrics reproduces the drivers of the research market from the meso-level at the individual micro-level and localises the impact of macro-level neoliberal decisions. By linking such performance with rewards such as hiring, promotion, development opportunities, and terms of employment, the university ensures the alignment of the activities of researchers with its own goals. The effect of this is to greatly increase the stakes in the already naturally competitive research environment. Researchers who fail to achieve a competitive level of performance may struggle to survive in the academic job market, exemplified by the maxim of “publish or perish” (Waaijer et al., 2018). The growth in precarious positions for early career researchers and an oversupply of PhD students have produced an academic labour force larger than the number of positions the system can accommodate (Larson et al., 2013). In this situation, the ‘publish or perish’ assessment of researchers produces an intense global competition for stable and well-resourced research positions, which are subsequently dependent on a very high and consistent level of metricized performance.

The competition is further heightened by a ‘quantitative mentality’ (Mau, 2019) about the process of research, cultivated by both university bibliometric measurement and the prevalence of research platforms marketed directly to academics as social networks or professional tools (for example, Google Scholar, ResearchGate, Academia.edu). Public, granular quantitative assessment of any individual academic is increasingly available at the click of a button, enabling previously incommensurable “new horizons of comparison” between researchers, for example between academics in different disciplines. The quantification of research performance therefore extends beyond an individual researcher’s relationship to their employer or direct competitors in a market and becomes integrated into the global academic status competition. This institutes a more profound and immediate change in the ‘social semantics’ of how the research endeavour is described and understood as an inherently quantified activity (Mau, 2019). Quantification becomes infused into the competitiveness of the traditional academic game for prestige, and literally ‘gamifies’ it by providing scores and status rewards for high performing researchers. Kelly and Burrows (2011) argue that the metricization of research performance has become so ingrained into the “fabric of the life-world of the academy” that they are inseparable, and there is evidence that even qualitative judgements of research quality are influenced by beliefs about metrics (Langfeldt et al., 2021).

The omnipresence of quantitative research assessment, combined with the “hyper-competition” for academic positions and the link between success and employment, coalesces into “regimes of valuation” centred around highly competitive metricized performance (Fochler et al., 2016). These regimes represent the combined discourses, practices, and infrastructures, as well as the participation of researchers within them, that shape the entire phenomenological experience of life as a researcher and their conception of the value of themselves and their work. Fochler et al. (2016) found that nearly all decisions taken by researchers – research-wise, career-wise, and even private life decisions – were strongly geared towards succeeding in these regimes. This exclusive orientation towards competition has pervasive effects on researchers’ decisions about their work, career, collaborations, and relationships (Anderson et al., 2007). Taken together, the expectation of quantified research performance greatly intensifies the traditional competition in academia both within and between universities, reinforcing Mitroff’s (1974) norm of self-interestedness (enabled by solitariness and organized dogmatism) for researchers to focus on their own competitiveness in the market.

Administration and Bureaucracy of the Research Process

Because the university takes an active interest in the research of its staff, it also seeks to take responsibility for the administration of the research process. This leads to a significant increase in bureaucracy when conducting research and is a second manifestation of academic capitalism. Whilst conceptually some of the root causes of administrative growth may be detached from academic capitalism (e.g., necessary adherence to health and safety legislation), the implementation of systems for compliance is symptomatic of the private sector logic and centralised managerial control now employed by universities (Shepherd, 2008).

Examples of increased administration in the research process are numerous. It is found in the process of ethical approval, where complex, multi-page forms need to be filled out for review by Institutional Review Boards (IRBs). Forms are also typically needed to assess the suitability of research data storage; structure formal data sharing and intellectual property agreements; ensure compliance with data protection, copyright, health and safety, and animal research legislation; comply with financial conflict of interest guidelines; and administer personnel management (Bozeman & Youtie, 2020). Complex flow charts and ‘standard operating procedures’ subsequently become required reading for researchers to navigate and understand which forms are needed, alongside mandatory training sessions on how to complete the forms correctly. The process of applying for research funding and administering research funds is also increasingly bureaucratic. Researchers are encouraged or required to engage with “research development professionals” who support academics with the process of developing, applying for, and administering grants (J. Levin, 2011). This assistance is often essential and welcomed when planning complex projects (Carter et al., 2019), but such engagement itself requires additional administrative activity from researchers in order to communicate their needs and access such support.

Viewed through the theory of “administrative burden” (Bozeman & Jung, 2017; Bozeman & Youtie, 2020), research administration in universities can be understood to be in many cases unnecessary. Any ‘burden’ of administration is relative to its benefits. In some cases, the burden is justified in improving or facilitating the research process. For example, compliance with necessary animal protection legislation to prevent legal ramifications for the researcher, or appropriate financial planning of grant funding to ensure enough resources are available to complete the research project. However, these tasks become ‘red tape’ when the burden of conducting the administration makes “no contribution to achieving the rules’ functional objectives” (Bozeman, 1993).

Red tape proliferates when a central, managerial ethos is used to ensure compliance. For example, a standardized ethics form used across an entire university may require every researcher to answer questions on whether animals are being mistreated, even though for the vast majority this is inapplicable (Bozeman & Youtie, 2020). The use of automated technology to facilitate administration (‘robotic bureaucracy’) is considered by many universities to be a solution to administrative burden. However, asking researchers to remotely use technology for administration exemplifies a shift towards “workers becoming their own auditors” (Fisher, 2009) and moves the burden of administration from administrators to researchers themselves. Technology can also end up increasing administrative burden, for example when unclear wording on electronic forms requires slow, repetitive email exchanges with anonymous, context-naïve administrators to clarify. This can explain the paradox that despite the vast increases in administrative roles and spending on administration in universities, the amount of time researchers spend doing administration has also increased (Hogan, 2011).

The tendency for universities to ‘overcomply’ with legislation in order to avoid noncompliance is also common, and further increases red tape. Whilst this may be sympathetically understood as a necessity, in practice the risk of non-compliance is very low, and the consequences overstated. Researchers do not typically seek to behave unethically or unsafely, many risks are not feasibly likely or consequential in the majority of research, and many could not have been prevented by prior review anyway (Schneider, 2015). In a less charitable view, universities may be ignorant of the effects of their policies and procedures on researchers, for example in rejecting the idea that administration may be a burden at all and “seek[ing] every opportunity to go beyond compliance” in their approach to meeting regulations (Manchester Metropolitan University, 2017).

Reorganization of Academic Labour

A third manifestation of academic capitalism is the reorganization of academic research labour within universities. The capitalist university behaves in line with the tenets of Schumpeterian entrepreneurship, seeking to act as an economic innovator in order to navigate the market (Jessop, 2017). Entrepreneurship in this context entails a type of leadership dedicated to inventing new and more efficient ways of producing research. Three innovations developed by universities to improve the efficiency of research are the creation of research strategies, new academic roles, and new conditions of employment. Any one individual university has limited power in shaping the market as a whole but can still seek to make adjustments to the way it conducts its own business, and innovations which prove successful may be copied by its competitors.

The first innovation is the development of ‘research strategies’ and an increase in the top-down organisation of academic labour, to enable the university to strategically manage its staff and guide investment. Managerial practices are used to try to artificially create areas of strength and reputation, and thus increase efficiency (in terms of expenditure per research output) in producing research in particular areas. This includes the creation of ‘research centres’ with which to both internally organise research staff as well as externally promote the university’s research capability. These centres are headed by “institutional academic research leaders”, who mentor junior researchers from an explicitly institutional perspective (Rees, 2015). Research centres are often multidisciplinary, moving away from a traditional departmentalized organisation of the university, and focused around “challenges” or areas of societal need or interest (Kellogg, 2006), which purposefully mirror the conceptualisation and priorities of many of the main research funders’ missions (Rees, 2015). Research centres develop ‘corporate identities’ in the mould of private firms (Braun, 2014), in order to market their research and expertise to the public, industry, and other academics. This may help to improve performance in research evaluation exercises and a university’s ranking score by raising awareness amongst academics at other institutions, whose views are collected in ranking reputation surveys. Internal funding for research costs, equipment, and training, as well as institutional support for external funding applications, is often made dependent on researchers conducting research within these centres. Therefore, whilst the university cannot violate the concept of ‘academic freedom’ by forbidding academics from researching in alternative areas, it can construct an environment in which it is more arduous to do so, ‘nudging’ researchers towards aligning their research with the university’s strategy.

A second innovation comes in the creation of new academic roles and changes to the activities that academics perform. In aiming to produce higher quality research and attract more grant funding, research centres focus on larger, inter-disciplinary projects (Rees, 2015), in which a practical need for specialization of roles emerges (Kellogg, 2006). Specialization is accompanied by ‘collectivization’ and novel combinations of expertise into research teams (Nyhagen & Baschung, 2013), including roles linked to private enterprises which enable industry partnerships, “knowledge-exchange” programs, the “blending” of academic and professional roles, and researchers’ careers that are “braided” from academic and industry work (Oancea, 2019). These developments inevitably change the working practices of researchers, who are encouraged and incentivised to adopt “nimble knowledge production” strategies (Hoffman, 2021) with a clear entrepreneurial focus on adaptability and rapid capitalization on changes to external funding and collaboration opportunities, rather than autonomously pursuing their own research agenda. This innovation helps to fulfil the university’s aims of generating capital by commodifying the results of its research: successful ventures may be expanded into spin-off companies or privatized and licensed using intellectual property agreements (Slaughter & Rhoades, 2009).

The university may also change the terms of employment of researchers for maximum efficiency, for example by using metrics to strategically re-deploy staff who fail to maintain a sufficient level of research performance to other activities such as teaching. Redistributing the teaching load frees up research time that can then be reallocated to more productive researchers, or those demonstrating “talent” (Rees, 2015). In extreme cases, a perceived lack of efficiency or alignment in the research output of a particular research group may result in “restructuring” exercises, and mass termination of contracts or ‘firing-and-rehiring’ on teaching-only contracts (see HLS47, 2022; Hooley, 2021). A more pernicious form of reorganising academic labour comes in the proliferation of part-time or fixed-term contracts for academics (Macfarlane, 2011). This development echoes their growing use in other industries and can primarily be seen as a measure for improving the economic efficiency of the university, by employing workers only for the completion of specific necessary tasks, and no more. One example is the increase in temporary “postdoc” positions, where researchers are employed on short-term, externally funded projects, without prospects for permanent employment. Candidates are selected for postdoc positions based on their skills for completing specific projects, rather than their potential to develop into traditional, well-rounded, autonomous “principal investigator” scholars (Herschberg et al., 2018). The lack of opportunities or time to develop the skills necessary for leadership roles leads instead to an emphasis on further technical specialization, perpetuating the reorganization of research as it produces a labour market geared towards rapid, project-based entrepreneurial research rather than long-term, incremental research agendas. The role of these researchers shifts from being “integrated” multitasking researchers to being hyperspecialist “supporting scientists” (Lee & Walsh, 2022). This labour market has been compared to a “gig economy” (G. Nelson et al., 2020), characterized by independent workers contracted to short-term, temporary, “contingent” positions, and normalized in the language of job advertisements seeking “casual researchers”, with circular justifications that this is just “how the system works” (Ivancheva, 2015).

Collectively, these innovations to reorganize academic labour have been described as a shift to “post-academic science” (Ziman, 2000), since they overwrite many of the norms of academic work under the traditional research governance model. Whereas traditionally the academic role comprised a range of scholarship duties which maintained the system of the academy (with an emphasis on the co-dependent relationship of teaching and research; Münch, 2014), in post-academic science the duties of the scholar are “unbundled”, leading to major changes in academic identity and the redefinition of the activity of research (Macfarlane, 2021). The collectivization of specialists may result in greater practical collaboration between researchers, but as these research centres also serve to fragment disciplines and the conditions of employment are unstable, the power of the faculty as a traditional community, collective decision maker, and unified labour force is diminished (Braun, 2014).

The Impact of Academic Capitalism on Research

The principles of open research and the development of the science reform movement can be seen as a reaction to the effects that academic capitalism has had on the actual process of research and the resulting quality of knowledge produced (Uygun-Tunç et al., 2021). These issues have been discussed in detail elsewhere (Young et al., 2008) but are covered briefly here. The market forces at the meso-level dictate the criteria for which type of research is rewarded in the micro-level market, and the unavoidably imperfect nature of these criteria results in deviation between “what is good for scientists” and “what is good for science” (Nosek et al., 2012). This in turn incentivises the production of research which furthers careers and market performance but not the development of human knowledge. Key metrics researchers are assessed on include the number of published pieces of research and the number of citations per publication. Here, the ‘publish or perish’ effect of academic capitalism interacts with the ‘bottleneck’ of the publication process to incentivise research which has “publishability” but not necessarily validity (Nosek et al., 2012).

Primarily, publishability involves criteria related to the perceived novelty and significance of research, values which also exist in the traditional research governance model. Publication standards (encompassing journal policies, editorial decision-making, standards and scope of peer review, and tolerance of breaches of citation ethics) are thus an important contributing influence on these issues independent of academic capitalism. However, academic capitalism also indirectly sustains these standards in a feedback loop: journals are funded primarily through the efficiency-conscious library budgets of the capitalist university, and thus must compete with each other for subscriptions (Morrison, 2013). Libraries decide which journals to subscribe to partly by considering quantitative journal impact factors and journal rankings, and journals typically pursue performance on these metrics by publishing research which fits ‘traditional’ standards of novelty and significance (Spezi et al., 2018). Thus, academic capitalism exacerbates existing problems with misaligned incentives in research publishing whilst simultaneously acting as a barrier to reform.

The most severe example of the discrepancy between publishability and good research can be seen in cases of fraud, including fabrication or manipulation of data, (self-)plagiarism, and duplication of publication (Harvey, 2020) to improve a researcher’s performance on metrics. Academic capitalism exacerbates the likelihood of fraud by increasing the benefits of perpetrating it (via ‘publish or perish’), thus making it cognitively easier to justify, and by obscuring the likelihood and severity of negative consequences, as many institutional investigations into research misconduct are not reported publicly to avoid reputational damage (Science and Technology Committee, 2018). More prevalent than fraud are Questionable Research Practices (QRPs), grey areas of acceptable practice that can be exploited to enhance publishability (John et al., 2012) and “grease the way through the publication bottleneck” (Giner-Sorolla, 2012) by improving the aesthetic presentation of research to exaggerate or fabricate standards of novelty or significance that it would not otherwise possess if presented honestly.

Whilst QRPs are not novel, their use is undoubtedly incentivised by the increase in competitiveness caused by academic capitalism. Under the traditional research governance model, the expectation of producing novel and significant research was not universal (Macfarlane, 2021), but in the competitive market it is ubiquitous and linked to employment conditions and livelihood, and survival in academia itself (Anderson et al., 2007). Therefore, conducting and publishing multiple small studies using QRPs represents an effective strategy for survival under these conditions (Bakker et al., 2012). It is thus not surprising that the pressures of ‘publish-or-perish’ have been linked to researchers admitting to using, intending to use, or tolerating QRPs in research (e.g., Bruton et al., 2020; Gopalakrishna et al., 2022; Haven et al., 2019; Tijdink et al., 2014; van de Schoot et al., 2021).

The overarching effect of fraud and QRPs is an increase in research that is epistemically unreliable (Romero, 2019), which stalls the incremental progress of knowledge-accumulation and damages trust in the entire collaborative research endeavour needed to sustain the system (Wilholt, 2013). This in turn diminishes the ability of the system to retain academics, and results in a “leaking away of trust” among researchers (Oancea, 2019), particularly those early-career researchers who often enter academia intrinsically motivated and with humanistic ideals (Cidlinska et al., 2022). Such researchers are subsequently confronted with the reality of “regimes of valuation”, creating frustration and disillusionment. It is this frustration that has been the main source of fuel for the grassroots movement of open research, providing a strong ethical imperative for reform of the current way that research is conducted and published (Lupia, 2021).

The term ‘open research’ has been used to refer to a multitude of initiatives and ideas (Fecher & Friesike, 2014), but reforms to standard methodological practices, specifically to the way that research is reported, can be seen as the fundamental type of reform that facilitates other forms of openness. Methodological reforms target the lack of transparency in the research process, which is what the majority of QRPs and fraud exploit, given the disconnect between the limited contents of a post-hoc publication and the complex process of the research activity itself. Many of the suggested reforms to combat epistemic unreliability are not novel (e.g., de Groot, 2014; Greenwald, 1976; Kerr, 1998; Loftus, 1993; Mills, 1993), but the modern open research movement has a momentum not seen in previous iterations, and so has been dubbed a “revolution” (Spellman, 2015). Multiple factors have coincided to enable this, including new technologies that facilitate open scholarship (Weller, 2011), and the growth and influence of ‘meta-research’ (Ioannidis, 2018) which has provided new empirical evidence to support arguments for the need for reform. This includes evidence of the widespread use of QRPs (John et al., 2012); indications of “crisis” levels of irreproducibility in published research (Gelman & Vazire, 2021; N. C. Nelson et al., 2021; Open Science Collaboration, 2015); and high-profile examples of research using QRPs to produce findings which are publishable yet ontologically impossible (e.g., ‘evidence’ for psychic powers; Bem, 2011; see Wagenmakers et al., 2011 for a commentary).

Munafò et al. (2017) describe many of the key reforms, which provide transparency to all stages and aspects of the research process. These include: sharing time-stamped research plans (pre-registration); making research materials, data, and analysis code “open” and freely available; following standard reporting guidelines; transparency of the roles and specific contributions of all individuals involved in a piece of research (open authorship); and making the final publication (as well as all of the above elements) freely and easily accessible to other researchers and the public. Research that adopts these practices is not immune from being gamed by QRPs, but it makes their use detectable and transparent. This improves the epistemic reliability of research by providing sufficient information for the reader to determine whether fraud, errors, or QRPs have compromised it. Adoption of open research practices therefore helps to restore trust in the system as a whole: as epistemically unreliable research is subsequently easier to detect (and thus discount or ignore), it enables a more efficient accumulation of reliable knowledge that is the bedrock of incremental science (Lakens & Evers, 2014). These progressive arguments used to support open research reforms mean the movement can also be conceptualized as a ‘civilizing process’, and the epistemological changes cast in moral terms: “a pursuit of what is right when it comes to knowledge-making” (Penders, 2022, p. 112).

The use of open methodological practices also has broader downstream consequences that relate to other conceptualisations of ‘open research’ (Fecher & Friesike, 2014). For example, the normative sharing of plans, data, and materials, combined with transparent reporting of contributions, facilitates large-scale inter-institutional collaboration between researchers, enabling “Big Team Science” (Forscher et al., 2022). Sharing research data, materials, and analysis code enables their re-use by researchers with less access to resources, thus facilitating initiatives to improve diversity in research (Grahe et al., 2020). Making the research process transparent and the resulting publications openly accessible also helps to promote ‘citizen science’, initiatives where the public or other stakeholders are invited to contribute to the direction, design, and process of research (Hecker et al., 2018). Considered collectively, the open research movement can be seen to represent an ethos of research, built around multiple definitions of the word ‘open’, and closely tied to moral concepts of integrity, inclusivity, and diversity.

The historical growth of open research as a grassroots movement means that reform advocates have typically promoted open practices through local engagement; for example, running training on open research technology and theory, organising talks and reading groups, and modelling the use of open research practices to develop new cultural norms (Armeni et al., 2021; Nosek et al., 2020). This “bottom-up” promotion of open research targets the practical agency researchers have in the research process over how they choose to conduct and report research (Yarkoni, 2018), and is thus aimed at encouraging reform of individual researchers or groups at the micro-level of the research system. ‘Grassroots’ does not mean impoverished, and many of these initiatives have been well-resourced by both funders and institutions. However, there has been a growing recognition of the importance of holistic change of the entire research eco-system to enable and encourage widespread, sustainable adoption of open research practices (Cuevas Shaw et al., 2022; Knowledge Exchange, 2019; Nosek et al., 2015). There is an acknowledgement of the need to reform the “top-down” policies of the meso-level actors in the research system, in order to target the incentives that currently make conducting closed research (enabling QRPs) desirable and rewarding. Potential reforms of university policies are important elements of this, and will here be considered within the framework of academic capitalism, to frame the university’s position and priorities in potential top-down changes. The following sections will discuss how open research reforms may oppose academic capitalist principles, whilst simultaneously exacerbating some of their manifestations.

The principles of transparency and openness at first seem at odds with many of the principles of academic capitalism and the changes it has made to research governance. Open research reforms have often been explicitly linked to Mertonian norms, appealing to traditional ethical values for governing research (e.g., Cohoon & Howison, 2021; Vazire, 2018). The previously explored manifestations of academic capitalism provide clear evidence of practices which promote Mitroff’s ‘counter-norms’: specifically self-interestedness and solitariness. Where such manifestations are realized, the argument that open research opposes academic capitalism may appear justified. The reality is more complicated, as open principles do not map perfectly onto the values espoused by Mertonian norms (Hosseini et al., 2022), even though many open practices appear to support interpretations of them. Nevertheless, on a conceptual level there are ways in which open research reforms may oppose the principles of academic capitalism, and in some cases the manifestations previously discussed.

Communalism vs. Entrepreneurship

Sharing results, data, materials, and other artefacts of the research process is a practice in clear alignment with the Mertonian norms of disinterestedness and communalism, which state that research should be conducted for the benefit of society more broadly (rather than for private interests) and that knowledge should be shared freely with other researchers and the public. Indeed, the open principles of timely sharing of resources and active inclusivity may represent even more extreme interpretations of these values than originally intended by Merton (Hosseini et al., 2022). On a theoretical level, these principles oppose the priorities of the capitalist university aiming to act as an entrepreneur and commodify the knowledge it produces. Practically, adopting open research principles of publicly sharing all data and results may hamper universities’ efforts to set up industry collaborations, where findings may be commercially sensitive, or negative results suppressed to avoid reputational damage (Fernández Pinto, 2020). For example, collaborations with industry partners typically involve nondisclosure agreements or require approval to publish negative results (Czarnitzki et al., 2015), meaning an insistence on open research would block such collaborations and cut off potential sources of funding and research development. Additionally, publicly sharing the products and results of research (without restrictions) is antithetical to commercializing them via intellectual property agreements: for example, the development of a piece of research software or a psychometric questionnaire, which could either be provided for free as open-source or licensed to other researchers for a fee. Attempts have been made to bridge the concepts of openness and entrepreneurship with terms such as “Open Innovation” (Beck et al., 2022); however, these utilise an impoverished definition of openness that merely refers to new types of collaboration or knowledge production, “open” only in contrast to traditional confidential knowledge generation in industry. Universities may attempt to overcome tensions between open methodological principles and entrepreneurship by implementing selective open research policies and governance procedures: encouraging and supporting open research practices only for research that is publicly funded, does not have commercial potential, or is not part of an industry collaboration. Such selective policies arguably conflict with the ethos and ‘ideal’ of open research, but ostensibly allow the university to be seen to support openness by providing practical support and infrastructure for open research activities (Fernández Pinto, 2020).

Sharing Resources vs. Competitive Self-Interestedness

Making research plans, materials, data, and analysis code open allows their re-use by other researchers, saving time and resources. Practically, these are powerful symbols of anti-competitive behaviour, and clearly in opposition to counter-norms of self-interestedness and solitariness promoted by hyper-competition under academic capitalism. However, the extent to which these behaviours actually disrupt the micro-level competition of individual researchers and meso-level competition of universities is limited. For a researcher, improving rival academics’ competitiveness by arming them with valuable resources, such as data with potential for further publications, represents a clear moral stance against the pressures that hyper-competition has on researchers’ behaviour (Anderson et al., 2007). It has been suggested that widespread adoption of open practices may therefore facilitate the development of a more communal and collaborative culture among researchers (e.g., Ignat & Ayris, 2020). However, such behaviours merely target the symptoms of a competitive culture, and do not challenge or change the root cause of the neoliberal competition for limited academic positions, and the link between performance and employment conditions. In the meso-level research market, universities have far more to gain from a culture of sharing resources than they have to lose, as the quantity and significance of their own researchers’ shared resources is far outweighed by the new shared resources from their global competitors that their own researchers can now exploit. For this reason, collaboration with researchers at other institutions and sharing resources is something that universities generally encourage. Providing rival university faculty with useful resources created by its researchers also has the benefit of raising an institution’s profile and improving its reputation, which feed into university rankings, and are often much more difficult to influence, especially in a global competition (Marginson, 2013). Overall, the sharing of resources generates a superficially communal culture but does not target the structural manifestations of academic capitalism.

Big Team Science and Shared Authorship vs. Neoliberal Research Market

A significantly more disruptive influence on academic capitalism and the principles underlying the meso-level research market is an increase in open authorship and big team science facilitated by open methodology. Large, collaborative projects have clear ‘communitarian’ principles (Uygun-Tunç et al., 2021), which build upon the Mertonian communalist ethos of shared ownership of knowledge. The organization of such projects often mimics the democratic, collective decision-making and shared responsibility for governance that are hallmarks of the traditional research governance model, challenging the fundamental individualist and competitive tenets of neoliberalism (Uygun-Tunç et al., 2021). This can be seen specifically in the intractable effect these projects have on neoliberal performance metrics. University rankings and research evaluation activities depend on assigning the quantitative value of research publications to universities via the affiliations of the publications’ authors. A specific piece of research can be used by multiple institutions in their performance measures if it is authored by at least one of their academics. However, the logic of this system breaks down when faced with research conducted using a big team science ethos, where a publication may have hundreds if not thousands of authors, from a similar number of institutions. For example, Wang et al. (2021) has 455 authors from 389 different institutions. The unique contributions of the authors were explained in the publication using open authorship guidelines, but for the purposes of assigning credit in rankings and evaluations, each is treated as an equal ‘author’. This presents a direct challenge to the neoliberal logic of market competition between universities, as such research no longer acts as a signal for which specific universities are performing well and deserve funding or credit. It also contradicts the artificial ‘zero-sum game’ logic of rankings, which dictates that the ‘performance’ of one university is a quality that is ascribed to that particular institution compared to the performance of another, and not a shared entity (Brankovic et al., 2018).

In some cases, big team science papers have benefited universities, almost absurdly so: Brankovic (2021) describes how Bielefeld University jumped 120 places in global university rankings, in large part due to the influence of a single author on a few highly cited big team science papers. Despite attempts to attribute this increase in ranking to the success of managerialism, the material facts show the potentially powerful disruptive influence of such projects. The suggestion by Forscher et al. (2022) that the problems of integrating big team science into existing research infrastructure (including funding applications and university ethics and administration) could be overcome by developing new structures is a powerful political statement that challenges academic capitalist hegemony. For example, if funders decided to fund team science projects directly rather than through the institution of a single participating individual (as is currently the case), this would position big team science projects as direct rivals to universities in terms of accruing funding and prestige, as well as challenging universities’ authority in organizing and administering the research endeavour. It would also contest the relevance and mission of the current neoliberal rankings and research evaluation exercises.

The potential political threat that big team science poses to the neoliberal logic of the system is clear. The THE university rankings initially described big team science papers as “freakish” (Baty, 2015) and excluded them entirely from ranking calculations, before later adopting a ‘fractional’ approach to valuing the contributions of publications with over 1,000 authors (Times Higher Education, 2021). Likewise, the UK REF exercise states that for some publications with more than 15 authors, a statement must be provided to affirm a researcher’s “substantial contribution” to the paper in order for it to be returned by that researcher’s institution (Research Excellence Framework, 2019). Anecdotal evidence suggests that hiring panels and promotion criteria explicitly dissuade or discount the involvement of researchers in big team science projects (Coles et al., 2022). The extent to which big team science publications are valued by the meso-level research market may therefore determine the capitalist university’s position towards supporting such research. However, big team science projects have other unique benefits that may appeal to universities. They facilitate the building of international networks, which are a key part of university research strategies (Rees, 2015). They also have the potential to produce new forms of knowledge and scientific breakthroughs that would be impossible for smaller, independent teams, such as running the Large Hadron Collider particle physics infrastructure (Coles et al., 2022; Koch & Jones, 2016). Such breakthroughs also bring the traditional qualitative prestige still valued by universities, through media coverage and impact case studies. It is therefore likely that the capitalist university will seek to adapt to, and leverage, big team science within the existing research system in order to contain its potential existential threat.

Overall, it is clear that in terms of principles and ethos, open research is opposed to many of the tenets of academic capitalism. However, the extent to which reforms practically oppose the manifestations or fundamental mechanisms of academic capitalism is limited. Where reforms do challenge these elements, the capitalist university is likely to engage with them only selectively, to the extent that they align with its own priorities.

Like any decision regarding significant investment in infrastructure or a major change in policy, a university’s support for open research will be strategic, based on its potential benefits to the university’s mission and market competitiveness. The theory of academic capitalism can therefore be used to explore the potential types of ‘top-down’ reforms that universities may employ (and their effects) in the context of growing pressure to engage with open research from both the micro-level (grassroots open research advocates) and the macro-level (research funder requirements). In some cases, these reforms may exacerbate the negative aspects of the present manifestations of academic capitalism.

The Development of Open Research Metrics

The open research movement has acknowledged the importance of incentives in influencing researchers’ behaviour, having observed the clear influence that current metrics of research performance exert on the research process (e.g., in promoting QRPs). Consequently, a large body of research and policy has focused on the development of “next generation” metrics (European Commission et al., 2017) which measure (and can thus incentivise) researchers’ use of open practices such as pre-registration, data and materials sharing, and open access publishing, to supplement or replace traditional metrics. Performance on such metrics could be incentivised by incorporating them into promotion or employment criteria (e.g., Kowalczyk et al., 2022; Munafò, 2019), and some specific proposals have already been developed. For example, Gärtner et al. (2022) have proposed quantitatively ‘scoring’ the use of open research practices, with overall candidate scores used in shortlisting decisions.

Given the success of traditional metrics in incentivising QRPs, utilising metrics is clearly an effective practical strategy for encouraging open practices. However, using metrics in this way represents complicity with a neoliberal ethos in which certain researcher behaviours become a market expectation rather than a trust-based obligation. Concerns about the potential for open research metrics to be exploitative have already been raised (Knowledge Exchange, 2021), leading to the re-affirmation of an ethos of “responsible metrics” (Wilsdon et al., 2015), which argues for the contextual use of metric information, the combination of metrics with qualitative judgement, and the continued development of a high-quality and diverse range of metrics to counter exploitation.

However, the use of novel metrics to promote open research also means propagating the negative effects that metrics have on research culture more generally. No matter how ‘high quality’ open research metrics may be, they are by definition reductionist and neglect at least some contextual information. For example, open data may be ‘measured’ incorrectly in cases where the ethics of sharing the data of a particular research project is contested. The use of open research metrics therefore increases the threat of “policy alienation” from open research principles more broadly (Lilja, 2021), encompassing disengagement and resistance when researchers perceive policies to be flawed, or when they lack the power to influence them. Their use may also contribute to a decline in professional trust between researchers and the university (Muller, 2018), particularly given the narrative origin of open research practices as a response to QRPs and fraud. When metric performance is incentivised by rewards, intrinsic motivation for the activity is reduced, which may compromise grassroots, bottom-up efforts to promote open research as a tool for intrinsically motivated researchers to improve the quality of their work (Allen & Mehler, 2019). Although there are calls for the use of a variety of open research metrics, their existence inherently cultivates standardization (Muller, 2018), rendering them inappropriate for some disciplines and modes of knowledge production. Initiatives to assess open research have attracted charges of ‘epistemic imperialism’ (Penders, 2022) and work against the efforts of grassroots organizers to advocate for a flexible and diverse approach to the adoption of different practices (Kathawalla et al., 2021; Whitaker & Guest, 2020).

Despite assertions by existing providers of open research metrics that “transparency cannot be gamed” (Curate Science, 2021), if such metrics are linked to employment incentives then it is inevitable that researchers will attempt to behave strategically to succeed on them. The discourse around open research metrics appears to accept the inevitability of Goodhart’s Law, which states that when a measure becomes a target, it ceases to be a good measure (Muller, 2018); hence the acknowledgement that open research metrics will require continued development to prevent researchers from exploiting them (European Commission et al., 2017; Knowledge Exchange, 2021). Even in the embryonic stages of the development of open metrics, there is evidence of researchers attempting to game them by uploading incomplete data or unusable analysis code to receive credit for sharing, a behaviour dubbed “open-washing” (FORRT, 2021). To prevent gaming, the measurement of open research practices must become more complicated and nuanced, and further resources must be dedicated to improving and implementing metrics, as well as to investigating their effects and correcting historical cases of gaming. These activities form much of the current work of the growing field of meta-research (Ioannidis, 2018). The adoption of open research metrics as performance incentives by universities and university rankings would therefore invite comparisons between meta-research and the historical appropriation of bibliometric research by neoliberal policy makers and the capitalist university (de Rijcke & Rushforth, 2015).

A final issue is the lack of an in-built limit on the resources dedicated to combatting the cycle of gaming and metric improvement. Where metrics are employed as performance incentives in the private sector, there is a limit to the resources that can be invested in measuring performance and refining metrics to prevent gaming: eventually this begins to distract from and limit the company’s core business of making profit (Muller, 2018). In research, however, given the lack of a truly reliable indicator of ‘output value’ to weigh against input costs, this limiting effect is harder to establish. The huge growth in administrative roles in universities (Bozeman & Jung, 2017) is partly evidence of the time and resources already spent on recording and implementing metrics. To the extent that this administration and the field of meta-research have a genuine effect on improving the quality of research and advancing human knowledge (through metricizing open practices or otherwise), investment in them is justified. However, it should be recognized that this investment (as well as the additional demands on researchers of complying with new metrics) also redirects time, resources, and money away from the actual process of research that they are supposed to be improving.

Open Research in University Research Governance and Administration

If open research practices become incentivised or mandated by a university’s research strategy, a logical step would be to incorporate their assessment into existing university research administration workflows and systems of compliance (Shepherd, 2008). Two potential examples of this are pre-registration and data/materials sharing. For pre-registration, there have been suggestions to incorporate it into existing university ethical review procedures. This has naïvely been described as merely “a few additional steps” added to existing ethical review (Nosek et al., 2018), but existing processes often already present an unnecessary administrative burden for researchers. Within the open research movement, there are active debates about the value of pre-registration for certain types of research (e.g., qualitative or exploratory research; Coffman & Niederle, 2015; Szollosi et al., 2020), and concerns about the poor quality of some pre-registrations, which must be “exhaustive” in order to properly constrain opportunities for researcher bias and reveal QRPs (Bakker et al., 2020). This complicates the design of an efficient system of compliance for pre-registration, the likely result of which is overcompliance and problematic robotic bureaucracy: academics asked to self-administer their research plans (or explain why such plans are not necessary) using centralized systems which inevitably lack nuance. Given that existing examples of administrative burden in the ethical review process stem from similar conceptual issues (Bozeman & Youtie, 2020), there is a high potential for a university to implement additional pre-registration procedures that increase unnecessary bureaucracy for researchers.

A second example of potential increased bureaucracy comes from the tensions between open data and materials and entrepreneurship. These can be seen where there is debate over the extent of openness of different elements of a research project, for example over the timing of releasing open materials in order to maintain a competitive or commercial edge, or when a piece of software has been developed as part of a collaboration and the ownership of its different components is unclear (N. Levin & Leonelli, 2017). In such cases, where universities must decide whether to engage selectively in open research practices, further administration is necessary to select and record which projects (or elements of projects) will be made open, when, and to what extent. In large collaborations it may be necessary to develop complex legal agreements to ensure that all parties are united on the extent and timing of openness of different elements of a project. This is particularly likely in collaborations with industry partners, who may be ideologically opposed to openness (Fernández Pinto, 2020). Even with the support of research professionals, these agreements will inevitably entail more administrative work for academics.

The incorporation of open research into existing university administration would accelerate the entrenchment of openness as a newly integral (rather than additional) part of the process of research itself. This may lead not only to new practical burdens on researchers, such as an increase in workload (Hostler, 2023), but also to moral pressures to perform open research, as part of the reformed scientific bureaucracies and procedures that distinguish “good” open research from undesirable closed research (Penders, 2022). This is problematic, as what counts as “good” research can vary significantly depending on the academic discipline and mode of knowledge production. Cementing ‘openness’ as a quality criterion in existing research bureaucracies perpetuates epistemic orthodoxies and inappropriately ‘shames’ research that fails to adopt such practices, even when they may not be beneficial for a specific project (e.g., pre-registration for qualitative research).

In his seminal book on ethical review boards, “The Censor’s Hand”, Schneider (2015) describes a system of bureaucracy with a number of features. It is a system created and justified by a perceived crisis in unethical or dishonest research; its initial remit and purpose are expanded by “bureaucratic turf-grabbing” and the determined work of administrators who are dependent on its success; it employs a costly and time-consuming “event-licensing” model which determines whether every single piece of research (no matter how small or unimportant) complies with certain standards; it lacks a coherent or legible framework for decisions; it inappropriately views most research through the norms and definitions of a positivist, scientific epistemology; and it tends towards methodological orthodoxy and unaccountability. Although it is not inevitable, it is also not difficult to imagine a situation in which the appropriation of open research administration by the university leads to a system with similar features for mandating open research.

Open Research and the Reorganisation of Academic Labour

The potential for open research reforms to influence the organization, form, and exploitation of academic labour has been discussed since Moore (2007) framed initiatives to share data as a “functionalist” rather than altruistic reform, embedded in the capitalist logic of cost-effectiveness and the pressure to exhaust returns from investment and resources (including academic labour). This observation prefigured the appeals to “maximise” and “extract” value from research and to “minimize waste” in discourses commonly used to justify open practices (Ali-Khan et al., 2018; Inkpen et al., 2021; S. L. K. Stewart et al., 2022). More recently, Callard (2022) has argued that open research’s aim of improving the “efficiency” of research has implications for the workload of academics, and that open research discourse has neglected to discuss the expectation of increased academic labour required to achieve this. These concerns echo those of Peterson & Panofsky (2020, 2021), who suggest that the framing of open research in “economic language”, and its focus on the extrinsic incentives that motivate scientists, betray the collusion of open research with neoliberal principles. However, Uygun-Tunç et al. (2021) argue that it is a fallacy to politically link open research and neoliberalism directly through a shared language of “efficiency”: open research is primarily concerned with increasing the efficient accumulation of human knowledge by redirecting resources only towards epistemically reliable research (without reference to the conditions of academic labour); the capitalist university is concerned with the efficient accumulation of knowledge which improves its metric performance (epistemically reliable or not) through controlling the distribution (and potentially exploitation) of its resources and academic labour. Put simply, efficiency is a relative concept, and here the objects of the arguments are distinct. Arguing that open research reforms are neoliberal because they do not focus on changing existing labour exploitation is inappropriate if this is not their purpose.

A key consideration, though, is whether both conceptualizations of efficiency can be simultaneously true, and whether the top-down incentivisation of open research brings them into alignment. In other words: is the accumulation of epistemically reliable research also made more efficient by the greater exploitation of academic labour? And does incentivising open research within the current framework of a neoliberal and hyper-competitive research market facilitate this?

These questions must be considered in the context of the current exploitation of academic labour and the demands of open research. It is widely acknowledged that conducting open research places additional time and resource demands on researchers (Allen & Mehler, 2019). Completing the tasks of pre-registration, data sharing, and materials sharing to a high standard is time-consuming and often requires the cultivation of specialist knowledge in statistics, data sharing legislation, or other data science skills. Simultaneously, there is growing recognition of increasing workloads for academics (Beatson et al., 2021), and of the use of “workload intensification” as a strategy by universities to meet institutional needs without expending greater resources (Papadopoulos, 2017). Changes to research expectations are often experienced as additional demands on academics, introduced to workloads that are already at capacity, in the form of “workload creep” (Long et al., 2020). Given that workloads are already at “untenable” levels (Beatson et al., 2021), and that the pressures on universities to maximise efficiency and remain competitive are unlikely to change, it is probable that expectations to conduct open research will be absorbed into workload creep. This would manifest in expectations for researchers to practice open research activities without any explicit downgrading of expectations on other performance measures, such as level of funding acquisition or rate of publication (Hostler, 2023).

A common suggestion for mitigating the increased workload of open research has been to support the organization of research along post-academic science principles of interdisciplinarity, team-based research, specialization, and project focus (Ziman, 2000). For example, Stewart et al. (2021) acknowledge that “It is an unrealistic goal for researchers to be software engineers in addition to being experts in their discipline”, and that “there should be wider support, recognition and reward of team-based research”. Similarly, Holcombe (2019) argues that a positive consequence of adopting open “contributor” authorship practices will be that “the allocation of scientific resources will shift to more effective combinations of researchers” (rather than individuals). However, such team-based organization already generates opportunities for exploitation, through the development of a “gig economy” job market (G. Nelson et al., 2020) in which workers lack the security or benefits associated with traditional full-time permanent contracts. Employing researchers only for specific research tasks (such as data collection) is a practice already used to justify underpaid, temporary contractual arrangements (Ivancheva, 2015).

From an open research perspective, reorganizing research into team-based projects fosters improvements in diversity and fairness. Using open authorship practices provides recognition and reward for research labour which is often not formally acknowledged, for example the work of software engineers, statisticians, data managers, and other research professionals (S. L. K. Stewart et al., 2022). Although discussion of labour conditions is generally lacking in open research discourse (Callard, 2022), there have been some explicit calls against exploitative practices: for example, the suggestion that specialists should be “core-funded” by institutions (rather than project-funded) and given “clear routes for career progression and promotion” (S. L. K. Stewart et al., 2022), and the need to consider which types of labour are currently over- and under-valued in reforms to incentive systems (Ledgerwood et al., 2022). However, any change to the employment of researchers introduces the potential for exploitation as well as for improvements in labour conditions. Whilst specialization often confers higher employment status and salaries, the qualitative change from an ‘all-round’ researcher to one who contributes only to a single element of research also invites the argument that such roles should be valued less. Ironically, universities could justify exploitative changes by harnessing existing prejudices among traditional ‘principal investigator’ academics against the worth of research professionals and specialists (Teperek et al., 2022).

The adoption of open authorship has been framed as an opportunity for “universities… to make better decisions by analysing the sorts of teams that deliver the best research” (Holcombe, 2019). The question is whether the “better decisions” made by the university will be in the interests of researchers, or those of academic capitalism. Transparency illuminates the previously opaque ‘black box’ of research labour, giving universities the opportunity to employ new forms of control and managerialism, including more micro-managed workloads and the ‘Taylorisation’ of research labour, analogous to changes which have occurred in university teaching (McCarthy et al., 2017). Universities could use contributor information to change the existing terms of employment of academics and shift them, against their will, onto specialized or ‘atypical’ contracts, in the same way as is currently done in the redeployment of underperforming researchers to teaching-only contracts. Differing valuations of research roles could subsequently lead to researchers gaming contributorship guidelines.

Whilst the motives may differ, there is a clear alignment in discourse and practice between the open research movement’s promotion of specialized team science and the manifestation of academic capitalism in the re-organisation of academic labour along similar principles. This may help to facilitate the adoption of open research by universities, but it also positions open research as hostile towards many of the norms and values of research under the ‘traditional’ research governance model. In some quarters there is a clearly adversarial position towards what is viewed as the “de-skilling” of academic labour by the growth in specialist research roles (Macfarlane, 2011, 2021; McCarthy et al., 2017). From this perspective, the shift to post-academic science represents an imposed redefinition of research as an activity, and an act of de-legitimization of other types of research based around disciplinary norms, philosophical reflection, and broad intellectual engagement (Macfarlane, 2021). For these researchers, the ‘unbundling’ of their labour through enforced specialization and the increase in research professionals may represent a profound threat to their “academic identity” (Neary & Winn, 2016). More broadly, it has been argued that a shift to post-academic science threatens the concept of academic citizenship (Beatson et al., 2021), replacing it with “market citizenship” and behaviours directed toward maintaining a neoliberal research system (McCarthy et al., 2017).

Finally, a shift towards teams of differentiated, specialized researchers may contribute to the breakdown of the faculty as a unified labour force. This may occur practically, as a result of the conditions of working within competing, insular teams (Braun, 2014), but also philosophically, through the dissolution of a conventional academic identity. An academic identity based around shared practice, experience, and values is integral to the development of a common “epistemological standpoint” (Fisher, 2021), which in Marxist theory is thought to be a necessary precondition for acting as a unified labour force to challenge working conditions. Thus, a move from a broad community of scholars to specialist, collectivized research teams (partly due to the requirements of open research reforms) may deprive the faculty of one of its few weapons of resistance against the negative effects of academic capitalism.

Utilising the theory of academic capitalism, this paper has considered the potential socio-political consequences of open research reforms in the context of a neoliberal research market, and the priorities and behaviours of the university as a key actor in this ecosystem. Using this lens, it can be seen that whilst the ethos and values of the open research movement are in opposition to the norms promoted by academic capitalism (such as competitive self-interestedness), methodological reforms do little to challenge the actual manifestations or fundamental mechanisms of academic capitalism. Rather, the uncritical incentivisation of open research within an existing framework of academic capitalism and a neoliberal research market invites accusations of endorsement and may enable the appropriation of open research reforms by capitalist universities as a tool of control, increasing bureaucracy and workload for researchers.

Integrating the assessment and incentivization of open research practices within existing research bureaucracies is likely to accelerate the standardization of the movement, something that grassroots open research advocates have already been critical of. There is still significant work to be done to understand the value of open practices (such as preregistration) across different epistemic contexts. Pursuing the development of administrative and assessment processes that reward or require open research without considering these questions would reinforce accusations of ‘epistemic imperialism’ already aimed at the movement (Penders, 2022).

Open research and academic capitalist discourses closely align in the promotion of some of the principles of “post-academic science”, such as specialization and collectivization of researchers. Many academics may therefore oppose open research reforms on the grounds that they facilitate the unbundling of academic labour and represent an existential threat to traditional academic identity. Whilst open authorship and the increased transparency of academic work provide an opportunity to reorganise academic labour to improve diversity and reward under-valued work, they also offer the university new opportunities to exploit academic labour, potentially by harnessing existing prejudices against research professionals and specialists.

The open research movement cannot ignore the fact that universities’ institutional responses to reforms will be strategic, based on the extent to which reforms align with existing and future priorities. Universities’ behaviour must be considered in the context of a global neoliberal research market which influences their approaches to research governance and their relationships with academics. Developing solutions to the potentially problematic consequences of reforms requires acknowledgement of existing problems as well as of the conflicting priorities and values of academics and universities. Previously apolitical proponents of open research reforms must engage with potentially contentious issues regarding the future form(s) of academic labour, which may bring them into conflict with either the capitalist university or advocates of a traditional “all round” academic identity. Likewise, those who argue for open research as a values-based ethos for research must critically engage with how such values can be practically upheld and promoted within a system that is increasingly influenced by academic capitalism.

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

The Author declares that there is no conflict of interest.

Ali-Khan, S. E., Jean, A., … MacDonald, E. (2018). Defining Success in Open Science. MNI Open Research, 2, 2. https://doi.org/10.12688/mniopenres.12780.2
Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246
Anderson, M. S., Ronning, E. A., … De Vries, R. (2007). The Perverse Effects of Competition on Scientists’ Work and Relationships. Science and Engineering Ethics, 13(4), 437–461. https://doi.org/10.1007/s11948-007-9042-5
Armeni, K., Brinkman, L., … Carlsson, R. (2021). Towards wide-scale adoption of open science practices: The role of open science communities. Science and Public Policy, 48(5), 605–611. https://doi.org/10.1093/scipol/scab039
Bakker, M., van Dijk, A., & Wicherts, J. M. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6), 543–554. https://doi.org/10.1177/1745691612459060
Bakker, M., Veldkamp, C. L. S., … van Assen, M. A. L. M. (2020). Ensuring the quality and specificity of preregistrations. PLOS Biology, 18(12), e3000937. https://doi.org/10.1371/journal.pbio.3000937
Baty, P. (2015). World University Rankings blog: dealing with freak research papers. Times Higher Education. https://www.timeshighereducation.com/blog/world-university-rankings-blog-dealing-freak-research-papers
Beatson, N. J., Tharapos, M., O’Connell, B. T., et al. (2021). The gradual retreat from academic citizenship. Higher Education Quarterly, hequ.12341. https://doi.org/10.1111/hequ.12341
Beck, S., Bergenholtz, C., … Bogers, M. (2022). The Open Innovation in Science research field: a collaborative conceptualisation approach. Industry and Innovation, 29(2), 136–185. https://doi.org/10.1080/13662716.2020.1792274
Bem, D. J. (2011). Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology, 100(3), 407–425. https://doi.org/10.1037/a0021524
Bozeman, B. (1993). A theory of government ‘Red tape.’ Journal of Public Administration Research and Theory, 3(3), 273–303.
Bozeman, B., & Jung, J. (2017). Bureaucratization in Academic Research Policy: What Causes It? Annals of Science and Technology Policy, 1(2), 133–214. https://doi.org/10.1561/110.00000002
Bozeman, B., & Youtie, J. (2020). Robotic Bureaucracy: Administrative Burden and Red Tape in University Research. Public Administration Review, 80(1), 157–162. https://doi.org/10.1111/puar.13105
Brankovic, J. (2021). The Absurdity of University Rankings. LSE Impact Blog. https://blogs.lse.ac.uk/impactofsocialsciences/2021/03/22/the-absurdity-of-university-rankings/
Brankovic, J., Ringel, L., & Werron, T. (2018). How Rankings Produce Competition: The Case of Global University Rankings. Zeitschrift für Soziologie, 47(4), 270–288. https://doi.org/10.1515/zfsoz-2018-0118
Braun, D. (2014). Governance of Universities and Scientific Innovation. In C. Musselin & P. Teixeira (Eds.), Reforming Higher Education: Public Policy Design and Implementation (pp. 145–173). Springer Dordrecht. https://doi.org/10.1007/978-94-007-7028-7_8
Brink, C. (2018). The Soul of a University: Why Excellence Is Not Enough. Bristol University Press. https://doi.org/10.46692/9781529200355
Bruton, S. V., Medlin, M., … Brown, M. (2020). Personal Motivations and Systemic Incentives: Scientists on Questionable Research Practices. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00182-9
Callard, F. (2022). Replication and Reproduction: Crises in Psychology and Academic Labour. Review of General Psychology, 108926802110556. https://doi.org/10.1177/10892680211055660
Carter, S., Carlson, S., … Crockett, J. (2019). The Role of Research Development Professionals in Supporting Team Science. In K. L. Hall, A. L. Vogel, & R. T. Croyle (Eds.), Strategies for Team Science Success (pp. 375–388). Springer International Publishing. https://doi.org/10.1007/978-3-030-20992-6_28
Chen, G., & Chan, L. (2021). University rankings and governance by metrics and algorithms. In E. Hazelkorn & G. Mihut (Eds.), Research Handbook on University Rankings: Theory, Methodology, Influence and Impact. Edward Elgar Publishing Ltd. https://doi.org/10.4337/9781788974981.00043
Cidlinska, K., Nyklova, B., Machovcova, K., et al. (2022). “Why I don’t want to be an academic anymore?” When academic identity contributes to academic career attrition. Higher Education. https://doi.org/10.1007/s10734-022-00826-8
Coffman, L. C., & Niederle, M. (2015). Pre-Analysis Plans Have Limited Upside, Especially Where Replications Are Feasible. Journal of Economic Perspectives, 29(3), 81–98. https://doi.org/10.1257/jep.29.3.81
Cohoon, J., Howison, J. (2021). Norms and Open Systems in Open Science. Information Culture, 56(2), 115–137. https://doi.org/10.7560/IC56201
Coles, N. A., Hamlin, J. K., … Sullivan, L. L. (2022). Build up big-team science. Nature, 601(7894), 505–507. https://doi.org/10.1038/d41586-022-00150-2
Collini, S. (2012). What Are Universities For? Penguin.
Cuevas Shaw, L., Errington, T. M., Mellor, D. T. (2022). Toward Open Science: Contributing to Research Culture Change. Science Editor, 14–17. https://doi.org/10.36591/SE-D-4501-14
Curate Science. (2021). Science requires *minimum transparency*, conceptually ethically, but because it’s NOT yet enforced, most research is still not reported transparently. [Twitter post]. https://twitter.com/curatescience/status/1371927234899017731
Czarnitzki, D., Grimpe, C., Toole, A. A. (2015). Delay and secrecy: does industry sponsorship jeopardize disclosure of academic research? Industrial and Corporate Change, 24, 251–279. https://doi.org/10.2139/ssrn.1759433
de Groot, A. D. (2014). The meaning of “significance” for different types of research (E.-J. Wagenmakers, D. Borsboom, J. Verhagen, R. Kievit, M. Bakker, A. Cramer, D. Matzke, D. Mellenbergh, H. L. J. van der Maas, Trans.). Acta Psychologica, 148, 188–194. https://doi.org/10.1016/j.actpsy.2014.02.001
de Rijcke, S., & Rushforth, A. (2015). To intervene or not to intervene; is that the question? On the role of scientometrics in research evaluation. Journal of the Association for Information Science and Technology, 66(9), 1954–1958. https://doi.org/10.1002/asi.23382
European Commission, University of Sheffield, Bar Ilan University, et al. (2017). Next-Generation Metrics: Responsible Metrics and Evaluation for Open Science. Publications Office. https://doi.org/10.2777/337729
Fecher, B., & Friesike, S. (2014). Open Science: One Term, Five Schools of Thought. In S. Bartling & S. Friesike (Eds.), Opening Science (pp. 17–47). Springer International Publishing. https://doi.org/10.1007/978-3-319-00026-8_2
Fernández Pinto, M. (2020). Open Science for private Interests? How the Logic of Open Science Contributes to the Commercialization of Research. Frontiers in Research Metrics and Analytics, 5, 588331. https://doi.org/10.3389/frma.2020.588331
Finkin, M. W., Post, R. C. (2011). For the Common Good: Principles of American Academic Freedom. Yale University Press.
Fisher, M. (2009). Capitalist Realism - Is There No Alternative? Zero Books.
Fisher, M. (2021). Postcapitalist Desire: The Final Lectures. Repeater Books.
Fochler, M., Felt, U., Müller, R. (2016). Unsustainable Growth, Hyper-Competition, and Worth in Life Science Research: Narrowing Evaluative Repertoires in Doctoral and Postdoctoral Scientists’ Work and Lives. Minerva, 54(2), 175–200. https://doi.org/10.1007/s11024-016-9292-y
FORRT. (2021). Open Washing. FORRT Glossary. https://forrt.org/glossary/open-washing/
Forscher, P. S., Wagenmakers, E.-J., … Coles, N. A. (2022). The Benefits, Barriers, and Risks of Big Team Science. PsyArXiv. https://doi.org/10.31234/osf.io/2mdxh
Gärtner, A., Leising, D., Schönbrodt, F. D. (2022). Responsible Research Assessment II: A specific proposal for hiring and promotion in psychology (Version 1). https://doi.org/10.31234/osf.io/5yexm
Gelman, A., Vazire, S. (2021). Why Did It Take So Many Decades for the Behavioral Sciences to Develop a Sense of Crisis Around Methodology and Replication? Journal of Methods and Measurement in the Social Sciences, 12(1). https://doi.org/10.2458/jmmss.3062
Giner-Sorolla, R. (2012). Science or Art? How Aesthetic Standards Grease the Way Through the Publication Bottleneck but Undermine Science. Perspectives on Psychological Science, 7(6), 562–571. https://doi.org/10.1177/1745691612457576
Godbout, J. T., Caillé, A. (1998). The World of the Gift. McGill-Queens University Press. https://doi.org/10.1515/9780773567320
Gopalakrishna, G., ter Riet, G., … Vink, G. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. PLOS ONE, 17(2), e0263023. https://doi.org/10.1371/journal.pone.0263023
Grahe, J. E., Cuccolo, K., … Leighton, D. C. (2020). Open Science Promotes Diverse, Just, and Sustainable Research and Educational Outcomes. Psychology Learning & Teaching, 19(1), 5–20. https://doi.org/10.1177/1475725719869164
Greenwald, A. G. (Ed.). (1976). An editorial. Journal of Personality and Social Psychology, 33(1), 1–7. https://doi.org/10.1037/h0078635
Harvey, L. (2020). Research fraud: a long-term problem exacerbated by the clamour for research grants. Quality in Higher Education, 26(3), 243–261. https://doi.org/10.1080/13538322.2020.1820126
Haven, T. L., Tijdink, J. K., … Martinson, B. C. (2019). Perceptions of research integrity climate differ between academic ranks and disciplinary fields: Results from a survey among academic researchers in Amsterdam. PLOS ONE, 14(1), e0210599. https://doi.org/10.1371/journal.pone.0210599
Hecker, S., Haklay, M., … Bowser, A. (2018). Citizen Science - Innovation in Open Science, Society and Policy. UCL Press. http:/​/​library.oapen.org/​handle/​20.500.12657/​28178
Herschberg, C., Benschop, Y., van den Brink, M. (2018). Precarious postdocs: A comparative study on recruitment and selection of early-career researchers. Scandinavian Journal of Management, 34(4), 303–310. https://doi.org/10.1016/j.scaman.2018.10.001
HLS47. (2022). HLS47 - About. HLS47. https://www.hls47.co.uk/about/
Hoffman, S. G. (2021). A story of nimble knowledge production in an era of academic capitalism. Theory and Society, 50(4), 541–575. https://doi.org/10.1007/s11186-020-09422-0
Hogan, J. (2011). Is higher education spending more on administration and, if so, why? Perspectives: Policy and Practice in Higher Education, 15(1), 7–13. https://doi.org/10.1080/13603108.2010.532316
Holcombe, A. (2019). Farewell authors, hello contributors. Nature, 571(7764), 147–147. https://doi.org/10.1038/d41586-019-02084-8
Hooley, T. (2021). What is going on at the University of Leicester? Career Guidance For Social Justice. https://careerguidancesocialjustice.wordpress.com/2021/04/29/what-is-going-on-at-the-university-of-leicester/comment-page-1/
Hosseini, M., Hidalgo, E. S., Horbach, S. P. J. M., Güttinger, S., Penders, B. (2022). Messing with Merton: The Intersection between open science practices and Mertonian values. Accountability in Research, 31(5), 428–455. https://doi.org/10.1080/08989621.2022.2141625
Hostler, T. J. (2023). The Invisible Workload of Open Research. Journal of Trial & Error, 4(1). https://doi.org/10.36850/mr5
Ignat, T., & Ayris, P. (2020). Built to last! Embedding open science principles and practice into European universities. Insights: the UKSG Journal, 33, 9. https://doi.org/10.1629/uksg.501
Inkpen, R., Gauci, R., Gibson, A. (2021). The values of open data. Area, 53(2), 240–246. https://doi.org/10.1111/area.12682
Ioannidis, J. P. A. (2018). Meta-research: Why research on research matters. PLOS Biology, 16(3), e2005468. https://doi.org/10.1371/journal.pbio.2005468
Ivancheva, M. (2015). The age of precarity and the new challenges to the academic profession. Studia Europaea, LX(1), 39–47.
Jessop, B. (2017). Varieties of academic capitalism and entrepreneurial universities: On past research and three thought experiments. Higher Education, 73(6), 853–870. https://doi.org/10.1007/s10734-017-0120-6
Jessop, B. (2018). On academic capitalism. Critical Policy Studies, 12(1), 104–109. https://doi.org/10.1080/19460171.2017.1403342
John, L. K., Loewenstein, G., Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Kathawalla, U.-K., Silverstein, P., Syed, M. (2021). Easing Into Open Science: A Guide for Graduate Students and Their Advisors. Collabra: Psychology, 7(1), 18684. https://doi.org/10.1525/collabra.18684
Kellogg, D. (2006). Toward a Post-Academic Science Policy: Scientific Communication and the Collapse of the Mertonian Norms. International Journal of Communications Law Policy Special Issue, Access to Knowledge.
Kelly, A., Burrows, R. (2011). Measuring the value of sociology? Some notes on performative metricization in the contemporary academy. The Sociological Review, 59, 130–150. https://doi.org/10.1111/j.1467-954X.2012.02053.x
Kerr, N. L. (1998). HARKing: Hypothesizing After the Results are Known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/s15327957pspr0203_4
Kirkpatrick, I. (2016). Hybrid Managers and Professional Leadership. In The Routledge Companion to the Professions and Professionalism. Routledge.
Klikauer, T. (2013). What Is Managerialism? Critical Sociology, 41(7–8), 1103–1119. https://doi.org/10.1177/0896920513501351
Knowledge Exchange. (2017). Knowledge Exchange Approach Towards Open Scholarship. Zenodo. https://doi.org/10.5281/ZENODO.826643
Knowledge Exchange. (2019). Open Scholarship and the Need for Collective Action. Zenodo. https://doi.org/10.5281/ZENODO.3454688
Knowledge Exchange. (2021). Openness Profile: Modelling research evaluation for open scholarship. https://doi.org/10.5281/ZENODO.4581490
Koch, C., & Jones, A. (2016). Big Science, Team Science, and Open Science for Neuroscience. Neuron, 92(3), 612–616. https://doi.org/10.1016/j.neuron.2016.10.019
Kowalczyk, O. S., Lautarescu, A., … Blok, E. (2022). What senior academics can do to support reproducible and open research: a short, three-step guide. BMC Research Notes, 15(1), 116. https://doi.org/10.1186/s13104-022-05999-0
Lakens, D., Evers, E. R. K. (2014). Sailing From the Seas of Chaos Into the Corridor of Stability: Practical Recommendations to Increase the Informational Value of Studies. Perspectives on Psychological Science, 9(3), 278–292. https://doi.org/10.1177/1745691614528520
Langfeldt, L., Reymert, I., Aksnes, D. W. (2021). The role of metrics in peer assessments. Research Evaluation, 30(1), 112–126. https://doi.org/10.1093/reseval/rvaa032
Larson, R. C., Ghaffarzadegan, N., & Xue, Y. (2013). Too Many PhD Graduates or Too Few Academic Job Openings: The Basic Reproductive Number R0 in Academia. Systems Research and Behavioral Science, 31(6), 745–750. https://doi.org/10.1002/sres.2210
Ledgerwood, A., Hudson, S. T. J., … Lewis, N. A. (2022). The Pandemic as a Portal: Reimagining Psychological Science as Truly Open and Inclusive. Perspectives on Psychological Science, 174569162110366. https://doi.org/10.1177/17456916211036654
Lee, Y.-N., & Walsh, J. P. (2022). Rethinking Science as a Vocation: One Hundred Years of Bureaucratization of Academic Science. Science, Technology, & Human Values, 47, 1057–1085. https://doi.org/10.1177/01622439211026020
Levin, J. (2011). The Emergence of the Research-Development Professional. The Chronicle of Higher Education. https://www.chronicle.com/article/the-emergence-of-the-research-development-professional/
Levin, N., & Leonelli, S. (2017). How Does One “Open” Science? Questions of Value in Biological Research. Science, Technology, & Human Values, 42(2), 280–305. https://doi.org/10.1177/0162243916672071
Lilja, E. (2021). Threat of policy alienation: Exploring the implementation of Open Science policy in research practice. Science and Public Policy, 47(6), 803–817. https://doi.org/10.1093/scipol/scaa044
Lim, M. A. (2019). Governing Higher Education: The PURE Data System and the Management of the Bibliometric Self. Higher Education Policy, 34(1), 238–253. https://doi.org/10.1057/s41307-018-00130-0
Loftus, G. R. (1993). A picture is worth a thousand p values: On the irrelevance of hypothesis testing in the microcomputer age. Behavior Research Methods, Instruments, & Computers, 25(2), 250–256. https://doi.org/10.3758/BF03204506
Long, D. W., Barnes, A. P. L., … Northcote, P. M. (2020). Accounting Academic Workloads: Balancing Workload Creep to Avoid Depreciation in the Higher Education Sector. Education, Society and Human Studies, 1(2), 55. https://doi.org/10.22158/eshs.v1n2p55
Lupia, A. (2021). Practical and Ethical Reasons for Pursuing a More Open Science. PS: Political Science & Politics, 54(2), 301–304. https://doi.org/10.1017/S1049096520000979
Macfarlane, B. (2011). The Morphing of Academic Practice: Unbundling and the Rise of the Para-academic: The Morphing of Academic Practice. Higher Education Quarterly, 65(1), 59–73. https://doi.org/10.1111/j.1468-2273.2010.00467.x
Macfarlane, B. (2021). The spirit of research. Oxford Review of Education, 47(6), 737–751. https://doi.org/10.1080/03054985.2021.1884058
Manchester Metropolitan University. (2017). Research and Knowledge Exchange Strategy. https://www2.mmu.ac.uk/media/mmuacuk/content/documents/strategy-lp-17/manchester-met-rke-strategy.pdf
Marginson, S. (2013). The impossibility of capitalist markets in higher education. Journal of Education Policy, 28(3), 353–370. https://doi.org/10.1080/02680939.2012.747109
Mau, S. (2019). The Metric Society. Polity Press.
McCarthy, G., Song, X., & Jayasuriya, K. (2017). The proletarianisation of academic labour in Australia. Higher Education Research & Development, 36(5), 1017–1030. https://doi.org/10.1080/07294360.2016.1263936
Merton, R. (1973). The Sociology of Science: Theoretical and Empirical Investigations. Chicago University Press.
Mills, J. L. (1993). Data Torturing. New England Journal of Medicine, 329(16), 1196–1199. https://doi.org/10.1056/NEJM199310143291613
Mirowski, P. (2018). The future(s) of open science. Social Studies of Science, 48(2), 171–203. https://doi.org/10.1177/0306312718772086
Mitroff, I. I. (1974). Norms and Counter-Norms in a Select Group of the Apollo Moon Scientists: A Case Study of the Ambivalence of Scientists. American Sociological Review, 39(4), 579. https://doi.org/10.2307/2094423
Moore, N. (2007). (Re)Using Qualitative Data? Sociological Research Online, 12(3), 1–13. https://doi.org/10.5153/sro.1496
Morrison, H. (2013). Economics of scholarly communication in transition. First Monday, 18(6). https://doi.org/10.5210/fm.v18i6.4370
Mulkay, M. J. (1976). Norms and ideology in science. Social Science Information, 15, 637–656. https://doi.org/10.1177/053901847601500406
Muller, J. Z. (2018). The Tyranny of Metrics. Princeton University Press. https://doi.org/10.23943/9781400889433
Munafò, M. R. (2019). Raising research quality will require collective action. Nature, 576(7786), 183–183. https://doi.org/10.1038/d41586-019-03750-7
Munafò, M. R., Nosek, B. A., … Bishop, D. V. M. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1). https://doi.org/10.1038/s41562-016-0021
Münch, R. (2014). Academic Capitalism: Universities in the Global Struggle for Excellence. Routledge. https://doi.org/10.4324/9780203768761
Neary, M., & Winn, J. (2016). Against academic identity. Higher Education Research & Development, 35(2), 409–412. https://doi.org/10.1080/07294360.2015.1094201
Nedeva, M., Barker, S., & Ali Osman, S. (2014). Policy Pressures and the Changing Organization of University Research. In C. Musselin & P. Teixeira (Eds.), Reforming Higher Education: Public Policy Design and Implementation (pp. 175–188). Springer Dordrecht. https://doi.org/10.1007/978-94-007-7028-7_9
Nelson, G., Monson, M. J., Adibifar, K. (2020). The gig economy comes to academia: Job satisfaction among adjunct faculty. Cogent Education, 7(1), 1786338. https://doi.org/10.1080/2331186X.2020.1786338
Nelson, N. C., Chung, J., … Ichikawa, K. (2021). Psychology Exceptionalism and the Multiple Discovery of the Replication Crisis. Review of General Psychology, 108926802110465. https://doi.org/10.1177/10892680211046508
Nosek, B. A., Alter, G., Banks, G. C., et al. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Nosek, B. A., Corker, K. S., … Krall, T. (2020). NSF 19-501 AccelNet Proposal: Community of Open Scholarship Grassroots Networks (COSGN). MetaArXiv. https://doi.org/10.31222/osf.io/d7mwk
Nosek, B. A., Ebersole, C. R., … DeHaven, A. C. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
Nosek, B. A., Spies, J. R., Motyl, M. (2012). Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability. Perspectives on Psychological Science, 7(6), 615–631. https://doi.org/10.1177/1745691612459058
Nyhagen, G. M., Baschung, L. (2013). New organisational structures and the transformation of academic work. Higher Education, 66(4), 409–423. https://doi.org/10.1007/s10734-013-9612-1
Oancea, A. (2019). Research governance and the future(s) of research assessment. Palgrave Communications, 5(1), 27. https://doi.org/10.1057/s41599-018-0213-6
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716–aac4716. https://doi.org/10.1126/science.aac4716
Papadopoulos, A. (2017). The mismeasure of academic labour. Higher Education Research & Development, 36(3), 511–525. https://doi.org/10.1080/07294360.2017.1289156
Penders, B. (2022). Process and Bureaucracy: Scientific Reform as Civilisation. Bulletin of Science, Technology & Society, 42, 107–116. https://doi.org/10.1177/02704676221126388
Peterson, D., & Panofsky, A. (2020). Metascience as a scientific social movement. SocArXiv. https://doi.org/10.31235/osf.io/4dsqa
Peterson, D., & Panofsky, A. (2021). Arguments against efficiency in science. Social Science Information, 60(3), 350–355. https://doi.org/10.1177/05390184211021383
Powell, W. W., Snellman, K. (2004). The Knowledge Economy. Annual Review of Sociology, 30(1), 199–220. https://doi.org/10.1146/annurev.soc.29.010202.100037
Rees, T. (2015). Developing a Research Strategy at a Research Intensive University: A Pro Vice Chancellor’s Perspective. In R. Dingwall & M. McDonnell (Eds.), The SAGE Handbook of Research Management (pp. 565–580). SAGE Publications Ltd. https://doi.org/10.4135/9781473914933.n40
Reichman, H. (2019). The Future of Academic Freedom. Johns Hopkins University Press. https://doi.org/10.1353/book.66177
Research Excellence Framework. (2019). Research Excellence Framework 2021 Panel Criteria and Working Methods. https://www.ref.ac.uk/media/1084/ref-2019_02-panel-criteria-and-working-methods.pdf
Romero, F. (2019). Philosophy of science and the replicability crisis. Philosophy Compass, 14(11). https://doi.org/10.1111/phc3.12633
Schneider, C. E. (2015). The Censor’s Hand: The Misregulation of Human-Subject Research. MIT Press. https://doi.org/10.7551/mitpress/9780262028912.001.0001
Science and Technology Committee. (2018). Research Integrity - Sixth Report of Session 2017–19. House of Commons Science and Technology Committee. https://publications.parliament.uk/pa/cm201719/cmselect/cmsctech/350/350.pdf
Shepherd, S. (2008). Managerialism: an ideal type. Studies in Higher Education, 43(9), 1668–1678. https://doi.org/10.1080/03075079.2017.1281239
Slaughter, S., & Rhoades, G. (2009). Academic Capitalism and the New Economy. Johns Hopkins University Press.
Spellman, B. A. (2015). A Short (Personal) Future History of Revolution 2.0. Perspectives on Psychological Science, 10(6), 886–899. https://doi.org/10.1177/1745691615609918
Spezi, V., Wakeling, S., … Pinfield, S. (2018). “Let the community decide”? The vision and reality of soundness-only peer review in open-access mega-journals. Journal of Documentation, 74(1), 137–161. https://doi.org/10.1108/JD-06-2017-0092
Stewart, A. J., Farran, E. K., … Grange, J. A. (2021). Improving research quality: the view from the UK Reproducibility Network institutional leads for research improvement. BMC Research Notes, 14(1), 458. https://doi.org/10.1186/s13104-021-05883-3
Stewart, S. L. K., Pennington, C. R., … da Silva, G. R. (2022). Reforms to improve reproducibility and quality must be coordinated across the research ecosystem: the view from the UKRN Local Network Leads. BMC Research Notes, 15(1), 58. https://doi.org/10.1186/s13104-022-05949-w
Szollosi, A., Kellen, D., … Navarro, D. J. (2020). Is Preregistration Worthwhile? Trends in Cognitive Sciences, 24(2), 94–95. https://doi.org/10.1016/j.tics.2019.11.009
Teperek, M., Cruz, M., Kingsley, D. (2022). Time to re-think the divide between academic and support staff. Nature. https://doi.org/10.1038/d41586-022-01081-8
Tight, M. (2019). The neoliberal turn in Higher Education. Higher Education Quarterly, 73(3), 273–284. https://doi.org/10.1111/hequ.12197
Tijdink, J. K., Verbeke, R., Smulders, Y. M. (2014). Publication Pressure and Scientific Misconduct in Medical Scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64–71. https://doi.org/10.1177/1556264614552421
Times Higher Education. (2021). World University Rankings 2022: methodology. Times Higher Education. https://www.timeshighereducation.com/world-university-rankings/world-university-rankings-2022-methodology
Uygun-Tunç, D., Tunç, M. N., & Eper, Z. B. (2021). Is Open Science Neoliberal? PsyArXiv. https://doi.org/10.31234/osf.io/ft8dc
van de Schoot, R., Winter, S. D., … Griffioen, E. (2021). The Use of Questionable Research Practices to Survive in Academia Examined With Expert Elicitation, Prior-Data Conflicts, Bayes Factors for Replication Effects, and the Bayes Truth Serum. Frontiers in Psychology, 12, 621547. https://doi.org/10.3389/fpsyg.2021.621547
Vazire, S. (2018). Implications of the Credibility Revolution for Productivity, Creativity, and Progress. Perspectives on Psychological Science, 13(4), 411–417. https://doi.org/10.1177/1745691617751884
Waaijer, C. J. F., Teelken, C., … Wouters, P. F. (2018). Competition in Science: Links Between Publication Pressure, Grant Pressure and the Academic Job Market. Higher Education Policy, 31(2), 225–243. https://doi.org/10.1057/s41307-017-0051-y
Wagenmakers, E.-J., Wetzels, R., … Borsboom, D. (2011). Why psychologists must change the way they analyze their data: The case of psi: Comment on Bem (2011). Journal of Personality and Social Psychology, 100(3), 426–432. https://doi.org/10.1037/a0022790
Wang, K., Goldenberg, A., … Dorison, C. A. (2021). A multi-country test of brief reappraisal interventions on emotions during the COVID-19 pandemic. Nature Human Behaviour, 5(8), 1089–1110. https://doi.org/10.1038/s41562-021-01173-x
Weller, M. (2011). The Digital Scholar - How Technology Is Transforming Scholarly Practice. Bloomsbury Academic. https://doi.org/10.5040/9781849666275
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science. The Psychologist, 33, 34–37.
Wilholt, T. (2013). Epistemic Trust in Science. The British Journal for the Philosophy of Science, 64(2), 233–253. https://doi.org/10.1093/bjps/axs007
Wilsdon, J., Allen, L., … Belfiore, E. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. Unpublished. https://doi.org/10.13140/RG.2.1.4929.1363
Yarkoni, T. (2018). No, it’s not The Incentives—it’s you. [Citation Needed]. https://www.talyarkoni.org/blog/2018/10/02/no-its-not-the-incentives-its-you/
Young, N. S., Ioannidis, J. P. A., Al-Ubaydli, O. (2008). Why Current Publication Practices May Distort Science. PLoS Medicine, 5(10), e201. https://doi.org/10.1371/journal.pmed.0050201
Ziman, J. M. (2000). Real Science: What It Is and What It Means. Cambridge University Press. https://doi.org/10.1017/CBO9780511541391
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material