This paper provides guidance and tools for conducting open and reproducible systematic reviews in psychology. It emphasizes the importance of systematic reviews for evidence-based decision-making and the growing adoption of open science practices. Open science enhances transparency and reproducibility and minimizes bias in systematic reviews through the sharing of data, materials, and code. It also fosters collaborations and enables the involvement of non-academic stakeholders. The paper is designed for beginners, offering accessible guidance to navigate the many standards and resources that may not obviously align with specific areas of psychology. It covers systematic review conduct standards, pre-registration, registered reports, reporting standards, and open data, materials, and code. The paper concludes with a glimpse of recent innovations such as Community Augmented Meta-Analysis and independent reproducibility checks.

As more and more studies are conducted on psychology topics, the body of relevant research literature grows. This is excellent news for practicing psychologists from various fields, such as clinical, educational, school, industrial and organizational, health, and sports psychology, who want their practice to be evidence-based. However, this increase in research output creates new, higher demands on gathering, analyzing, and synthesizing the vast amount of available information. As such, conducting reliable, well-organized systematic reviews to synthesize findings is essential.

At the same time, demands are universally increasing for research to become open and reproducible. Scholars, journals, grant agencies, UNESCO, as well as legislators, are all pushing for a shift toward more open and reproducible science (Chambers et al., 2014; Lindsay, 2017; UNESCO, 2021; White House Office of Science and Technology Policy (OSTP), 2022). While this is a very important and positive change, the typical psychology researcher interested in conducting a systematic review is now facing the daunting task of meeting various, sometimes conflicting, standards.

Paradoxically, resources and guidance for open science practices are more available than ever, yet navigating them can be overwhelming, particularly as most resources were not developed with psychology research specifically in mind. Furthermore, the available resources are not always accessible to psychology researchers due to jargon and discipline-specific terminology, and they do not always tackle methodological issues common in psychology research. Indeed, psychology is methodologically diverse and heterogeneous, both across and within subdisciplines.

The purpose of the present paper is to provide guidance and tools to researchers interested in conducting open and reproducible systematic reviews in psychology. This focus is deliberately narrow in order to make it accessible for beginners (e.g., a PhD student conducting their first systematic review), and our intent is to help readers navigate the available resources and standards rather than simply adding yet another tutorial to the pile. In a sense, this paper is thus an introductory meta-guide, helping the reader find the right resources for their research area. Those resources will then serve as a thorough introduction to systematic reviews and meta-analysis.

Table 1.
Different types of literature reviews
Type of review: Definition
Systematic review: A systematic review is a type of literature review that aims to evaluate and synthesize the available evidence to answer a specific research question. It follows strict standards of planning, conducting, and reporting in order to minimize the bias that is inherent to traditional literature reviews. Evidence can be synthesized either quantitatively or qualitatively.
Scoping review: A scoping review is a type of literature review that aims to map the key concepts, types of evidence, and research gaps in a research area. It shares the methodological rigor of systematic reviews, but is broader in scope.
Rapid review: A rapid review is a type of systematic review that uses a simplified method in order to be completed quickly.
Meta-analysis: Meta-analysis is a statistical method for combining results from several studies. Traditional literature reviews that combine results using meta-analytical techniques are commonly referred to simply as "meta-analyses". Meta-analysis is also a common technique for quantitative evidence synthesis in systematic reviews.
Traditional literature review/narrative review: The traditional or narrative literature review involves an expert who carefully reads and summarizes the literature. The lack of a systematic approach means that there is a high risk of bias in what evidence is included and how it is evaluated and combined.

The paper has the following outline: first, we pose the important question of whether a systematic review is the appropriate review method for your research question. In doing so, we highlight the strengths and limitations of systematic reviews. Next, we guide you through the process of making a systematic review open and reproducible, step by step, following current best practices (e.g., pre-registration of protocols, reporting standards). A brief glimpse of the future of open and reproducible systematic reviews concludes the paper.

Figure 1.
Visualization of the decision process

Note. Step-by-step visualization of the process and decisions involved in conducting an open and reproducible systematic review in psychology, from research question to publication. Abbreviations: SR = Systematic review; RQ = Research question.


Before starting a systematic review, you should ask yourself if this type of review is a good fit with your research question (see Step 1 in Figure 1).

Literature reviews have always been an important part of scholarly research; historically, narrative reviews were written by subject experts. While already immensely useful in collating research findings, such reviews were limited by the inherent subjectivity in the selection of included literature and in the synthesis into conclusions. To combat this, Glass (1976) introduced a way to perform quantitative synthesis of research literature: the meta-analysis. The next great milestone was when the Cochrane Collaboration developed conduct standards for systematic reviews of clinical trials in the 1990s. Their goal was to remove subjectivity in the review process and to reduce biases in the synthesis, due to, for example, "cherry picking" studies to fit one's conclusions. Fast-forward to 2024, and systematic reviews are now the norm for clinical literature reviews and are steadily being embraced by other disciplines, such as psychology, as the most reliable and accurate way to synthesize findings.

Systematic reviews are characterized by following detailed, standardized methods for formulating specific and clear research questions, searching the literature, assessing the literature for quality and risk of bias, extracting and coding data, and synthesizing the extracted data. How the extracted data are synthesized depends on the data type. For quantitative data, meta-analysis can often be used. Sometimes the data cannot be meta-analyzed but are instead displayed graphically or in a table. On the other hand, qualitative data can be synthesized using qualitative/narrative (yet still systematic!) methods. Systematic reviews thus have a very specific purpose and are not intended to replace other types of reviews, such as scoping reviews that aim to give a broad overview of a larger topic (see e.g., JBI; Peters et al., 2020).

Systematic reviews enable researchers to ask highly specific questions about the literature. For example, in the case of intervention studies, the review question is formulated using the PICO framework, where the Population, Intervention type, Comparison, and Outcomes are precisely defined. This requires a razor-sharp research question and strong foundational knowledge about the research literature. However, even if you do not plan on conducting an intervention review, considering the PICO framework can be very helpful (see e.g., Nishikawa-Pacher, 2022); it can clarify the review question and save both time and effort in preparing the search and screening strategies. If your research question is too vague to be turned into a precise review question, a systematic review is probably not your best option.

A systematic review also takes considerable time to complete (at least a year; e.g., Higgins, Thomas, et al., 2023), so if you do not have that much time available (and are prepared to sacrifice some rigor), a rapid review (Garritty et al., 2021) might be more suitable. Furthermore, a systematic review is not a one-person show; it requires a team of at least one content expert, one systematic review method expert, and one literature retrieval expert (i.e., a university librarian).

It is also recommended that at least screening, data extraction, and coding are performed by a minimum of two researchers (e.g., Higgins, Thomas, et al., 2023). This can be a daunting task for junior researchers, who are often faced with conducting literature reviews as a natural part of their research. A realistic approach is for a PhD student to collaborate with another junior researcher who can help with the large task of screening and data extraction, but to also include more senior researchers acting as content experts and method experts, as well as the local university librarian as the information retrieval expert, who can help with writing the protocol and overseeing the conduct and reporting. We would like to stress the importance of including a more senior researcher as the method expert. In our experience, it is far too common for experienced content researchers to underestimate the amount of time or resources a systematic review requires, and it can be difficult for early career researchers to be the sole advocate for this. To summarize, systematic reviews should not be viewed as easy alternatives to empirical research that save time by not having to collect "real data", but as a sophisticated and advanced research task similar to conducting a randomized controlled trial.

Provided that you have made sure that you do, indeed, wish to conduct a systematic review, our guide will help you ensure that it is also open, transparent and reproducible.

The next thing you should ask yourself is what type of conduct standards for systematic reviews you will want to follow (see Step 2 in Figure 1). We recommend one of the following three conduct standards: the Cochrane Collaboration’s MECIR (Higgins, Lasserson, et al., 2023), the Campbell Collaboration’s MECCIR (The Methods Group of the Campbell Collaboration [Campbell Collaboration], 2016), or NIRO-SR (Topor et al., 2023).

Most readers likely know of the Cochrane Handbook for Systematic Reviews of Interventions (Higgins, Thomas, et al., 2023), regardless of whether they work with healthcare interventions or not. The Cochrane Handbook is the official (and very detailed) guide to the process of conducting and maintaining clinical systematic reviews. In order to fulfill the standard, each Cochrane Intervention Review has to meet the Methodological Expectations for Cochrane Intervention Reviews (MECIR; Higgins, Lasserson, et al., 2023). If your research question concerns a health intervention, MECIR is often a great choice, even though it is not necessarily tailored specifically to research in psychology. However, most of clinical and health psychology is methodologically similar enough for researchers to find the resources useful and easy to apply to their research questions.

The equivalent guide and methodological expectations for social science interventions is provided by the Campbell Collaboration. The Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR; Campbell Collaboration, 2016) is heavily based on the Cochrane Handbook but is developed to suit intervention reviews that are not health or medicine related. If your research question pertains to any type of educational or social intervention, MECCIR is an excellent choice.

While the two options above cover systematic reviews of interventions, many topics studied in psychology do not easily lend themselves to intervention studies. If you want to conduct a systematic review of non-interventional psychology studies, you will need different guidelines and conduct standards. Luckily, the framework of Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR; Topor et al., 2023) was recently proposed for this exact purpose. It is developed to be accessible to authors who have little experience with systematic reviews. Importantly, NIRO-SR places great focus on open and reproducible methodology and it is compatible with other common evidence synthesis standards. If your research question calls for a non-intervention systematic review, we strongly recommend following NIRO-SR.

Table 2.
Different types of pre-registration terms
Pre-registration term: Definition
Pre-registration: Pre-registration involves writing down the research question and/or hypotheses, the research methods, and (ideally) the analysis plan (e.g., what type of statistical tests to conduct). This is done before collecting the data and is registered in a trusted, time-stamped public repository. This improves the transparency of the research process, as it is possible to see whether a hypothesis has been changed after the results are known (HARKing), or whether the method has changed in a way that could have biased the result (e.g., changing the inclusion criteria after seeing the results). When an analysis plan is also pre-registered, it is possible to see if the statistical analysis has been adjusted to produce a statistically significant result (i.e., p-hacking).
Registered reports: Registered reports are an extended form of pre-registration conducted together with a scientific journal. They involve submitting the entire introduction and method part of a research article before conducting the study. The submission is then peer-reviewed and revised until the editor can give the authors in-principle acceptance (IPA) and the green light to conduct, analyze, and report on the study. This allows peer reviewers an opportunity to examine to what extent authors have followed the original plan or deviated from it. Importantly, IPA means that the article will be published regardless of outcome, as long as the pre-approved quality checks are met (e.g., adherence to the original plan, reproducibility).
Review protocol: A review protocol consists of the introduction and methods part of a systematic review or scoping review and is thus similar to the first stage of a registered report. Unlike registered reports, protocols are published stand-alone after peer review by a journal, or independently pre-registered by the authors. Because the main purpose of a review protocol is to provide step-by-step reproducible instructions for conducting (and updating) the review, the introduction part is sometimes shortened, and the method section is sometimes expanded to include minor details that would normally be put in an appendix.

Pre-registering one's planned study is becoming more and more endorsed when conducting primary empirical research. This is not surprising, as there are plenty of benefits, both for the author(s) and for the research field as a whole (Nosek et al., 2018). Pre-registering prevents several types of questionable research practices that can increase false positives (see John et al., 2012 for an overview), including optional stopping, p-hacking, and hypothesizing after the results are known (HARKing).

One might think that pre-registrations are redundant for systematic reviews; after all, the data already exist, so there is not really a hypothesis that needs to be pre-registered before data collection. The authors are not entirely blind to what the data will show, as they have already read many of the most influential studies before deciding to conduct a systematic review. Furthermore, the point of a systematic review is to find all relevant studies and synthesize them, so how could there be any risk of bias? We would like to argue that systematic reviews are among the most important studies to pre-register. The reason is that there is a near-infinite number of small choices when conducting a systematic review, all of which will shape the final synthesis. Small changes to the inclusion criteria for population (e.g., age cut-off), type of intervention (e.g., short or medium length), comparison (e.g., types of control groups), outcomes (e.g., single or aggregate measures), and study types (e.g., randomized controlled trials or quasi-experimental studies) all combine into a staggering number of researcher degrees of freedom, enough to reach almost any conclusion, unless the literature is crystal clear, which is rarely the case. It is this realization that has led pre-registration (in this context often called prospective registration) to become best practice for systematic reviews (Stewart et al., 2012), and it is also becoming increasingly common, with over 30 000 registrations already in 2018 (Page et al., 2018). Similarly, since its introduction in 2023, the Generalized Systematic Review Registration Form (Akker et al., 2023) has already been used to pre-register over four thousand systematic reviews on the Open Science Framework (see https://osf.io/search for the current count).

Pre-registration is not only useful to prevent the risk of bias when conducting a systematic review; it is also tremendously helpful for you as the author. Systematic reviews are, unlike primary studies, meant to be updated, as new studies almost inevitably are published. By the time you have completed your first round of screening, data extraction, and analysis and are ready to submit for publication, enough time has likely passed that you need to update the searches and screen the newly found studies. By committing to writing and registering a clear, detailed pre-registration from the very beginning, you ensure that you have an easy-to-follow recipe for whenever you want to update your review. The pre-registration process is outlined in Step 3 in Figure 1.

Select a pre-registration route

When conducting a systematic review, there are three different routes to pre-registration (see Figure 1 and Table 2): registered reports, published protocols, or independent pre-registration.

The strongest route is a registered report. This involves submitting your entire introduction and method section as planned and then getting editor and reviewer feedback before conducting any searches or screening of the literature. After adjustments based on their feedback, you will get an in-principle acceptance before you conduct the review, meaning that you are guaranteed publication once the systematic review is done. When you submit the finished report, the reviewers will check for any deviations from your stage 1 submission.

The upside of registered reports is that they not only prevent the above-mentioned researcher degrees of freedom, but also much of the publication bias, as the publisher cannot reject your systematic review due to a lack of findings. Furthermore, the feedback does not only serve to prevent bias, but can also substantially improve the systematic review. The downside is that the process from registered report to published review takes a long time, and it might not be feasible to pause your review while awaiting reviewer feedback. Furthermore, you have to follow the standards the publisher has set out for registered reports. These might be more adapted to empirical studies, so you should make sure to submit to a journal that also specializes in publishing systematic reviews. An interesting new way of publishing a registered report is submitting to the Peer Community In Registered Reports. The entire process is then handled by a peer community, and once the systematic review is finished, you get to pick, from a list of journals, which one to publish in.

The second route is to write a full systematic review protocol and formally publish it before conducting the review. Some journals allow this (e.g., Campbell Systematic Reviews, Meta-Psychology). This route is quite similar to doing a registered report, but you do not always have the same guarantee that the journal will also accept your final systematic review once it is done. Additionally, similar to registered reports, this process can be quite slow, and you will have to wait until you have published your protocol to begin conducting the review. A clear upside is that you will get to publish your systematic review protocol in a scientific journal. This can be a good experience and strong motivator, especially for early career researchers. Another benefit is the rigorous standards and formats, which can provide support to researchers feeling unsure about the methodology.

The third route is to independently pre-register a systematic review protocol in an online registry (such as the Open Science Framework, OSF). On the one hand, this route allows the flexibility to conduct the review at your own pace, and it gives you the freedom to select which standards and templates are best suited for your specific systematic review. The downside is that this route does not guarantee you a publication. If your results are not deemed interesting by reviewers, your systematic review might be hard to publish. Furthermore, whereas pre-registering an empirical study sometimes only involves a few paragraphs on hypotheses, design, and statistical analyses, pre-registrations for systematic reviews require a full protocol. This typically includes a short introduction as well as a full method section. It thus resembles what you would submit as a registered report if you were to conduct an empirical study. Indeed, once you have written your protocol, you might find yourself in a position where it makes more sense to submit it to a journal than to independently pre-register it.

While the first two routes (registered reports and published protocols) are the ideal approaches in terms of transparency and minimizing bias, we acknowledge that the third route, independent pre-registration, is for many authors and many review topics the most realistic starting point. It is also the route that leaves the most options open to the authors, and thus has the biggest need for support. As such, it is the route that the remainder of this article focuses on. If you decide on one of the first two routes, all later steps in the process will be outlined in detail by the publisher or journal.

Write the protocol

When writing the protocol that you aim to pre-register, you should select a template that matches the conduct standard for your systematic review (see Figure 1). If you follow the Cochrane Collaboration's MECIR (Higgins, Lasserson, et al., 2023), we recommend using the PROSPERO (Booth et al., 2011) protocol template, as they are designed to work together. It consists of a number of items that aid you in planning your review at each stage (searches, screening, data extraction, etc.), and together with the conduct standards, you can be sure that you have written a useful protocol that is both easy for you to follow and easy for others to understand.

If you are following MECCIR (Campbell Collaboration, 2016), we instead recommend using the Generalized Systematic Review Registration Form (Akker et al., 2023), which is available as a pre-registration template on OSF. The Generalized Systematic Review Registration Form is quite similar to the PROSPERO protocol template, but considerably more flexible, detailed, and exhaustive (e.g., 65 vs 40 items to consider). This is especially useful when you are investigating an area that is more complex or heterogeneous than clinical trials. It has purposefully been designed to work for any type of systematic review, in any academic subject, quantitative or qualitative, and is thus a good default when specialized templates are not available for your research topic. It carefully avoids the terms in PROSPERO that are specific to healthcare and to designs used in medical trials, and it thus offers a more inclusive alternative for a broad range of research. Importantly, the protocol template's instructions start with the caveat that a more specialized form should be used when it is a better fit with the research question (e.g., PROSPERO or NIRO-SR), but that it can serve as a fallback alternative. Because many areas still lack a specialized protocol template, it will often be the best alternative.

If you instead follow the NIRO-SR conduct standard (Topor et al., 2023), there is a specific protocol template created for it that you should follow, which can also be registered on OSF. It provides a comprehensive list of elements that should be specified within a systematic review protocol. The NIRO-SR template also includes a short list of answers to commonly asked questions for authors who are new to protocol pre-registration. This protocol template is in many ways similar to the Generalized Systematic Review Registration Form, with the main difference being its focus on non-intervention research. In other words, much like PROSPERO, the NIRO-SR template is a specialized template that should be preferred over the Generalized Systematic Review Registration Form, which serves as the more general fallback.

Regardless of which protocol template you choose, your pre-registration will include detailed inclusion and exclusion criteria (e.g., what population to search for) as well as clear instructions on how to conduct the review (e.g., screening instructions, data extraction instructions). It is important to also create the materials you will use when conducting the review. In particular, you should set up the data extraction sheets (or similar tools) and pre-register them as well. The same goes for the risk of bias tools that will be used. Finally, before pre-registering, it is important to always pilot your protocol and the materials.

It is essential to pre-register your protocol in a trusted public repository specifically designed for this purpose. This ensures it is time-stamped and that it cannot be altered without the changes being tracked. The PROSPERO protocol should be registered in the PROSPERO registry (Booth et al., 2011). Note that the PROSPERO registry does not accept systematic review protocols that do not have some type of health-related outcome. For example, if you are conducting an intervention review for students with ADHD and your primary outcome is school achievement, it cannot be pre-registered in PROSPERO. In that case, and in any other case where PROSPERO is not an option, you should instead pre-register it on OSF. This is also the registry we recommend for the Generalized Systematic Review Registration Form (Akker et al., 2023) and NIRO-SR (Topor et al., 2023).

After pre-registering your protocol, it is time to conduct your systematic review (see Step 4 in Figure 1). Similar to conducting original research, systematic reviews begin with data collection, which entails a systematic search in selected databases and other repositories. To ensure reproducibility, it is necessary to provide detailed search strategies for each database. The PRISMA-S extension (Rethlefsen et al., 2021) provides a checklist of the information you need to report for a reproducible literature search. Keep in mind that databases vary in the precision of their reference searches, and the same search strategy will not always retrieve an identical set of references. To mitigate this, it is best to save the raw files of downloaded references and upload them to the repositories along with the search strategies. That way, anyone can reproduce the rest of your review on the same dataset. Keeping detailed track of your search strategies also simplifies the process of updating your search and enables other researchers to re-use your material.

The next step involves screening titles and abstracts. Screening can be done however one prefers, but today there are many screening tools that streamline this process and allow transparent reporting. Sadly, many of them are paid proprietary software. As of now, we recommend Rayyan (Ouzzani et al., 2016) as a free alternative, as it allows selecting exclusion criteria, annotating and labeling studies, and exporting these data in various spreadsheet and reference formats, which facilitates transparency of the screening stage. If there are multiple screeners (which we highly recommend), each screener's individual decisions will be recorded, not just the final verdict. We recommend exporting separate files for excluded and included studies and uploading both as supplemental materials.

The next phase involves reading the full articles. At the time of writing, we are not aware of any free tools that we can recommend. However, for most research questions, the number of full texts to read is going to be fewer than a hundred, and thus this stage can be successfully documented using any spreadsheet software (such as Google Sheets, Excel, etc.) and the free-to-use reference tool Zotero (2023), which has a built-in PDF reader. We have relied on that option in a project with around two thousand full texts, and it still worked smoothly.

Next, you will extract data from the full-text articles and enter them into data extraction sheets. In order to make data extraction reproducible and transparent, you should apply the same logic of small, documented steps that the systematic review itself rests on. For example, during data extraction, it might be tempting to simply copy an effect size that is reported in the right format (e.g., Cohen's d). However, this puts the calculation of the effect size into a black box. Instead, you should copy the means and standard deviations, noting the page number in the article from which you retrieved them, and calculate the effect size yourself.

We strongly recommend using reproducible R-scripts for any data wrangling or calculations of effect sizes. They also allow you to do reproducible meta-analyses. See the appendix for a detailed example of how you can share reproducible R-scripts. Also, see Lakens et al. (2016) for further guidance on the reproducibility of meta-analyses.
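As a minimal sketch of what this looks like in practice (the study values and page reference are hypothetical, and the appendix contains a fuller template), the metafor package can compute the effect size directly from the extracted means and standard deviations:

library(metafor)

# Values copied from a hypothetical primary study ("Author 2022", Table 2, p. 14)
extracted <- data.frame(id = "Author 2022",
  m1i = 4.8, sd1i = 1.2, n1i = 40,  # intervention group: mean, SD, n
  m2i = 4.1, sd2i = 1.3, n2i = 42)  # control group: mean, SD, n

# Calculate the standardized mean difference (yi) and its variance (vi) yourself,
# instead of copying a reported Cohen's d
effect <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
                 m2i = m2i, sd2i = sd2i, n2i = n2i, data = extracted)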

Throughout the process of conducting your systematic review, you are almost guaranteed to arrive at situations where your protocol does not give clear guidance, or where it is simply impossible to follow. You might even realize that you have made a mistake in the protocol. There is no reason to panic. Deviations from pre-registrations are sometimes needed (Lakens, 2023). Simply make a note of every deviation (what it is and why) as soon as it arises. If the deviations are major, we recommend pausing your review and updating your pre-registration with a new version that explains and justifies the deviation. However, small deviations can simply be mentioned in the final paper.

After successfully completing the systematic review, it is time to decide what to report in the manuscript. As the systematic review method is characterized by rigor, so too is the reporting. The reporting standards ensure that the systematic review itself is open and transparent enough to be helpful for readers (e.g., showing exactly what the authors have done) and that it can be adequately scrutinized by a third party. In addition, they guide review teams in what to report from the conducted review, making sure that no stone is left unturned.

The most notable standard is PRISMA, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (Page et al., 2021), likely the most widely used standard today. It was originally created to counteract poor reporting of systematic reviews. It also has an extension, PRISMA-S (Rethlefsen et al., 2021), which covers even more details to increase reproducibility. We recommend that review teams follow PRISMA and use the item checklist when preparing the manuscript. Because PRISMA is so widely disseminated, it is also compatible with many pre-registration templates. This is important, as it ensures that all necessary items in the pre-registration are covered when the time comes to report the systematic review.

The Generalized Systematic Review Registration Form (Akker et al., 2023) is compatible with PRISMA and even shows the equivalent PRISMA item for each item in the registration form. Likewise, a pre-registration protocol following NIRO-SR is also PRISMA-compliant and will make sure that you do not miss something at the pre-registration stage that should later be reported.

Researchers more accustomed to APA style may instead choose the APA Style JARS Quantitative Meta-Analysis Article Reporting Standards (Appelbaum et al., 2018). Some journals might also require this. Fortunately, it is very similar to PRISMA, and you should not have any problems following APA JARS reporting if you have pre-registered your systematic review with either the Generalized Systematic Review Registration Form or NIRO-SR.

A downside to pre-registering and conducting systematic reviews independently is the risk of publication bias. Simply put, your review might not be published because it did not find what reviewers and editors consider interesting. An inconclusive meta-analysis, for example, might be viewed as something that should be revisited after more studies have been conducted, rather than be published in its current form. Publication bias of this kind is detrimental to science, and the best way to avoid it is to make sure to publish your article as a preprint prior to submission. Any systematic review in psychology can be published for free on PsyArXiv. Furthermore, there now exist several journals committed to publication policies that disregard the findings and novelty of articles, where you can also publish your work (e.g., Collabra: Psychology, Meta-Psychology).

Share the materials and data

Together with your article, it is important to share the materials and data openly. This ensures that other researchers can reproduce your findings, which also increases their trustworthiness. It also ensures that other researchers can update your review when new studies are published. Uploading your data and materials to public repositories like OSF allows them to be cited and shared by the scientific community, which in the end benefits everyone.

Unlike primary research, systematic reviews rely on secondary data. This means that obstacles to sharing data otherwise common in psychology research (such as patient anonymity) are eliminated. Most often, reviews aim to synthesize aggregated data and extract descriptions available in research reports, which means that the data do not have to be anonymized and there are no ethical constraints on data sharing. While data are generally considered "shared" in systematic reviews and meta-analyses when summary tables or forest plots of effect sizes are provided, this is often not enough to reproduce the analyses. Instead, you should aim to share as much as possible: your search strings, search results, screening results, the raw data extraction sheets, the risk of bias tool, and the code to wrangle data and/or calculate effect sizes. You should also publish the code to reproduce the meta-analyses if you have done any. Simply put, share everything that allows you to reproduce your entire research process. See the appendix for a detailed example of how to share this.

Make sure to publish your materials and data in a trusted repository that provides DOIs (digital object identifiers; we recommend OSF), include metadata that describes your dataset, and add a data dictionary (Buchanan et al., 2021) that explains the contents of the data. This will ensure that your data are FAIR: Findable, Accessible, Interoperable, and Reusable (Wilkinson et al., 2016).
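A minimal sketch of what such a data dictionary could look like, assuming a simple extraction sheet with the hypothetical columns below; it is just a small table, shared as its own .csv file alongside the data:

# A data dictionary: one row per variable in the shared extraction sheet
data_dictionary <- data.frame(
  variable = c("id", "n1i", "m1i", "sd1i"),
  type = c("character", "integer", "numeric", "numeric"),
  description = c("Short citation identifying the included study",
                  "Sample size of the intervention group",
                  "Mean of the intervention group on the primary outcome",
                  "Standard deviation of the intervention group on the primary outcome"))

write.csv(data_dictionary, "data_dictionary.csv", row.names = FALSE)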

In this article we have provided guidance on how to follow open science practices when conducting a systematic review in psychology. In doing so, you will reduce the risk of bias in your systematic review and make it much easier for you, or someone else, to update it. Open science practices improve the trustworthiness of research by allowing anyone, including researchers, practitioners, science writers, or patients, to verify your research methods rather than having to take your word for it. By opening up the research process to anyone, the threshold for non-researchers to participate is lowered, and this lays the groundwork for citizen science, where non-researchers take part in the process. This is particularly important for systematic reviews, which tend to strongly influence policy and decision-making. Thanks to open science practices, you can invite stakeholders to join your team and give input throughout the entire process.

What we have described in this article are the beginner steps to improve open science practices in systematic reviews. We are confident that these small steps will make a huge difference, and that the first steps are the most important to take.

As open science practices become more common, we will likely see more mature and advanced practices develop. Independent reproducibility checks by an external party (e.g., by a data editor at a journal) are made possible by open science practices and ensure that the review is not only transparent but also reproduces without errors. Some journals already require them (e.g., Meta-Psychology), and some universities have started offering such checks as support for their researchers.

With the advent of new software solutions, the entire systematic review can be made into a fully reproducible report where the reader can repeat any stage. In Community Augmented Meta-Analyses (CAMAs; Tsuji et al., 2014), for example, the review is uploaded to an online repository and made reproducible and interactive through a web application. This means that, in addition to verifying the process behind your conclusions, readers can also interact and "play around" with the analyses. For example, if a systematic review examined the effect of bullying on anxiety in ages 7 to 18, and an interested practitioner wants the results specifically for children aged 7 to 12, it can be done with just a few clicks in the web app, without requiring any extensive knowledge of statistics. Furthermore, it is possible to update the review (i.e., to augment it) when new data come in. With the rise of better AI tools, we might, in the not-so-distant future, even see this become partially automated.
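Under the hood, such a point-and-click subgroup query amounts to filtering the openly shared effect size data and re-fitting the model. A hypothetical sketch with metafor (the dataset and its moderator column are purely illustrative, not taken from any existing CAMA):

library(metafor)

# Hypothetical shared CAMA dataset: one row per study with effect (yi),
# variance (vi), and the mean age of the sample as a moderator
effect_data <- data.frame(
  id = c("Author 2019", "Author 2021", "Author 2022"),
  yi = c(0.31, 0.45, 0.22),
  vi = c(0.02, 0.03, 0.04),
  mean_age = c(9, 15, 11))

# "A few clicks" in the web app translate to a filter plus a re-fit
children_only <- subset(effect_data, mean_age >= 7 & mean_age <= 12)
rma(yi = yi, vi = vi, data = children_only, method = "REML")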

In conclusion, our hope is that this simple guide provides psychology researchers with the first building blocks for an inspiring future. A future where systematic reviews are not only transparent and reproducible, but also invite researchers and practitioners from across the world to help improve them to the highest standard. A standard which, most importantly, is essential for trustworthy and safe evidence-based decisions when practicing psychology.

RC is the first author of this article. The rest of the authors are presented in alphabetical order. We detail our author contributions using the CRediT standard and the tenzing web application.

Conceptualization: RC

Funding acquisition: RC

Visualization: NH

Writing - original draft: RC, LB, NH, AK, TN, and MT

Writing - review & editing: RC and NH

The present research was supported by a grant from the Swedish Research Council, 2020-03430.

We declare no financial conflict of interest. We do note that the authors are active systematic review methodologists and have co-authored or otherwise contributed to some of the standards and templates recommended in the article.

This appendix serves as a technical companion to the article “A Beginner’s Guide to Open and Reproducible Systematic Reviews in Psychology” and gives detailed and technical examples on how to share the parts of a systematic review.

To ensure a systematic review is open and reproducible, it is necessary to share the following:

  1. The search strings and search results

  2. The screening results

  3. The raw data extraction sheets

  4. The risk of bias tool / other quality tools used

  5. The code to wrangle data and/or calculate effect sizes (if any), and the code to conduct the meta-analysis (if any)

1. The search strings and the search results

For the example of saving the retrieved references and the search strategy used, we will use Web of Science. Disclaimer: this example is only meant to illustrate concrete steps; databases and search interfaces evolve so quickly that following these exact steps might become impossible in a matter of months. Nonetheless, the logic applies to all databases and their updates, and one should remain flexible and save the strategies in as much detail as possible. Please remember to always consult your information retrieval specialist at this stage.

Once you have created your search strings according to recommendations (e.g., Cochrane and Campbell guides; you can always test the quality of your search using the PRESS guidelines, McGowan et al., 2016), you will conduct the search by running that search string in the database.

Once you run the search, you will see your search string, the number of retrieved documents, along with database-specific limiters that sometimes may not be part of your search string.

To have a reproducible search (strategy), you should:

  1. save the exact search string you used in a format that makes it easy to re-run (e.g., copy paste the string in a separate file),

  2. explain, in as much detail as possible, all of the limiters that were used to reach the final number of retrieved documents (e.g., document type, date of publication, language, peer-review status, etc.),

  3. download and save the unedited (raw) files of retrieved references.

Keep in mind that certain interfaces/databases allow you to download full search strategies as editable documents. This is the preferred way of saving your search strategies and you should always carefully read the instructions for each database to ensure you are using it correctly.

The final report containing all search strategies could look something like this:
Provider: Clarivate (Access provided by X University) 
Database: Web of Science Core Collection 
Search link:
https://www.webofscience.com/wos/woscc/summary/xxxxxx 
Search string:
(ALL=("cognitive dissonance" OR "cognitive conflict*" OR "psychological inconsistency" OR "attitude change")) AND ALL=("self-perception" OR "self-affirmation" OR "self-concept" OR "self-identity") AND ALL=("experimental stud*" OR "laboratory research" OR "behavioral experiment*" OR "controlled trial*" OR "intervention stud*" OR "psychological assessment" OR "questionnaire" OR "survey research") 
Timespan: All years (1975 - 2024) 
Results: 20 
Date of search: 2024-09-03 
Entitlements:
All Editions:
Science Citation Index Expanded (SCI-EXPANDED)--1975-present
Social Sciences Citation Index (SSCI)--1975-present
Arts & Humanities Citation Index (AHCI)--1975-present
Conference Proceedings Citation Index – Science (CPCI-S)--1990-present
Conference Proceedings Citation Index – Social Science & Humanities (CPCI-SSH)--1990-present
Emerging Sources Citation Index (ESCI)--2005-present
*No additional limiters have been used 

Download the retrieved references as a file with the .bib or .ris extension to ensure they can be imported by most reference managers or opened as a plain text file. For clarity and consistency, we suggest you name the saved retrieved documents as follows: raw-file_database_date-of-retrieval_number-of-items.bib. In our example, the name would thus be: raw-file_web-of-science_2024-09-03_20-items.bib

Always keep a copy of the raw downloaded references so that you can go back to the beginning in case an issue arises during deduplication or any other step in the screening/data extraction process.

Select all searches (note that some databases have limits on large exports, so they might have to be done in batches), and export them as a BibTeX or plain text file. You should primarily export the references as a file that is compatible with whichever software you will be using for the rest of the systematic review, but for archiving the raw file, BibTeX or plain text are the more universal options. If you do not plan to use a reference manager or screening software, you can also save the file as an Excel file, but not all databases provide all alternatives.

2. Screening results

After screening, you should share the screening results. In this example we are using Rayyan. We then go to the review settings and select export.

Then we check what we want to export.

Here it is important to select which references to share. We recommend creating a dataset of “all references” but depending on the size of your datasets, it might make sense to also use filters and share only the included studies in a separate file.

It is important to share the abstract as well as the decisions. However, you might not want to share user notes and labels, depending on how you have used them while working. For example, you might have created notes that are uninteresting to others or possibly even offensive (e.g., “remember to read this weird study later lmao”). In contrast, if you have used labels or notes to make informative comments on your decisions, you should share them too.

You will receive a zip file from Rayyan (via email), and you should unzip it and inspect the contents (to ensure you have not shared something you do not want to share). One file is a BibTeX file of the studies and their decisions, and the other is a log file. You do not normally need to share the log file, as it is not interpretable without contextual information and is therefore largely meaningless to anyone other than yourself. However, you should keep it, as it is very useful for checking for errors.

3. The raw data extraction sheets

You should share your raw data extraction sheets. It is good practice to use spreadsheet editors that have functions such as drop-downs, colored cells, etc., but keep in mind that such features should only augment the sheet and that it should still be fully machine-readable and possible to share as a .csv. We recommend sharing it both as the original sheet (e.g., .xls) and exported as a simple .csv file.

There are, as of today, no clear standards for what these sheets should look like, but a good starting point is to have each row designate an included study and each column an extracted variable (i.e., the wide format). We also recommend using an interpretable ID for each included study in the short citation format (e.g., Smith 2017, Smith & Tell 2022, Smith et al. 2023, Smith et al. 2023b), as this will then double as the variable used to identify studies in graphs.
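A minimal sketch of such a wide-format extraction sheet, created and exported in R (the studies and columns are only illustrative):

# One row per included study, one column per extracted variable (wide format)
extraction_sheet <- data.frame(
  id = c("Smith 2017", "Smith & Tell 2022"), # short cite doubles as the study label
  n_total = c(120, 85),
  mean_age = c(11.2, 14.6),
  outcome_measure = c("anxiety questionnaire", "anxiety interview"))

# Share both the working spreadsheet (e.g., .xlsx) and a plain, machine-readable .csv
write.csv(extraction_sheet, "data_extraction_sheet.csv", row.names = FALSE)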

4. The risk of bias tool / other quality tools used

Depending on what tool you use, there may be an export function you can use to get the decisions. However, in many cases this will be a simple spreadsheet with the decisions you have made for each study on various characteristics (e.g., whether blinding was present).

5. The code

We recommend using R (R Core Team, 2023) for data wrangling, effect size calculation, and meta-analysis (as applicable). There is an increasing number of resources and templates for reproducible effect size calculation and data extraction, and even entire registered report templates for meta-analyses (see Jané et al., 2024; Gilad Feldman's meta-analysis templates). Generally, that is the level of reproducibility we should strive for, but it can also be daunting for someone who is just starting to conduct meta-analyses. We provide an example of a simple R script that can be used to extract and wrangle data for a meta-analysis. The code from the script can easily be part of a Quarto or R Markdown document as well, if one wishes to write fully reproducible result sections in markdown. The code is available as a supplement both below in the text and as an R file: https://doi.org/10.17605/OSF.IO/BV7G2.

The script is composed of three sections: setup, data extraction, and data wrangling. This is a preparatory script, mostly adapted to the metafor R package (Viechtbauer, 2010), which is the package most commonly used for conducting meta-analyses. Therefore, the variables follow the same naming convention as the arguments of the metafor functions.

  1. The setup section includes installing the necessary packages and can include importing any data from other sources (e.g., if you extract effect sizes in a spreadsheet).

  2. The data extraction section shows code used to create datasets “from scratch” inside R by extracting numbers from the studies and converting them into R objects. Note that you can add comments with information such as the page or table number to reference where the numbers come from. This makes for easier validation by a second reviewer and facilitates reproducibility.

  3. The data wrangling step is more flexible and depends on what you extracted and what final form you need before analysis. In the example script, we show how to recalculate certain effects and combine objects into data frames compatible with metafor.

  4. Meta-analysis: the analysis itself can be placed in a separate script to keep the code legible, or in the same script if the code is not long.

# ===========================================
# Meta-Analysis Script Template
# ===========================================

# ---------- Setup Section ----------
# Install and load required packages
# Uncomment the lines below to install packages 
    if not already installed
# install.packages("tidyverse")
# install.packages("metafor")

library(metafor)
library(tidyverse)

# Optional: Import any additional data from external 
   sources (e.g., spreadsheets)
# Example:
# your_data 
   <- readxl::read_xlsx("path_to_your_file.xlsx"), 
   sheet = 1)
# keep in mind to remove absolute paths 
   (such as C:\\.)  when sharing scripts as 
   they will break reproducibility
# it is enough to just read in the file as 
   readxl::read_xlsx(file.xslx)

# ---------- Data Extraction Section ----------
# In this section, extract data from studies and 
   convert them into R objects (data frames or 
   tibbles).
# Comment references (e.g., page/table number) for 
   easier  validation and reproducibility.

# Example Study 1 - extraction of data for odds 
   ratio data was extracted from Author et al. 
   (2021)  [Table 1](254530)  on page 32 keep in mind that 
   the naming of  statistical terms are according 
   to metafor  naming convention

study_1 <- data.frame(id = "Author 2021", # create 
   an object that references the study in a simple 
   way

 ai = 30, # extract the number of participants 
     in responded yes to the control
 bi = 10, # extract the number of participants 
     that responded yes to the stimuli
 ci = 15, # extract the number of participants 
     that responded yes to both conditions
 di = 500) # extract the number of participants 
     that responded no to both conditions

# Example Study 2 - extraction of data for SMD 
   for  two independent groups data was extracted
   from  Author et al. (2023), page 32., paragraph 
   4 keep in mind that the naming of statistical 
   terms are  according to metafor naming 
   convention

study_2 <- data.frame(id = "Author 2023", 

# create  an object that references the study
   in a simple way

 m1i = 201.5, # mean value of group 1
 m2i = 225.6, # mean value of group 2
 sd1i = 22.53, # SD value of group 1
 sd2i = 19.59, # SD value of group 2
 n1i = 96, # sample size of group 1
 n2i = 96 # sample size of group 2
 )

# Additional studies can be added here following the same pattern.
# Keep in mind these are examples of different effect size measures; in an
# actual meta-analysis you should keep the measure the same throughout.

# ---------- Data Wrangling Section ----------
# In this section, process and format the data to make it compatible with the
# analysis (e.g., for metafor).
# Example of wrangling: combine multiple datasets into one by binding the rows
# of the extracted data.

combined_data <- bind_rows(study_1, study_2)

# Example wrangling step: modify the data to meet specific analysis needs.
# This part depends on your requirements, e.g., recalculating variables or
# filtering rows.
# In this example, you might filter based on certain conditions or create
# new columns.
# Example transformation (modify as needed):

combined_data <- combined_data %>%
  mutate(new_column = ai + bi) %>% # add a new calculated column
  filter(new_column > 20)          # filter based on specific criteria
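# Caution (added note, not in the original template): filter() drops rows for
# which the condition evaluates to NA, so studies without ai/bi values (such as
# the SMD example study_2 above) would be removed at this step.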

# Create a data frame with the values needed for the meta-analysis

effect_data <- escalc(measure = "OR", ai = ai, bi = bi,
                      ci = ci, di = di, data = combined_data)
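# Optional (illustrative sketch): if your studies report means and SDs rather
# than 2x2 counts (as in the hypothetical study_2 above), the standardized mean
# difference can be computed with measure = "SMD"; the argument names follow
# metafor's convention for two independent groups.
# effect_data_smd <- escalc(measure = "SMD",
#                           m1i = m1i, m2i = m2i,
#                           sd1i = sd1i, sd2i = sd2i,
#                           n1i = n1i, n2i = n2i,
#                           data = study_2)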

# Save the final dataset (optional)
# write.csv(effect_data, "path_to_save_final_data.csv")
# The final `effect_data` object can now be used in further meta-analysis
# workflows.

# --------------------------------------------
# Meta-Analysis -- this section can be in its own script as well
# --------------------------------------------
# Repeat the setup section in case you open a new script:
# library(tidyverse)
# library(metafor)
# effect_data <- read_csv("effect_data.csv")
# Conduct the meta-analysis according to the pre-registered plan.
# Note that data frames created with escalc already contain the values
# (yi and vi) needed to fit a simple RMA model.

# Step 1: Random-Effects Meta-Analysis

meta_analysis <- rma(yi = yi,          # effect sizes
                     vi = vi,          # sampling variances
                     data = effect_data,
                     method = "REML")  # select the planned model

# Print meta-analysis results

print(meta_analysis)
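# Optional (added suggestion, adapt to your pre-registered plan): confidence
# intervals for the heterogeneity estimates and a prediction interval for the
# true effects can be obtained with standard metafor functions.
# confint(meta_analysis)
# predict(meta_analysis)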

# Forest Plot

forest(meta_analysis,
       slab = effect_data$id, # labels for the studies
       xlab = "Effect Size",
       main = "Random-Effects Meta-Analysis Forest Plot")

# Funnel Plot

funnel(meta_analysis,
 xlab = "Effect Size",
 main = "Funnel Plot of Effect Sizes")

References for the appendix

Jané, M., Xiao, Q., Yeung, S., *Ben-Shachar, M. S., *Caldwell, A., *Cousineau, D., *Dunleavy, D. J., *Elsherif, M., *Johnson, B., *Moreau, D., *Riesthuis, P., *Röseler, L., *Steele, J., *Vieira, F., *Zloteanu, M., & ^Feldman, G. (2024). Guide to Effect Sizes and Confidence Intervals. http://dx.doi.org/10.17605/OSF.IO/D8C4G

McGowan, J., Sampson, M., Salzwedel, D. M., Cogo, E., Foerster, V., & Lefebvre, C. (2016). PRESS peer review of electronic search strategies: 2015 guideline statement. Journal of Clinical Epidemiology, 75, 40–46. https://doi.org/10.1016/j.jclinepi.2016.01.021

Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1-48. https://doi.org/10.18637/jss.v036.i03

R Core Team (2023). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. <https://www.R-project.org/>.

Akker, O. R. van den, Peters, G.-J. Y., Bakker, C. J., Carlsson, R., Coles, N. A., Corker, K. S., Feldman, G., Moreau, D., Nordström, T., Pickering, J. S., Riegelman, A., Topor, M. K., van Veggel, N., Yeung, S. K., Call, M., Mellor, D. T., & Pfeiffer, N. (2023). Increasing the transparency of systematic reviews: Presenting a generalized registration form. Systematic Reviews, 12(1), 170. https://doi.org/10.1186/s13643-023-02281-7
Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3–25. https://doi.org/10.1037/amp0000191
Booth, A., Clarke, M., Ghersi, D., Moher, D., Petticrew, M., & Stewart, L. (2011). An international registry of systematic-review protocols. The Lancet, 377(9760), 108–109. https://doi.org/10.1016/S0140-6736(10)60903-8
Buchanan, E. M., Crain, S. E., Cunningham, A. L., Johnson, H. R., Stash, H., Papadatou-Pastou, M., Isager, P. M., Carlsson, R., & Aczel, B. (2021). Getting Started Creating Data Dictionaries: How to Create a Shareable Data Set. Advances in Methods and Practices in Psychological Science, 4(1), 251524592092800. https://doi.org/10.1177/2515245920928007
Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. J. (2014). Instead of “playing the game” it is time to change the rules: Registered Reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–17. https://doi.org/10.3934/neuroscience.2014.1.4
Garritty, C., Gartlehner, G., Nussbaumer-Streit, B., King, V. J., Hamel, C., Kamel, C., Affengruber, L., & Stevens, A. (2021). Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. Journal of Clinical Epidemiology, 130, 13–22. https://doi.org/10.1016/j.jclinepi.2020.10.007
Glass, G. V. (1976). Primary, Secondary, and Meta-Analysis of Research. Educational Researcher, 5(10), 3–8. https://doi.org/10.2307/1174772
Higgins, J. P. T., Lasserson, T., Thomas, J., Flemyng, E., & Churchill, R. (2023). Methodological Expectations of Cochrane Intervention Reviews. Cochrane.
Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.). (2023). Cochrane Handbook for Systematic Reviews of Interventions version 6.5 (updated August 2024). Cochrane. http://www.training.cochrane.org/handbook
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Lakens, D. (2023). When and How to Deviate from a Preregistration. https://doi.org/10.31234/osf.io/ha29k
Lakens, D., Hilgard, J., & Staaks, J. (2016). On the reproducibility of meta-analyses: Six practical recommendations. BMC Psychology, 4(1), 24. https://doi.org/10.1186/s40359-016-0126-3
Lindsay, D. S. (2017). Sharing Data and Materials in Psychological Science. Psychological Science, 28(6), 699–702. https://doi.org/10.1177/0956797617704015
Nishikawa-Pacher, A. (2022). Research Questions with PICO: A Universal Mnemonic. Publications, 10(3), 21. https://doi.org/10.3390/publications10030021
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
Ouzzani, M., Hammady, H., Fedorowicz, Z., & Elmagarmid, A. (2016). Rayyan-a web and mobile app for systematic reviews. Systematic Reviews. https://doi.org/10.1186/s13643-016-0384-4
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., … Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Page, M. J., Shamseer, L., & Tricco, A. C. (2018). Registration of systematic reviews in PROSPERO: 30,000 records and counting. Systematic Reviews, 7(1), 32. https://doi.org/10.1186/s13643-018-0699-4
Peters, M. D., Godfrey, C., McInerney, P., Munn, Z., Tricco, A. C., & Khalil, H. (2020). Chapter 11: Scoping Reviews (2020 version) (E. Aromataris & Z. Munn, Eds.). JBI. https://doi.org/10.46658/JBIMES-20-12
Rethlefsen, M. L., Kirtley, S., Waffenschmidt, S., Ayala, A. P., Moher, D., Page, M. J., Koffel, J. B., Blunt, H., Brigham, T., Chang, S., Clark, J., Conway, A., Couban, R., de Kock, S., Farrah, K., Fehrmann, P., Foster, M., Fowler, S. A., Glanville, J., … PRISMA-S Group. (2021). PRISMA-S: An extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Systematic Reviews, 10(1), 39. https://doi.org/10.1186/s13643-020-01542-z
Stewart, L., Moher, D., & Shekelle, P. (2012). Why prospective registration of systematic reviews makes sense. Systematic Reviews, 1(1), 7. https://doi.org/10.1186/2046-4053-1-7
The Methods Group of the Campbell Collaboration. (2016). Methodological expectations of Campbell Collaboration intervention reviews: Conduct standards. The Campbell Collaboration. https://doi.org/10.4073/cpg.2016.3
Topor, M., Pickering, J. S., Barbosa Mendes, A., Bishop, D. V. M., Büttner, F., Elsherif, M. M., Evans, T. R., Henderson, E. L., Kalandadze, T., Nitschke, F. T., Staaks, J. P. C., Van Den Akker, O. R., Yeung, S. K., Zaneva, M., Lam, A., Madan, C. R., Moreau, D., O’Mahony, A., Parker, A. J., … Westwood, S. J. (2023). An integrative framework for planning and conducting Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR). Meta-Psychology, 7. https://doi.org/10.15626/MP.2021.2840
Tsuji, S., Bergmann, C., & Cristia, A. (2014). Community-Augmented Meta-Analyses: Toward Cumulative Data Assessment. Perspectives on Psychological Science: A Journal of the Association for Psychological Science, 9(6), 661–665. https://doi.org/10.1177/1745691614552498
UNESCO. (2021). UNESCO Recommendation on Open Science [Programme and meeting document]. UNESCO.
White House Office of Science and Technology Policy (OSTP). (2022). Desirable Characteristics of Data Repositories for Federally Funded Research. Executive Office of the President of the United States. https://doi.org/10.5479/10088/113528
Wilkinson, M. D., Dumontier, M., Aalbersberg, Ij. J., Appleton, G., Axton, M., Baak, A., Blomberg, N., Boiten, J.-W., da Silva Santos, L. B., Bourne, P. E., Bouwman, J., Brookes, A. J., Clark, T., Crosas, M., Dillo, I., Dumon, O., Edmunds, S., Evelo, C. T., Finkers, R., … Mons, B. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 160018. https://doi.org/10.1038/sdata.2016.18
Zotero. (2023). Zotero (Version 6.0) [Computer software].

Supplementary Material