Registered reports are a new publication workflow where the decision to publish is made prior to data collection and analysis and thus cannot be dependent on the outcome of the study. An increasing number of journals have adopted this new mechanism, but previous research suggests that submission rates are still relatively low. We conducted a census of journals publishing registered reports (N = 278) using independent coders to collect information from submission guidelines, with the goal of documenting journals’ early adoption of registered reports. Our results show that the majority of journals adopting registered reports are in psychology, and it typically takes about a year to publish the first registered report after adopting the format. Still, many journals have not published their first registered report. There is high variability in the impact of journals adopting registered reports. Many journals do not include concrete information about policies that address concerns about registered reports (e.g., exploratory analysis); however, those that do typically allow these practices with some restrictions. Additionally, other open science practices are commonly encouraged or required as part of the registered report process, especially open data and materials. Overall, journals frequently omitted many of the fields coded by the research team, which could be a barrier to submission for some authors. Though the majority of journals allow authors to be anonymous during the review process, a sizable portion do not, which could also be a barrier to submission. We conclude with future directions and implications for authors of registered reports, journals that have already adopted registered reports, and journals that may consider adopting registered reports in the future.

First adopted in 2013 by Cortex and Perspectives in Psychological Science, registered reports are a new publication workflow where the decision about acceptance at the journal occurs before data is collected, or, in the case of secondary data, before researchers have analyzed the data (Chambers, 2013). This process differs from preregistration, where researchers submit their hypotheses and data analysis plan to a registry, rather than a journal, prior to collecting data (Wagenmakers et al., 2012). A registered report involves submission of an introduction, methods, and analysis plan before data collection begins, or in the case of secondary analysis, before data is analyzed. These materials are sent out for Stage 1 review, where peer reviewers evaluate the proposal; based on these reviews, the editor recommends rejection or revision, or offers an in-principle acceptance. This in-principle acceptance is a commitment from the journal to publish the results of the study, regardless of the outcome, assuming the authors have followed their proposed protocol. Once the authors receive an in-principle acceptance they can begin data collection or analysis. After data collection, the researchers prepare the final manuscript, which is a complete research article, and submit it for Stage 2 review. Stage 2 review is primarily used to evaluate whether the Stage 1 proposal was adhered to and whether the results are presented clearly and consistently, rather than to evaluate the value of the study (which was already done at Stage 1).

Registered reports have two defining characteristics: 1) peer review prior to data collection, and 2) acceptance that is not contingent on results. By receiving peer review feedback prior to data collection, flaws in the design of the study can be identified and corrected before data is collected. Additionally, peer review prior to data collection shifts the focus of peer review away from the results and towards the methods and the importance of the research question. This process is meant to eliminate publication bias, the tendency of journals and authors to favor publishing results that show evidence of an effect (Hardwicke & Ioannidis, 2018; Nosek & Lakens, 2014; Scheel et al., 2021), and also to disincentivize researchers from engaging in questionable research practices, since they no longer have to find evidence of an effect to get a publication (Chambers & Tzavella, 2020; Hardwicke & Ioannidis, 2018). By altering when the decision is made to accept a manuscript for publication, the incentives for researchers shift away from finding evidence for effects and towards designing informative research studies. Previous research suggests that authors of registered reports invest more up front in research planning compared to authors of non-registered reports (Bloomfield et al., 2018).

Registered reports started in 2013 at a few journals, but have spread quickly throughout academic fields such as psychology, agriculture, business, medicine, neuroscience, and the social sciences (Hardwicke & Ioannidis, 2018). In early 2015, only 11 journals accepted registered reports (6 in psychology), but by 2018, 91 journals had adopted registered reports (38 in psychology; Hardwicke & Ioannidis, 2018). By early 2021, approximately 280 journals accepted registered reports. The current research aims to provide a snapshot of the journals that have adopted registered reports and to explore the variability in policies surrounding registered reports. A better understanding of the current requirements of registered reports at these various journals can provide clear documentation of the early adoption of registered reports across academic journals. Ultimately, we hope this study will assist researchers in adopting this new publication mechanism, though the primary focus of this manuscript is on documentation of journal policies, not the effect of journal policies on submission rates.

Registered Reports So Far

One of the primary benefits of registered reports to the scientific community is that they reduce publication bias by publishing null results at a higher rate than non-registered reports (Allen & Mehler, 2019; Scheel et al., 2021). Scheel and colleagues (2021) compared published registered reports to a random sample of papers that tested hypotheses in psychology, and found that non-registered reports contain only 3.95% null findings, as compared to 56.34% for registered reports. While one proposed explanation is that registered report authors pursue riskier research questions, Soderberg and colleagues (2020) found minimal differences between registered reports and non-registered reports on novelty and creativity. Registered reports may also contribute to the quality and transparency of the research in other ways. Obels and colleagues (2020) demonstrated that registered reports in psychology had much higher computational reproducibility than other studies with open data and code, including papers from Cognition (Hardwicke et al., 2018) and papers from Science (Stodden et al., 2018).

Registered reports are not the only new practice that aims at improving the quality and transparency of scientific research. In the last decade a suite of practices has emerged, including preregistration, open data, and open materials. Preregistration, like a registered report, involves registering a study’s hypotheses and data analysis plan prior to collecting data, but differs in that preregistrations are not submitted to a journal, do not undergo peer review, and carry no in-principle acceptance. Open data and open materials refer to the data and materials from a study being made publicly available, and these practices are recognized and incentivized through Open Science Badges (Center for Open Science, Badges, n.d.). Though most journals, not just those publishing registered reports, require data to be made available “upon request,” only 19% of such requests are fulfilled (Stodden et al., 2018; Vines et al., 2014); however, responses to these requests are becoming more common (Kidwell et al., 2016). Open data and materials themselves (without requiring a request) are also fairly rare (2% and 14% respectively; Hardwicke et al., 2021). Over the last decade, there has also been increased attention to replication of previous research. While open science practices improve transparency of the research being conducted, replications of previous work are one mechanism by which science can self-correct. Unfortunately, replications within psychology are very rare (1.6% [Makel et al., 2012]; 5% [Hardwicke et al., 2021]). A particular strength of registered reports is that they provide the opportunity to use all (or some) of these practices in concert. Registered reports have been proposed as a way to incentivize replication research (Chambers, 2013; Chambers et al., 2014; Chambers & Tzavella, 2020; Nosek & Lakens, 2014), and journals can require external preregistration, open data, and open materials as part of the registered reports process. While these practices have been recommended to be implemented alongside registered reports (Nosek & Lakens, 2014), little has been done to document whether these recommendations have been adopted at the journal level. The current research explores this question.

Registered reports are increasingly viewed as high quality and impactful, and authors who have published registered reports seem to have a favorable opinion of the process. In a comparison of registered reports to non-registered reports selected to match based on article type, journal, and publication timeline, registered reports had a higher citation rate and greater Altmetric attention score (i.e., social media attention; Hummer et al., 2017). Similarly, registered reports were rated more favorably than non-registered reports across a wide range of criteria, with the largest effects in rigor of methodology and analysis as well as overall paper quality (Soderberg et al., 2020). Some universities (e.g., Virginia Commonwealth University, Indiana University-Purdue University Indianapolis) have also started to include open science practices as part of their promotion and tenure evaluation, so early career researchers can benefit by publishing registered reports (Allen & Mehler, 2019; Nosek, 2017). AERA Open published a series of registered reports in 2020, and each authorship team also published a brief reflection on the process. Overall, the responses were very positive (e.g., Cimpian & Timmer, 2020; Lu et al., 2020). While some journals that temporarily tested registered reports did not find the format worthwhile (Ansell & Samuels, 2016; Bloomfield et al., 2018), editors and reviewers for registered reports have reported generally positive reactions (DeHaven et al., 2019; Reich et al., 2020).

Existing Barriers

While the rate of adoption among journals has been nearly exponential, recent research suggests that author adoption may lag behind. Data from the first adopting journals show that there was approximately a two-year lag between adopting registered reports and publishing the first registered report at a journal (Chambers & Tzavella, 2020). The lag between adoption and publication is highly dependent on time to complete the research, and varies greatly by field. For example, an eLife special issue on The Reproducibility Project in Cancer Biology had 1,122 days between in-principle acceptance and publication, whereas Cortex had a median time of 473 days (Hardwicke & Ioannidis, 2018). Recent reports suggest that approximately one-third of journals that accept registered reports have received no registered report submissions (DeHaven et al., 2019; Hardwicke & Ioannidis, 2018). In 2020, the Journal of Psychiatric and Mental Health Nursing discontinued registered reports, citing a lack of submissions as the primary reason (Elliott, 2021; Gray et al., 2020).

Because registered reports are still relatively new, researchers, editors, and reviewers may be unfamiliar with the details of the process and this can act as a barrier for new registered report authors. Concerns have arisen around whether exploratory analyses are allowed (Allen & Mehler, 2019; Chambers, 2019; Chambers et al., 2014; Parker et al., 2019), how secondary data is handled (Chambers et al., 2014; Parker et al., 2019; Syed & Donnellan, 2020), and how multiple studies and pilot studies are handled (Chambers, 2019; Chambers et al., 2014; Mehlenbacher, 2019). These concerns can be summed up in the second most common challenge reported by editors at journals with registered reports, which is a “misunderstanding with the format” (the first being attracting submissions; DeHaven et al., 2019). While existing articles have described how journals could address these concerns for registered reports (e.g., Chambers & Tzavella, 2020; Parker et al., 2019), the current research documents how many journals have addressed these concerns by including additional information in their submission guidelines.

Another common concern is about the quality of journals that accept registered reports (Chambers et al., 2014; Hummer et al., 2017). Under the current academic incentive system, researchers have little incentive to shift away from the journals they most commonly publish in to publish registered reports, especially if this comes at a cost of impact or field relevance. Some high impact journals in psychology and behavioral science offer registered reports (e.g., Psychological Science, Nature Human Behaviour); however, adoption of registered reports is not yet ubiquitous, which could be a limitation if journals publishing registered reports are not perceived as high quality.

Another barrier in the registered reports process could be concerns about anonymity in the peer review process. Some research suggests that when an author's identity is revealed during the peer review process, this can disadvantage women, minorities, and individuals from less prestigious universities (Knobloch-Westerwick et al., 2013; Manchikanti et al., 2015; Tomkins et al., 2017); however, findings with respect to women are more mixed (Cox & Montgomerie, 2019; Webb et al., 2008). During Stage 1 review of a registered report, the study has not yet been implemented, so if the authors' identities are known, reviewers have the opportunity to rely on the authors' “reputation” rather than on the proposal itself (Chambers et al., 2014; Scott, 2013). This could manifest in biases against researchers at smaller universities, women, and underrepresented minorities.

The Impact of Journal Policies

Journal policies are particularly important for communicating information about registered reports to researchers, given their very recent adoption. Previous research on non-registered reports has pinpointed journal submission guidelines as a potential source of information (or lack thereof) related to peer review and preprints (Klebel et al., 2020). Updating policies at journals can affect the practices of the researchers publishing in those journals. Journal policies have the potential to broadly influence the field, as seen with policies promoting open data (Klein et al., 2018). One study on journal data sharing policies described journal policies to promote data sharing as “extremely effective,” noting that with a simple policy change in the Journal of Decision Making the percentage of articles with accessible open data rose from 8.6% to 87.4% after the policy went into effect in 2011 (Nuijten et al., 2017). The journal Bioinformatics updated its peer review policy to prevent reviewer-coerced citation, which is where a reviewer requests that their own papers be cited in a manuscript they are reviewing, and found this change to be very effective (Wren et al., 2019). Policies are a low-cost way to promote fieldwide changes and to communicate new information to authors. Open science badges increased awareness of open science, with the journal Psychological Science seeing a gradual yet accelerating increase in articles providing open data, from 3% to over 39% after badges were introduced (Kidwell et al., 2016). Journal policies can be a catalyst for the field to adopt other new publication practices, such as registered reports.

Research Questions

Although the adoption of registered reports at journals is by definition required for researchers to adopt registered reports, a journal adopting registered reports does not guarantee authors will begin publishing registered reports in that journal. Journal policies provide much needed information to researchers about what registered reports are in general, and how they are implemented at that journal. While previous work has focused on encouraging journals to adopt registered reports as a publication mechanism (Chambers & Mellor, 2018; Nosek & Lakens, 2014), the current work focuses on reporting the current documented practices journals have adopted for registered reports. We document various aspects of journal adoption of registered reports, as well as describe how common different policies are amongst journals publishing registered reports according to online submission guidelines.

Overall this research study aims to answer the following questions:

  1. Adoption: What journals have adopted registered reports, and what are the academic fields of those journals?

  2. Policies: How frequently do journals include explicit “adjustment” policies to help inform researchers about ways registered reports can accommodate different research practices (e.g., research that includes exploratory analyses)?

  3. Impact: How impactful are the journals publishing registered reports?

  4. Open Science: How frequently are journals that adopt registered reports including recommendations for other open science practices as part of these reports? And if they are requiring open science practices, which are most common?

  5. Review Masking: How frequently do journals that have adopted registered reports allow for author masking during the review process?

Disclosures

This study was not preregistered. Data, materials, and additional resources can be found at https://osf.io/4yvu9/. We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study (see Simmons et al., 2011). No ethical approval was required, as the study involved no human or animal subjects.

To create our comprehensive list of journals publishing registered reports, we used the Open Science Framework (OSF) repository for journals publishing registered reports. Data was collected in three rounds. A list of 141 journals was collected on October 24th, 2018. A second round of 102 additional journals was collected on April 21st, 2020. A third round of 35 journals was collected on January 13th, 2021. Each wave of collection aligned with a revision of the manuscript.

Coding

Two coders independently double-coded the journals on all of the variables in three rounds of data collection. Coders used links listed on the Center for Open Science’s page for journals publishing registered reports (Center for Open Science, 2018) and any additional sources required to find all the information listed in the variables. If information was unavailable, the field was marked as missing. The two coders met approximately weekly (about every 15-20 journals) with the first author to resolve disagreements in the coding. Disagreements were resolved by returning to the web pages, evaluating the information as a group, and reaching unanimous consensus. The most common disagreements occurred when one coder did not find relevant information and so listed a field as “missing,” whereas the other coder found the information. Overall agreement among the coders was assessed using Cohen’s κ because it accounts for the base rates of different categories (whereas percent agreement does not; Cohen, 1960). This measure of interrater reliability was good [average κ = 0.79; range of κ = 0.44 (scientific discipline) to κ = 0.98 (withdraw after Stage 1)]. After coding was complete, a final dataset was compiled by combining the responses of the two coders and recording them in a consistent way across all entries. This was done because, while the coders’ responses may have had the same meaning, they were not always identical (e.g., 5000 words vs 5,000 words). Preference was given to responses that were more in-depth; for example, between “allows multiple studies” and “allows multiple studies and uses incremental registration,” the latter would be retained as the final response. Each coder’s original data, as well as the combined dataset, are available on OSF (https://osf.io/4yvu9/).
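
To illustrate the reliability calculation, the sketch below shows how a per-variable Cohen’s κ could be computed between the two coders. This is a minimal sketch with hypothetical file and column names (the actual coding sheets are on OSF), not the authors’ exact script.

```python
# Minimal sketch (hypothetical file and column names): per-variable agreement
# between the two independent coders, summarized with Cohen's kappa.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

coder_a = pd.read_csv("coder_a.csv")  # one row per journal, one column per coded variable
coder_b = pd.read_csv("coder_b.csv")

variables = [c for c in coder_a.columns if c != "journal"]
kappas = {}
for var in variables:
    # Treat "missing" as its own category so disagreements about whether
    # information exists at all are counted as disagreements.
    a = coder_a[var].fillna("missing").astype(str)
    b = coder_b[var].fillna("missing").astype(str)
    kappas[var] = cohen_kappa_score(a, b)

summary = pd.Series(kappas).sort_values()
print(summary)         # per-variable kappa, lowest agreement first
print(summary.mean())  # average kappa across variables
```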

Variables

For this study, 18 variables were collected to focus on adoption, journal policies, journal impact, open science, and review masking. These variables were generated by the first and second authors in an attempt to both create variables that would be easy to identify and to answer the questions of interest. Below are the variables grouped and organized by the research question for which they are relevant.

Adoption

A general indicator of scientific discipline (e.g., psychology, biology, medicine) was recorded for each journal. For journals that were classified as “psychology,” we classified psychology research areas based on a list from Lumen Learning adapted from Gurung and colleagues (2016; Lumen Learning, n.d.). The research areas included biological; cognitive; developmental; social and personality; mental and physical health; research methods and statistics; and general psychology. Subareas of biological psychology were biopsychology/neuroscience, sensation, and consciousness. Subareas of cognitive psychology were perception, thinking, intelligence, memory, and cognition. Subareas of developmental psychology were learning and lifespan development. Subareas of social and personality psychology were social, personality, emotion, and motivation. Subareas of mental and physical health were abnormal; therapies; and stress, lifestyle, and health. There were no subareas of research methods and statistics. Coders were allowed to choose up to three research areas and subareas and were instructed to select the highest level areas that appropriately captured the topic of the journal. Coders sometimes selected categories outside of this list. The first research area was used to categorize the journal, and if the first area was not in these categories, the second research area was used, etc. If all three research areas were not included in these categories, they were listed as “Other,” which included business, economics, human behavior, language, law, linguistics, marketing, media, music, religion, and school psychology.

The year that registered reports were adopted at the journal was recorded. In an attempt to verify the years that journals officially began to accept registered reports, we cross-checked our dates with the dates used by Hardwicke and Ioannidis (2018), which were made accessible through their supplemental materials online. Using their dataset, we were able to double-check the adoption year of registered reports for 91 journals (all in Wave 1). The year of adoption collected in both datasets was the same for 42 (46.15%) of the journals. There was a discrepancy in the year of adoption for 49 journals. For 20 of these journals (31.97%) we kept our date. The year of adoption for the remaining 29 (31.86%) journals was changed to match the Hardwicke and Ioannidis (2018) data because ours was either missing or incorrect.

Coders also recorded whether registered reports are (or were) being accepted for a special issue or for general submission. This designation was aided by the listing by the Open Science Framework (Center for Open Science, 2018), which divides journals based on this designation. Some journals had an initial special issue, and later adopted registered reports for general submissions. The most recent information was used for the coding.

To find the date a journal first published a registered report, we used information from a database of published registered reports (www.zotero.org/groups/479248/osf/collections/KEJP68G9), supplemented by curated lists of registered reports, to find the first published paper from each journal accepting registered reports. This information was collected and updated at each wave of data collection. From this we calculated the difference (in years) between the announced adoption of registered reports and the first publication. A few journals had conflicting information about adoption and first publication, where the first publication was prior to the reported year of adoption (JMIR Diabetes, JMIR Human Factors, JMIR Medical Informatics, JMIR mHealth and uHealth, and JMIR Rehabilitation and Assistive Technologies). These journals were coded as 0 on time between adoption and publication.
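
A minimal sketch of how this lag could be derived is shown below, assuming hypothetical column names for the adoption and first-publication years; the rule of coding conflicting dates as 0 follows the description above.

```python
# Minimal sketch (hypothetical column names): years between a journal's
# adoption of registered reports and its first published registered report.
import pandas as pd

journals = pd.read_csv("journals.csv")  # columns assumed: year_adopted, year_first_rr

lag = journals["year_first_rr"] - journals["year_adopted"]
# Journals whose first publication predates the recorded adoption year
# (e.g., the JMIR titles) are coded as 0, per the rule described above.
journals["lag_years"] = lag.clip(lower=0)

published = journals["year_first_rr"].notna()
print(journals.loc[published, "lag_years"].median())  # median lag among publishing journals
print((~published).mean())                            # share with no published RR yet
```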

Policies

These variables were generated as a list of information we believe researchers would want to know about the submission requirements for registered reports and that aligned with some of the common concerns around registered reports. These included the types of studies accepted (novel and/or replications), along with whether and how exploratory analyses, multiple studies, secondary data analysis, and preliminary data can be included in the manuscript. Additionally, coders examined whether and how withdrawal after Stage 1 acceptance and changes to the introduction for Stage 2 submission are handled, and whether there is a deadline for Stage 2 submission. Coders recorded information about each policy as either “Yes” (practice is allowed) or “No” (practice is not allowed). Policies were listed as missing if coders were unable to find concrete information about whether each practice was allowed. If additional information was available about how the policy was implemented, this was also recorded (e.g., incremental registration for multiple studies).

We also examined word limits. Coders looked for specific information on word limits for registered reports in the registered reports guidelines as well as the general author submission guidelines. If word limits were not available for registered reports specifically, coders reported word limits for general research articles. If neither was found, the coders marked the information as missing.

Coders recorded whether there were statistical power requirements and what those requirements were. In the case of power analysis, many journals mentioned that studies should be adequately powered, but did not provide concrete information about what is considered “adequately powered.” Unless the journal provided concrete numerical information for power analyses (e.g., 0.9 for frequentist analyses and Bayes Factor > 6 for Bayesian analyses), or specifically said they do not have a numerical requirement for power, the field was listed as missing.
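
To give a concrete sense of what a numerical power requirement implies, the sketch below translates required power levels into per-group sample sizes for a simple two-group comparison. The effect size and alpha are illustrative assumptions only, not a requirement of any coded journal.

```python
# Minimal sketch (illustrative numbers only): sample size implied by a
# journal's numerical power requirement in a simple two-group design.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for required_power in (0.80, 0.90, 0.95):
    # Assumed medium standardized effect (d = 0.5) and alpha = .05, two-sided;
    # a real Stage 1 submission would justify its own effect size.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                       power=required_power,
                                       alternative="two-sided")
    print(f"power {required_power:.2f}: ~{n_per_group:.0f} participants per group")
```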

Impact

Two-year impact factor (2017), five-year impact factor (2017), and h-index were collected for each journal. Impact factors and h-indexes were collected from the journal website if available, or from www.scimagojr.com or www.academic-accelerator.com1 if not available directly from the journal.

Open Science Practices

As part of open science policies, we coded whether external preregistration, open data, and open materials were required or encouraged for registered reports or at the journal in general.2 Information on these policies was pulled specifically from the registered report guidelines; however, if journals had policies that applied to all papers submitted to that journal (e.g., all papers must have open data) these were also coded. In these cases, coders did not differentiate between journal-wide and registered report specific policies in the coding. If information about these policies could not be found for registered reports or the journal more broadly, the policy was coded as missing.

At a reviewer's suggestion, we cross-checked the journals accepting registered reports with the list of Transparency and Openness Promotion (TOP) signatories. The TOP guidelines describe signatories as “expressing their support of the principles of openness, transparency, and reproducibility, expressing interest in the guidelines and commit to conducting a review within a year of the standards and levels of adoption” (www.cos.io/our-services/top-guidelines). We recorded the total TOP score and the specific Registered Report & Publication Bias score (www.topfactor.org/) for each journal accepting registered reports (this was done for all journals in the census at the most recent wave of data collection). The Registered Report & Publication Bias score evaluates the journal's policy of conducting registered reports outside of replications, where a score of 1 indicates “Journal states that significance or novelty are not a criterion for publication decisions,” 2 indicates “Journal will review studies blinded to results,” and 3 indicates “Journal accepts Registered Reports for novel studies as a regular submission option.” Journals that did not have a TOP score were treated as missing on these variables.

Peer Review

Coders recorded the journal's policy for masking during peer review and open peer review (where reviews are published with the article). Typically, this information was not specific to registered reports, but rather a broader journal policy. The coders recorded the most direct language reflecting the journal's reviewing policies, and after data collection the policies were grouped into comparable categories (though the language from the journals varied widely; see Table S1 for a complete list of original language and assigned categories).

Calculated Variables

Some variables were created based on the coding results. This included whether a journal was missing any information and what proportion of information was listed as missing by the coders. Coders listed information as missing when they were unable to find a policy or information related to the variable. These variables were calculated for the overall information, and specifically for the subset of the policy variables, as these variables seemed most relevant for an author interested in submitting a registered report.
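
As an illustration, a minimal sketch (with hypothetical file and column names) of how these per-journal missingness variables could be derived from the combined coding sheet is shown below; the policy subset listed in the code is illustrative, not the full set of 12 policy variables.

```python
# Minimal sketch (hypothetical column names): per-journal missingness across the
# 18 coded variables and the policy-variable subset.
import pandas as pd

coded = pd.read_csv("combined_coding.csv")
all_vars = [c for c in coded.columns if c != "journal"]       # the 18 coded variables
policy_vars = ["exploratory_analysis", "multiple_studies",    # illustrative subset of the
               "power", "secondary_data", "word_limits",      # 12 policy variables
               "withdraw"]

coded["n_missing"] = coded[all_vars].isna().sum(axis=1)
coded["prop_missing"] = coded["n_missing"] / len(all_vars)
coded["n_missing_policy"] = coded[policy_vars].isna().sum(axis=1)

print((coded["n_missing"] > 0).mean())  # share of journals missing at least one field
print(coded["n_missing"].describe())    # mean and median missing fields per journal
```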

Analysis

No analyses were preregistered for this study, and as such all analyses are considered exploratory. Importantly, because the dataset acts as a census (a complete population) we only report descriptive statistics (means, standard deviations, and proportions) and visualizations. We examine and report the proportions of journals in each category under each variable (e.g., What proportion of journals allow exploratory analyses?), and proportions of missing data. Cross tabulations were analyzed in an exploratory manner, guided by initial results.

Adoption

From our study, 278 journals have adopted registered reports.3 Of these journals, 137 come from Psychology, 51 from Medicine, 17 from Biology, 15 from Political Science, and the remaining 58 are spread across numerous fields. Breaking down the psychology journals by the first specified research area, 34 come from Biological fields, 29 from Cognitive fields, 25 from Social & Personality, 18 from Developmental fields, 14 from Mental & Physical Health, four from Research Methods and Statistics, seven from General Psychology, and the remaining six from other fields. Journals may adopt registered reports in multiple formats: special issue or general submission. We found that 36 journals (12.95%) accepted registered reports for a special issue.

We estimate that the median lag between adopting registered reports at the journal level and publishing the journal’s first registered report is approximately one year. However, there is quite a bit of variance (ranging from 0 to 5 years), and among journals that have not yet published a registered report it has been a median of three years since adopting the registered reports format (see Figure 1). With a large portion of journals adopting in the last two years, most journals have not yet published a registered report (71.58%).

Figure 1. Journal Lag to First Publication and Years Since Adoption

Missing Information

Based on the variables that we selected to code, most journals (89.93%) were missing at least one piece of coded information. Across 18 variables, the average number of missing fields was 6.42 (SD = 4.63), with a median of 6. Table 1 shows the rates of missing information across all variables. Twelve of these 18 variables, indicated in Table 1, were identified as relating to policy information. These policy variables had lower rates of missing information, but 82.37% of journals were still missing at least one of the 12. An average of 5.15 policy variables were missing (SD = 3.96), with a median of 5. The variables with the highest missing data rates were the handling of multiple studies (59.71%), power requirements (59.71%), and secondary data (53.24%).

Table 1. Missing Data Rates for Coded Variables
Variable Number Missing (%) 
Author Masked Review 35 (12.59) 
Changes to Introduction* 140 (50.36) 
Deadline* 87 (31.29) 
Exploratory Analysis* 89 (32.01) 
External Preregistration 127 (45.68) 
Multiple Studies* 166 (59.71) 
Novel Studies* 109 (39.21) 
Open Data 25 (8.99) 
Open Materials 79 (28.42) 
Power* 166 (59.71) 
Preliminary Data* 96 (34.53) 
Replication* 114 (41.01) 
Replication from Journal* 115 (41.37) 
Secondary Data* 148 (53.24) 
Special Issue 1 (0.36) 
Withdraw* 138 (49.64) 
Word Limits* 64 (23.02) 
Year RR Implemented 86 (30.94) 

Note. *indicates Policy Variable

Policies

Journals can implement policies to handle some of the common concerns about registered reports. We coded whether policies on these and other issues are made clear in the submission guidelines. Table 2 includes a summary of these variables.

No journals explicitly prohibit exploratory analyses or multiple studies. Very few journals do not allow secondary analysis (1.44%) or preliminary studies (0.72%). However, even when journals allow these practices, they may restrict them in certain ways. For example, of the 189 journals that allow some form of exploratory analyses, 170 specify that the analyses must be reported in a separate subsection to differentiate them from confirmatory analyses. Similarly, of the 126 journals that allow secondary data analysis, 94 require proof of no prior access to the data. Policies on changes to the introduction between Stage 1 and Stage 2 review are much more restrictive. The most common policies only allow minor stylistic changes (58) or “small updates” (5). Other policies for changes to the introduction included contacting the journal in advance of Stage 2 submission (1), including a section addressing the differences between the stages (1), or specifying that hypotheses cannot be changed (1). Overall, most policies allow practices like exploratory analyses, secondary data, preliminary studies, and changes to the introduction, but place restrictions on how these practices are implemented.

Power requirements were adopted by 110 (39.57%) of the journals. The power requirements that were implemented varied on two factors: (1) whether they addressed frequentist and Bayesian statistical methods, and (2) the level of power required. Of the 110 journals with power requirements, three did not mention frequentist power requirements, and only 70 included requirements for Bayes factors. For frequentist power levels, 12 journals require a power of 0.8 or higher, 93 journals require a power of 0.9 or higher, and two journals require power of at least 0.95. Seven journals require a Bayes Factor of 3 or greater, one journal requires a Bayes Factor of 4 or greater, 56 journals require a Bayes Factor of 6 or greater, three journals require a Bayes Factor of 10 or greater, and three journals mentioned Bayes factors but ask the authors to specify a required level in the Stage 1 submission.

Impact

Journal impact was recorded using two-year impact factor, five-year impact factor, and h-index. There were 97 journals (34.89%) that did not have a two-year impact factor. The average two-year impact factor was 2.74 (SD = 1.85) with a median of 2.32. The majority of journals did not have a five-year impact factor (218, 78.42%). Of the journals with five-year impact factors, the average was 3.95 (SD = 2.29) with a median of 3.36. Ninety-nine journals did not have h-indexes (35.61%). Of the journals that have an h-index, the average was 76.87 (SD = 60.99) with a median of 65. Figure 2 provides histograms of the two-year and five-year impact factors and h-index broken down by all journals and psychology journals.

Open Science Practices

Table 3 describes the frequency with which the different open science practices were required, encouraged, or missing at the journals. Most journals either required or encouraged some open science practices. Open data (91.01%) and materials (71.22%) were the most commonly required or encouraged practices. External preregistration of registered reports was also encouraged or required by the majority of journals, but at a lower rate (53.24%). These results show that registered reports are commonly paired with other open science practices.

Table 2. Frequency of Journal Policies
Variable Yes N(%) No N(%) Missing N(%) 
Changes to Introduction 69 (24.82) 69 (24.82) 140 (50.36) 
Exploratory Analysis Allowed 189 (67.99) 0 (0.00) 89 (32.01) 
Multiple Studies 112 (40.29) 0 (0.00) 166 (59.71) 
Preliminary Studies 180 (64.75) 2 (0.72) 96 (34.53) 
Power 110 (39.57) 2 (0.72) 166 (59.71) 
Secondary Data Allowed 126 (45.32) 4 (1.44) 148 (53.24) 

Note. “Yes” indicates that the practice is allowed and a “No” indicates that the practice is not allowed. “Missing” indicates that no clear policy was found. A “Yes” for Power indicates that there are numerical requirements for power analysis, “No” indicates the journal explicitly states there are no numerical requirements for power analysis, and “Missing” indicates that numerical requirements for power analysis were not mentioned.

Figure 2. Two-year and five-year impact factors and h-index distributions for all journals (left column) and psychology journals (right column).
Table 3. Open Science Practices
Variable Yes N(%) No N(%) Encouraged N(%) Missing N(%) 
Access to Data 165 (59.35) 0 (0.00) 88 (31.6) 25 (8.99) 
Access to Materials 128 (46.04) 1 (0.36) 70 (25.18) 79 (28.42) 
External Preregistration 134 (48.20) 0 (0.00) 17 (6.12) 127 (45.68) 
Novel Studies 158 (56.83) 11 (3.96) 0 (0.00) 109 (39.21) 
Replications Allowed 155 (55.76) 9 (3.24) 0 (0.00) 114 (41.01) 
Replications from Journal Only 3 (1.08) 158 (56.83) 2 (0.72) 115 (41.37) 

Note. “Yes” indicates that the practice is required and a “No” indicates that the practice is not required or encouraged. “Encouraged” indicates the practice is not required, but encouraged or recommended. “Missing” indicates that no clear policy was found. For the variable Replication from Journal Only, “Yes” indicates the study being replicated must be published in the same journal, and “No” indicates the study being replicated does not need to be from the same journal.

Previous research has demonstrated that replication studies are relatively rare in psychology, but registered reports have been suggested as a way to incentivize them. The majority of journals with information about both types of studies allow both replications and novel studies (138, 49.64%). More than half of journals allow replication studies (55.76%) and a similar number allow novel studies (56.83%). Only nine (3.24%) journals allow novel studies but do not allow replications, and 10 (3.60%) journals allow replications but not novel studies. These results show how registered reports are a useful mechanism to encourage and publish replication studies. Additionally, registered reports are not only being used for replications, as many journals allow novel studies to be published as registered reports.

It is more difficult to interpret the absence of information about which types of studies (replications or novel studies) a journal accepts. For example, many journals mention neither novel studies nor replications (106, 38.13%). This could imply that they accept both types of articles, or perhaps that the editors have not considered which types of studies they accept. Information on replications is missing at eight journals where novel studies are explicitly allowed, whereas information on novel studies is missing at three journals where replications are explicitly allowed. It seems reasonable to assume that novel studies are allowed unless otherwise mentioned; however, the same logic may not apply to replications. Overall, journals could be more explicit about whether or not they allow replications and/or novel studies.

Of the 278 journals that currently accept registered reports, 96 (34.53%) of them have signed on as signatories for TOP, and of that only 55 have received a TOP Factor score. Of journals that have received a score the average overall TOP score is 10.45 (SD = 6.39; observed range: 0-27), with the maximum possible score being 30. The average Registered Report & Publication Bias score is 2.20 (SD = 1.32; range: 0-3), with a potential score between 0 and 3.

Peer Review Policies

Labels regarding peer review policies varied considerably from journal to journal. We were unable to find explicit masking policies for 45 journals (16.19%). The most common policy was for journals to anonymize both reviewers and authors (102 journals; 36.69%), and 12 of these journals (all published by the Journal of Medical Internet Research) reveal reviewers at publication. Seventy-five journals (26.98%) opt to keep reviewers anonymous but not authors in a single-masked review, and only five journals (1.80%) have no masking, where the identities of both the reviewers and the authors are known. Optional masking for either authors or authors and reviewers is available at 35 journals (12.59%). Other policies included triple-masked review of editor, authors, and reviewers (Auditory Perception and Cognition) and partial triple-masked review (Comparative Political Studies). Finally, 14 journals (5.04%) had ambiguous masking policies, detailed in Table S1.

Another consideration for the peer review process is whether or not the journal offers open reviews. In an open review, the reviewer comments are available with the publication. We found that 11 journals (3.96%) participate in open review. One of these journals (Journal of Research in Reading) offers the option of open review. The remaining 267 journals (96.04%) did not mention open review policies.

To revisit our research questions posed at the beginning of the study, we consider the major findings for adoption, policies, impact, open science, and review masking. In line with the results of Hardwicke and Ioannidis (2018), our study finds that journals adopting registered reports are primarily in Psychology and Medicine, though other fields are adopting as well. Publishing the journal’s first registered report typically takes about one year, though many journals have not yet published their first registered report. Much of the information that could help clarify common concerns about registered reports is missing from journal submission guidelines, but the journals that do provide information on these policies show how these concerns can be accommodated within the bounds of a registered report. Adopting journals vary greatly in impact, with a typical two-year impact factor of about 2.74 and a typical five-year impact factor of 3.95. Each researcher likely holds their own internal representation of what they believe to be “high impact,” and thus Figure 2 may convince some that high impact journals are publishing registered reports while remaining unconvincing to others. These results may also be useful for examining how this distribution changes over time as more journals adopt registered reports. Journals frequently pair other open science practices with registered reports, particularly open data and materials, but also preregistration and replication studies. Finally, during peer review, the most common policy is for reviewers and authors to be anonymous, but a non-trivial proportion of journals allow the authors’ identity to be known, which could result in bias against certain groups during the review process.

General Policies and Open Science

Overall, journals publishing registered reports have accommodated many common practices used in non-registered reports including exploratory data analysis, secondary data analysis, preliminary studies, and multiple studies. Most journals allow multiple studies either through incremental registration, or submitting initial studies as “pilot” data and only the final study following the registered reports format. Additionally, not a single journal explicitly restricts researchers to only their initial analysis plan, and many journals allow secondary data analysis. If researchers expect that these practices are not allowed in registered reports, then informing researchers about the policies of journals and improving the clarity of submission guidelines may encourage them to consider registered reports. However, more empirical research is needed to evaluate the effect of submission policies on submission rates more directly.

Our census found that the majority of journals that have adopted registered reports are also implementing open science practices as part of registered reports. In particular, open data and open materials are frequently required or encouraged as part of a registered report. The low rate of requiring external preregistration aligns with previous research that found that very few journals require a permanent record of Stage 1 submissions for registered reports (Hardwicke & Ioannidis, 2018). This record is a very important element of registered reports if Stage 1 submissions are not published by the journal. With a public preregistration, readers can compare the original submission to the final manuscript (much like preregistration), ensuring transparency. However, it may be that journals request authors to preregister in the Stage 1 in-principle acceptance letter, rather than include this information in the submission guidelines. Notably, we did not compare rates of open science practice adoption with journals that have not adopted registered reports or with submission guidelines for non-registered reports. Therefore, the current data does not indicate whether registered reports tend to be paired with open science practices more so than non-registered reports or journals not publishing registered reports.

A Call for Clearer Journal Policies

Researchers may be unfamiliar with the expectations of registered reports, especially when submission guidelines are unclear. The data from this study suggests that journals could include more detail in their policies, especially regarding some of the concerns that researchers may have (e.g., exploratory analyses, multiple study papers, pilot data). Templates are available for new journals interested in adopting registered reports, which largely include the information coded in the present study (Chambers, 2018). However, journals should not feel constrained to the template policies, and some information that would be useful to authors is not contained in these templates. During data collection, we observed that some journals used various templates without providing journal specific information. For example, many journals’ policies included the phrase from the Open Science Framework template suggesting that authors “may be asked to upload their raw data, digital study materials, and laboratory log to a publicly accessible file-sharing service, if required by the journal.” Without clarifying whether the journal requires open data explicitly, this phrase in the template policy is ambiguous. Journals adopting registered reports should clarify their specific policies.

We recommend that journals that have already adopted registered reports expand their policies to provide at least all of the information listed under the policy category (as indicated in Table 1). Additionally, journals adopting registered reports for the first time would benefit from reviewing these categories and ensuring this information is included in their submission guidelines. Even a policy of having no policy can ease the concerns of potential authors of registered reports. For example, journals could state explicitly that they do not require a specific level of statistical power, but rather that power analyses should be tailored to the specific population of interest and research question, with their sufficiency evaluated as part of the peer review process. Doing so could alleviate concerns of new registered report authors, who may otherwise continue looking for this information or worry that there are unwritten rules of registered reports. Table S2 provides examples of language for both sides of each policy (when available), which could be useful for journals looking to adjust and clarify their policies or adopt registered reports for the first time. In similar work on non-registered reports, Klebel et al. (2020) demonstrated a lack of information in journal policies regarding preprints and peer review, and emphasized the detrimental impact this could have on early career researchers. Without informative guidelines, researchers must rely on alternative approaches to getting information (e.g., contacting editors). By leaving submission guidelines ambiguous, journals may be systematically excluding early career researchers who may be newer to the submission process and more tentative about contacting editors.

Policies with Potential Unintended Consequences

Power requirements are implemented for many registered reports, and though 80% seems to be a typical threshold for power in non-registered reports (Bakker et al., 2019), 90% power was the most frequently required frequentist power level for registered reports. This suggests that registered reports are largely being held to a higher standard than non-registered reports. Power requirements are important for registered reports, which should be informative whether the results are positive or null. However, these policies may have an unintended consequence of systematically excluding areas of research that rely on samples that are very difficult or expensive to collect (e.g., brain imaging, developmental psychology, research on minority populations). Journals that aim to appeal to researchers in these areas might consider loosening the requirements for statistical power, facilitating alternative ways to help researchers reach higher power requirements (e.g., collaborating through the Psychological Science Accelerator; Moshontz et al., 2018), or both.

For most journals, authors are anonymous throughout the review process, allowing for a more objective review of the proposed study. Previous research on non-registered reports suggests that when authors are revealed, women, minorities, and individuals from less prestigious universities are disadvantaged in the review process (Knobloch-Westerwick et al., 2013; Tomkins et al., 2017). Concerns have been raised that these types of biases may be more pronounced for registered reports, as the decision to accept the article may be based on less information about the study and more on an evaluation of the research team's ability to conduct the research (Scott, 2013). Some journals that accept registered reports have explicitly enacted masked review for registered reports to avoid potential biases during the Stage 1 review phase (e.g., AIMS Neuroscience; Chambers et al., 2014). Even if these biases do not occur, authors' belief that they exist could affect the decision to submit a registered report. Additional empirical work is required to evaluate how an author-revealed review may impact the willingness to submit registered reports and potential biases in the review process.

Constraints on Generality

We collected a complete population of journals publishing registered reports through January 13th, 2021. However, as more journals adopt registered reports, the general characteristics of this population may change. In addition, we collected information from submission guidelines. We expect that other methods of collecting similar information (e.g., from editors directly) may result in less missing information.

Future Directions

We found a number of journals that adopted registered reports for a special issue and later adopted them for general submission. This could be a common mechanism for journals to take registered reports for a “test-drive” using a special issue, and then decide whether or not to adopt them based on the experience. Some journals that have tested registered reports in this format then go on to adopt them for general submissions. For example, AERA Open published a special topics section on registered reports in May 2020, and approved registered reports for general submissions in June 2020. Other journals have tried registered reports as part of a special issue and decided against implementing them for general submissions (e.g., Comparative Political Studies; Ansell & Samuels, 2016). Future research could examine what characteristics of the special issue process (e.g., submission rates, citation rates, review time) might predict adoption for general submission.

Our results suggest that many journals that offer registered reports also require or encourage other open science practices for registered reports. However, in this study, coders did not differentiate between journal-wide and registered report specific policies on open science. This makes it difficult to distinguish journals that adopted registered reports and then open science practices more broadly from those that adopted open science practices first and then registered reports. Future work should examine whether there are differences between journals that adopt open science practices in these two ways, and whether author submission rates or other characteristics of submissions differ between these two patterns of adoption. Similar questions could be asked at the author level: for example, what are the experiences of authors who adopt other open science practices and then try registered reports, compared to authors who start with registered reports and then learn about other open science practices?

While the current research demonstrates that there is variability in the degree to which journals adopting registered reports have accommodated additional concerns that arise with this new submission method, future research may explore whether this variability translates to differences in submission rates. In particular, one of the largest barriers that journals have expressed thus far is low submission rates (DeHaven et al., 2019). For instance, the Journal of Psychiatric and Mental Health Nursing, a journal coded in this study, recently abandoned registered reports as a submission option, citing low submission rates as the reason (Elliott, 2021; Gray et al., 2020). When coded, this journal was missing 11 fields overall and 10 policy variables, well above the medians (6 and 5, respectively). If this pattern were explored more systematically by examining policy clarity and submission rates, it could provide initial evidence that more complete submission guidelines help increase submission rates. Some editors may be reluctant to expand submission guidelines for an article format that is not currently very popular, but this can result in a conundrum not unlike that of the chicken and the egg. Future research should document whether submission rates at journals improve after they have clarified their submission guidelines, and whether the “build it and they will come” philosophy applies in this circumstance.

Registered reports are being adopted increasingly, both in psychology and in other academic fields, but submission rates still appear relatively low. This census includes all journals that adopted registered reports between 2013 and 2020 and documents journal policies, impact, open science policies, and peer review masking using online submission guidelines and other online resources. The results demonstrate that journals do not always provide information about policies that have been raised as concerns about registered reports (e.g., multiple studies, exploratory analysis), but when these policies are included they tend to allow such practices with certain restrictions. For example, no journal explicitly banned exploratory analyses; instead, there are often restrictions on how they are reported (e.g., a separate subsection for exploratory analyses). Additionally, journals publishing registered reports frequently require or encourage other open science practices as part of the registered report, in particular open data and materials. Many journals allow replication studies as registered reports, and this may be a particularly fruitful way to incentivize replication research. Though the most common peer review masking policy is for both authors and reviewers to remain anonymous, it was not uncommon for authors’ identities to be revealed during peer review, which could introduce biases into the review process. We call on journals that currently offer registered reports, and on those newly adopting them, to provide submission guidelines that explicitly address common concerns about the format. These changes could improve submission rates, though this remains to be supported by empirical research. Additional metascientific research is needed to understand the interplay between registered reports, other open science practices, and author adoption of these practices.

Contributed to conception and design: AKM, WLDK. Contributed to acquisition of data: AKM, WLDK. Contributed to analysis and interpretation of data: AKM, WLDK, JLF. Drafted and/or revised the article: AKM, WLDK, JLF. Approved the submitted version for publication: AKM, WLDK, JLF.

Special thanks to Zach Loran and Charlotte Huang for coding the journals. Thanks to Aoife O’Mahony for providing carefully curated lists of registered reports. Thanks to the Society for the Improvement of Psychological Science for hosting a talk on this topic at the Society for Personality and Social Psychology in 2019.

Funding for this research was provided by the National Science Foundation through the Ethical and Responsible Research program under award number 2024377. AKM and WLDK are Co-PIs; JLF receives partial support through this grant.

The authors declare that there were no competing interests with respect to the authorship or publication of this article.

All data and analysis scripts are available on the paper’s project page on the Open Science Framework: https://osf.io/4yvu9/

1. During periods of the data collection (Waves 2 and 3), this website went offline, and we were unable to recover 5-year impact factors for journals that did not include this information on their website.

2. At the onset, the coders were instructed to only include descriptions that “required” open science practices; however, after initial training it became clear that many journals heavily encouraged open science practices but did not require them. These journals seemed fundamentally different from those that did not mention open science practices at all, so we created a third group where practices are encouraged. Some journals provided mixed information about whether certain practices were required or encouraged, so the coders relied on the most stringent policy mentioned. For example, if guidelines noted that open data was required in one place and encouraged in another, open data was coded as required.

3. Two journals, Psycho-Oncology and Politics in the Life Sciences, were coded twice (once as a special issue and once for general submissions). The general submissions entry was used.

Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246
Ansell, B., & Samuels, D. (2016). Journal editors and “results-free” research: A cautionary note. Comparative Political Studies, 49(13), 1809–1815. https://doi.org/10.1177/0010414016669369
Bakker, M., Veldkamp, C. L. S., Van den Akker, O., van Assen, M. A. L. M., Crompvoets, E. A. V., Ong, H. H., & Wicherts, J. M. (2019). Recommendations in pre-registrations and internal review board proposals promote formal power analyses but do not increase sample size. https://doi.org/10.31234/osf.io/b3uwd
Bloomfield, R., Rennekamp, K., & Steenhoven, B. (2018). No system is perfect: Understanding how registration‐based editorial processes affect reproducibility and investment in research quality. Journal of Accounting Research, 56(2), 313–362. https://doi.org/10.1111/1475-679x.12208
Center for Open Science. (2018, October 24). Registered Reports: Peer review before results are known to align scientific values and practices. Participating Journals. https://cos.io/rr/
Chambers, C. D. (2013). Registered reports: A new publishing initiative at Cortex. Cortex, 49, 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
Chambers, C. D. (2018). Template Reviewer and Author Guidelines. https://osf.io/pukzy/
Chambers, C. D. (2019, September 10). What’s next for registered reports? Nature. https://www.nature.com/articles/d41586-019-02674-6
Chambers, C. D., Feredoes, E., Muthukumaraswamy, S. D., & Etchells, P. J. (2014). Instead of “playing the game” it is time to change the rules: Registered reports at AIMS Neuroscience and beyond. AIMS Neuroscience, 1(1), 4–7. https://doi.org/10.3934/neuroscience.2014.1.4
Chambers, C. D., & Mellor, D. T. (2018). Protocol transparency is vital for registered reports. Nature Human Behaviour, 2, 791–792. https://doi.org/10.1038/s41562-018-0449-6
Chambers, C. D., & Tzavella, L. (2020). Registered reports: Past, present and future. https://doi.org/10.31222/osf.io/43298
Cimpian, J. R., & Timmer, J. D. (2020). Reflections on the registered report process for “Large-scale estimates of LGBQ-heterosexual disparities in the presence of potentially mischievous responders.” AERA Open, 6(2), 1–2. https://doi.org/10.1177/2332858420918535
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37–46. https://doi.org/10.1177/001316446002000104
Cox, A. R., & Montgomerie, R. (2019). The cases for and against double-blind reviews. PeerJ, 7, e6702. https://doi.org/10.7717/peerj.6702
DeHaven, A. C., Graf, C., Mellor, D. T., Morris, E., Moylan, E., Pedder, S., & Tan, S. (2019). Registered reports: Views from editors, reviewers and authors. https://doi.org/10.31222/osf.io/ndvek
Elliott, L. (2021). Reply to: Why has the Journal of Psychiatric and Mental Health Nursing stopped publishing registered reports? Journal of Psychiatric and Mental Health Nursing. https://doi.org/10.1111/jpm.12734
Gray, R., Thompson, D. R., Tong Chien, W., Jones, M., Jones, A., Moyo, N., Waters, A., & Brown, E. (2020). Why has the Journal of Psychiatric and Mental Health Nursing stopped publishing registered reports? Journal of Psychiatric and Mental Health Nursing. https://doi.org/10.1111/jpm.12721
Gurung, R. A. R., Hackathorn, J., Enns, C., Frantz, S., Cacioppo, J. T., Loop, T., & Freeman, J. E. (2016). Strengthening introductory psychology: A new model for teaching the introductory course. American Psychologist, 71(2), 112–124. https://doi.org/10.1037/a0040012
Hardwicke, T. E., & Ioannidis, J. P. A. (2018). Mapping the universe of registered reports. Nature Human Behaviour, 2, 793–796. https://doi.org/10.1038/s41562-018-0444-y
Hardwicke, T. E., Mathur, M. B., MacDonald, K., Nilsonne, G., Banks, G. C., Kidwell, M. C., Mohr, A. H., Clayton, E., Yoon, E. J., Tessler, M. H., Lenne, R. L., Altman, S., Long, B., & Frank, M. C. (2018). Data availability, reusability, and analytic reproducibility: Evaluating the impact of a mandatory open data policy at the journal Cognition. Royal Society Open Science, 5(8). https://doi.org/10.1098/rsos.180448
Hardwicke, T. E., Thibault, R. T., Kosie, J. E., Wallach, J. D., Kidwell, M. C., & Ioannidis, J. P. A. (2021). Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014–2017). Perspectives in Psychological Science. https://doi.org/10.1177/1745691620979806
Hummer, L. T., Singleton Thorn, F., Nosek, B. A., & Errington, T. M. (2017). Evaluating registered reports: A naturalistic comparative study of article impact. https://doi.org/10.31219/osf.io/5y8w7
Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., Kennett, C., Slowik, A., Sonnleitner, C., Hess-Holden, C., Errington, T. M., Fiedler, S., & Nosek, B. A. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14(5), e1002456. https://doi.org/10.1371/journal.pbio.1002456
Klebel, T., Reichmann, S., Polka, J., McDowell, G., Penfold, N., Hidle, S., & Ross-Hellauer, T. (2020). Peer review and preprint policies are unclear at most journals. https://doi.org/10.1101/2020.01.24.918995
Klein, O., Hardwicke, T. E., Aust, F., Breuer, J., Danielsson, H., Mohr, A. H., IJzerman, H., Nilsonne, G., Vanpaemel, W., & Frank, M. C. (2018). A practical guide for transparency in psychological science. Collabra: Psychology, 4(1). https://doi.org/10.1525/collabra.158
Knobloch-Westerwick, S., Glynn, C. J., & Huge, M. (2013). The Matilda effect in science communication: An experiment on gender bias in publication quality perceptions and collaboration interest. Science Communication, 35(5), 603–625. https://doi.org/10.1177/1075547012472684
Lu, X., Ostrow, K. S., & Heffernan, N. T. (2020). Reflections on the registered report process for “Save your strokes: Handwriting ineffective in Chinese second language instruction.” AERA Open, 6(2), 1. https://doi.org/10.1177/2332858420920831
Lumen Learning. (n.d.). The Five Psychological Domains. https://courses.lumenlearning.com/waymaker-psychology/chapter/psychological-perspectives/
Makel, M. C., Plucker, J. A., & Hegarty, B. (2012). Replications in psychology research: How often do they really occur? Perspectives on Psychological Science, 7(6), 537–542. https://doi.org/10.1177/1745691612460688
Manchikanti, L., Kaye, A. D., Boswell, M. V., & Hirsch, J. A. (2015). Medical journal peer review: Process and bias. Pain Physician, 18(1), E1–E14.
Mehlenbacher, A. R. (2019). Registered reports: Genre evolution and the research article. Written Communication, 36(1), 38–67. https://doi.org/10.1177/0741088318804534
Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607
Nosek, B. (2017). Are reproducibility and open science starting to matter in tenure and promotion review? https://cos.io/blog/are-reproducibility-and-open-science-starting-matter-tenure-and-promotion-review/
Nosek, B., & Lakens, D. (2014). Registered reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141. https://doi.org/10.1027/1864-9335/a000192
Nuijten, M. B., Borghuis, J., Veldkamp, C. L. S., Dominguez-Alvarez, L., van Assen, M. A. L. M., & Wicherts, J. M. (2017). Journal data sharing policies and statistical reporting inconsistencies in psychology. Collabra: Psychology, 3(1), 31. https://doi.org/10.1525/collabra.102
Obels, P., Lakens, D., Coles, N. A., Gottfried, J., & Green, S. A. (2020). Analysis of open data and computational reproducibility in registered reports in psychology. Advances in Methods and Practices in Psychological Science, 3(2), 229–237. https://doi.org/10.1177/2515245920918872
Parker, T., Fraser, H., & Nakagawa, S. (2019). Making conservation science more reliable with preregistration and registered reports. Conservation Biology, 33(4), 747–750. https://doi.org/10.1111/cobi.13342
Reich, J., Gehlbach, H., & Albers, C. J. (2020). “Like upgrading from a typewriter to a computer”: Registered reports in education research. AERA Open, 6(2), 1–6. https://doi.org/10.1177/2332858420917640
Scheel, A. M., Schijen, M., & Lakens, D. (2021). An excess of positive results: Comparing the standard psychology literature with Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2), 1–12. https://doi.org/10.31234/osf.io/p6e9c
Scott, S. K. (2013). Preregistration would put science in chains. Times Higher Education. https://www.timeshighereducation.com/comment/opinion/pre-registration-would-put-science-in-chains/2005954.article
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J. G., Singleton Thorn, F., Vazire, S., Esterline, K. M., & Nosek, B. A. (2020). Initial evidence of research quality of registered reports compared to the traditional publishing model. https://doi.org/10.31222/osf.io/7x9vy
Stodden, V., Seiler, J., & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115, 2584–2589. https://doi.org/10.1073/pnas.1708290115
Syed, M., & Donnellan, M. B. (2020). Registered reports with developmental and secondary data: Some brief observations and introduction to the special issue. Emerging Adulthood, 8(4), 255–258. https://doi.org/10.1177/2167696820938529
Tomkins, A., Zhang, M., & Heavlin, W. D. (2017). Reviewer bias in single- versus double-blind peer review. Proceedings of the National Academy of Sciences, 114(48), 12708–12713. https://doi.org/10.1073/pnas.1707323114
Vines, T. H., Albert, A. Y., Andrew, R. L., Débarre, F., Bock, D. G., Franklin, M. T., Gilbert, K. J., Moore, J.-S., Renaut, S., & Rennison, D. J. (2014). The availability of research data declines rapidly with article age. Current Biology, 24(1), 94–97. https://doi.org/10.1016/j.cub.2013.11.014
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. A. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7(6), 632–638. https://doi.org/10.1177/1745691612463078
Webb, T. J., O’Hara, B., & Freckleton, R. P. (2008). Does double-blind review benefit female authors? Trends in Ecology & Evolution, 23(7), 351–353. https://doi.org/10.1016/j.tree.2008.04.001
Wren, J. D., Valencia, A., & Kelso, J. (2019). Reviewer-coerced citation: Case report, update on journal policy and suggestions for future prevention. Bioinformatics, 35(18), 3217–3218. https://doi.org/10.1093/bioinformatics/btz071
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material