Has Replicability Improved?
Special Issue for Collabra: Psychology
Edited by Collabra: Psychology editors
Editor in chief: Simine Vazire
Senior editor for social psychology: Yoel Inbar
Senior editor for personality psychology: M. Brent Donnellan
Psychologists have identified challenges with research practices that might interfere with the credibility and replicability of findings, such as low-powered designs, questionable research practices, p-hacking, and selective reporting. These concerns are reinforced by accumulating evidence from systematic replication efforts that reported weaker evidence for findings than the original studies. During the last 10 years, some researchers and journals have adopted reforms believed to increase the credibility and replicability of findings, such as increasing sample sizes, directly investigating heterogeneity with multi-site studies, improving experimental design, preregistering studies, altering the selection of research questions, conducting replications, and reporting null outcomes. Have they worked? There are active metascience inquiries into the effectiveness and limitations of such reforms. Complementing those controlled and focused investigations, this special issue will accumulate evidence about a basic question: Has replicability improved?
It is not possible to conduct a randomized trial to assess whether replicability in social psychology has changed over time. Nevertheless, relevant evidence can be gathered about whether the changes in research culture and practice that occurred between 2010 and 2020 are associated with higher replicability. For this special issue, research teams are invited to submit Registered Report proposals for replication studies with the following characteristics:
● Proposals should replicate multiple findings from social-personality psychology. Proposals should justify the importance of the findings examined and demonstrate very high rigor and quality of research design.
● Proposals should replicate findings in at least two groups: findings published from 2009 to 2012 (Time 1) and findings published from 2019 to 2022 (Time 2).
● Proposals should identify a population of studies from which a sample of studies to replicate is randomly selected. That population could be defined in any way (topical area, methodology, journal, etc.), but the proposal should state explicitly how the choice of population affects the interpretation of potential findings and their generalizability. Selection rules should maximize the comparability of studies between Time 1 and Time 2 as much as possible. Where relevant, the proposal should include a flow diagram describing how eligible studies are selected from the population.
● Proposals should include research designs that are highly powered, so that they can estimate effect sizes precisely and detect effects smaller than the original findings. Proposals should report power to detect primary outcomes of different effect sizes, provide the context of the original effect sizes, and address the precision required to draw conclusions about evidence of absence.
● Proposals should include at least two findings to replicate from each of the two time periods. Proposals that define a compelling topic and sampling plan and that conduct more rigorous replications per time period will be highly competitive.
● When relevant, proposals should include multi-site data collections (Many Labs style) to incorporate estimation of heterogeneity, or otherwise have very large, diverse samples. Inclusive team science proposals are encouraged.
● Proposals may include additional comparison groups, with the recognition that, with studies as the unit of analysis, it is unlikely that any individual proposal will be able to make meaningful inferences about comparisons by publication date or any other factor. Nevertheless, each project will contribute to an accumulation of evidence that may reveal insights when aggregated.
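As one illustration of the power reporting described above, the per-group sample size needed to detect a given standardized effect in a simple two-group comparison can be approximated with a normal-approximation calculation. This is a minimal sketch, not a prescribed method; the function name and defaults are ours, and teams would of course use power software appropriate to their actual designs:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    """Approximate per-group n for a two-sided, two-sample comparison of
    means with standardized effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Powering a replication to detect an effect half as large as a typical
# original (d = 0.2) at 90% power requires roughly 526 participants per group.
print(n_per_group(0.2))  # -> 526
```

The example makes concrete why "detect effects smaller than the original findings" quickly implies samples far larger than those in many original studies.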
Proposals that deviate from one or more of these criteria may still be included in the special issue, but the Editors strongly recommend that authors planning any deviation reach out before preparing a full proposal to determine whether the differences would exclude the project from consideration.
Here are a few examples that could be a good fit for this special issue. The examples emphasize the inclusion of additional comparison groups, not because such comparisons are required, but to illustrate the diversity of possibilities:
● Team A proposes to identify all known papers that used data from the public Project Implicit datasets and to randomly select 6 findings to replicate from 2009 to 2012 (Time 1) publications and 12 findings to replicate from 2019 to 2022 (Time 2). Replication studies will be conducted using Project Implicit data from years not included in the original studies, provided the original finding is expected to hold across time periods. Half of the Time 2 findings will be from preregistered studies and half will not have been preregistered.
● Team B proposes to identify all known papers in a psychology journal that reported using Mechanical Turk or another popular Internet source for sampling. They randomly select 8 studies from each time period for replication, with the constraint that, for each time period, the primary outcome has a p-value between .005 and .05 for 4 studies and a p-value below .005 for the other 4.
● Team C identifies a specific line of research that is important but comprises a relatively small number of studies with research designs that are difficult to conduct. They select the two most-cited studies that are feasible to replicate from Time 1 and Time 2 and propose a multi-site research design to evaluate the replicability and heterogeneity of the findings across samples and settings.
● Team D randomly samples studies from each time period that include a substantive keyword of interest and a particular methodology. They randomly select 6 findings to replicate from each time period, three with a sample size of <100 and three with a sample size of >200.
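Seeded, stratified random draws like those in the Team B and Team D examples can be made fully reproducible and auditable with a few lines of code. The sketch below is illustrative only; the field names and strata are hypothetical, and actual selection rules would follow each team's preregistered protocol:

```python
import random

def sample_studies(pool, strata, n_per_stratum, seed=2022):
    """Reproducibly select n_per_stratum studies from each stratum of a
    candidate pool using a fixed random seed.

    pool: list of dicts, each with a 'stratum' key (hypothetical structure)
    strata: the stratum labels to sample from
    """
    rng = random.Random(seed)  # fixed seed makes the draw auditable later
    selected = []
    for s in strata:
        candidates = [study for study in pool if study["stratum"] == s]
        selected.extend(rng.sample(candidates, n_per_stratum))
    return selected

# Illustrative pool with p-value bands as strata, as in the Team B example.
pool = [{"id": i, "stratum": "p<.005" if i % 2 else ".005<p<.05"}
        for i in range(40)]
picked = sample_studies(pool, ["p<.005", ".005<p<.05"], n_per_stratum=4)
print(len(picked))  # -> 8
```

Publishing the seed alongside the candidate pool lets reviewers rerun the draw and verify that the selected studies were not cherry-picked.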
Even with best efforts, with studies as the unit of analysis, no causal relationship, or even association, between replicability and any individual research practice (e.g., sample size, preregistration) will be identifiable with high confidence. The primary contribution of these papers will be the evidence accumulated about the specific studies replicated. Individual reports should focus on the substance of the replication studies and avoid strong inferences about comparisons over time or the potential causes of differences in replicability unless the design warrants such claims.
A secondary contribution of the papers will be as exploratory case examples to stimulate hypothesizing about the causal roles of research reforms that occurred between the time periods. Papers will have more latitude in their discussion sections to develop hypotheses of this more general interest and to suggest next steps for investigating them systematically.
The most meaningful evidence for the motivating question “Has replicability improved?” will come from combining evidence across the papers. The special issue will include a meta-analysis of all replication studies in the special issue to obtain initial correlational evidence about replicability rates between time periods.
One proposal will be accepted to conduct a meta-analysis of all conducted studies. The meta-analysis team will coordinate closely with the replication teams to align analysis strategies and reporting standards for effective meta-analysis. The meta-analysis team will also provide a standard guide for each team to code the original studies in their sample on key factors hypothesized to be associated with replicability. These codes will be included in the meta-analysis to highlight how studies in this special issue differ between time periods. The meta-analysis will also examine how the selection rules in each project affect understanding of the similarities and differences between studies in each time period. Because of the extensive reporting requirements, some or all of the authors of the individual replication projects will be invited as co-authors of the meta-analysis.
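To make the aggregation concrete, pooling replication effect sizes across projects could use a standard random-effects model. The sketch below implements the DerSimonian-Laird estimator under our own simplifying assumptions; the inputs are illustrative, and the meta-analysis team would choose its own model and software:

```python
def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.

    effects: per-study effect sizes; variances: their sampling variances.
    Returns (pooled_effect, tau_squared)."""
    k = len(effects)
    w = [1 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    mean_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mean_fe) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

# Illustrative inputs: two replications with equal precision.
pooled, tau2 = random_effects_pool([0.20, 0.40], [0.04, 0.04])
print(round(pooled, 2))  # -> 0.3
```

The between-study variance estimate (tau-squared) is one way the meta-analysis could quantify the heterogeneity that the multi-site designs above are intended to surface.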
All preregistrations, data, materials, and code for all projects will be made openly available to the maximum extent possible following anonymization and protection of private participant information.
Proposals given in-principle acceptance as a Stage 1 Registered Report will receive awards of $15,000 from the Center for Open Science (up to 10 total awards) to support the costs of conducting the research. The funds supporting this project were contributed by the Center for Open Science and by the John Templeton Foundation.
Prospective authors are strongly encouraged to send pre-submission inquiries to Editor in Chief Simine Vazire (email@example.com) prior to preparing a full proposal. Pre-submission inquiries should be less than 500 words and outline the proposed approach including selection of findings to replicate, participating site(s), target sample size, and any unique features of the approach.
For prospective authors interested in conducting the meta-analysis, a pre-submission inquiry is required for consideration. The inquiry should be less than 500 words and include a list of co-authors, qualifications for conducting the meta-analysis, and a proposed approach to prospectively coordinate data aggregation across the replication teams. The meta-analysis is not eligible for a financial award.
You can start your submission process here. Collabra uses Scholastica for manuscript management and peer review. Please follow Collabra’s submission guidelines including the Registered Reports instructions.
Note in your cover letter that your registered report submission is part of the “Has Replicability Improved?” special collection. Please remember to also complete this online submission form. This form is a separate step in the process and not integrated with our Scholastica portal.
The special issue is open for pre-submission inquiries and Stage 1 Registered Report proposals beginning on February 14, 2022. All Registered Report proposals must be submitted by May 1, 2022 to be eligible for the special issue and an award.