“Crowdsourcing” is a methodological approach in which several researchers coordinate their resources to achieve research goals that would otherwise be difficult to attain individually. This article introduces a Nexus—a collection of empirical and theoretical articles that will be published in Collabra: Psychology—that is intended to encourage more crowdsourced research in psychological science by providing a dedicated outlet for such projects and by assisting researchers in developing and executing them. We describe how individuals can propose and lead a crowdsourced research project, how individuals can contribute to ongoing projects, and other ways to contribute to this Nexus. Ultimately, we hope this Nexus will contain a set of highly informative articles that demonstrate the flexibility and range of research questions that can be addressed with crowdsourced research methods.
The community of research psychologists has access to a large and diverse pool of resources (e.g., time, participants, expertise, geographical locations) that, collectively, have the potential to produce considerable gains in knowledge, shape public policy, and improve human lives. Despite this potential, these resources may not be collectively used in the most effective manner: Individual researchers often have access to samples that are too small to properly detect the phenomenon of interest, idiosyncrasies in individual samples may not provide evidence about the generalizability of an effect, there may be redundancies in data collection procedures across studies, etc. In addition to being an impediment to scientific progress, these inefficiencies can be viewed as a disservice to the participants who volunteer their time and effort in the belief that their data will contribute to reliable and generalizable knowledge, and to the public who entrust researchers to be good stewards of their research resources (e.g., Crutzen & Peters, 2017). Thus, it is imperative for research psychologists to explore methodological approaches that attempt to address these inefficiencies.
One promising methodological approach is “crowdsourced” research, in which several researchers coordinate their resources to achieve goals that would otherwise be difficult to attain individually. For example, several recent, high-profile, large-scale research projects have demonstrated the potential of crowdsourcing research resources to make substantial contributions (e.g., the “Many Labs” projects, Ebersole et al., 2016; Eerland et al., 2016; Klein et al., 2014; Registered Replication Reports [RRR], Alogna et al., 2014; Cheung et al., 2016; Hagger et al., 2016; Wagenmakers et al., 2016; “The Pipeline Project,” Schweinsberg et al., 2016; the “ManyBabies” project, Frank et al., 2017; see also Schmalz, 2016). In each of these projects, several research teams each conducted a study (a) that followed the same methods and (b) that was run at a different location with a different sample; (c) the results from all teams were then aggregated into a common analysis planned from the project’s inception.
Despite the above examples of multi-site collaborative projects, we believe most researchers either do not view such projects as a methodological approach that is available for addressing their own research questions or believe these projects are useful only for replications of previously-published studies. Neither of these beliefs is necessarily true. To that end, we are initiating a Nexus—a collection of empirical and theoretical articles that will be published in Collabra: Psychology. The goals of this Nexus are to (a) provide an outlet for crowdsourced empirical projects, (b) assist authors in developing and executing their crowdsourced projects, and (c) demonstrate the flexibility and range of research questions that can be addressed with crowdsourced research methods.
Description of the Nexus
The Theme of the Current Nexus: Collections2
In contrast to many special issues in traditional journals, the common theme of the articles that will be included in the Nexus is the methodological approach rather than the substantive topic. Specifically, each empirical paper will involve several researchers who each collect data at independent sites and who aggregate all of the collected data into a common analysis (most likely a meta-analysis). For the current Nexus we call each multi-site study a “Collection2” (pronounced simply as “collection” but denoted as a type of crowdsourced research project by the capital C and the exponent). The name “Collection2” succinctly describes the methodological approach of these projects because a collection of researchers each collects data at their individual site (i.e., these projects are collections of collections of data). The current Nexus also will be open to, for example, theoretical critiques of crowdsourced research, commentaries on the promise or benefits of crowdsourced research, re-analyses of already-completed Collections2, and meta-science articles relevant to crowdsourced research.
Possible Types of Collections2
Below is a non-exhaustive list of the types of Collections2 that are eligible for inclusion in the current Nexus, along with a hypothetical example of each.
Collections2 with Identical Operationalizations: A project wherein several researchers simultaneously conduct operational replications of a previously-published effect or of a novel (i.e., not previously-published) effect. These projects can test one effect (such as some RRRs) or several effects within the same data collection process (such as the “Many Labs” projects). Because the operationalization of the key variables is the same across labs, this design is excellent for estimating the magnitude and precision of an observed effect with a specific operationalization. Example: Researchers at five different data collection sites test the “ego depletion” hypothesis using the same method and the same variable operationalization at each site. A random-effects meta-analysis is used to aggregate the results.
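To make the aggregation step concrete, here is a minimal sketch (in Python, with entirely hypothetical effect sizes and variances) of DerSimonian-Laird random-effects pooling, one common way such per-site results could be combined; a real Collection2 would likely use dedicated meta-analysis software.

```python
# Minimal sketch of DerSimonian-Laird random-effects pooling.
# All numbers below are hypothetical and for illustration only.
import numpy as np

def random_effects_meta(effects, variances):
    """Pool per-site effect sizes, allowing for between-site variance."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    pooled_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - pooled_fixed) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-site variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical Cohen's d and its sampling variance from each of five sites
d = [0.21, 0.35, 0.10, 0.28, 0.18]
v = [0.016, 0.020, 0.014, 0.018, 0.015]
est, se, tau2 = random_effects_meta(d, v)
print(f"pooled d = {est:.2f} (SE = {se:.2f}), tau^2 = {tau2:.3f}")
```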
Collections2 with Multiple Operationalizations: A project wherein a common hypothesis is simultaneously tested at several different sites, but with several different operationalizations of how the effect is tested. The to-be-tested hypothesis may or may not have been previously published. These projects test the conceptual replicability of an effect and whether the effect generalizes across different operationalizations of the key variables. Example: In a test of adaptive memory, ten researchers each test the hypothesis that individuals are better able to recall information that is processed in terms of its survival relevance. After agreeing on a common study design to facilitate the planned meta-analysis, each research team is allowed to test the focal hypothesis in whatever manner they deem appropriate. The overall project results in ten different operationalizations of the focal hypothesis. The resulting meta-analysis estimates the population effect size and examines whether the particular operationalizations affected the estimated magnitude of the effect.
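As a minimal sketch of that moderator question (again with purely hypothetical numbers), the following computes a fixed-effect subgroup Q-between test of whether the pooled effect differs across operationalizations; a real Collection2 might instead fit a random-effects meta-regression.

```python
# Minimal sketch: does the pooled effect differ across operationalizations?
# Fixed-effect subgroup Q-between test; all numbers are hypothetical.
import numpy as np
from scipy.stats import chi2

def q_between(effects, variances, groups):
    """Heterogeneity attributable to differences between subgroups."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    groups = np.asarray(groups)
    overall = np.sum(w * y) / np.sum(w)            # overall pooled effect
    qb = 0.0
    for g in np.unique(groups):
        m = groups == g
        pooled_g = np.sum(w[m] * y[m]) / np.sum(w[m])
        qb += np.sum(w[m]) * (pooled_g - overall) ** 2
    df = len(np.unique(groups)) - 1
    return qb, chi2.sf(qb, df)                     # Q_between and its p-value

# Hypothetical: ten labs, two operationalizations ("A" and "B")
d = [0.25, 0.31, 0.18, 0.27, 0.22, 0.09, 0.14, 0.05, 0.11, 0.08]
v = [0.02] * 10
ops = ["A"] * 5 + ["B"] * 5
qb, p = q_between(d, v, ops)
print(f"Q_between = {qb:.2f}, p = {p:.3f}")
```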
Construct-themed Collections2: Projects wherein researchers share an interest in a common construct and collect data on several outcomes associated with that construct. Example: Five individual researchers are each interested in the construct of Subjective Well-Being (SWB); however, each researcher is interested in a slightly different hypothesis involving that construct. They conduct a study that includes a common measure of SWB as well as several other measures chosen based on the individual researchers’ interests. Care is taken so that the included measures do not interfere with one another when combined into a common data collection procedure. The researchers collect a sample from each locally-available subject pool, combine the data, and jointly publish an SWB-themed Collection2. The resulting publication contains a series of meta-analyses estimating the relationships between SWB and the several other constructs.
Population-themed Collections2: Projects wherein contributing researchers have a common interest in a specific population. This sort of collaboration can test one or more hypotheses and would be ideal for researchers who study hard-to-recruit populations and want to maximize these participants’ time. Example: Five researchers share an interest in students with disabilities; however, each researcher is interested in a slightly different hypothesis involving that population. Because students with disabilities typically have a low prevalence in each locally-available subject pool, gathering large and representative samples is difficult. The researchers collect a sample from each locally-available subject pool, combine the data, and jointly publish a students-with-disabilities-themed Collection2. The resulting publication contains a series of meta-analyses of several effects involving students with disabilities.
Other Types of Collections2: Because of the diversity of the projects that can conceivably fall under the broad umbrella of crowdsourced research, drawing a precise line around which articles can be included is difficult (e.g., the minimum number of labs needed to be considered “multi”-site, etc.). Researchers who have an idea for a possible Collection2 and are unsure whether it is appropriate for this Nexus are encouraged to contact us to discuss their idea.
Timeline
Unlike special issues in traditional journals, the Nexus format allows articles to be submitted for publication as they are completed. In short, authors do not have to wait for their projects to be evaluated or published at the same time as other projects (i.e., there is no submission deadline or “publication date” for the special issue). We believe this open-endedness in the timeline will be beneficial because we expect variability in how long different projects will take to complete. Importantly, because there is no deadline, researchers need not opt out of these projects over concerns that a project would not be completed in time for inclusion in the Nexus.
Other Information
The Article Processing Charges (APCs) and the APC waiver process for Collabra: Psychology also apply to this Nexus. This Nexus is open to all areas of psychological research; in fact, we hope to see submissions that are diverse in their areas of psychology, the research questions addressed, the methods used, etc. Randy McCarthy and Chris Chartier will be the lead editors for this Nexus and will serve as the points of contact for general questions about this Nexus. However, we may enlist ad-hoc editors to provide domain-specific knowledge for proposals outside our areas of expertise. These ad-hoc editors would serve as the point of contact for the specific projects they are handling.
How to Lead a Collection2
Each Collection2 will have a researcher who is designated as the corresponding researcher. Corresponding researchers will be responsible for submitting the articles through the Collabra: Psychology submission portal, will be the point-of-contact between Collabra: Psychology and all of the contributing researchers within each Collection2, and will take a leadership role in coordinating their Collection2. There are two ways to lead a Collection2: Lead a non-Registered Reports Collection2 or lead a Registered Reports Collection2.
Non-Registered Reports Process
Some Collections2 that are eligible for inclusion in the Nexus may be in progress at the time of this announcement. Or, for whatever reason, authors may choose to design their study, collect data, and submit a manuscript to the Nexus as a traditional, post-data-collection submission (i.e., non-Registered Reports). These submissions will be given full consideration for inclusion in the Nexus and the submitted manuscript will be evaluated in a traditional peer-review process. In this case, the corresponding researcher would serve the same role as a corresponding researcher in the traditional publication process.
Registered Reports Process
Some Collections2 may be submitted as Registered Reports. A description of the general Registered Reports track at Collabra: Psychology can be found here: https://docs.google.com/document/d/1eeXPEC_oc4OYlywC9pRt6seL2xazlT58jAtRsCuEdCI/edit. In a notable departure from the 2-stage Registered Reports process described in that document, the Registered Reports format for this Nexus will involve a 3-stage process: a pre-data Collection2 proposal stage, a stage for recruiting and registering contributing labs, and a post-data Collection2 manuscript stage. Thus, the first and third stages will follow Collabra: Psychology’s general Registered Reports process, and the second stage will be unique to this Nexus. Figure 1 shows the Registered Reports format for this Nexus.
An abbreviated description of this process is provided below with an emphasis on points where the Registered Reports process for the current Nexus departs from Collabra: Psychology’s general Registered Reports process. Authors should consult the Registered Reports process for Collabra: Psychology for more detail.
Stage 1
In Stage 1, the corresponding researcher will submit a proposed crowdsourced research project. Based on the Stage 1 peer-review process, an in-principle acceptance (IPA) may be extended. This IPA gives the researcher permission to move on to Stage 2.
What should be included in the Stage 1 proposal? Broadly, proposed projects will be evaluated like any other study. Namely, what is the research question? How is it (theoretically, practically) relevant? And is the proposed design of the study appropriate for addressing the research question? Aside from these broader issues, there are specifics that must be included in each proposed Collection2. Proposals will include an Introduction section, a Planned Methods section, and a Planned Analyses section. Each of these is described in detail below.
Introduction section for a proposed Collection2. Just like any empirical study, the Introduction section should give readers the relevant background necessary to evaluate the proposed study. Additionally, Collections2 typically consume a large amount of research resources (e.g., participants, time, etc.); therefore, proposing authors must provide a compelling justification for why their research question would best be addressed using a “crowdsourced” methodological approach as opposed to a traditional single-site approach. For example, it may be difficult to justify Collections2 that study effects that are already sufficiently established (e.g., a common Stroop task) or effects that are extremely speculative, because such projects may not be a wise use of the collective research resources.
Planned Methods section for a proposed Collection2. In addition to describing a method that addresses the research question, the Planned Methods for Collections2 must address the following points:
What are the data collection procedures that will be followed at each contributing data collection site? This part will look much like a traditional Planned Methods section. Will each contributor follow the exact same procedures and use the same variable operationalizations? If not, how will the procedures for each individual data collection site be selected (e.g., each individual collaborator can choose how to test the stated hypothesis, the Collection2 lead researcher will randomly assign a procedure to a data collection site, each collaborator will select the procedures that another collaborator must follow, etc.)?
How many contributing data collection sites will participate, and how many participants will participate at each site? How were these numbers determined? (A sketch of one way to arrive at such numbers appears after this list.)
What is the process for determining authorship on the final manuscript? Keep in mind that not all contributing labs need to be identified prior to submitting a proposal. Thus, it is critical to outline the criteria that will be applied when determining who will be an author on the final manuscript to prevent misunderstandings or disputes around authorship later. These criteria should include who is eligible for being an author and how authorship order for the final manuscript will be determined.
What is the proposed timeline for the project? How long is recruitment of contributing labs expected to take? Once contributing labs are identified, how long until data collection could begin, and how long would data collection and data analysis take? How long until a final manuscript would be submitted? We understand that some of these steps may be difficult to know in advance and that some may change. However, please provide your best estimates.
Does the project require funding? If so, how will funding-related issues be handled?
Will you offer other contributing researchers time or space in the data collection procedure as a recruitment tool (see the Sticks and Carrots section below)? For example, to recruit other contributors you may want to offer them time at the end of a survey where they can add measures relevant to their personal research interests. You may not (and probably cannot) know the specifics of these additional measures at the time of the proposal, but we believe it is important to be explicit if this is something you will want to offer, so the peer-reviewers can consider the implications for the project’s primary focus. If time or space will be used as a recruitment tool, will the analyses of these additional measures be included in the final manuscript?
What are the criteria for contributing labs? Can undergraduate students or graduate students be contributing researchers? Can students only contribute if they are supervised by an advisor? Is there special training, expertise, or equipment that contributing labs need? Do contributing labs need specific software?
Any other topics that are relevant to the proposed project. Include any information the peer-reviewers would need to evaluate the proposed project.
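As promised above, here is a minimal sketch (in Python, with hypothetical numbers) of one way the number of sites and per-site sample sizes might be justified: solve for the per-group sample size that yields 80% power for a smallest effect size of interest, then divide it across the contributing labs. Note that this treats the pooled data as a single two-group design and ignores between-site heterogeneity, so it is a rough guide rather than a full multi-site power analysis.

```python
# Minimal sketch of a sample-size justification; numbers are hypothetical.
from statsmodels.stats.power import TTestIndPower

d_min = 0.20        # smallest effect size of interest (hypothetical)
n_sites = 5         # hypothetical number of contributing labs

# Per-group n for a two-sample t-test at alpha = .05 and 80% power
n_per_group = TTestIndPower().solve_power(effect_size=d_min, alpha=0.05,
                                          power=0.80,
                                          alternative='two-sided')
print(f"total n per group: {n_per_group:.0f} "
      f"({n_per_group / n_sites:.0f} per group per site)")
```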
Planned Analyses section for a proposed Collection2. The Planned Analyses for Collections2 must address the following points:
Are you using manipulation/attention checks or positive controls? How will those be analyzed?
Are there any exclusion criteria (e.g., participants must answer all items, participants must pass all manipulation checks, participants can have no more than 5% errors, etc.)? If so, how will those exclusion criteria be applied?
What is your focal hypothesis test? Which results would support your hypothesis and which results would be considered unsupportive? This section also must specify the analyses that will be used to aggregate or synthesize the data from the different labs (e.g., a random-effects meta-analysis).
What is your plan for sharing data? If you cannot share data, please specify why you cannot (e.g., there would be re-identification concerns, etc.).
Stage 2
In Stage 2, the corresponding researcher will identify all of the contributing labs for their Collection2. Prior to data collection, the lead researcher must provide a list of contributing labs and a confirmation that each contributing lab understands how authorship will be determined. Once the list of contributing labs and the agreed-upon authorship criteria are approved by the editors, a date- and time-stamped document will be posted on the Open Science Framework. The project will then be approved to begin data collection.
Recruiting contributors. Proposing authors may choose to recruit contributing labs on their own. If proposing authors want assistance, the editors can help find contributing labs by advertising on social media, using a mailing list of labs who have expressed interest in contributing to crowdsourced research, etc. Collections2 with an IPA also can be posted to StudySwap, an online platform for researchers to find potential collaborators; the editors can assist researchers in using the StudySwap platform. Our goal is to help make each Collection2 a success, and we understand that recruiting contributing labs is an essential component of making these projects successful.
Sticks and Carrots. We believe that recruiting contributing labs may be perceived as a challenge of these crowdsourced research projects. We hope this perception does not deter researchers who want to lead a Collection2, because there are several things that can be done to encourage other researchers to contribute to a proposed Collection2.
First, for authors who choose the Registered Reports route, an IPA is likely a great way to assuage researchers’ hesitations about contributing to a Collection2. Once an IPA is obtained, the lead researcher can approach potential contributors and assure them that a successfully-executed project will be accepted for publication. Second, some researchers may offer time or space in the data collection procedure as an inducement for collaborators (as long as this approach is approved in the initial review process). For example, suppose a primary research project will take 40 minutes and the goal is to recruit five total labs. The lead researcher may propose a one-hour data collection procedure in which the first 40 minutes are devoted to the primary project and four 5-minute chunks of time are offered to the other four contributors. Finally, other recruitment tools (e.g., funding, etc.) may be used so long as they are approved during the Stage 1 review process.
Stage 3
In Stage 3, the corresponding researcher will submit a post-data-collection manuscript. This second round of reviews will not focus on the observed results. Instead, these reviews will focus on whether the proposed methods were properly executed, the planned analyses were properly conducted, the results were properly interpreted, and the proposed data sharing plan was followed. Additionally, authors must include a section detailing any deviations from the proposed study. Unforeseeable things happen during data collection; that is fine and cannot be completely avoided. The important thing is that these deviations are properly documented and transparently communicated to readers.
When a successfully-completed manuscript is ready to be accepted, authors will post the accepted IPA, the list of contributing labs, the pre-data authorship inclusion criteria (showing that each contributing lab understood the authorship criteria), and any shareable stimuli and data to an online repository (e.g., the Open Science Framework). This will allow readers to transparently compare the final manuscript to what was planned and approved. Links to these materials must be included in the final manuscript.
Notably, if a Collection2 is a replication of a previously-published article or is closely aligned with the work of an individual researcher who was not part of the project, we may solicit commentaries from the original authors to be included in the Nexus. We will consider these invited commentaries on a case-by-case basis, but will be transparent with all of the parties involved.
Other Ways to Contribute to a Collection2
This Nexus will provide many opportunities for individuals to contribute to crowdsourced research. Please consider joining a Collection2 as a contributor, or mentor a student who wants to contribute to an appropriate Collection2. Follow StudySwap on Twitter (@Study_Swap), like StudySwap on Facebook, or contact us to join our mailing list to receive updates.
Other Ways to Contribute to the Nexus
The Nexus also will have other ways that authors can contribute. If you are interested in writing an opinion piece or commentary on crowdsourced research in general, or of a specific Collection2, please contact us with your idea. Also, given our encouragement to openly share data, we hope this Nexus is a boon to meta-science. Researchers also are free to submit re-analyses of Collections2 to the Nexus. If you are interested in submitting a re-analysis, please contact us to discuss your idea prior to submission.
And, of course, an invaluable way to help make this Nexus a success is by getting the word out.
Concluding Thoughts
We are very excited for the possibilities of this Nexus. We hope this Nexus will generate a lot of informative data and demonstrate how crowdsourced research can be a useful methodological tool for the future of psychological science.
Competing Interests
The authors have no competing interests to declare.
Author Contributions
Drafting the article or revising it critically for important intellectual content: RM/CC
Final approval of the version to be published: RM/CC
Peer Review Comments
The author(s) of this paper chose the Open Review option, and the peer review comments are available at: http://doi.org/10.1525/collabra.107.pr