This editorial comes one year after I took the reins of Collabra: Psychology from Simine Vazire. Simine led the journal from 2020 to 2023, and I had the privilege of working with her during that time as senior editor of the Methodology and Research Practices section. Simine has since become the editor in chief of Psychological Science and has already implemented some significant changes (see Vazire, 2024).
If you are reading this editorial, you probably already know this, but Collabra: Psychology is the official journal of the Society for the Improvement of Psychological Science. Hallmarks of the journal are that its papers are all published gold open access, that it publishes the entire history of the peer review process along with each published paper, and that it currently has one of the lowest article processing charges (APC) of all journals in the field of psychology (as far as I know). On top of that, authors who are unable to pay the APC can request a waiver from the publisher.
Collabra: Psychology publishes original research, review papers, meta-analyses, and theory papers in different fields of psychology (clinical, cognitive, developmental, organizational, personality, and social) as well as methods papers, simulations, commentaries, and opinion pieces (methods and research practice). One thing that sets Collabra: Psychology apart is our focus on scientific and methodological rigor. Our editorial team prioritizes papers that report findings rigorously and transparently over papers that report findings with hyperbole and/or exaggerated claims of importance or novelty.
Another, perhaps underappreciated, feature of Collabra: Psychology is the short line of communication between authors and the handling editor. If a manuscript contains a small bug, such as a non-functioning hyperlink or a stray comment that was not removed, authors can, upon request, upload a new version directly through the discussion board. This increases efficiency, a welcome contrast with having to spend another hour formally resubmitting a paper.
This editorial is the first of what I intend to be a yearly series in which I share some of the major things that have happened at the journal over the past year. This new practice dovetails nicely with the philosophy of transparency that our journal and our community strongly align with. I suspect that many editors in chief have specific knowledge of the goings-on of their journals that never sees the light of day. This editorial is an attempt to give you a bird's-eye view of the journey Collabra: Psychology has taken this last year. The three themes I will bring up here are (1) a peek under the hood at the types of issues that typically lead to a manuscript being desk rejected; (2) some information on the number of submissions we receive, the number of papers that get accepted, and the geolocation data on both that we began collecting midway through this year; and (3) some information on an exciting pilot study on Registered Revisions in which Collabra: Psychology, in collaboration with the Center for Open Science, has begun to participate.
General Triage: Reasons for Desk Rejecting
One of the main responsibilities of an editor in chief is to perform a general triage of all articles that are submitted to their journal. The topics covered by Collabra: Psychology span most of the range of psychological research, and as such it is not possible for the editor in chief to always (or even usually) have detailed content knowledge of the topic under investigation. Sadly, most papers that get desk rejected receive a fairly standard decision letter, a necessary evil caused by the sheer number of submissions Collabra: Psychology receives (more on that in the next section). Nevertheless, I am happy to share the most frequent reasons why a manuscript receives a desk reject decision. The data I present here are anecdotal and do not necessarily generalize to different years, different journals, or different editors in chief. Having said that, I suspect there will be strong commonalities with other years and editors, at least where Collabra: Psychology is concerned.
By far the most common reason for a desk reject decision during general triage is an article that is submitted as an Original Research Report, and that presents an empirical investigation for which the underlying theoretical foundation is scant or even lacking (Eronen & Bringmann, 2021; Meehl, 1967). Such an article might introduce the topic along the following lines:
There is ample evidence that feelings of X are prevalent in the general population (some references)
Recently, we have also seen that symptoms of Y are prevalent in the general population (some references)
There has been research on the relationship between some different feelings and symptoms of Y (some references), but up until now, there has not been any research on the relationship between feelings of X and symptoms of Y.
This paper investigates the relationship between feelings of X and symptoms of Y.
An article with a line of reasoning like this will likely receive a desk reject decision, because the justification of the study does not go beyond “this has not been studied before”. A variation of this type of article is the ‘add-on-mediation’ study:
There has been research on the relationship between feelings of X and symptoms of Y (some references)
There has been research on the relationship between feelings of Z and symptoms of Y (some references)
There has not been any research on a potential mediating role of feelings of Z in the relationship between feelings of X and symptoms of Y.
This paper investigates the potential mediating role of feelings of Z in the relationship between feelings of X and symptoms of Y.
To be crystal clear: both of these hypothetical types of articles could qualify for publication in Collabra: Psychology if there is some underlying theory from which the study plan naturally flows. In the case of scenario 1: Why does it make sense for feelings of X to relate to symptoms of Y? What previous research, theoretical model, or empirical results make this a sensible expectation? In the case of scenario 2: What theoretical model dictates such a mediating role? What does this investigation add to our understanding of symptoms of Y?
At the risk of overstating this point, below is a simple demonstration in R of the fact that regressing some variable x on some variable y gives different results from regressing variable y on variable x:
library(mvtnorm)

# Generate forty x and y datapoints with a population correlation of .7
Data = rmvnorm(40, mean = c(0, 0), sigma = matrix(c(1, .7, .7, 1), 2, 2))

# Plot the data
plot(Data, bty = 'n', axes = FALSE, xlab = 'x', ylab = 'y',
     xlim = c(-2, 2), ylim = c(-2, 2))
axis(1); axis(2, las = 1)

# Regress y on x and add the fitted line
abline(lm(Data[, 2] ~ Data[, 1]), col = 'blue')

# Regress x on y; to draw this fit in the same y-versus-x plot,
# its coefficients must be converted (x = a + b*y implies y = -a/b + x/b)
fit_xy = lm(Data[, 1] ~ Data[, 2])
abline(a = -coef(fit_xy)[1] / coef(fit_xy)[2],
       b = 1 / coef(fit_xy)[2], col = 'red')
Figure 1 shows blue and red regression lines for regressing y on x and for x on y, respectively.
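To make the figure concrete, standard regression algebra (nothing specific to this example) gives the two slopes. With sample correlation r and standard deviations s_x and s_y, the regression of y on x has slope

b_y|x = r * (s_y / s_x),

whereas the regression of x on y has slope b_x|y = r * (s_x / s_y). Re-expressed in the same plot of y against x, that second fit corresponds to a line with slope

1 / b_x|y = s_y / (r * s_x).

The two lines coincide only when the correlation is perfect (r = ±1); for the simulated data above, with a correlation of about .7 and equal variances by construction, the slopes are roughly .7 (blue) and 1.4 (red).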
The point of this demonstration is to show that the conclusions one draws about a relationship between two variables change depending on which variable a researcher hypothesizes to be the predictor and which the criterion. The reason for this is that linear regression assumes measurement error on the dependent variable only, not on the predictor (e.g., Xu, 2014). So, theory matters!
The second most common reason for a desk reject decision during general triage is an article that is submitted as an Original Research Report in which one or several methodological issues exist. I will list three common offenders in articles that get submitted to Collabra: Psychology. Firstly, measures that have validity problems (Flake & Fried, 2020; Haucke et al., 2021). These can be questionnaires that have been constructed in-house based on the expertise of the researchers, but for which formal attempts at assessing construct validity, internal validity, and external validity are lacking. Secondly, samples that are too small or that are not a representative draw from the research population of interest. Small samples can be dealt with by proper sample size planning (Kovacs et al., 2022; Lakens, 2022). The risk of non-representative samples becomes larger when methods like convenience sampling are employed. Thirdly, the misinterpretation of the results of a statistical test. A typical example here is the interpretation of a non-significant result obtained through null hypothesis significance testing as evidence in favor of the null hypothesis. There are several tools available for quantifying pro-null evidence under both the frequentist and the Bayesian framework, each with their own strengths and weaknesses (see e.g., Linde et al., 2023). A related issue is the presentation of tens (and sometimes even hundreds) of p-values in work that is presented as confirmatory. Flooding a manuscript with statistics not only makes it hard for readers to focus on the main message, but may also invite HARKing, or hypothesizing after the results are known (e.g., Kerr, 1998).
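To give a flavor of one of the frequentist tools alluded to above, here is a minimal sketch in base R of an equivalence test via two one-sided tests (TOST). The simulated data and the equivalence bound of ±0.3 are arbitrary choices for the purpose of illustration, not a recommendation for any particular bound.

# A non-significant t-test by itself is not evidence in favor of the null
set.seed(1)
x = rnorm(200)  # simulated scores, group 1
y = rnorm(200)  # simulated scores, group 2 (no true difference)
t.test(x, y)    # a non-significant p-value here does not, by itself, support the null

# Two one-sided tests against equivalence bounds of -0.3 and 0.3
t.test(x, y, mu = -0.3, alternative = "greater")  # H0: difference <= -0.3
t.test(x, y, mu =  0.3, alternative = "less")     # H0: difference >=  0.3
# If both one-sided tests are significant, the difference is statistically
# within the bounds, which does constitute pro-null (equivalence) evidence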
A lot of the methodological issues listed above can be resolved, or at least somewhat mitigated, by preregistration of the study design (see e.g., Van ’t Veer & Giner-Sorolla, 2016), but I should note that even with a proper preregistration, papers that test between one and five preregistered hypotheses tend to be a lot clearer than papers that test over ten.
The third most common reason for a desk reject decision pertains to papers that are submitted as Perspective/Opinion articles and that present viewpoints that are not new. Collabra: Psychology prioritizes rigor and transparency over (sometimes exaggerated) claims of novelty, but as our submission guidelines for Perspective/Opinion articles state: “These should present a new and thoughtfully-considered viewpoint or opinion of a current problem, concept, implication, innovation, or practice relating to psychological science”. Collabra: Psychology receives a surprisingly high number of perspectives in which the main message is “We strongly believe X (reference A)” and in which the main message of reference A was also “We strongly believe X”, sometimes even with empirical data to back up the reported belief. The bar for publication is in that sense somewhat higher for a Perspective/Opinion article than for an Original Research Report: empirical replications, when rigorously conducted and transparently reported, have a home at Collabra: Psychology. Perspective replications do not.
One final thought: none of these examples is set in stone; there are no hard rules. Sometimes the elements mentioned above trade off against one another. And remember that the original data should be available at a trusted digital repository. Statements like “data will be shared upon reasonable request” do not generally align with our data accessibility policy (there are some exceptions for legal or ethical reasons; in such cases the editor should be informed at the time of submission).
Submission and Publishing Statistics: Geolocation Data
In this section, I would like to share some statistics that show how Collabra: Psychology has grown over the past years. Specifically, the numbers of submissions Collabra: Psychology received in 2021, 2022, and 2023 were 191, 249, and 280, respectively. The numbers of published articles in 2021, 2022, and 2023 were 74, 81, and 110. This means that the acceptance rate of papers submitted to Collabra: Psychology has hovered somewhere between 30 and 40 percent over the last three years.
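For readers who want the arithmetic, the implied yearly rates follow directly from the counts above (keeping in mind that papers published in a given year need not stem from submissions received in that same year, so these are rough figures):

submissions = c('2021' = 191, '2022' = 249, '2023' = 280)
published   = c('2021' = 74,  '2022' = 81,  '2023' = 110)
round(100 * published / submissions, 1)  # 38.7, 32.5, and 39.3 percent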
For 2024, I am able to present more specific details with respect to submissions and acceptance rates. With a tremendous amount of help from our managing editor Liba Hladik, we were able to chart the geolocation data of corresponding authors submitting to Collabra: Psychology in 2024. The results are displayed in Table 1. The number of unique submissions (i.e., submissions excluding revised manuscripts) for 2024 is 320 and the number of accepted papers is 103 (at the time of writing).
Table 1. Submissions to Collabra: Psychology in 2024 by region of the corresponding author (at the time of writing).

Region | Accepted | Rejected | Pending | Total Submitted
---|---|---|---|---
Asia | 9 | 38 | 12 | 59
Eastern Europe | 7 | 14 | 6 | 30
Oceania | 0 | 1 | 1 | 2
North America | 27 | 12 | 44 | 83
South America | 1 | 1 | 3 | 5
Western Europe | 59 | 24 | 58 | 141
Total | 103 | 90 | 127 | 320
Several observations stand out. First, the vast majority of submissions come from corresponding authors located in Western Europe (141) and North America (83). Although we observed one co-author from an African institution, we have not received any submissions from a corresponding author who is affiliated with an African institution. Submission numbers for South America (5) and Oceania (2) are also quite low. Clearly, Collabra: Psychology needs to become more visible to potential authors from these geographical areas, and I would like to use this opportunity to encourage scholars from these regions to submit their work to Collabra: Psychology.
Second, there are clear differences in acceptance rates between Western Europe and North America on the one hand, and Asia and Eastern Europe on the other. While I believe there are multiple reasons for this discrepancy, I would like to take the opportunity to focus on one that is tangible and that the journal can play a role in combating: the potential for bias. It is well known that there are systemic patterns of global and racial exclusion in academia. In an article published in Collabra: Psychology earlier this year, Ledgerwood et al. (2024) identify nine recommendations for improving inclusive excellence in publishing. Reporting the data above is an attempt to execute recommendation #2 (“track and publicly report demographic diversity”). In a further attempt to combat exclusionary biases, Collabra: Psychology requires blinded submissions (recommendation #3 in Ledgerwood et al., 2024). A second step toward combating potential biases is to ensure that the editorial board is representative in terms of editors from historically marginalized backgrounds (recommendation #1 in Ledgerwood et al., 2024). This year, we recruited four new members to our editorial team who are affiliated with universities in Asia (out of fourteen new members in total). Clearly, more work needs to be done to increase the representativeness of the editorial board, and we encourage scholars from historically marginalized backgrounds with an interest in joining it to reach out. As mentioned earlier, authors who are unable to pay the APC can request a waiver, and all our articles are published open access, addressing recommendation #6 (“improve access”). Fighting global and racial inequity will take time, but I believe Collabra: Psychology is taking steps in the right direction.
Registered Revisions Pilot: First Impressions
In September 2024, Collabra: Psychology began participating in a pilot program on Registered Revisions (Haber & Daley, 2024). In short, if reviewers request additional analyses or additional data at some point in their review, authors get to write a registered revision. This registered revision contains the planned analyses (preferably with code) and/or the protocol for data collection (akin to what one would specify in a preregistration or registered report), in addition to addressing the ‘normal’ review comments. Importantly, authors do not yet carry out the additional analyses or data collection. Reviewers respond to the plan, and the editor then decides to (1) reject the plan, resulting in a rejected paper; (2) request additional revisions; or (3) provide in-principle acceptance (IPA; the same as for a registered report). If IPA is provided, carrying out the additional analyses or data collection is enough to guarantee that the paper gets published, provided the protocol laid out in the registered revision is followed rigorously and transparently.
At the time of writing, 18 authors who have submitted their work to Collabra: Psychology have signed an informed consent form, agreeing to participate in this pilot experiment. A total of six authors have been assigned to either the Standard Procedures or the Registered Revisions condition. If you intend to submit your empirical work to Collabra: Psychology in the near future, you may well receive an invitation to participate in this journal peer review policy experiment.
What is Coming?
In 2025, we intend to start another journal peer review policy experiment that focuses on the use of tools to detect statistical mistakes in reporting. It is my expectation that experimentation at the level of journal policy will become more and more mainstream, and at least for the duration of my editorial term Collabra: Psychology will be part of such experiments. We hope to continue to grow as a journal, and we welcome submissions, new ideas, and feedback.