There is a lively debate about whether playing games that feature armed combat and competition (often referred to as violent video games) has measurable effects on aggression. Unfortunately, the insights that debate has produced remain preliminary in the absence of accurate behavioral data. Here, we present a secondary analysis of the most authoritative longitudinal data set available on the issue, from our previous study (Vuorre et al., 2021). We analyzed objective in-game behavior, provided by video game companies, of 2,580 players over six weeks. Specifically, we asked how time spent playing two popular online shooters, Apex Legends (PEGI 16) and Outriders (PEGI 18), affected self-reported feelings of anger (i.e., aggressive affect). We found that playing these games did not increase aggressive affect; the cross-lagged association between game time and aggressive affect was virtually zero. Our results showcase the value of obtaining accurate industry data as well as an open science of video games and mental health that allows cumulative knowledge building.
For more than four decades, the discourse surrounding video games has been dominated by the idea that playing games causes players to become aggressive and antisocial (Blumenthal, 1976). Indeed, the social sciences know few topics as contentious as research on games that feature conflict, combat, and competition, referred to in the literature, perhaps overly simplistically, as violent video games (Ferguson & Konijn, 2015; Grimes et al., 2008; Hall et al., 2011; Orben, 2020). The evidence for effects of these games on aggression is contested (Bushman et al., 2015; Bushman & Anderson, 2002; Huesmann, 2010; Ivory et al., 2015; Markey et al., 2015). The quality of that evidence is critical not only for scientific debate; public stakeholders regularly invite social scientists to give expert opinions and file legal briefs in court decisions on video games (Elson et al., 2019; Ferguson, 2018; Hall et al., 2011). A central shortcoming of the evidence so far is poor data quality: Most studies investigate the effects of playing violent video games without actually measuring such play (Markey, 2015; Weber et al., 2020). If we don't measure the behavior in question, we cannot advise policymakers on its effects (IJzerman et al., 2020).
Aggression includes not only physical and verbal aggression, but also hostility biases and feelings of anger, referred to in the literature as aggressive affect (Anderson & Bushman, 2002). The most prominent models that aim to explain the effect of playing violent video games on feelings of anger rely on a mix of social learning and excitation transfer (Allen et al., 2018): Just as overt physical and verbal acts of violence lead to arousal, violence in video games heightens arousal in the player. The player then carries this arousal and the accompanying anger over into their life outside the play session. By repeating this experience over many sessions, the player comes to experience anger regularly. However, many scholars have criticized such a mechanism as implausible (Ferguson & Dyck, 2012).
Compounding the lack of a clear theoretical account is the inconsistent quality of evidence (Drummond & Sauer, 2019). Several older meta-analyses conclude that playing violent video games causes aggression (Anderson et al., 2010; Anderson & Bushman, 2001). Many researchers have criticized not only that conclusion, but also the statistical analyses leading to it (Hilgard et al., 2017). Recent meta-analyses that address these problems find little to no association between playing violent video games and aggression (Drummond et al., 2020; Ferguson, 2015; Furuya-Kanamori & Doi, 2016). Moreover, much of the 'raw material' of these meta-analyses has been shown to result from poor research practices (Drummond & Sauer, 2019; Elson & Przybylski, 2017; Hilgard et al., 2017), a well-known problem for meta-analysis, which can only produce inferences as good as the individual studies it aggregates (Ioannidis, 2016; Vosgerau et al., 2019). Research following current gold standards of full transparency aligns with the meta-analyses showing little to no effect (Ferguson & Wang, 2019; Hilgard et al., 2019; Johannes et al., 2022; Przybylski & Weinstein, 2019). Yet even those advances haven't addressed one of the most important limitations of the literature: poor data quality (Davidson et al., 2021).
Unlike basic behaviors that can be isolated in the lab more easily, video game play is complex and thus difficult to measure, let alone manipulate (Eronen & Bringmann, 2021; Markey, 2015). The typical experiment has one group play a game featuring violence and another group play a game not featuring violence (e.g., Miedzobrodzka et al., 2021). Despite recent improvements in customizing manipulations (Hilgard et al., 2019), such artificial designs aren't likely to generalize beyond the lab (Sherry, 2007; Yarkoni, 2020). Designs that aim to measure 'natural' video game play outside the lab have had to rely on self-reports of video game play (e.g., Lee et al., 2021). However, such self-reports are inaccurate (Johannes et al., 2021; Kahn et al., 2014; Parry et al., 2021). The lone field experiment manipulating natural play demonstrated little effect of playing a game featuring violence on aggression (Williams & Skoric, 2005). As a result, we find ourselves in a bind: We either rely on artificial designs or on poor measures, and both hamper our inferences. To get out of this bind, we need accurate behavioral data, the kind the games industry collects.
Such data are only useful if they become an open resource for the community (Merton, 1973; Nelson & Simmons, 2018). Whereas many fields in the social sciences have come to see the value of full transparency (sharing materials, data, and code) for a truly collaborative, cumulative knowledge base (Christensen et al., 2019), the field of video games research has a history of opaqueness and, as a result, low credibility (Elson & Przybylski, 2017; Vazire, 2017). Recently, our team sought to contribute to such a knowledge base: We collaborated with seven games companies to produce a large, longitudinal data set that combines behavioral data with players' self-reports (Vuorre et al., 2021). We made this data set publicly available and invited other researchers to use it to test their research questions. Here, we deliver an example of such a secondary analysis to address the challenge of understanding the effects of video games on aggressive affect.
This Study
In this study, we tested the effect of playing two online shooters that feature competitive armed combat, Apex Legends and Outriders, on aggressive affect in a large sample of players over time. Following recent developments in the field of media effects research, we also investigated the other direction (Johannes et al., 2022): the effect of aggressive affect on playing those two games. Rather than relying on the questionable accuracy of self-reported play, we used direct behavioral measures of playing games. This way, we aim to contribute to the discourse in the literature using behavioral data that haven’t been available until recently. Not only do we provide an important test of the central question of the effect of playing so-called violent games; by analyzing an existing open data set, we also deliver a concrete example of the value of academia-industry collaborations grounded in open science. In the mold of recent secondary data analysis following open science principles (Ferguson & Wang, 2019), we hope that our example showcases the value of transparency to the field.
Method
The data from Vuorre et al. (2021) consist of several thousand active players of seven popular video games who filled out a survey three times over six weeks, with each wave separated by two weeks. Their responses were then combined with their actual play behavior, provided by the game publishers. At each of the three waves, respondents reported their aggressive affect for the previous two weeks (i.e., the two weeks up to the survey); for the same time frame, we calculated their total time spent playing. For details on the entire data set, see Vuorre and colleagues (2021). Here, we detail the subset of the data we analyzed, namely the Apex Legends and Outriders data.
Data, Materials, and Code
For the raw data, materials, and more details on the data set, we refer readers to the online supplementary materials of our previous project at https://osf.io/fb38n/. For the current paper, we provide all code to process and analyze these raw data at https://osf.io/zd6c2/. There, we document all steps from raw data to analysis.
Participants
As detailed in Vuorre et al. (2021), we collaborated with major games publishers who sent an email to active players of their titles, inviting them to participate in three surveys. Electronic Arts (Apex Legends) invited 900,000 players; Square Enix (Outriders) invited 90,000 players. Both player bases were English-speaking, from the US, UK, and Canada. Active players were defined as those who had played the game in the previous two weeks. The emails invited participants to a survey hosted by our department. We informed participants that they would be contacted for three surveys about their well-being and motivations for playing. We also informed them that we would combine their responses with their game play data, and we secured their informed consent under this protocol (SSH_OII_CIA_21_011).
1,609 Apex Legends players and 2,501 Outriders players gave their consent to participate, corresponding to response rates of 0.18% and 2.78%, respectively. We were interested in the effect of playing these titles on aggressive affect. Therefore, we only analyzed data from participants who had played and had reported their feelings of anger for at least one wave. Of the players who consented to participate, 1,278 (79%) Apex Legends players and 1,850 (74%) Outriders players reported their feelings of anger at least once; of those, 1,092 (85%) Apex Legends players and 1,488 (80%) Outriders players played during at least one wave. Those players were our final sample.
The publishers then invited players to participate in waves 2 and 3 (see Figure 1). There were roughly two weeks between waves; depending on the publisher and when participants chose to respond, those intervals varied (interquartile range = [13.0, 14.7] days). There was notable attrition: Only 21% (Apex Legends) and 24% (Outriders) of the first-wave sample remained at the third wave. The sample was mostly male and on average 33 years old (see Table 1).
Table 1

| Characteristic | Overall, N = 2,580¹ | Apex Legends, N = 1,092¹ | Outriders, N = 1,488¹ |
| --- | --- | --- | --- |
| Age | 33 (25, 41) | 25 (20, 32) | 38 (32, 45) |
| Missing | 5 | 3 | 2 |
| Gender | | | |
| Man | 2,308 (90%) | 948 (87%) | 1,360 (92%) |
| Non-binary / third gender | 38 (1.5%) | 23 (2.1%) | 15 (1.0%) |
| Prefer not to say | 31 (1.2%) | 15 (1.4%) | 16 (1.1%) |
| Woman | 198 (7.7%) | 103 (9.5%) | 95 (6.4%) |
| Missing | 5 | 3 | 2 |
| Experience | 25 (16, 31) | 17 (10, 25) | 30 (22, 35) |
| Missing | 11 | 6 | 5 |
¹ Median (IQR); n (%). Experience = years of having played video games.
Target Games
The two games we analyzed are primarily online shooters that feature competitive armed combat. According to Pan European Game Information (PEGI), which rates the age appropriateness of games in 38 European countries, neither game is suited for younger players. Apex Legends has a rating of PEGI 16, with an explicit content descriptor for violence. The game is a popular first-person shooter whose primary game mode is battle royale. PEGI outlines the following content-specific issues for the game:
“Players can use a range of modern military weapons such as pistols, sniper rifles, automatic guns, frag grenades and knives. Successful hits from a firearm will degrade the health a character [sic] over time and is indicated by some splattering of blood and a reduction in the characters [sic] health gauge. Once this reaches a critical point, they will become immobile and eventually die and respawn or are revived by a team mate. Finisher cut scenes provide the best examples of realistic looking violence, although powerful looking the effects are not classed as very strong violence.”
The game has been out since February 2019 for PC, Xbox One, and PlayStation 4; in March 2021, it was released for Nintendo Switch as well. Figure 2, upper panel, shows a screenshot of typical play.
Outriders has a rating of PEGI 18 (i.e., adults only), with explicit content descriptors for violence and bad language. The game is a popular third-person shooter. PEGI outlines the following content-specific issues:
“This game contains frequent depictions of extreme violence towards human-like characters, including dismemberment and decapitation. When characters are impacted, there are strong blood and gore effects. Powerful weapons cause characters to explode into large splashes of blood and body parts. The game also includes depictions of violence towards defenceless human-like characters. There are multiple instances in which humans, who are restrained in some way, are tortured or killed. The most notable example occurs when a man, who is restrained by his wrists, is stabbed through his face and then kicked from a moving vehicle. This game also contains frequent use of strong language (‘fuck’).”
The game has been out since December 2020 for PC, PlayStation 4 and 5, Xbox One and Series X/S; in April 2021, it was released for Stadia. Figure 2, lower panel, shows a screenshot of typical play.
Measures
Time Spent Playing
The data set contains players' video game behavior as recorded by Electronic Arts and Square Enix on their servers. For each player, the publishers provided the start and end times of each session played during the study period. Specifically, the telemetry covers play from 2 weeks before the first wave (i.e., the time for which participants reported aggressive affect at the first wave) until the third wave (i.e., the 2 weeks before wave 3 until wave 3), for a total of 6 weeks (see Figure 1). A player typically had multiple sessions of play preceding each survey (i.e., wave). Because we were interested in total time played in a given 2-week period, we aggregated all sessions over each 2-week window preceding the 3 surveys. The accuracy with which companies log play behavior often depends on the player's internet connectivity and other technical limitations. In addition, each company has its own method of recording behavior (e.g., what counts as the start and end of a session). Therefore, following our previous procedures, we excluded sessions with negative durations or durations above 10 hours. Going forward, we work with hours played per day. See Figure 3 for distributions of hours per day for each game and wave.
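To illustrate this aggregation step, the following R sketch shows how sessions could be filtered and summed into hours per day. The column names (player_id, session_start, session_end, wave, wave_date) are hypothetical; the exact processing code is available at https://osf.io/zd6c2/.

```r
# A minimal sketch of the session aggregation (hypothetical column names;
# the actual processing code is at https://osf.io/zd6c2/).
library(dplyr)
library(lubridate)

hours_per_day <- sessions |>
  mutate(duration = as.numeric(difftime(session_end, session_start,
                                        units = "hours"))) |>
  # Exclude implausible telemetry: negative durations or sessions > 10 hours
  filter(duration >= 0, duration <= 10) |>
  # Attach each player's survey dates, then keep sessions that fall in the
  # 2-week window preceding each wave
  inner_join(waves, by = "player_id") |>
  filter(session_start >= wave_date - days(14), session_start < wave_date) |>
  # Express total play in each window as hours per day
  group_by(player_id, wave) |>
  summarise(hours_per_day = sum(duration) / 14, .groups = "drop")
```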
Aggressive Affect
In Vuorre et al. (2021), we asked participants about their affective well-being with the Scale of Positive and Negative Experience (SPANE; Diener et al., 2010). Participants reported how they had been feeling over the previous two weeks on six positive and six negative items. They indicated how often they had been experiencing each of those feelings on a Likert-type scale from 1 (very rarely or never) to 7 (very often or always). One of the negative items ("Angry") assessed aggressive affect over the past two weeks. In Vuorre et al. (2021), we analyzed the aggregate of all items, including "Angry", as a measure of well-being; here, we used this individual item as our outcome variable. See Figure 3 for distributions of aggressive affect for each game and wave.
Results
To answer our research questions, we examined how time spent playing the two games of interest, Apex Legends and Outriders, affected self-reported aggressive affect, and vice versa. At each of the 3 waves, participants reported their affective well-being for the 2 weeks before the survey (i.e., up to the survey); we calculated time spent playing for the same time frame. The parameters of interest were thus the cross-lagged within-person associations between average play (in hours per day) in the 2-week window before a survey and aggressive affect in the following 2-week window, and vice versa; we identified these as the most adequate estimates of causal effects. Figure 4 shows scatterplots of the association between hours played at the previous wave (e.g., weeks (0,2]) and aggressive affect at the current wave (e.g., weeks (2,4]).
To obtain estimates of the cross-lagged within-person associations, we ran random-intercept cross-lagged panel models (RI-CLPM), grouped per game (Hamaker, 2012; Hamaker et al., 2015). These models are popular in the field because they separate stable between-person differences from within-person changes. They therefore provide an estimate of how deviations from a player's typical daily hours of play during a two-week period affect feelings of anger in the following two weeks, and vice versa. By including the trait-like, stable components of play and aggressive affect as well as their covariances, these models can account for stable confounders. The model also allows covariances between the (residuals of the) within-person components to control for confounding at the current wave, but it doesn't control for time-varying confounders (Rohrer & Murayama, 2021).
Because there is no reason to believe that effects would systematically vary from one wave to the next, we constrained the cross-lagged paths (within each game) to be equal across waves. We estimated these models with the lavaan package (Rosseel, 2012) in R (R Core Team, 2022) and relied on full information maximum likelihood to handle missing data. Missingness occurred only on the aggressive affect measure: Our analysis sample (those who had reported aggressive affect at least once and had played during at least one wave) had 0s on play when a participant didn't play, but missing values where aggressive affect self-reports were absent. Figure 5 visualizes the estimates. We followed recommendations to present unstandardized effects as primary outcomes (Baguley, 2009), but also discuss standardized effects for comparison with the literature. For full parameter estimates and additional information, including all standardized effects, see the online supplementary materials.
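To make the model specification concrete, the following lavaan sketch shows a standard RI-CLPM with cross-lagged paths constrained equal across waves, in the style of Hamaker et al. (2015). The variable names (play1–play3, anger1–anger3) and data frame d are hypothetical; the exact model code is in the online materials.

```r
# A sketch of the RI-CLPM (hypothetical variable names; exact model code
# at https://osf.io/zd6c2/). Fit separately for each game.
library(lavaan)

riclpm <- "
  # Stable, trait-like between-person components (random intercepts)
  ri_play  =~ 1*play1  + 1*play2  + 1*play3
  ri_anger =~ 1*anger1 + 1*anger2 + 1*anger3
  ri_play  ~~ ri_play
  ri_anger ~~ ri_anger
  ri_play  ~~ ri_anger

  # Within-person (person-centered) components
  wp1 =~ 1*play1;  wp2 =~ 1*play2;  wp3 =~ 1*play3
  wa1 =~ 1*anger1; wa2 =~ 1*anger2; wa3 =~ 1*anger3

  # Autoregressive (a, b) and cross-lagged (c, d) paths,
  # constrained to be equal across waves
  wp2 ~ a*wp1 + c*wa1
  wp3 ~ a*wp2 + c*wa2
  wa2 ~ b*wa1 + d*wp1
  wa3 ~ b*wa2 + d*wp2

  # Within-wave covariances between the (residuals of the) within components
  wp1 ~~ wa1
  wp2 ~~ wa2
  wp3 ~~ wa3

  # (Residual) variances of the within components
  wp1 ~~ wp1; wp2 ~~ wp2; wp3 ~~ wp3
  wa1 ~~ wa1; wa2 ~~ wa2; wa3 ~~ wa3
"

fit <- lavaan(
  riclpm, data = d,
  missing = "fiml",      # full information maximum likelihood
  meanstructure = TRUE,  # estimate observed means
  int.ov.free = TRUE     # free the observed intercepts
)
summary(fit, ci = TRUE)
```

Using the base lavaan() function (rather than sem()) keeps the residual variances of the observed indicators fixed at zero, as the RI-CLPM requires.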
Our first research question asked about the effect of play on aggressive affect. According to our model, playing one hour more per day than a player usually does (in a given two-week window) has little to no effect on how much anger the player reports in the following two weeks. The effect is virtually zero for both games, even if it's nominally negative. If we define a smallest effect size of interest of half a point on the 7-point anger scale (i.e., 7% of the response range), both CIs are statistically equivalent to an effect smaller than that threshold (Lakens et al., 2018). How much would a player need to play to reach that threshold? According to our model, the average Apex Legends player would need to play 50 hours more per day than they already typically play to experience a half-point decrease in anger. For Outriders players, that number would still be 25 hours. However, there is uncertainty around these average estimates. For example, if the true effect were the upper bound of the 95% CI, Apex Legends players would need to play 1.6 hours and Outriders players 5.6 hours more than usual to experience a half-point increase in anger. Conversely, it is equally possible that the true effect lies at the lower bound of the 95% CI; in that case, similarly large increases in play would relate to lower levels of aggressive affect.
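As a back-of-the-envelope check of these numbers, the coefficient values below are back-computed from the reported hours; they are illustrative, not the exact model output.

```r
# Extra daily hours needed to shift anger by the SESOI of 0.5 scale points,
# given a cross-lagged coefficient b (scale points per extra hour/day).
# The b values are back-computed from the hours reported in the text,
# not exact model estimates.
sesoi <- 0.5
hours_needed <- function(b) sesoi / abs(b)

hours_needed(0.01)  # 50 hours/day (Apex Legends point estimate)
hours_needed(0.02)  # 25 hours/day (Outriders point estimate)
hours_needed(0.31)  # ~1.6 hours/day (Apex Legends upper CI bound)
hours_needed(0.09)  # ~5.6 hours/day (Outriders upper CI bound)
```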
Half a point on a 7-point scale is admittedly arbitrary. What about the standardized effects in comparison to the literature? The largest standardized cross-lagged effect was r = –.03 [–.20, .13] (see online materials). That effect is below the conservative threshold of r = .20 that Ferguson (2009) suggests as practically significant, but also below the more liberal smallest effect size of interest of r = .10 recently used in the literature (Ferguson & Wang, 2019; Orben & Przybylski, 2019). In fact, our standardized estimates are smaller than the meta-analytic estimate of the relation between delinquency and variables specifically selected to be unrelated to aggression (Ferguson & Heene, 2021). What if we assumed the worst case from a public health perspective? Even if we took the upper bound of the confidence interval as the true effect, it would barely clear the more liberal threshold. In relative terms, our estimate falls at the lowest end of effects in the literature: Hilgard and colleagues (2017) employ several bias-correction techniques, and their most liberal range of effects is r = .16 [–.04, .35]. In sum, our estimated effect of playing these two games on feelings of anger is practically zero.
As for our second research question: What was the effect of aggressive affect on subsequent play? Were weeks in which a player felt angrier than usual followed by increased or decreased play? Our model estimates suggest, once more, that there is little to no effect. Although both estimates are positive, they were equivalent to extremely small effects. Apex Legends players who reported feeling one point angrier than usual on the seven-point anger scale for a given two-week period played 1.8 minutes more per day in the subsequent two weeks (i.e., 0.03 hours). For Outriders players, the same change led to 1.2 minutes more play. Even taking the larger of the two upper CI bounds as the true effect would only result in 4.8 minutes more play per day. A one-point increase on a seven-point scale would, by all accounts, represent a large effect; by the same token, we consider it a large 'treatment'. Even such a large change in aggressive affect leads to extremely small changes in play. Therefore, we believe it is fair to conclude that the reciprocal effect is practically insignificant. Such a conclusion aligns with standardized thresholds; the largest standardized effect of aggressive affect was r = .05 [–.04, .14].
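The conversion from the model's hours-per-day scale to minutes is simple arithmetic; note that the 0.02 and 0.08 coefficients below are inferred from the minutes reported above, not taken directly from the model output.

```r
# Reciprocal cross-lagged estimates (extra hours of play per day, per
# one-point increase in anger), converted to minutes per day. The 0.02
# and 0.08 values are back-computed from the reported minutes.
60 * 0.03  # Apex Legends: 1.8 minutes more play per day
60 * 0.02  # Outriders: 1.2 minutes more play per day
60 * 0.08  # larger of the two upper CI bounds: 4.8 minutes
```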
The model also informed us about trait-like differences between people in the form of the covariances between the random intercepts. These covariances weren't significantly different from 0 for either game. For Apex Legends, those who generally spent more time playing reported 0.01 [–0.08, 0.10] points higher aggressive affect (standardized: r = 0.02 [–0.12, 0.15]). For Outriders, those who generally spent more time playing reported slightly lower aggressive affect, –0.04 [–0.14, 0.06] (standardized: r = –0.10 [–0.35, 0.14]). Neither of these associations is sizeable enough to suggest substantial between-person confounding.
Discussion
Research studying the potential effects of playing video games that feature conflict, combat, and competition has faced an impasse in recent years: Experiments generally have low ecological validity, and observational studies suffer from poor measurement. This lack of adequate data has limited the inferences we can draw about the effects of playing so-called violent video games on aggression. Here, we conducted a secondary analysis of a data set we collected for an earlier project, one that contains accurate, behavioral measures of play. Our primary research question asked about the effect of playing two online shooters, Apex Legends and Outriders, on aggressive affect. Our secondary research question asked about reciprocity: the effect of aggressive affect on playing these two games. We found effects that were statistically equivalent to extremely small effects, no matter the game or the direction of the effect. Our results speak against meaningful effects of playing violent games on aggressive affect and vice versa.
Effect Sizes
What leads us to conclude that the effect sizes we found are not meaningful? For one, in absolute terms, they are virtually zero. Even if we ignore that the effects were nonsignificant, a 0.02-point reduction of feelings of anger on a seven-point scale, even at the population level, seems practically insignificant to us (Anvari et al., 2021). To reach a half-point change on the scale, the average player would need to play more hours per day than there are in a day. Even if we assumed that effects accumulate over the two-week windows we studied (Götz et al., 2021), we'd reach half a point only after 25 such windows, roughly a year, assuming no fluctuations.
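Under that (strong) assumption of linear accumulation, the arithmetic is:

```r
# Linear accumulation of a 0.02-point reduction per two-week window
# toward the half-point threshold; a simplifying assumption, not a model.
0.5 / 0.02        # 25 two-week windows
(0.5 / 0.02) * 2  # 50 weeks, roughly a year
```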
As we said earlier, choosing half a point on a 7-point scale is arbitrary. The field clearly needs theoretical and empirical work to identify a smallest effect size of interest, especially on the unstandardized scale (Anvari et al., 2021; Baguley, 2009; Lakens et al., 2018). The largest standardized effect we found was minuscule and much closer to nonsense correlations than to meaningful effects (Ferguson & Wang, 2019). The standardized effects we found did not even clear the threshold of r = .10 that Ferguson and Heene (2021) identify among variables selected to be unrelated to each other as a minimum cut-off for ruling out artifactual effects. Moreover, to assess the practical significance of our effect, we need to know how aggressive affect translates into acts of aggression and how severe acts of aggression must be to harm ourselves or others. Assuming that our effect is below a threshold for practical significance, that aggressive affect doesn't translate directly into acts of aggression, and that not all acts of aggression cause severe harm, we feel confident in calling our effects inconsequential.
Such a conclusion of small or even negligible effects is in line with more recent meta-analyses (Drummond et al., 2020; Ferguson, 2015; Hilgard et al., 2017; Mathur & VanderWeele, 2019). It also aligns with studies following current best practices (Ferguson & Wang, 2019; Hilgard et al., 2019; Johannes et al., 2022; Przybylski & Weinstein, 2019). In fact, the effects we estimated are barely compatible with estimates of a small-to-medium-sized true relation between playing violent video games and aggressive affect in cross-sectional work (Hilgard et al., 2017, Table 3). Previous work mostly relied on inaccurate self-reports and rarely employed open science practices. The discrepancy between our findings and the wider field highlights once more that we must continue transparent work with video game companies to acquire accurate behavioral data.
Generalizability and Causality
Although we had a large sample of players, our inferences are limited for several reasons. First, we sampled players and play from only two games. Second, as we discuss in detail in Vuorre et al. (2021), players self-selected into participation. If time spent playing, aggressive affect, or their relation influenced whether someone participated, missingness isn't random anymore, and the generality of our conclusions is limited. For example, imagine there's a group of players who find playing these games relaxing and who are therefore more likely to help researchers out by participating. Then the effect of time spent playing also determines whether someone participates, and we'd only be making inferences about a group of players for whom there is no effect.
Such self-selection also has consequences for causal inference. Our research questions explicitly asked about causal effects. Because we have observational data, we must state the conditions under which a causal interpretation holds (Hernán, 2018). Self-selection (and the resulting attrition) can bias the estimated causal effect: A true negative effect can be biased toward the null if participation comes mostly from players for whom the effect is null or positive. Moreover, our model controlled for stable confounders, but not for time-varying confounders (Rohrer & Murayama, 2021). For example, an increase in leisure time may lead to more play, but also to less frustration and anger, thereby biasing a true positive effect toward the null. Our conclusion about the effect of playing these two games on aggressive affect only holds under these assumptions, not to speak of choosing the correct time frame for the lag: Any effects may take much longer to accumulate (Dormann & Griffin, 2015; Lee et al., 2021; Vuorre et al., 2021).
Data Sharing
In a cumulative science of video games, researchers must build on each other's work and share all resources they produce (Merton, 1973). This level of transparency and collaboration not only enables cumulative knowledge building; it also increases trust among the funders of our work (Vazire, 2017). The field of video games research has evidently done a poor job at transparency, which has hurt its credibility (Elson & Przybylski, 2017). Here, we hope to have provided an example of cumulative, resource-efficient science. The data set we relied on is one of the most authoritative currently available; when we originally introduced it to the scientific community, we invited researchers to use it to test research questions about the psychology of video game play. Here, we demonstrate how such a secondary analysis can yield unique and important insights. In Vuorre et al. (2021), we analyzed the full well-being scale; here, we analyzed a single item of that scale, but there are many more variables in the data. We believe we've only scratched the surface of the original data source and invite other researchers to conduct more work with these data.
Conclusion
Research on games featuring violence has long been a field of low credibility, suffering from poor research practices as well as poor data quality. Like few other fields, it can benefit from open collaborations with industry partners within a framework of open data. We demonstrate how such open data enable the field to test research questions in a cumulative manner. Playing two online shooters didn't cause meaningful changes in aggressive affect; we're confident that future work can use the same data to answer more questions about the psychology of play.
Author Contributions
Conceptualization: N.J., M.V., and A.K.P.
Data curation: N.J., M.V., and K.M.
Formal analysis: N.J., M.V., and K.M.
Funding acquisition: A.K.P.
Investigation: N.J., M.V., and A.K.P.
Methodology: N.J., M.V., K.M., and A.K.P.
Project administration: N.J., M.V., and A.K.P.
Resources: A.K.P.
Software: N.J., M.V., and K.M.
Supervision: A.K.P.
Validation: N.J., M.V., and K.M.
Visualization: N.J., M.V., K.M., and A.K.P.
Writing - original draft: N.J.
Writing - review & editing: N.J., M.V., K.M., and A.K.P.
Funding
This research was supported by the Huo Family Foundation.
Data Accessibility Statement
The raw data, materials, and code are available at https://osf.io/zd6c2/. Readers can find a public preprint of this manuscript at https://psyarxiv.com/gt8ze.
Competing Interests
We declare no conflicts of interest. The funder had no role in study design, data analysis, decision to publish, or preparation of the manuscript. N.J. is an editor at Collabra Psychology. He was not involved in the review process of the article.