There is a lively debate about whether playing games that feature armed combat and competition (often referred to as violent video games) has measurable effects on aggression. Unfortunately, without accurate behavioral data, the insights that debate has produced remain preliminary. Here, we present a secondary analysis of the most authoritative longitudinal data set available on the issue, drawn from our previous study (Vuorre et al., 2021). We analyzed objective in-game behavior, provided by video game companies, in 2,580 players over six weeks. Specifically, we asked how time spent playing two popular online shooters, Apex Legends (PEGI 16) and Outriders (PEGI 18), affected self-reported feelings of anger (i.e., aggressive affect). We found that playing these games did not increase aggressive affect; the cross-lagged association between game time and aggressive affect was virtually zero. Our results showcase the value of obtaining accurate industry data, as well as of an open science of video games and mental health that allows cumulative knowledge building.

For more than four decades, the discourse surrounding video games has been dominated by the idea that playing games causes players to become aggressive and antisocial (Blumenthal, 1976). Indeed, the social sciences know few topics as contentious as research on games that feature conflict, combat, and competition—referred to in the literature, perhaps overly simplistically, as violent video games (Ferguson & Konijn, 2015; Grimes et al., 2008; Hall et al., 2011; Orben, 2020). The evidence for effects of these games on aggression is contested (Bushman et al., 2015; Bushman & Anderson, 2002; Huesmann, 2010; Ivory et al., 2015; Markey et al., 2015). The quality of that evidence is critical not only for scientific debate; public stakeholders regularly invite social scientists to give expert opinions and file legal briefs in court decisions on video games (Elson et al., 2019; Ferguson, 2018; Hall et al., 2011). A central shortcoming of the evidence so far is poor data quality: Most studies investigate the effects of playing violent video games without actually measuring such play (Markey, 2015; Weber et al., 2020). If we don’t measure the behavior in question, we cannot advise policymakers on its effects (IJzerman et al., 2020).

Aggression includes not only physical and verbal aggression, but also hostility biases and feelings of anger, referred to in the literature as aggressive affect (Anderson & Bushman, 2002). The most prominent models that aim to explain the effect of playing violent video games on feelings of anger rely on a mix of social learning and excitation transfer (Allen et al., 2018): Just as overt physical and verbal acts of violence lead to arousal, violence in video games heightens arousal in the player. The player then carries over this arousal and feelings of anger into their lives outside of the play session. Through repeating this experience over many sessions, the player comes to experience feelings of anger regularly. However, many scholars have criticized such a mechanism as implausible (Ferguson & Dyck, 2012).

Compounding the lack of a clear theoretical account is the inconsistent quality of evidence (Drummond & Sauer, 2019). Several older meta-analyses conclude that playing violent video games causes aggression (Anderson et al., 2010; Anderson & Bushman, 2001). Many researchers have criticized not only that conclusion, but also the statistical analyses leading to it (Hilgard et al., 2017). Recent meta-analyses that address these problems find little to no association between playing violent video games and aggression (Drummond et al., 2020; Ferguson, 2015; Furuya-Kanamori & Doi, 2016). Moreover, much of the ‘raw material’ of these meta-analyses has been shown to result from poor research practices (Drummond & Sauer, 2019; Elson & Przybylski, 2017; Hilgard et al., 2017)—a well-known problem, because a meta-analysis can only produce inferences as good as the individual studies it synthesizes (Ioannidis, 2016; Vosgerau et al., 2019). Research following current gold standards of full transparency aligns with the meta-analyses showing little to no effects (Ferguson & Wang, 2019; Hilgard et al., 2019; Johannes et al., 2022; Przybylski & Weinstein, 2019). Yet, even those advances haven’t addressed one of the most important limitations of the literature: poor data quality (Davidson et al., 2021).

Unlike basic behaviors that can be isolated in the lab more easily, video game play is complex and thus difficult to measure—let alone manipulate (Eronen & Bringmann, 2021; Markey, 2015). The typical experiment has one group play a game featuring violence and another group play a game not featuring violence (e.g., Miedzobrodzka et al., 2021). Despite recent improvements in customizing manipulations (Hilgard et al., 2019), such artificial designs aren’t likely to generalize beyond the lab (Sherry, 2007; Yarkoni, 2020). Designs that aim to measure ‘natural’ video game play outside the lab have had to rely on self-reports of video game play (e.g., Lee et al., 2021). However, such self-reports are inaccurate (Johannes et al., 2021; Kahn et al., 2014; Parry et al., 2021). The lone example of a field experiment manipulating natural play demonstrated little effect of playing a game featuring violence on aggression (Williams & Skoric, 2005). As a result, we find ourselves in a bind: either we rely on artificial designs or on poor measures; both hamper our inferences. To get out of this bind, we need accurate behavioral data—the kind the games industry collects.

Such data are only useful if they become an open resource for the community (Merton, 1973; Nelson & Simmons, 2018). Whereas many fields in the social sciences have come to see the value of full transparency—sharing materials, data, and code—for a truly collaborative, cumulative knowledge base (Christensen et al., 2019), the field of video games research has a history of opaqueness and, as a result, low credibility (Elson & Przybylski, 2017; Vazire, 2017). Recently, our team tried to contribute to such a knowledge base. We collaborated with seven games companies to produce a large, longitudinal data set that combines behavioral data with self-reports of players (Vuorre et al., 2021). We made this data set publicly available and invited other researchers to use the data to test their research questions. Here, we deliver an example of such a secondary analysis to address the challenge of understanding the effects of video games on aggressive affect.

In this study, we tested the effect of playing two online shooters that feature competitive armed combat, Apex Legends and Outriders, on aggressive affect in a large sample of players over time. Following recent developments in the field of media effects research, we also investigated the other direction (Johannes et al., 2022): the effect of aggressive affect on playing those two games. Rather than relying on the questionable accuracy of self-reported play, we used direct behavioral measures of playing games. This way, we aim to contribute to the discourse in the literature using behavioral data that haven’t been available until recently. Not only do we provide an important test of the central question of the effect of playing so-called violent games; by analyzing an existing open data set, we also deliver a concrete example of the value of academia-industry collaborations grounded in open science. In the mold of recent secondary data analysis following open science principles (Ferguson & Wang, 2019), we hope that our example showcases the value of transparency to the field.

The data from Vuorre et al. (2021) consist of several thousand active players of seven popular video games who filled out a survey three times over six weeks, each wave separated by two weeks. Their responses were then combined with their actual play behavior, provided by the game publishers. At each of the three waves, respondents reported their aggressive affect for the previous two weeks (the past two weeks up until the survey); for the same time frame, we calculated their total time spent playing. For details on the entire data set, see Vuorre and colleagues (2021). Here, we detail the subset of the data we analyzed, namely the data for Apex Legends and Outriders.

Data, Materials, and Code

For the raw data, materials, and more details on the data set, we refer readers to the online supplementary materials of our previous project at https://osf.io/fb38n/. For the current paper, we provide all code to process and analyze these raw data at https://osf.io/zd6c2/. There, we document all steps from raw data to analysis.

Participants

As detailed in Vuorre et al. (2021), we collaborated with major games publishers who sent an email to active players of their titles, inviting them to participate in three surveys. Electronic Arts (Apex Legends) invited 900,000 players; Square Enix (Outriders) invited 90,000 players; both player bases were English speaking, from the US, UK, and Canada. Active players were defined as those who had played the game in the previous two weeks. The emails invited participants to a survey hosted by our department. We informed participants that they would be contacted for three surveys about their well-being and motivations for playing. We also informed them that we would combine their responses with their game play data and secured their informed consent for this protocol (SSH_OII_CIA_21_011).

1,609 Apex Legends players and 2,501 Outriders players gave their consent to participate, which corresponds to a 0.18% and 2.78% response rate, respectively. We were interested in the effect of playing these titles on aggressive affect. Therefore, we only analyzed data from participants who had played and had reported their feelings of anger for at least one wave. Of the players who consented to participate, 1,278 (79%) Apex Legends players and 1,850 (90%) Outriders players reported their feelings of anger at least once; of those, 1,092 (85%) Apex Legends players and 1,488 (80%) Outriders players played for at least one wave. Those players were our final sample.

The publishers then invited players to participate in waves 2 and 3 (see Figure 1). There were roughly two weeks between waves; depending on the publisher and when participants chose to respond, those intervals varied (Interquartile range = [13.0, 14.7] days). There was notable attrition: 21% (Apex Legends) and 24% (Outriders) of the sample at the first wave remained at the third wave. The sample was mostly male and on average 33 years old (see Table 1).

Figure 1. Time frame for data collection.
Table 1. Demographic features of sample.
Characteristic               Overall, N = 2,580¹   Apex Legends, N = 1,092¹   Outriders, N = 1,488¹
Age                          33 (25, 41)           25 (20, 32)                38 (32, 45)
  Missing
Gender
  Man                        2,308 (90%)           948 (87%)                  1,360 (92%)
  Non-binary / third gender  38 (1.5%)             23 (2.1%)                  15 (1.0%)
  Prefer not to say          31 (1.2%)             15 (1.4%)                  16 (1.1%)
  Woman                      198 (7.7%)            103 (9.5%)                 95 (6.4%)
Experience                   25 (16, 31)           17 (10, 25)                30 (22, 35)
  Missing                    11

¹ Median (IQR); n (%). Experience = years of having played video games.

Target Games

The two games we analyzed are primarily online shooters that feature competitive armed combat. According to the Pan European Game Information (PEGI), which assesses games on how appropriate they are for players of different ages in 38 European countries, neither game is suited for younger players. Apex Legends has a rating of PEGI 16, with an explicit content description for violence. The game is a popular first-person shooter, whose primary game mode is battle royale. PEGI outlines the following content-specific issues for the game:

“Players can use a range of modern military weapons such as pistols, sniper rifles, automatic guns, frag grenades and knives. Successful hits from a firearm will degrade the health a character [sic] over time and is indicated by some splattering of blood and a reduction in the characters [sic] health gauge. Once this reaches a critical point, they will become immobile and eventually die and respawn or are revived by a team mate. Finisher cut scenes provide the best examples of realistic looking violence, although powerful looking the effects are not classed as very strong violence.”

The game has been out since February 2019 for PC, Xbox One, and PlayStation 4; in March 2021, it was released for Nintendo Switch as well. Figure 2, upper panel, shows a screenshot of typical play.

Figure 2. Screenshots of the two games. Upper panel shows Apex Legends. Lower panel shows Outriders.

Outriders has a rating of PEGI 18 (i.e., adults only), with an explicit content description for violence and bad language. The game is a popular third-person shooter. PEGI outlines the following content-specific issues:

“This game contains frequent depictions of extreme violence towards human-like characters, including dismemberment and decapitation. When characters are impacted, there are strong blood and gore effects. Powerful weapons cause characters to explode into large splashes of blood and body parts. The game also includes depictions of violence towards defenceless human-like characters. There are multiple instances in which humans, who are restrained in some way, are tortured or killed. The most notable example occurs when a man, who is restrained by his wrists, is stabbed through his face and then kicked from a moving vehicle. This game also contains frequent use of strong language (‘fuck’).”

The game has been out since December 2020 for PC, PlayStation 4 and 5, Xbox One and Series X/S; in April 2021, it was released for Stadia. Figure 2, lower panel, shows a screenshot of typical play.

Measures

Time spent playing

The data set contains players’ video game behavior that Electronic Arts and Square Enix recorded on their servers. For each player, the game publishers provided the start and end times of each session a participant played during the study period. Specifically, the telemetry covers play from 2 weeks before the first wave (i.e., the time for which participants reported aggressive affect at the first wave) until the third wave (i.e., the 2 weeks before wave 3 until wave 3) for a total of 6 weeks (see Figure 1). A player typically had multiple sessions of play preceding each survey (i.e., wave). Because we were interested in total time played for a given 2-week period, we aggregated all sessions over each 2-week window preceding the 3 surveys. The accuracy of logging game play behavior on the companies’ side often depends on the player’s internet connectivity and other technical limitations. In addition, each company has its own method of recording behavior (e.g., what counts as the start and end of a session). Therefore, following our previous procedures, we excluded sessions with durations below 0 hours (i.e., logging artifacts) or above 10 hours. Going forward, we work with hours played per day. See Figure 3 for distributions of hours per day for each game and wave.
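
The aggregation described above can be sketched as follows. This is a minimal illustration, not the original pipeline; the function and variable names are our own, and we assume sessions arrive as (start, end) timestamp pairs.

```python
from datetime import datetime, timedelta

def hours_per_day(sessions, window_start, window_end, max_session_h=10.0):
    """Aggregate session durations within a 2-week window into hours per day.

    Sessions with non-positive durations (logging artifacts) or durations
    above `max_session_h` hours are excluded, mirroring the exclusion rule
    described in the text.
    """
    total_h = 0.0
    for start, end in sessions:
        duration_h = (end - start).total_seconds() / 3600
        if not (0 < duration_h <= max_session_h):
            continue  # drop implausible sessions
        if window_start <= start < window_end:
            total_h += duration_h
    return total_h / (window_end - window_start).days

# Hypothetical example: three sessions in a 14-day window before a survey wave
w0 = datetime(2021, 4, 1)
w1 = w0 + timedelta(days=14)
sessions = [
    (w0 + timedelta(days=1, hours=20), w0 + timedelta(days=1, hours=22)),  # 2 h
    (w0 + timedelta(days=3, hours=18), w0 + timedelta(days=3, hours=19)),  # 1 h
    (w0 + timedelta(days=5), w0 + timedelta(days=5, hours=12)),            # 12 h, excluded
]
print(hours_per_day(sessions, w0, w1))  # 3 h over 14 days, ~0.21 h/day
```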

Figure 3. Distribution of hours played per day and feelings of anger for each game and wave.

Points are the raw data. We trimmed hours at the 3h mark for clarity, omitting 2.8% of values.


Aggressive Affect

In Vuorre et al. (2021), we asked participants about their affective well-being with the scale of positive and negative experiences (SPANE; Diener et al., 2010). Participants reported how they had been feeling over the previous two weeks on six positive and six negative items. They indicated how often they had been experiencing each of those feelings on a Likert-type scale from 1 (Very rarely or never) to 7 (Very often or always). One of the negative items (“Angry”) assessed aggressive affect over the past two weeks. In Vuorre et al. (2021), we analyzed the aggregate of all items, including “Angry”, as a measure of well-being. Here, we used this individual item as our outcome variable. See Figure 3 for distributions of aggressive affect for each game and wave.

To answer our research questions, we examined how the time spent playing the two games of interest, Apex Legends and Outriders, affected self-reported aggressive affect—and vice versa. At each of the 3 waves, participants reported their affective well-being for the 2 weeks before the survey (i.e., up to the time of the survey). We calculated time spent playing for the same time frame. In other words, the cross-lagged within-person associations between average play in hours in a 2-week window before the survey and aggressive affect in the two weeks after the survey—and vice versa—were the parameters of interest; we identified them as the most adequate estimates of causal effects. Figure 4 shows scatterplots of the association between hours played at the previous wave (e.g., weeks (0,2]) and aggressive affect at the current wave (e.g., weeks (2,4]).

Figure 4. Scatterplots of aggressive affect (in the current wave) and average hours played per day (at the previous wave).

Points are the raw data; lines represent generalized additive model regression lines; shades around those lines represent the 95% CI. We truncated hours played at the previous wave at 3h for clarity.


To obtain estimates of the cross-lagged within-person associations, we ran random intercepts cross-lagged panel models, grouped per game (Hamaker, 2012; Hamaker et al., 2015). These models are popular in the field because they separate stable between-person differences from within-person changes. Therefore, these models provide us with an estimate of how deviations from a player’s typical daily hours of play during a two-week period affect feelings of anger in the following two weeks—and vice versa. By including the trait-like, stable components of play and aggressive affect as well as their covariances, these models can account for stable confounders. The model also allows covariances between the (residuals of) within-person components to control for confounding at the current wave, but doesn’t control for time-varying confounders (Rohrer & Murayama, 2021).

Because there is no reason to believe that effects would systematically vary from one wave to the next, we constrained the cross-lagged paths (within each game) to be equal. We estimated these models with the lavaan package (Rosseel, 2012) in R (R Core Team, 2022), and relied on full information maximum likelihood to handle missing data. Missingness occurred only on the aggressive affect measure. Our analysis sample (those who had reported aggressive affect at least once and played for at least one wave) had values of 0 for play when a participant didn’t play, but a missing value for missing aggressive affect self-reports. Figure 5 visualizes the estimates. We followed recommendations to present unstandardized effects as primary outcomes (Baguley, 2009), but also discuss standardized effects for comparison to the literature. For full parameter estimates and additional information, including all standardized effects, see the online supplementary materials.
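
A random intercepts cross-lagged panel model of this kind can be sketched in lavaan model syntax roughly as below. This is a minimal three-wave sketch, not our original analysis script: the variable names (x1–x3 = hours played per day, y1–y3 = anger) and the data frame `d` are placeholders, and some standard RI-CLPM constraints (e.g., zeroing covariances between the random intercepts and the wave-1 within components) are omitted for brevity.

```r
riclpm <- "
  # Between-person (trait) components: random intercepts
  RIx =~ 1*x1 + 1*x2 + 1*x3
  RIy =~ 1*y1 + 1*y2 + 1*y3

  # Within-person (state) components
  wx1 =~ 1*x1; wx2 =~ 1*x2; wx3 =~ 1*x3
  wy1 =~ 1*y1; wy2 =~ 1*y2; wy3 =~ 1*y3

  # Fix observed residual variances to zero so all variance is partitioned
  x1 ~~ 0*x1; x2 ~~ 0*x2; x3 ~~ 0*x3
  y1 ~~ 0*y1; y2 ~~ 0*y2; y3 ~~ 0*y3

  # Autoregressive and cross-lagged paths; the labels a (play -> anger)
  # and b (anger -> play) constrain the cross-lagged effects to be
  # equal across waves, as described in the text
  wy2 ~ wy1 + a*wx1
  wy3 ~ wy2 + a*wx2
  wx2 ~ wx1 + b*wy1
  wx3 ~ wx2 + b*wy2

  # Wave-specific covariances between within-person components
  wx1 ~~ wy1; wx2 ~~ wy2; wx3 ~~ wy3

  # Covariance between the random intercepts (stable confounding)
  RIx ~~ RIy
"
# fit <- lavaan::lavaan(riclpm, data = d, missing = "fiml", meanstructure = TRUE)
```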

Figure 5. Estimates and 95% CI of the cross-lagged regressions of the random intercept cross-lagged panel model for each game.

Estimates are unstandardized.


Our first research question asked about the effect of play on aggressive affect. According to our model, playing one hour more per day than a player usually does (in a given two-week window) has little to no effect on how much anger the player reports in the following two weeks. The effect is virtually zero for both games, even if it’s nominally negative. If we were to define a smallest effect size of interest of half a point on the 7-point anger scale (i.e., 7% of the response range), both CIs indicate effects statistically equivalent to less than that threshold (Lakens et al., 2018). How much would a player need to play to reach that threshold? According to our model, the average Apex Legends player would need to play 50 hours more per day than they typically do to experience a half-point decrease of anger. For Outriders players, that number would still be 25 hours. However, there is uncertainty around these average estimates. For example, if the true effect were the upper bound of the 95% CI, Apex Legends players would need to play 1.6 hours and Outriders players 5.6 hours more than usual to experience a half-point increase of anger. Conversely, it is equally possible the true effect could be at the lower bound of the 95% CI. In that case, a similarly large increase in play could relate to lower levels of aggressive affect.
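
The conversion behind the “50 hours more per day” figure is simple arithmetic: the smallest effect of interest divided by the absolute unstandardized coefficient. The coefficients below (–0.01 for Apex Legends, –0.02 for Outriders) are approximations implied by the numbers reported in the text, not exact model estimates.

```python
def extra_hours_needed(sesoi, coef):
    """Hours/day above a player's norm needed to shift anger by `sesoi` points,
    given an unstandardized coefficient `coef` (anger points per hour/day)."""
    return sesoi / abs(coef)

SESOI = 0.5  # half a point on the 7-point anger scale
print(extra_hours_needed(SESOI, -0.01))  # Apex Legends: ~50 hours/day
print(extra_hours_needed(SESOI, -0.02))  # Outriders: ~25 hours/day
```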

Half a point on a 7-point scale is admittedly arbitrary. What about the standardized effects in comparison to the literature? The largest standardized cross-lagged effect was r = –.03 [–.20, .13] (see online materials). That effect is below the conservative threshold of r = .20 that Ferguson (2009) suggests as practically significant, but also below the more liberal smallest effect size of interest of r = .10 recently used in the literature (Ferguson & Wang, 2019; Orben & Przybylski, 2019). In fact, our standardized estimates are smaller than the meta-analytic estimate of the relation between delinquency and variables specifically selected to be unrelated to aggression (Ferguson & Heene, 2021). What if we assumed the worst case scenario from a public health perspective? Even if we take the upper confidence limit as the true effect, it would barely clear the more liberal threshold. In relative terms, our estimate falls on the lowest end of effects in the literature. Hilgard and colleagues (2017) employ several bias correction techniques; their most liberal range of effects is r = .16 [–.04, .35]. In sum, our estimated effect of playing these two games on feelings of anger is practically zero.

As for our second research question: What was the effect of aggressive affect on subsequent play? Were weeks where a player felt angrier than usual followed by increased or decreased play? Our model estimates suggest, once more, that there is little to no effect. Although both estimates are positive, they were equivalent to extremely small effects. Apex Legends players who reported feeling one point angrier than they usually do on the seven-point anger scale for a given two-week period played 1.8 minutes more per day in the subsequent two weeks (i.e., 0.03 x 1h). For Outriders players, the same change led to 1.2 minutes more play. Even taking the larger of the two upper confidence limits as the true effect would only result in 4.8 minutes more play per day. A one-point increase on a seven-point scale would, by all accounts, represent a large effect; by the same token, we consider it a large ‘treatment’. Even such a large change in aggressive affect leads to extremely small changes in play. Therefore, we believe it is fair to conclude that the reciprocal effect is practically insignificant. Such a conclusion aligns with standardized thresholds; the largest effect of aggressive affect was r = .05 [–.04, .14].
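
The minutes-per-day figures follow directly from the coefficients, which are expressed in hours of daily play per one-point change in anger. The 0.03 value appears in the text; the 0.02 value for Outriders is the approximation implied by the 1.2-minute figure.

```python
def minutes_per_day(coef_hours):
    """Convert a coefficient in hours/day of play per anger point to minutes/day."""
    return coef_hours * 60

print(minutes_per_day(0.03))  # Apex Legends: ~1.8 min/day per extra anger point
print(minutes_per_day(0.02))  # Outriders: ~1.2 min/day per extra anger point
```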

The model also informed us about trait-like differences between people in the form of the covariances between the random intercepts. These covariances weren’t significantly different from 0 for either game. For Apex Legends, those with higher general time spent playing had 0.01 [–0.08, 0.10] higher aggressive affect. Standardized: r = 0.02 [–0.12, 0.15]. For Outriders, those with higher general time spent playing reported –0.04 [–0.14, 0.06] lower aggressive affect. Standardized: r = –0.10 [–0.35, 0.14]. Neither of these differences are sizeable enough to suggest substantial between-person confounding.

Research studying the potential effects of playing video games that feature conflict, combat, and competition has been facing an impasse in recent years. Experiments generally have low validity and observational studies suffer from poor measurement. This lack of adequate data has been limiting the inferences we can draw about the effects of playing so-called violent video games on aggression. Here, we conducted a secondary analysis of a data set we collected for an earlier project that had accurate, behavioral measures of play. Our primary research question asked about the effect of playing two online shooters, Apex Legends and Outriders, on aggressive affect. Our secondary research question asked about reciprocity: the effect of aggressive affect on playing these two games. We found that effects were equivalent to being extremely small—no matter the game or the direction of the effect. Our results speak against meaningful effects of playing violent games on aggressive affect and vice versa.

Effect Sizes

What leads us to conclude that the effect sizes we found are not meaningful? For one, in absolute terms, they are virtually zero. Even if we ignore that the effects were nonsignificant, a 0.02 reduction of feelings of anger on a seven-point scale, even on a population level, seems practically insignificant to us (Anvari et al., 2021). To reach a half-point increase on the scale, the average player would need to play more than there are hours in a day. Even if we assumed that effects accumulate over the two-week windows we studied (Götz et al., 2021), we’d reach half a point after 25 weeks—assuming no fluctuations.

As we said earlier, choosing half a point on a 7-point scale is arbitrary. The field clearly needs theoretical and empirical work to identify a smallest effect size of interest—especially on the unstandardized scale (Anvari et al., 2021; Baguley, 2009; Lakens et al., 2018). The largest standardized effect we found was minuscule and much closer to nonsense correlations than meaningful effects (Ferguson & Wang, 2019). The standardized effects we found did not even clear the threshold of r = .10 that Ferguson and Heene (2021) identify among variables selected to be unrelated to each other as a minimum cut-off to rule out artifactual effects. Moreover, to assess the practical significance of our effect, we need to know how aggressive affect translates to acts of aggression and how severe acts of aggression must be to harm ourselves or others. Assuming that our effect is below a threshold for practical significance, that aggressive affect doesn’t translate directly to acts of aggression, and that not all acts of aggression result in severe harm, we feel confident in calling our effects inconsequential.

Such a conclusion of small or even negligible effects is in line with more recent meta-analyses (Drummond et al., 2020; Ferguson, 2015; Hilgard et al., 2017; Mathur & VanderWeele, 2019). It also aligns with studies following current best practices (Ferguson & Wang, 2019; Hilgard et al., 2019; Johannes et al., 2022; Przybylski & Weinstein, 2019). In fact, the effects we estimated are barely compatible with estimates of a small to medium sized true relation between playing violent video games and aggressive affect in cross-sectional work (Hilgard et al., 2017; Table 3). Previous work mostly relied upon inaccurate self-reports and rarely employed open science practices. The discrepancy between our findings and the field highlights once more that we must continue transparent work with video game companies to acquire accurate behavioral data.

Generalizability and Causality

Although we had a large sample of players, our inferences are limited for several reasons. First, we sampled players and play from only two games. Second, as we discuss in detail in Vuorre et al. (2021), players self-selected for participation. If time spent playing, aggressive affect, or their relation influenced whether someone participated, missingness isn’t random anymore, and our conclusions are limited in how general they are. For example, imagine there’s a group of players that find playing these games relaxing—and are therefore more likely to help researchers out by participating in research. Then the effect of time spent playing also determines whether someone participates. As a result, we’d only make inferences to a group of players for whom there is no effect.

Such self-selection also has consequences for causal inferences. Our research questions explicitly asked about causal effects. Because we have observational data, we must state the conditions under which causality holds (Hernán, 2018). Self-selection (and resulting attrition) can mean a biased causal effect: A true negative effect can be biased toward the null if mostly players participate for whom the effect is null or positive. Moreover, our model controlled for stable confounders, but not for time-varying confounders (Rohrer & Murayama, 2021). For example, changes in the amount of leisure time may lead to more play, but also less frustration and feelings of anger, thereby biasing a true positive effect toward the null. Our conclusion about the effect of playing two games on aggressive affect only holds under these assumptions—not to speak of choosing the correct time frame for the lag. For example, any effects may take much longer to accumulate (Dormann & Griffin, 2015; Lee et al., 2021; Vuorre et al., 2021).

Data Sharing

A cumulative science of video games requires researchers to build on each other’s work and to share all the resources they produce (Merton, 1973). This level of transparency and collaboration not only enables cumulative knowledge building; it also increases trust among the funders of our work (Vazire, 2017). The field of video games research has evidently done a poor job at transparency—which has hurt the field’s credibility (Elson & Przybylski, 2017). Here, we hope to have shown an example of such cumulative, resource-efficient science. The data set we relied upon is one of the most authoritative currently available; when we originally introduced it to the scientific community, we called for researchers to use it to test research questions about the psychology of video game play. Here, we demonstrate how such a secondary analysis can yield unique and important insights. In Vuorre et al. (2021), we analyzed the full well-being scale. Here, we analyzed a single item of that scale, but there are many more variables in the data. We believe we’ve only scratched the surface of the original data source and invite other researchers to conduct more work with these data.

Conclusion

Research on games featuring violence has long been a field of low credibility that suffered from poor research practices as well as poor data quality. Like few other fields, it can benefit from open collaborations with industry partners within a framework of open data. We demonstrate how such open data enable the field to test research questions in a cumulative manner. Playing two online shooters didn’t cause meaningful changes in aggressive affect; we’re confident that future work can use the same data to answer more questions about the psychology of play.

Conceptualization: N.J., M.V., and A.K.P.

Data curation: N.J., M.V., and K.M.

Formal analysis: N.J., M.V., and K.M.

Funding acquisition: A.K.P.

Investigation: N.J., M.V., and A.K.P.

Methodology: N.J., M.V., K.M., and A.K.P.

Project administration: N.J., M.V., and A.K.P.

Resources: A.K.P.

Software: N.J., M.V., and K.M.

Supervision: A.K.P.

Validation: N.J., M.V., and K.M.

Visualization: N.J., M.V., K.M., and A.K.P.

Writing - original draft: N.J.

Writing - review & editing: N.J., M.V., K.M., and A.K.P.

This research was supported by the Huo Family Foundation.

The raw data, materials, and code are available at https://osf.io/zd6c2/. Readers can find a public preprint of this manuscript at https://psyarxiv.com/gt8ze.

We declare no conflicts of interest. The funder had no role in study design, data analysis, decision to publish, or preparation of the manuscript. N.J. is an editor at Collabra Psychology. He was not involved in the review process of the article.

Allen, J. J., Anderson, C. A., & Bushman, B. J. (2018). The general aggression model. Current Opinion in Psychology, 19, 75–80. https://doi.org/10.1016/j.copsyc.2017.03.034
Anderson, C. A., & Bushman, B. J. (2001). Effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, and prosocial behavior: A meta-analytic review of the scientific literature. Psychological Science, 12(5), 353–359.
Anderson, C. A., & Bushman, B. J. (2002). Human aggression. Annual Review of Psychology, 53(1), 27–51. https://doi.org/10.1146/annurev.psych.53.100901.135231
Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., Rothstein, H. R., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in eastern and western countries: A meta-analytic review. Psychological Bulletin, 136(2), 151–173. https://doi.org/10.1037/a0018251
Anvari, F., Kievit, R., Lakens, D., Przybylski, A. K., Tiokhin, L., Wiernik, B. M., & Orben, A. (2021). Evaluating the practical relevance of observed effect sizes in psychological research. PsyArXiv. https://doi.org/10.31234/osf.io/g3vtr
Baguley, T. (2009). Standardized or simple effect size: What should be reported? British Journal of Psychology, 100(3), 603–617. https://doi.org/10.1348/000712608X377117
Blumenthal, R. (1976, December 28). ‘Death Race.’ The New York Times. https://www.nytimes.com/1976/12/28/archives/death-race-game-gains-favor-but-not-with-the-safety-council.html
Bushman, B. J., & Anderson, C. A. (2002). Violent video games and hostile expectations: A test of the General Aggression Model. Personality and Social Psychology Bulletin, 28(12), 1679–1686.
Bushman, B. J., Gollwitzer, M., & Cruz, C. (2015). There is broad consensus: Media researchers agree that violent media increase aggression in children, and pediatricians and parents concur. Psychology of Popular Media Culture, 4(3), 200–214. https://doi.org/10.1037/ppm0000046
Christensen, G., Wang, Z., Paluck, E. L., Swanson, N., Birke, D. J., Miguel, E., & Littman, R. (2019). Open science practices are on the rise: The state of social science (3S) survey [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/5rksu
Davidson, B. I., Ellis, D., Stachl, C., Taylor, P., & Joinson, A. (2021). Measurement practices exacerbate the generalizability crisis: Novel digital measures can help [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/8abzy
Diener, E., Wirtz, D., Tov, W., Kim-Prieto, C., Choi, D., Oishi, S., & Biswas-Diener, R. (2010). New well-being measures: Short scales to assess flourishing and positive and negative feelings. Social Indicators Research, 97(2), 143–156. https://doi.org/10.1007/s11205-009-9493-y
Dormann, C., & Griffin, M. A. (2015). Optimal time lags in panel studies. Psychological Methods, 20(4), 489–505. https://doi.org/10.1037/met0000041
Drummond, A., & Sauer, J. D. (2019). Divergent meta-analyses do not present uniform evidence that violent video game content increases aggressive behaviour. PsyArXiv. https://doi.org/10.31234/osf.io/xms5u
Drummond, A., Sauer, J. D., & Ferguson, C. J. (2020). Do longitudinal studies support long-term relationships between aggressive game play and youth aggressive behaviour? A meta-analytic examination. Royal Society Open Science, 7(7), 200373. https://doi.org/10.1098/rsos.200373
Elson, M., Ferguson, C. J., Gregerson, M., Hogg, J. L., Ivory, J., Klisanin, D., Markey, P. M., Nichols, D., Siddiqui, S., & Wilson, J. (2019). Do policy statements on media effects faithfully represent the science? Advances in Methods and Practices in Psychological Science, 2(1), 12–25. https://doi.org/10.1177/2515245918811301
Elson, M., & Przybylski, A. K. (2017). The science of technology and human behavior: Standards, old and new. Journal of Media Psychology, 29(1), 1–7. https://doi.org/10.1027/1864-1105/a000212
Eronen, M. I., & Bringmann, L. F. (2021). The theory crisis in Psychology: How to move forward. Perspectives on Psychological Science, 16(4), 779–788. https://doi.org/10.1177/1745691620970586
Ferguson, C. J. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40(5), 532–538. https://doi.org/10.1037/a0015808
Ferguson, C. J. (2015). Do angry birds make for angry children? A meta-analysis of video game influences on children’s and adolescents’ aggression, mental health, prosocial behavior, and academic performance. Perspectives on Psychological Science, 10(5), 646–666. https://doi.org/10.1177/1745691615592234
Ferguson, C. J. (2018). Violent video games, sexist video games, and the law: Why can’t we find effects? Annual Review of Law and Social Science, 14(1), 411–426. https://doi.org/10.1146/annurev-lawsocsci-101317-031036
Ferguson, C. J., & Dyck, D. (2012). Paradigm change in aggression research: The time has come to retire the General Aggression Model. Aggression and Violent Behavior, 17(3), 220–228. https://doi.org/10.1016/j.avb.2012.02.007
Ferguson, C. J., & Heene, M. (2021). Providing a lower-bound estimate for psychology’s “crud factor”: The case of aggression. Professional Psychology: Research and Practice, 52(6), 620–626. https://doi.org/10.1037/pro0000386
Ferguson, C. J., & Konijn, E. A. (2015). She said/he said: A peaceful debate on video game violence. Psychology of Popular Media Culture, 4(4), 397.
Ferguson, C. J., & Wang, J. C. K. (2019). Aggressive video games are not a risk factor for future aggression in youth: A longitudinal study. Journal of Youth and Adolescence, 48(8), 1439–1451. https://doi.org/10.1007/s10964-019-01069-0
Furuya-Kanamori, L., & Doi, S. A. R. (2016). Angry birds, angry children, and angry meta-analysts: A reanalysis. Perspectives on Psychological Science, 11(3), 408–414. https://doi.org/10.1177/1745691616635599
Götz, F., Gosling, S., & Rentfrow, J. (2021). Small effects: The indispensable foundation for a cumulative psychological science [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/hzrxf
Grimes, T., Anderson, J. A., & Bergen, L. (2008). Media violence and aggression: Science and ideology. Sage.
Hall, R. C. W., Day, T., & Hall, R. C. W. (2011). A plea for caution: Violent video games, the supreme court, and the role of science. Mayo Clinic Proceedings, 86(4), 315–321. https://doi.org/10.4065/mcp.2010.0762
Hamaker, E. L. (2012). Why researchers should think “within-person”: A paradigmatic rationale. In M. R. Mehl & T. S. Conner (Eds.), Handbook of research methods for studying daily life (pp. 43–61). Guilford Press.
Hamaker, E. L., Kuiper, R. M., & Grasman, R. P. P. P. (2015). A critique of the cross-lagged panel model. Psychological Methods, 20(1), 102–116. https://doi.org/10.1037/a0038889
Hernán, M. A. (2018). The C-word: Scientific euphemisms do not improve causal inference from observational data. American Journal of Public Health, 108(5), 616–619. https://doi.org/10.2105/AJPH.2018.304337
Hilgard, J., Engelhardt, C. R., & Rouder, J. N. (2017). Overstated evidence for short-term effects of violent games on affect and behavior: A reanalysis of Anderson et al. (2010). Psychological Bulletin, 143(7), 757–774. https://doi.org/10.1037/bul0000074
Hilgard, J., Engelhardt, C. R., Rouder, J. N., Segert, I. L., & Bartholow, B. D. (2019). Null effects of game violence, game difficulty, and 2D:4D digit ratio on aggressive behavior. Psychological Science, 30(4), 606–616. https://doi.org/10.1177/0956797619829688
Huesmann, L. R. (2010). Nailing the coffin shut on doubts that violent video games stimulate aggression: Comment on Anderson et al. (2010). Psychological Bulletin, 136(2), 179–181. https://doi.org/10.1037/a0018567
IJzerman, H., Lewis, N. A., Przybylski, A. K., Weinstein, N., DeBruine, L., Ritchie, S. J., Vazire, S., Forscher, P. S., Morey, R. D., Ivory, J. D., & Anvari, F. (2020). Use caution when applying behavioural science to policy. Nature Human Behaviour, 1–3. https://doi.org/10.1038/s41562-020-00990-w
Ioannidis, J. P. A. (2016). The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. The Milbank Quarterly, 94(3), 485–514. https://doi.org/10.1111/1468-0009.12210
Ivory, J. D., Markey, P. M., Elson, M., Colwell, J., Ferguson, C. J., Griffiths, M. D., Savage, J., & Williams, K. D. (2015). Manufacturing consensus in a diverse field of scholarly opinions: A comment on Bushman, Gollwitzer, and Cruz (2015). Psychology of Popular Media Culture, 4(3), 222–229.
Johannes, N., Dienlin, T., Bakhshi, H., & Przybylski, A. K. (2022). No effect of different types of media on well-being. Scientific Reports, 12(61), 1–13. https://doi.org/10.1038/s41598-021-03218-7
Johannes, N., Vuorre, M., & Przybylski, A. K. (2021). Video game play is positively correlated with well-being. Royal Society Open Science, 8(2), 202049. https://doi.org/10.1098/rsos.202049
Kahn, A. S., Ratan, R., & Williams, D. (2014). Why we distort in self-report: Predictors of self-report errors in video game play. Journal of Computer-Mediated Communication, 19(4), 1010–1023. https://doi.org/10.1111/jcc4.12056
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
Lee, E.-J., Kim, H. S., & Choi, S. (2021). Violent video games and aggression: Stimulation or catharsis or both? Cyberpsychology, Behavior, and Social Networking, 24(1), 41–47. https://doi.org/10.1089/cyber.2020.0033
Markey, P. M. (2015). Finding the middle ground in violent video game research: Lessons From Ferguson (2015). Perspectives on Psychological Science, 10(5), 667–670. https://doi.org/10.1177/1745691615592236
Markey, P. M., Markey, C. N., & French, J. E. (2015). Violent video games and real-world violence: Rhetoric versus data. Psychology of Popular Media Culture, 4(4), 277–295. https://doi.org/10.1037/ppm0000030
Mathur, M. B., & VanderWeele, T. J. (2019). Finding common ground in meta-analysis “wars” on violent video games. Perspectives on Psychological Science, 14(4), 705–708. https://doi.org/10.1177/1745691619850104
Merton, R. K. (1973). The sociology of science: Theoretical and empirical investigations. University of Chicago Press.
Miedzobrodzka, E., Konijn, E. A., & Krabbendam, L. (2021). Emotion recognition and inhibitory control in adolescent players of violent video games. Journal of Research on Adolescence. https://doi.org/10.1111/jora.12704
Nelson, L. D., & Simmons, J. (2018). Psychology’s renaissance. Annual Review of Psychology, 69, 511–534. https://doi.org/10.1146/annurev-psych-122216-011836
Orben, A. (2020). The sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372
Orben, A., & Przybylski, A. K. (2019). Screens, teens, and psychological well-being: Evidence from three time-use-diary studies. Psychological Science, 30(5), 682–696. https://doi.org/10.1177/0956797619830329
Parry, D. A., Davidson, B. I., Sewall, C. J. R., Fisher, J. T., Mieczkowski, H., & Quintana, D. S. (2021). A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour, 1–13. https://doi.org/10.1038/s41562-021-01117-5
Przybylski, A. K., & Weinstein, N. (2019). Violent video game engagement is not associated with adolescents’ aggressive behaviour: Evidence from a registered report. Royal Society Open Science, 6(2), 171474. https://doi.org/10.1098/rsos.171474
R Core Team. (2022). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.r-project.org/
Rohrer, J. M., & Murayama, K. (2021). These are not the effects you are looking for: Causality and the within-/between-person distinction in longitudinal data analysis [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/tg4vj
Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36.
Sherry, J. L. (2007). Violent video games and aggression: Why can’t we find effects? Mass Media Effects Research: Advances through Meta-Analysis, 12, 245–262.
Vazire, S. (2017). Quality uncertainty erodes trust in science. Collabra: Psychology, 3(1), 1. https://doi.org/10.1525/collabra.74
Vosgerau, J., Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2019). 99% impossible: A valid, or falsifiable, internal meta-analysis. Journal of Experimental Psychology: General, 148(9), 1628–1639. https://doi.org/10.1037/xge0000663
Vuorre, M., Johannes, N., Magnusson, K., & Przybylski, A. K. (2021). Time spent playing video games is unlikely to impact well-being [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/8cxyh
Weber, R., Behr, K. M., Fisher, J. T., Lonergan, C., & Quebral, C. (2020). Video game violence and interactivity: Effect or equivalence? Journal of Communication. https://doi.org/10.1093/joc/jqz048
Williams, D., & Skoric, M. (2005). Internet fantasy violence: A test of aggression in an online game. Communication Monographs, 72(2), 217–233. https://doi.org/10.1080/03637750500111781
Yarkoni, T. (2020). The generalizability crisis. Behavioral and Brain Sciences, 1–37. https://doi.org/10.1017/S0140525X20001685
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material