Decision-making processes in everyday life are complex. Research on decision-making has typically relied on self-report or experimental paradigms to understand this process. Recent work has highlighted the potential of complex, iterative decision-making frameworks. We developed a simulated decision-making paradigm to assess the relationship between in-game and real-world behaviors and symptoms of depression, first through exploratory analyses and then through pre-registered, confirmatory analyses. Our pre-registered and post-hoc confirmatory analyses highlighted the link between in-game technology use and real-world technology use. We also explored decision-making through transition probabilities to evaluate how specific decisions might unfold over time. The findings emphasized the stability of discrete decision-making patterns across two independent samples. Taken together, these findings suggest that some behavioral patterns are quite stable. Our novel “game” has the potential to provide important insights into decision-making processes and may offer a unique method for identifying and intervening on specific targeted behaviors.

Our actions have a profound impact on how we feel. For example, some activities such as exercise have been shown to elevate mood (Chekroud et al., 2018; Liao et al., 2015; Reichert et al., 2017), whereas other activities such as use of social media (e.g., Facebook or Instagram) may negatively impact mood (Frison & Eggermont, 2017; Sagioglou & Greitemeyer, 2014; Thorisdottir et al., 2019). Our actions do not exist in a vacuum. Decision-making processes interact with other elements of human agency in a reciprocally causal fashion to determine one’s actions (Bandura, 1978).

There are numerous methods for evaluating what people do during the day. These range from retrospective self-report of general behavioral trends (“in the past week…”) and specific daily “reconstruction” (Kahneman et al., 2004) to ecological momentary assessment (EMA), in which individuals note what they are doing in the moment over a period of several days (Shiffman et al., 2008). A more recent strategy is monitoring activity remotely using so-called passive sensing data, e.g., recorded location, movement, interactions, and smartphone audio (Cornet & Holden, 2018).

There are also numerous experimental paradigms developed specifically to understand why people make specific decisions. For instance, in the approach avoidance task (AAT), individuals “push away” or “pull towards” different stimuli using a joystick (Rinck & Becker, 2007). How quickly this is accomplished (measured with reaction time) is thought to reflect cognitive decision-making processes. These decisions may relate to a wide range of (visual) domains such as alcohol craving (Kim & Lee, 2015), mood (Radke et al., 2014), or exercise (Hannan et al., 2019). Computerized tasks have also been used to investigate specific decision-making processes, e.g., delay discounting (L. Green & Myerson, 2004) or the Ultimatum Game (Montague & Lohrenz, 2007). However, the specific decisions are largely constrained by the experimental paradigm and the stimuli being used. We gain important information about subtle cognitive processes, but at a (fairly large) step removed from the naturalistic context of decision-making.

Bridging the gap between what people are doing and why they are doing it represents a critically important challenge. For instance, what leads someone currently experiencing dysphoric mood to take time to exercise instead of spending time watching videos on a smartphone? Low mood and depression are of widespread concern (16.2% lifetime prevalence in adults, Kessler et al., 2003; 5 to 29% in adolescents, Carrellas et al., 2017). Moreover, there is often a clear intention to “do better” in one’s life (e.g., New Year’s resolutions). Yet, we still do not fully understand the link between intentions and actions. For instance, despite a desire to feel better, depressed individuals’ intentions to engage in adaptive coping behavior (e.g., exercising) are thwarted by behavioral decisions that maintain low mood (e.g., continued social media use; Chekroud et al., 2018). Researchers have proposed computational models highlighting neurobiological differences (e.g., the role of dopamine) that may help account for the relatively poor correspondence between intention and action. Moreover, evidence suggests that the presence of some depressive symptoms, such as low mood and/or anhedonia, may heighten risk for major depressive disorder (MDD; Fried & Nesse, 2015; Horwath et al., 1992). In principle, the benefits of mood-boosting activities ought to outweigh any resistance to engaging in them. Yet, when the intention to exercise meets the moment in which the exercise is actually to occur, opportunities to do something, anything, else often crop up. This intersection of intention and behavior represents a critical juncture in understanding decision-making processes, one that may be worsened by low mood and/or anhedonia (Treadway & Zald, 2013).

Freedman and colleagues (2018) introduced a new generation of vignette experiments involving iterative decision making, providing a microcosm of real decisions. This approach offers the opportunity to ask questions that provide insight into specific decision-making processes (M. C. Green & Jenkins, 2014). Moreover, there is no real limit to extending this approach from simple binary decisions to more complex decisions that more closely resemble real life. For example, if someone is checking out at the grocery store, do they go to lane 5 or lane 6? Depending on the lane chosen, one might see a different candy bar or a different magazine. As the number of decision junctures grows, the number of unique experiences increases. At a certain point, particularly with the inclusion of recursions (e.g., returning to an earlier state, or switching lanes to see what candy bars are on the other side), it becomes increasingly unlikely for any two individuals to experience the same series of decisions in exactly the same way. We conceptualize this approach as a kind of structured simulation wherein participants make decisions in an environment that is intended to reflect part of the decision-making process as it might occur in the real world.

In the current study, we developed a novel text-based simulation approach that reflects a fully realized decision-making environment, expanding on the previous vignette-based work. The primary aim of the study was to evaluate a dual-strategy approach to the assessment of decision-making. To do this, we focused on the intersection between real-world activities and the decision to engage in specific in-game activities. In other words, how similar were behaviors in the simulated environment to those enacted in the recent past? Considering the importance of the link between mood and behavior, a secondary aim was to consider how symptoms of depression might be related to real-world activities or to the decision to engage in specific in-game activities. We used an exploratory modelling approach to generate specific data-derived hypotheses in a first study, and then conducted a second, pre-registered replication to test these hypotheses using a confirmatory modelling approach. Post-hoc, we also explored the specifics of decision-making during the game by deriving probabilistic scores for different patterns of behavior (e.g., how likely they were to occur).

Participants

Participants were undergraduate students at the University of Texas at Austin who received course credit. We excluded participants who failed attention checks or did not complete all portions of the study. After screening, Study 1 (collected between 07-14-2020 and 08-30-2020) consisted of 268 participants and Study 2 (collected between 08-31-2020 and 09-30-2020) consisted of 229 participants. See Table 1 for participant demographics. All procedures were approved by the University of Texas at Austin IRB. Sample size was determined by a predetermined, pre-registered stopping rule.

Table 1. Demographic Summary
Variable                          Study 1 (n=239)   Study 2 (n=219)
Age, mean (SD)                    19.03 (1.45)      18.85 (1.85)
PHQ-8 total score, mean (SD)      6.98 (5.50)       7.67 (5.51)
Sex, female, no. (%)              173 (72.4)        153 (69.9)
Ethnicity, Hispanic, no. (%)      86 (36.0)         67 (30.6)
Race, no. (%)
  Hispanic                        49 (20.5)         38 (17.4)
  Black or African American       12 (5.0)          3 (1.3)
  Asian                           56 (23.4)         49 (22.4)
  White                           91 (38.1)         90 (41.1)
  Multiple                        30 (12.6)         34 (15.5)
  Unknown/not reported            1 (0.4)           0 (0)

Procedures

Participants first completed demographic information and self-report questionnaires. Participants were then directed to complete the game. Instructions asked participants to imagine they were home alone and to behave as they would in that situation. Following completion of the game, participants were directed to a brief survey assessing their impressions of the game.

A Day at Home: A text-based decision-making simulation

We developed a text-based “game” using the open-source platform Twine (http://twinery.org/). We extended prior work using this platform by developing an interactive environment that allowed participants to freely make decisions about what they wanted to do during “a day at home.” The game began with these brief instructions:

While not everything involved in this game may describe your life, wherever possible, please make the decision that makes the most sense to you. If you wouldn’t choose any of the options, choose the one you would be most likely to choose if those were the only options available.

The game takes place with you at home alone on a day when you don’t have any work to do. This may be different than your current living situation—if so, just imagine how you would act on your own.

Participants could choose from a number of possible options at each page of the game—from when they “woke up” at 8AM (options included going back to sleep for half-hour intervals) to when they wanted to end the game (i.e., go to sleep). There were decisions about use of a smartphone, television, computer, cooking, exercise, etc.

There were a number of domains where the granularity of decisions was fairly high (e.g., to watch YouTube or look at Instagram), and others where it was lower (e.g., options to exercise were yoga/stretching, weightlifting, and calisthenics). Some feedback was provided over the course of the game if participants chose an action repeatedly (e.g., after exercising many times: “You feel rather sore from all of your exercise!”; after using the phone a certain number of times: “You’ve been on your phone for a while now.”).

The game used in the current study can be accessed here (no data will be collected): https://affectlab.bard.edu/stayhome/example.html

Measures

Daily Activities. The Daily Activities assessment was a retrospective self-report evaluation of 25 different types of activities completed in the past month (e.g., “I bake or cook food”; “I drink alcohol”; “I use social media”; “I exercise at home”), rated on a 1-4 scale from “not at all” to “nearly every day”. The full assessment is available on the Open Science Framework with other materials from the study: https://doi.org/10.17605/OSF.IO/WB7GS.

In-Game Measures. From the “A Day at Home” game, we extracted nine variables: eight reflecting the total number of times specific actions were taken (phone use, computer use, alcohol use, television use, exercise, cooking, new hobbies, and old hobbies) and a ninth reflecting the length of the game day. We also recorded all participants’ actions and used them to calculate transition probabilities as described below.

Patient Health Questionnaire – 8 (PHQ-8) (Kroenke et al., 2009). The PHQ-8 is an 8-item self-report scale assessing symptoms of depression. Participants rated how often they had been bothered by each concern within the last two weeks on a 4-point scale (0-3) from “not at all” to “nearly every day”. The alpha coefficient was 0.89 in both study 1 and study 2.
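
For reference, this type of internal consistency estimate can be obtained with the psych package in R; the sketch below is purely illustrative (the data frame and item column names are hypothetical, not our actual variable names).

```r
library(psych)

# Hypothetical layout: the eight PHQ-8 items stored in columns phq1-phq8,
# each scored 0-3, in a data frame 'dat'.
phq_items <- dat[, paste0("phq", 1:8)]

# Cronbach's alpha (raw) for the item set.
psych::alpha(phq_items)$total$raw_alpha
```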

Post-Game Measures. We asked participants to (1) rate the similarity of the game to how they would spend a day off, on a scale from 1, “very different”, to 7, “very similar”; (2) rate whether the game made them reflect on how they might spend a day alone at home, on a scale from 1, “not at all”, to 5, “a lot”; and (3) rate whether it would be helpful to receive feedback during the game based on what they decided to do, on a scale from 1, “not at all”, to 5, “very much”. We also included two additional measures, assessing COVID anxiety and loneliness, in the first study; although we did not conduct analyses involving these measures for study 1, we included them in study 2 for parallelism.

Data Analysis

All data used in the current paper, relevant syntax, and the pre-registration are available at https://doi.org/10.17605/OSF.IO/42UEG

Network Analysis. We used Bayesian Gaussian graphical models implemented with the BGGM package (Williams et al., 2020) in R to conduct the exploratory and confirmatory analyses (Rodriguez et al., 2020). Gaussian graphical models (GGMs) are becoming commonly implemented in psychology to understand relationships among a large number of variables while controlling for all other variables in the network (i.e., estimating partial correlations). This modelling approach can be flexibly implemented with practically any cross-sectional measures.

In the current context, we were interested in understanding associations between a large number of real-world activities, in-game activities, and symptoms of depression. With regard to associations between real-world and in-game activities, it was particularly important to address statistically the fact that not all activities corresponded perfectly between assessment modalities. The sheer number of possible associations made it possible (if not likely) that there would be spurious relationships if we relied only on uncorrected bivariate correlations. At the same time, we felt that it could be useful to discover unexpected relationships that might provide novel insights. The use of GGMs allowed us to examine relationships within these data in a robust exploratory way. Although relatively new, Bayesian GGMs offer several distinct advantages over more traditional GGMs. Specifically, for the current work we were interested in replicating the associations identified in our first, exploratory study; there is an emphasis in the GGM literature on replicability. While network findings are broadly consistent, small variations can influence the conclusions drawn (Borsboom et al., 2018). There are many reasons why this might be the case generally, but considering the novel nature of our study design, we chose to incorporate robust methodological and statistical checks on our initial findings. The BGGM package in R contains a function that tests specific hypotheses in new data; thus, we were able to explore our data in study 1, use it to generate hypotheses, and then test those hypotheses in study 2.
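
As a concrete illustration of this explore-then-confirm workflow, the sketch below shows how it could be set up with the BGGM package. The data frames, column names, and the particular edge named in the hypothesis string are hypothetical placeholders rather than our actual analysis code (the full syntax is available at the OSF repository linked above).

```r
library(BGGM)

# Study 1: exploratory estimation of the Gaussian graphical model.
# 'study1_df' is a hypothetical data frame with one column per variable
# (in-game counts, real-world activity ratings, PHQ-8 items).
fit_explore <- explore(study1_df, type = "continuous")
graph <- select(fit_explore)   # selected edges with Bayes factor evidence

# Study 2: confirmatory test of an edge-level hypothesis generated in Study 1,
# e.g., that in-game phone use relates more strongly to real-world social
# media use than to real-world exercise (column names are placeholders).
hyp <- "game_phone--real_social_media > game_phone--real_exercise"
fit_confirm <- confirm(study2_df, hypothesis = hyp)
fit_confirm   # prints posterior hypothesis probabilities / Bayes factors
```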

We used the ‘explore’ function to generate BGGMs in study 1. We then identified central bridge nodes using the networktools package (Jones & Jones, 2017). Bridge nodes are variables that are associated with variables both within their own measure and across distinct measures. Bridge node centrality (Jones et al., 2019) is a measure of the total strength of the associations between a specific variable and the variables in a different measure (essentially a sum of the edge weights crossing measures). We chose bridge centrality because we were interested in identifying the in-game behaviors with the broadest links to real-world behaviors, as well as the relative importance of different in-game behaviors to real-world behaviors. Bridge centrality thus provided a specific assessment of which in-game behaviors were most related to real-world behaviors. To test the central bridge nodes identified in our exploratory analyses against the data collected in study 2, we used the ‘confirm’ function in BGGM. We report bridge central node strength as sums (individual edges are reported in supplementary tables with standard deviations). Positive bridge node values reflect sums of positive associations, whereas negative bridge node values reflect sums of negative associations. We also conducted three post-hoc (unplanned, exploratory) confirmatory analyses on specific edges that linked in-game behavior directly to real-world behavior (television-watching, computer use/real-world videogame-playing, and cooking), which we had not considered a priori due to our focus on bridge centrality.
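
The sketch below illustrates, under stated assumptions, how bridge strength can be computed with the networktools package from an estimated partial-correlation matrix; the matrix object, node ordering, and community labels are illustrative rather than taken from our scripts.

```r
library(networktools)

# 'pcor_mat' is a hypothetical weighted adjacency (partial correlation)
# matrix with named rows/columns, e.g., extracted from the selected network.
diag(pcor_mat) <- 0

# Community labels, assuming the 9 in-game variables come first, followed by
# the 25 real-world activity items.
communities <- c(rep("game", 9), rep("real", 25))

b <- bridge(pcor_mat, communities = communities)

# Bridge strength per node: summed (absolute) weight of edges connecting it
# to the other community; the manuscript reports positive and negative sums
# separately.
sort(b$`Bridge Strength`, decreasing = TRUE)
```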

Based on reviewer feedback, we also computed bivariate correlations to facilitate readers’ evaluation of the individual relationships between real-world and in-game behaviors. These matrices are provided as a separate supplemental R Markdown file (Correlation Supplement).
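
The manuscript does not tie these correlations to a particular package; one common option is the BayesFactor package, as in the hypothetical sketch below (the data frame and column names are placeholders, not our actual variables).

```r
library(BayesFactor)

# Hypothetical example: Bayes factor for the bivariate correlation between
# real-world social media use and in-game phone use.
bf <- correlationBF(y = dat$real_social_media, x = dat$game_phone)

# Evidence for a non-zero correlation relative to the null.
extractBF(bf)$bf
```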

Bayes Factors (BFs) are used in three ways in our analyses. In the exploratory networks, they reflect the relative evidence that an edge is truly zero (null), a positive association, or a negative association. In the confirmatory analyses, BFs reflect the evidence that our hypotheses were supported (a greater BF reflects greater support for the hypothesis). Finally, BFs are included in the pairwise correlation tables in the supplemental materials to reflect the evidence that a correlation coefficient is meaningfully different from zero.

Transition Probabilities. We calculated exploratory transition probabilities to better understand the relationship between the first and second study. A transition probability is the likelihood that a person will move from one state to another over time (one can also return to the same state). In the context of the game, we were interested in understanding how stable certain behaviors were: for example, how likely a person who was watching TV was to continue watching TV rather than transition to another behavior (or state). The difficulty in the context of the game was that we did not design the data output to produce an easily calculable series of transitions. There were so many possible choices for participants that the full transition matrix is extremely sparse, i.e., the likelihood of transitioning between any two specific states is very small. To address this, we removed behaviors related to movement between places in the game and collapsed the remaining behaviors into categories. All uses of technology, from social media to TV, were collapsed into a “technology” category; all consumption activities (like eating or drinking) were collapsed into a “consumable” category; and all remaining activities (like exercise) were collapsed into an “activities” category. From these three categories, we calculated a 3 × 3 transition matrix for each individual by first creating a state table (the number of occurrences of transitions from each category to each other category) with the msm package (Jackson, 2011), and then converting these counts to proportions to obtain the transition probabilities.
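
A minimal sketch of this state-table approach, assuming a long-format action log with hypothetical object and column names, is shown below.

```r
library(msm)

# Hypothetical data layout: 'actions' is a long-format data frame with one row
# per in-game action, ordered in time within participant, containing an 'id'
# column and a 'category' column coded as "technology", "consumable", or
# "activities".
trans_list <- lapply(split(actions, actions$id), function(d) {
  # Frequency table of consecutive category pairs for this participant.
  counts <- statetable.msm(state = category, subject = id, data = d)
  # Convert counts to proportions of this participant's transitions; padding
  # to the full 3 x 3 grid may be needed if a category never occurred.
  prop.table(as.matrix(counts))
})
```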

Given the extensively exploratory nature of this methodology, we were primarily interested in a simple comparison between the transition matrices of study 1 and study 2 using a Bayesian equivalent of a t-test, implemented with the brms package (Bürkner, 2018) in R. Because we did not expect the transition matrices to differ in meaningful ways, calculating Bayes Factors allowed us to evaluate the possibility that the transition probabilities were not meaningfully different between study 1 and study 2 (smaller Bayes Factors reflect a greater probability that the two values do not differ). We followed this analysis with a further exploratory, unplanned regression testing the association between the different transition probabilities and total depression symptom scores.
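
One way to implement this comparison and the follow-up regression in brms is sketched below; the model formulas, data objects, and settings are illustrative assumptions, not our exact analysis syntax.

```r
library(brms)

# Hypothetical objects: 'tp' has one row per participant with a single
# transition probability ('prob', e.g., technology -> technology) and a
# 'study' factor ("study1" vs. "study2").
fit_diff <- brm(prob ~ study, data = tp,
                save_pars = save_pars(all = TRUE), iter = 4000)
fit_null <- brm(prob ~ 1, data = tp,
                save_pars = save_pars(all = TRUE), iter = 4000)

# Bridge-sampling Bayes factor; values below 1 favor the null model of no
# study difference, mirroring the interpretation described in the text.
bayes_factor(fit_diff, fit_null)

# Exploratory follow-up regression: a transition probability predicting total
# depression symptoms in the combined samples ('phq_total' is hypothetical).
fit_reg <- brm(phq_total ~ prob, data = tp_combined)
summary(fit_reg)  # reports 95% credible intervals for the coefficient
```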

Study 1

See Figures 1 and 2 for an overview of the network graphs. Supplementary tables (1-3) indicate edge strength, standard deviation, and the likelihood that the edge is null (truly zero), positive, or negative. Additional supplementary tables (4–7) include pair-wise correlations and their associated Bayes Factors.

Figure 1. A) The exploratory network plot for the relationships between game activities and real activities. B) The confirmatory network plot for the relationships between game activities and real activities.

A high-resolution version of the figure can be found at https://osf.io/ctjxy/?view_only=41cc5e54da574cd0919a56b4e1c6b94d; a legend with captions for each icon is in the supplemental markdown output. Note. Solid lines show positive associations, dashed lines show negative associations. Faded lines show associations without sufficient evidence to support hypotheses. The width of the lines reflects the strengths of the associations. Node labels were generated using representative icons from Font Awesome. For the most part these are fairly intuitive (e.g. the beer stein or martini glass reflect alcohol use; the smartphone reflects time spent on the phone). Highlighted nodes are those that were found to be central bridges between the game behaviors and the real-life behaviors in the exploratory analysis.

Figure 2. A) The exploratory network plot for the relationships between game activities and depression symptoms. B) The confirmatory network plot for the relationships between game activities and depression symptoms. C) The exploratory network plot for the relationships between real activities and depression symptoms. D) The confirmatory network plot for the relationships between real activities and depression symptoms.

A high-resolution version of the figure can be found at https://osf.io/ctjxy/?view_only=41cc5e54da574cd0919a56b4e1c6b94d; a legend with captions for each icon is in the supplemental markdown output. Note. Solid lines show positive associations, dashed lines show negative associations, and line width represents the strengths of those associations. Node labels were generated using representative icons. Highlighted nodes are those that were found to be central bridges between the game/real-life behaviors and PHQ-8 symptoms in the exploratory analysis.


Game Activities – Real Activities. We identified six central bridge nodes: three positive (i.e., sums of positive associations), phone use in-game (0.810), drinking in-game (0.632), and working outside the home (0.591); and three negative (i.e., sums of negative associations), phone use in-game (0.363), exercise in-game (0.389), and knitting (0.590). See Figure 1A.

Game Activities – Depression symptoms. We identified four central bridge nodes: two positive, “Little interest or pleasure in doing things” (0.431) and “Feeling bad about yourself—or that you are a failure or have let yourself or your family down?” (0.325); and two negative, computer use in-game (0.380) and “Feeling down, depressed, or hopeless?” (0.854). See Figure 2A.

Real Activities – Depression symptoms. We identified five central bridge nodes: two positive, writing (0.307) and “Poor appetite or overeating?” (0.284); and three negative, working outside the home (0.304), “Little interest or pleasure in doing things” (0.440), and “Trouble falling or staying asleep, or sleeping too much?” (0.401). See Figure 2C.

Study 2

Confirmatory findings (in-game and real-world behavior). Replication of the positive bridge centrality of in-game phone use was strongly supported (BF = 22.49). Our secondary a priori hypothesis that in-game phone use would be more strongly associated with real-world social media use than with real-world exercise was slightly supported (BF = 1.80). Contrary to prediction, we found slight evidence against the positive bridge centrality of in-game drinking (BF = 0.388) and of working outside the home (BF = 0.411), and against the negative bridge centrality of in-game phone use and knitting (BF = 0.244) and of in-game exercise (BF = 0.290). See Figure 1B.

Confirmatory findings (in-game behavior and depression symptoms). None of the predicted associations related to network centrality between in-game activities and symptoms of depression identified in study 1 were replicated in Study 2. Specifically, there was evidence against the centrality of “Feeling down, depressed, or hopeless?” (BF = 0.001), the centrality of computer use in-game (BF = 0.039), and the centrality of “Feeling bad about yourself—or that you are a failure or have let yourself or your family down?” (BF = 0.023). There was ambiguous evidence regarding the centrality of “Little interest or pleasure in doing things?” (BF = 1.119). See Figure 2B.

Confirmatory findings (real-world behavior and depression symptoms). None of the associations between real-world activities and symptoms of depression observed in Study 1 replicated in Study 2: “Little interest or pleasure in doing things” (positive centrality BF = 0.186; negative centrality BF = 0.074), “Moving or speaking so slowly that other people could have noticed? Or so fidgety or restless that you have been moving a lot more than usual?” (BF = 0.362), “Poor appetite or overeating?” (BF = 0.078), “Feeling tired or having little energy?” (BF = 0.291), and working outside of the home (BF = 0.310). See Figure 2D.

Exploratory findings (in-game and real-world behavior). Our exploratory analyses found confirmatory support for specific associations between in-game and real-world television watching (BF = 21.97), between in-game computer use and real-world videogame use (BF = 6.69), and between in-game cooking and real-world cooking (BF = 1.68). See Figures 1A-B.

Exploratory transition probability findings. In comparing each transition probability between study 1 and study 2, we found fairly strong evidence for no difference (i.e., in favor of the null): BFs ranged from 0.003 to 0.029. Our evaluation of the association between transition probabilities (combined across samples) and total symptoms of depression indicated only a few reliable associations: consumable → activities, b = -24.15, 95% CI [-48.84, -1.29]; technology → activities, b = -24.10, 95% CI [-45.80, -1.80]; and activities → consumable, b = 21.85, 95% CI [0.29, 43.32].

Participant reactions to the text-based game. On average, participants in both studies 1 and 2 rated the game as “a little similar” to how they would spend a day (M1 = 5.43, SD1 = 2.35; M2 = 5.36, SD2 = 2.56). Participants in both studies reported that the game made them reflect “a lot” on how they might spend the day at home (M1 = 4.83, SD1 = 1.32; M2 = 4.96, SD2 = 1.17). Lastly, participants in both studies reported that, on average, it would be “a little” helpful to receive feedback during the game (M1 = 3.79, SD1 = 1.01; M2 = 3.81, SD2 = 1.02).

We developed a novel, open-ended, text-based game to evaluate decision-making in a complex, simulated context. The primary aim of our study was to confirm the relationship between specific in-game and real-world behaviors. We used network analysis to generate exploratory associations and to generate hypotheses about associations between specific activities (e.g., that phone use in-game was positively associated with social media use in the real world). In a second, pre-registered study, we tested the replication of these associations between in-game and real-world behaviors. We partially confirmed our pre-registered hypotheses regarding the specific associations between in-game and real-world activities. Our post-hoc assessments identified strong support for several intuitive associations between in-game behaviors and real-world activities (television and computer games). Taken together, the use of technology in-game appears to be particularly related to technology use in real life. For example, social media use in real life was positively associated with in-game phone use in both samples (as shown in Figure 1), suggesting that individuals who reported greater weekly time spent on social media also engaged more with the phone during the game. This might reflect ways that actions in the real world correspond to behaviors taken in the game; it is worth highlighting that there was a range of things people could have been doing on their phone, but our results showed that participants in both samples tended to use their phone for social media. This is a relatively intuitive finding, but it suggests ways in which in-game behaviors could be used to probe specific real-world decisions (specific phone use, time-of-day of use, etc.).

Other findings highlighted differences between the samples. For instance, in study 1 alcohol use in-game was associated with substance use in the real world, but not with alcohol use, whereas in study 2 alcohol use in-game was associated with alcohol use in the real world, but not with substance use. Our measurement of activities, both in-game and real-world, was limited, and this may also have contributed to differences between samples; moreover, the timing of data collection may have influenced the activities endorsed. For example, we collected data first in the summer and then in the fall; it is plausible that what participants were doing and how they were feeling varied somewhat between vacation and the beginning of the semester (as the participants were all undergraduates). Increasing the sample size and providing a broader range of real-life and in-game analogues may help address these inconsistencies.

It should be noted that the linkages between depressive symptoms and in-game actions or real-world behaviors observed in Study 1 were not replicated in Study 2. For example, anhedonia symptoms were associated with in-game phone use and real-life alcohol use in Study 1, but not in Study 2 (see Figure 2). One plausible explanation is that, while average depression symptom severity was similar across samples, there is wide variation in the specific depressive symptoms individuals endorse (Fried, 2017). Yet, our transition analysis did indicate that some decision-making processes were associated with depression across the combined samples. As with the correspondence between in-game and real-world behaviors, this finding suggests that the network analysis may have been too granular to identify decision-making processes associated with depression. Future research may consider testing whether a sample of clinically depressed individuals differs in meaningful ways from the general population, as this would provide insight into how clinically elevated depression may differentially influence decision-making processes.

The distinction between intentionality and action is complex, and simulated environments might evoke decisions that imply intentions to act in a certain way rather than true decisions that will be carried out (we do not always act in accordance with our intentions; Searle, 1979). Thus, it is also worth considering whether the associations between real-life behavior and in-game choices would differ meaningfully from simply asking someone to make a predictive inference about their future behavior, e.g., “how much time will you spend using your phone tomorrow?” The in-game behaviors could be related to this type of inference, and this is something to consider for future research, particularly as a recent large meta-analysis suggests a weak correspondence between predicted and future behavior (Wood et al., 2016). Yet, there are a number of important considerations related to this type of inference, including whether or not the assessment of current behavior is accurate. Using a dual-strategy approach to investigate behavior (the relationship between recent behavior and in-game activities), we identified unique links between real actions and simulated decision-making. Taking into account the complex nature of decision-making, the flexibility of the game, and the requirement that participants use their imagination, the replication of associations across independent samples is notable.

A benefit of this iterative vignette-based game approach is its potential as a flexible tool that can begin to address more complex questions about decision-making. Our exploratory transition probability analysis aimed to determine how consistent the decision-making processes between certain behaviors were across the two samples; these processes proved highly similar. While we cannot infer longitudinal stability from these findings, the consistency of the transition probabilities suggests that this approach may be useful for examining group differences in this type of decision-making process (e.g., by demographic features or the presence/absence of symptoms of mental distress). Transition probabilities might reflect a more global decision-making process that underpins specific behavioral decisions (Bishop & Gagne, 2018) and that the network analysis failed to capture at a more detailed level.

It is important to note that we did not develop the environment explicitly to answer questions about transition processes and needed to extensively clean the data to make it usable for this type of approach. Future research might consider developing an environment tailored to computational methods that are expressly suited for answering questions about decision-making (e.g., reinforcement learning; Blanco et al., 2013), especially as they relate to symptoms of mental health concerns like anxiety or depression. Our exploratory findings provide some support for the potential utility of this approach in complex simulated decision-making environments.

The importance of decision-making is largely related to how it is connected to behavior. Behavioral interventions rely heavily on providing individuals with insight into this process and on modifying the actions taken. Among the range of empirically supported intervention strategies for addressing low mood and other concerns (such as physical health), behavioral activation has shown considerable promise in reducing depressive symptoms (Stein et al., 2021). In typical behavioral activation interventions, patients work with a therapist to develop a list of valued activities (e.g., scheduling exercise, carrying out a pleasurable activity, picking up a hobby) in order to boost mood and provide alternatives to depressogenic activities (e.g., substance use; Daughters et al., 2018). Our results suggest that a large proportion of individuals gained some insight from completing the game and, critically, were open to considering changing their behavior in the context of playing the game. Thus, this simulated decision-making approach may provide an important way for individuals to gain insight into their own behaviors, and for health specialists (medical and psychological) to help in that process by making further modifications to this paradigm.

Limitations

First, our activity sets, both real-world and in-game, were quite limited in number; and while their selection appeared reasonable with respect to face validity, we did not collect formal data on their representativeness. Future research may consider exploring whether other assessment approaches (e.g., phone logs or EMA) correspond to in-game behaviors more closely than a single self-report assessment. Second, increasing the complexity of the game may be useful, but there are also constraints on how complex it can be without becoming overly burdensome to participants. Third, our sample consisted of undergraduate students; our findings may therefore not generalize to non-student samples that differ demographically or behaviorally, and student-specific factors may have influenced patterns of responding in ways that we were not able to account for. Finally, while our sample sizes were moderately large, they were relatively small in the context of network analysis.

Conclusions

This paper provides some evidence supporting a dual-strategy approach to evaluating decision-making through the use of a text-based game. We identified and replicated several specific (pre-registered and exploratory) associations between in-game and real-world behaviors. Our hope is that in the future it becomes feasible to test behavioral interventions within the simulated environment and determine whether these translate to changes in real-world decision-making. This may be relevant to clinical psychology with respect to mood and affective disorders, but may also have broad utility for a wide number of health-related behaviors (e.g., exercise, substance use, medical screening, or vaccination). However, more robust links between in-game and real-world behaviors are needed to help guide intervention research that would leverage the game context as a targeted intervention strategy.

M.R. and J.D.B. developed the study. J.D.B. led development of the game. M.R. conducted the analyses. M.J.T. helped supervise the project and provided critical feedback. M.R. drafted the manuscript, and J.D.B. and M.J.T. provided critical revisions. All authors approved the final version of the manuscript for submission.

The authors have no conflicts of interest to report.

We would like to thank Isabella Schloss and Max Vanderhyden for their feedback during the development of the game.

Study 1 was not formally pre-registered. The preregistration for study 2 can be found at https://osf.io/rp2c4. Additionally, all data used in the current paper, relevant syntax, and the pre-registration are available at https://osf.io/42ueg/.

Supplementary Tables and Figures: https://osf.io/3f52p/

Bandura, A. (1978). The self system in reciprocal determinism. American Psychologist. https://doi.org/10.1037/0003-066X.33.4.344
Bishop, S. J., & Gagne, C. (2018). Anxiety, depression, and decision making: A computational perspective. Annual Review of Neuroscience. https://doi.org/10.1146/annurev-neuro-080317-062007
Blanco, N. J., Otto, A. R., Maddox, W. T., Beevers, C. G., & Love, B. C. (2013). The influence of depression symptoms on exploratory decision-making. Cognition. https://doi.org/10.1016/j.cognition.2013.08.018
Borsboom, D., Robinaugh, D. J., Rhemtulla, M., & Cramer, A. O. J. (2018). Robustness and replicability of psychopathology networks. World Psychiatry. https://doi.org/10.1002/wps.20515
Bürkner, P. C. (2018). Advanced Bayesian multilevel modeling with the R package brms. R Journal. https://doi.org/10.32614/rj-2018-017
Carrellas, N. W., Biederman, J., & Uchida, M. (2017). How prevalent and morbid are subthreshold manifestations of major depression in adolescents? A literature review. Journal of Affective Disorders. https://doi.org/10.1016/j.jad.2016.12.037
Chekroud, S. R., Gueorguieva, R., Zheutlin, A. B., Paulus, M., Krumholz, H. M., Krystal, J. H., & Chekroud, A. M. (2018). Association between physical exercise and mental health in 1·2 million individuals in the USA between 2011 and 2015: a cross-sectional study. The Lancet Psychiatry. https://doi.org/10.1016/S2215-0366(18)30227-X
Cornet, V. P., & Holden, R. J. (2018). Systematic review of smartphone-based passive sensing for health and wellbeing. Journal of Biomedical Informatics. https://doi.org/10.1016/j.jbi.2017.12.008
Daughters, S. B., Magidson, J. F., Anand, D., Seitz-Brown, C. J., Chen, Y., & Baker, S. (2018). The effect of a behavioral activation treatment for substance use on post-treatment abstinence: a randomized controlled trial. Addiction. https://doi.org/10.1111/add.14049
Freedman, G., Seidman, M., Flanagan, M., Green, M. C., & Kaufman, G. (2018). Updating a classic: A new generation of vignette experiments involving iterative decision making. Advances in Methods and Practices in Psychological Science, 1(1), 43–59. https://doi.org/10.1177/2515245917742982
Fried, E. I. (2017). The 52 symptoms of major depression: Lack of content overlap among seven common depression scales. Journal of Affective Disorders. https://doi.org/10.1016/j.jad.2016.10.019
Fried, E. I., & Nesse, R. M. (2015). Depression sum-scores don’t add up: Why analyzing specific depression symptoms is essential. BMC Medicine. https://doi.org/10.1186/s12916-015-0325-4
Frison, E., & Eggermont, S. (2017). Browsing, posting, and liking on Instagram: The reciprocal relationships between different types of Instagram use and adolescents’ depressed mood. Cyberpsychology, Behavior, and Social Networking. https://doi.org/10.1089/cyber.2017.0156
Green, L., & Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychological Bulletin. https://doi.org/10.1037/0033-2909.130.5.769
Green, M. C., & Jenkins, K. M. (2014). Interactive narratives: Processes and outcomes in user-directed stories. Journal of Communication. https://doi.org/10.1111/jcom.12093
Hannan, T. E., Moffitt, R. L., Neumann, D. L., & Kemps, E. (2019). Implicit approach-avoidance associations predict leisure-time exercise independently of explicit exercise motivation. Sport, Exercise, and Performance Psychology. https://doi.org/10.1037/spy0000145
Horwath, E., Johnson, J., Klerman, G. L., & Weissman, M. M. (1992). Depressive symptoms as relative and attributable risk factors for first-onset major depression. Archives of General Psychiatry. https://doi.org/10.1001/archpsyc.1992.01820100061011
Jackson, C. (2011). Multi-state models for panel data: The msm package for R. Journal of Statistical Software, 38(1), 1–28.
Jones, P. J., & Jones, M. P. (2017). Package ‘networktools’ [Internet]. https://cran.r-project.org/web/packages/networktools/networktools.pdf
Jones, P. J., Ma, R., & McNally, R. J. (2019). Bridge centrality: A network approach to understanding comorbidity. Multivariate Behavioral Research. https://doi.org/10.1080/00273171.2019.1614898
Kahneman, D., Krueger, A. B., Schkade, D. A., Schwarz, N., & Stone, A. A. (2004). A survey method for characterizing daily life experience: The day reconstruction method. Science. https://doi.org/10.1126/science.1103572
Kessler, R. C., Berglund, P., Demler, O., Jin, R., Koretz, D., Merikangas, K. R., Rush, A. J., Walters, E. E., & Wang, P. S. (2003). The Epidemiology of Major Depressive Disorder: Results from the National Comorbidity Survey Replication (NCS-R). Journal of the American Medical Association. https://doi.org/10.1001/jama.289.23.3095
Kim, D. Y., & Lee, J. H. (2015). Development of a Virtual Approach-Avoidance Task to Assess Alcohol Cravings. Cyberpsychology, Behavior, and Social Networking. https://doi.org/10.1089/cyber.2014.0490
Kroenke, K., Strine, T. W., Spitzer, R. L., Williams, J. B. W., Berry, J. T., & Mokdad, A. H. (2009). The PHQ-8 as a measure of current depression in the general population. Journal of Affective Disorders. https://doi.org/10.1016/j.jad.2008.06.026
Liao, Y., Shonkoff, E. T., & Dunton, G. F. (2015). The acute relationships between affect, physical feeling states, and physical activity in daily life: A review of current evidence. Frontiers in Psychology. https://doi.org/10.3389/fpsyg.2015.01975
Montague, P. R., & Lohrenz, T. (2007). To Detect and Correct: Norm Violations and Their Enforcement. Neuron. https://doi.org/10.1016/j.neuron.2007.09.020
Radke, S., Güths, F., André, J. A., Müller, B. W., & de Bruijn, E. R. A. (2014). In action or inaction? Social approach-avoidance tendencies in major depression. Psychiatry Research. https://doi.org/10.1016/j.psychres.2014.07.011
Reichert, M., Tost, H., Reinhard, I., Schlotz, W., Zipf, A., Salize, H. J., Meyer-Lindenberg, A., & Ebner-Priemer, U. W. (2017). Exercise versus nonexercise activity: E-diaries unravel distinct effects on mood. Medicine and Science in Sports and Exercise. https://doi.org/10.1249/MSS.0000000000001149
Rinck, M., & Becker, E. S. (2007). Approach and avoidance in fear of spiders. Journal of Behavior Therapy and Experimental Psychiatry. https://doi.org/10.1016/j.jbtep.2006.10.001
Rodriguez, J. E., Williams, D. R., Rast, P., & Mulder, J. (2020). On formalizing theoretical expectations: Bayesian testing of central structures in psychological networks. Preprint PsyArXiv. https://doi.org/10.31234/osf.io/zw7pf
Sagioglou, C., & Greitemeyer, T. (2014). Facebook’s emotional consequences: Why Facebook causes a decrease in mood and why people still use it. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2014.03.003
Searle, J. R. (1979). The intentionality of intention and action. Inquiry (United Kingdom). https://doi.org/10.1080/00201747908601876
Shiffman, S., Stone, A. A., & Hufford, M. R. (2008). Ecological momentary assessment. Annual Review of Clinical Psychology. https://doi.org/10.1146/annurev.clinpsy.3.022806.091415
Stein, A. T., Carl, E., Cuijpers, P., Karyotaki, E., & Smits, J. A. J. (2021). Looking beyond depression: A meta-analysis of the effect of behavioral activation on depression, anxiety, and activation. Psychological Medicine, 51(9), 1491–1504. https://doi.org/10.1017/s0033291720000239
Thorisdottir, I. E., Sigurvinsdottir, R., Asgeirsdottir, B. B., Allegrante, J. P., & Sigfusdottir, I. D. (2019). Active and passive social media use and symptoms of anxiety and depressed mood among Icelandic adolescents. Cyberpsychology, Behavior, and Social Networking. https://doi.org/10.1089/cyber.2019.0079
Treadway, M. T., & Zald, D. H. (2013). Parsing anhedonia: Translational models of reward-processing deficits in psychopathology. Current Directions in Psychological Science, 22(3), 244–249. https://doi.org/10.1177/0963721412474460
Williams, D. R., Rast, P., Pericchi, L. R., & Mulder, J. (2020). Comparing Gaussian graphical models with the posterior predictive distribution and Bayesian model selection. Psychological Methods, February. https://doi.org/10.1037/met0000254
Wood, C., Conner, M., Miles, E., Sandberg, T., Taylor, N., Godin, G., & Sheeran, P. (2016). The impact of asking intention or self-prediction questions on subsequent behavior: A meta-analysis. Personality and Social Psychology Review. https://doi.org/10.1177/1088868315592334
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
