Prosocial behaviors constitute vital ingredients for all types of social interactions and relationships as well as for society at large. Corresponding to this significance, the study of prosocial behaviors has received considerable attention across scientific disciplines. A striking feature of this research is that most disciplines rely on economic games to measure actual prosocial behavior in controlled experimental settings. However, empirical research often fails to fully exploit the richness of this class of paradigms. The current work aims to overcome this issue by providing a theory-driven overview of and introduction to the variety of economic games for researchers in psychology and beyond. Specifically, we introduce prominent theories of games (Game Theory and Interdependence Theory) and show how the concepts from these theories can be integrated into a unifying theoretical framework that considers games as providing specific situational affordances for behavior. Additionally, we describe several games in detail, including their structural features, the affordances they involve, the social motives that may guide behavior, the flexibility they offer for manipulating specific situational aspects (and, thus, affordances), and typical research findings. We conclude that tailored selection and combination of games and game variants allows researchers to obtain a unique understanding of the psychological processes underlying prosocial behavior. As a practical tool for researchers, we also provide standardized game instructions and guidelines for the implementation of games in future research. Ultimately, the review can foster optimal use of economic games in future work and thereby set the stage for high-quality, replicable, and innovative research on human prosociality.

Prosocial behaviors such as cooperation, generosity, and reciprocity are among the most basic ingredients for the functioning of human societies (Nowak, 2006). Broadly defined, prosocial behaviors describe all kinds of actions that benefit others, usually at personal costs, in situations of interdependence between two or more individuals. Indeed, prosocial behaviors can refer to anything from global challenges for all humanity to everyday interpersonal interactions. At the global level, for example, prosocial actions contribute to solving major challenges such as slowing down climate change (Alston, 2015; Marx & Weber, 2012) and fostering nuclear disarmament (Lodgaard, 2010; Plous, 1988). At the societal micro-level, prosocial behaviors promote the productivity of organizations (Podsakoff et al., 1997) and affect the well-being of interpersonal relationships (Le et al., 2018; Thielmann & Hilbig, 2015). Corresponding to this ubiquity, human prosociality has been the target of abundant research across scientific fields, including (evolutionary) biology, (behavioral) economics, ethics, philosophy, political science, sociology, and psychology.

A seminal step in the study of human prosociality has been the development of so-called economic games – a class of paradigms originating in behavioral economics that nowadays constitutes a gold standard for the measurement of actual prosocial behaviors in controlled, experimental settings (Baumard et al., 2013). In essence, economic games model social interactions in various situations of interdependence that “afford (make possible) the manifestation of the higher level ‘social person factors’” (Kelley et al., 2003, p. 74). Specifically, real-life situations and challenges are simplified so as to model (only) the critical features of the situation under scrutiny. As such, economic games “provide a coherent, substantive model of many actual encounters” (Murnighan & Wang, 2016, p. 80).

Economic games have a long tradition in the study of prosocial behavior, not only in economics but also in psychology. Already in the 1950s, Deutsch (1958, 1960) relied on games to present pioneering work on the nature and determinants of trust and trustworthiness, and Thibaut & Kelley (1959) put forward their Theory of Interdependence – one of the most influential taxonomies of interpersonal situations – based on the analysis of economic games (Kelley & Thibaut, 1978). Today, games are used in most areas within psychology, including developmental psychology to illuminate the development of prosociality from childhood (Böhm & Buttelmann, 2017; Fehr et al., 2008; Tomasello & Vaish, 2013), clinical psychology to study interpersonal problems related to different disorders (Raihani & Bell, 2018; Rodebaugh et al., 2016), personality psychology to understand the behavioral consequences and dissociability of certain traits (Thielmann et al., 2020; Zhao & Smillie, 2015), cognitive psychology to examine the cognitive processes involved in making (prosocial vs. selfish) decisions (Bell et al., 2010; Rand et al., 2012), and environmental psychology to identify the determinants of pro-environmental behavior (Barrett & Dannenberg, 2012; Klein et al., 2017).

Notwithstanding the broad application of games and the unique achievements that research using games has brought about to solve “the puzzle of cooperation” (Colman, 2006) and of prosocial behavior more generally, the many advantages of games are often not fully exploited in (psychological) research. The key reason for this is that practical, theoretically founded guidelines for the use of games in empirical research are missing, as is a broad overview of (i) which games (and variants thereof) exist, (ii) what kind of situations they model, and (iii) which psychological processes they afford to become expressed in behavior and, thus, what the games actually measure.1 Here, we close this gap by providing a theory-based introduction to this powerful class of paradigms, aiming to foster a common theoretical understanding of behavior in games as well as optimized (and standardized) usage of games in future research.2 As such, our review is particularly directed at researchers in psychology and beyond who aim to use economic games in their empirical studies to understand the determinants and consequences of prosocial behavior in various interdependent situations.

Economic games have become a well-established tool for studying social behavior (Freedman & Flanagan, 2017; van Dijk & De Dreu, 2021) due to several key advantages. First, the observations produced by economic games directly serve as the variable of interest and represent the to-be-measured construct – a preference – without the need for wide-ranging auxiliary assumptions. Unlike approximating a construct such as prejudice from observed differences in response times (as in the Implicit Association Test; e.g., Devine et al., 2002) or self-esteem from self-reports on questionnaire items, choices in economic games express a preference and this preference is the construct one aims to explain. Importantly, how this preference comes about – that is, the causal mechanisms producing the preference (e.g., psychological processes) – is a separate issue (though economic games also bear significant advantages in this regard, see below). In this particular respect, economic games continue the behaviorist tradition: In a classical conditioning experiment, the (strength of the) conditioned response itself directly reflects the construct of interest, namely the (strength of the) association between stimuli. The causal mechanisms producing the association are a separate issue (indeed, mechanisms involving cognition were deliberately neglected in the behaviorist tradition), but there is nonetheless an element of objectivity to the paradigm itself: Observed responses are the construct of interest.

Second, with Game Theory (Luce & Raiffa, 1957; von Neumann & Morgenstern, 1944) and Interdependence Theory (Kelley & Thibaut, 1978; Thibaut & Kelley, 1959), there exist comprehensive theoretical frameworks that allow for precise description and analysis of game behavior. Specifically, Game Theory provides tools for the mathematical analysis of games as abstract models of interacting decision-makers. The decision-makers or players, respectively, are represented by their preferences over the possible outcomes of the game, including outcomes for the self and others. By and large, Game Theory provides a framework to formally capture and analyze strategic interaction. This advantage is similar to the one offered, say, by item-response theory, which formally describes responses in a test and thus allows for superior test-design, better comparability, and use of tailored analyses and scoring methods. Interdependence Theory, in turn, focuses on the interpersonal structure of a game. Most prominently, it specifies how the formal situation (i.e., the game as is) is transformed into a subjectively perceived situation. In simple terms, Interdependence Theory adds psychological meaning to Game Theory and it is therefore an indispensable tool that helps to understand and explain observable game behavior.3 Here, we demonstrate how Game Theory and Interdependence Theory can inform a unifying theoretical framework that specifies the situational affordances involved in interdependent situations (Thielmann et al., 2020), which can ultimately enhance our understanding of prosocial behavior in a variety of social situations as modelled in games.

Third, games offer a flexible toolbox to study prosocial behavior in various (classes of) interdependent situations (see Section II for details). As such, games even allow modeling complex societal challenges and studying behaviors in corresponding situations in controlled experimental settings. For example, games can be adapted to study specific (prosocial vs. selfish) behaviors such as vaccination (Böhm et al., 2016), bribery (Abbink et al., 2002), antimicrobial prescribing of medical practitioners (Colman et al., 2019), pro-environmental donations (Klein et al., 2017), or refugee helping (Böhm et al., 2018). Findings from such studies can, in turn, provide valuable information on the usefulness of certain policy changes (e.g., Barrett & Dannenberg, 2012; Betsch et al., 2017; Böhm et al., 2018; Tarrant et al., 2019) and thereby ultimately help solve key societal challenges.

Finally, economic games allow for implementation with “real” outcomes (most commonly monetary incentives), thus rendering choices truly consequential. The advantage is that, unlike in hypothetical situations or questionnaire self-reports, reporting behavior that is not aligned with one’s true preferences essentially becomes (subjectively) costly. If one were to ignore the instructions or respond at random, one is more likely to forgo preferred outcomes. Thus, it is commonly accepted that participants should be more motivated and attentive (e.g., Baron, 2001; Camerer & Hogarth, 1999). More importantly, responses might be less prone to influences of social desirability, that is, a desire to appear – rather than to be – prosocial. Whereas in a questionnaire one can simply claim to be prosocial, a player in an incentivized game must quite literally put their money where their mouth is. Thus, to appear more prosocial than one truly is, one must forgo desired outcomes. The presence of consequences thus renders observations instances of “actual” behavior (Baumeister et al., 2007; Klein & Hilbig, 2019; Murnighan & Wang, 2016; Pruitt & Kimmel, 1977).

As summarized above, the key goal of the current article is to provide a broad, theory-driven summary of and introduction to economic games, specifically directed at psychologists who may consider using games in their research. As such, we aim to foster high-quality and theory-based research on the many aspects of human prosociality. To this end, we first introduce the most common theoretical frameworks for the description and analysis of economic games to establish a common ground and language: Game Theory and Interdependence Theory. Based on this, we further present a unifying theoretical framework that integrates concepts from both these theories under the umbrella of situational affordances to allow for a thorough psychological understanding of the person processes at play in each game (Thielmann et al., 2020). Second, we provide a detailed overview of the most commonly applied games, including (less commonly used) game variants to manipulate specific situational aspects as well as typical research findings. In so doing, our primary focus is on providing a basis for research allowing a targeted and theory-driven selection of games that are optimally tailored for answering specific research questions.

In this section, we will give a brief introduction to the two most prominent theories of games, Game Theory and Interdependence Theory, and show how the concepts from both these theories can inform a theoretical framework of prosocial behavior based on situational affordances (Thielmann et al., 2020). For more detailed overviews of Game Theory and Interdependence Theory, we direct interested readers to specialized textbooks and articles (for Game Theory, e.g., Fudenberg & Tirole, 1991; Myerson, 1991; Osborne, 2002; Tadelis, 2013; for Interdependence Theory, e.g., Balliet et al., 2017; Rusbult & Van Lange, 2003; Van Lange & Balliet, 2015).

### I.1 Game Theory

Basic concepts. Game Theory is the formal analysis of strategic interaction introduced by von Neumann and Morgenstern in 1944. In essence, “game theory is to games of strategy what probability theory is to games of chance” (Rapoport & Chammah, 1966, p. 13). The game-theoretic description of a game involves three elements: the interacting individuals (players), their sets of available actions (sets of strategies), and their preferences over all possible combinations of available strategies and corresponding outcomes (payoffs or utilities4). The set of players consists of all parties involved in the game. A player’s strategy refers to a complete plan of action for the game (i.e., the action the player will choose in each potential situation), whereas a player’s set of strategies consists of all strategies available to the player. Finally, a player’s payoff or utility function is a numerical representation of their preferences that assigns a numerical value to every possible combination of all players’ strategies (and associated outcomes) – with higher values being assigned to more preferred outcomes. Accordingly, it has been proposed that the term “preference indicator function” may better capture the meaning of utility functions in a game-theoretic sense (Osborne, 2002).

A necessary condition for applying the utility concept is that all possible strategy combinations can at least be weakly ordered (i.e., they can be ranked, with ties being allowed) according to the player’s preferences. A player is called rational if the player has such ordered preferences and acts in line with these to maximize their payoff. Importantly, Game Theory does not dictate that payoffs must be ordered according to their (material or non-material) consequences; in fact, it is beyond any game-theoretic analysis to determine how material outcomes translate to payoffs (in contrast to Interdependence Theory; see Section I.2). Thus, a player can be rational even while, for example, forgoing personal material benefits for the sake of increasing another’s (material) outcome. Nonetheless, for the sake of simplicity and in line with prior experimental research (e.g., Bardsley et al., 2010; Bolton, 1991), in all game-theoretic analyses of the games described here, we take the given outcomes as a direct representation of preferences (i.e., higher outcomes are strictly preferred over smaller ones), meaning that the given outcomes directly translate to payoffs or utility, respectively, in a linear fashion.

Game types. Games differ with regard to several structural features and can be classified accordingly. The type of a game provides information on the available sets of strategies, the knowledge players have about other players’ chosen strategies, and the relationship between payoffs. Thus, as detailed below (Section I.3), different (classes of) games provide different affordances for behavior, and knowing the type and structure of a game indicates which psychological processes are afforded and may ultimately become expressed in behavior.

A key dimension for classifying games is the sequence of players’ actions. In simultaneous games, all players choose simultaneously and without knowledge of the other players’ actions. Thus, players can only assume what others will do. A classic example of a simultaneous game is the Prisoner’s Dilemma (Figure 1A; Section II.2). In this game, each of two players independently decides whether to cooperate (C) or defect (D), and the combination of players’ choices results in certain payoffs (e.g., in the Prisoner’s Dilemma the highest individual payoff is always achieved through unilateral defection (D, C) whereas the highest joint payoff is always achieved through mutual cooperation (C, C)). Typically, two-player simultaneous games are represented in so-called “normal form” using matrices which summarize all conceivable (combinations of) strategies and the corresponding payoffs for each player (Figure 1A). In sequential games, by contrast, players decide one after the other. Thus, except for the player acting first, players have some knowledge about the choices made by other (previous) players, which makes it possible to track reactions to these choices. A typical example of a sequential game is the (binary) Trust Game (Figure 1B; Section II.1). In this game, a trustor first decides whether to send a given endowment to the trustee or to keep it. If the trustor sends the amount, it is multiplied (usually tripled) by the experimenter and added to the trustee’s endowment. In reaction, the trustee then decides how much of the transferred (multiplied) amount to return to the trustor. Specifically, in the binary variant of the game, the trustee can decide between returning half of the (multiplied) transfer and returning nothing, thus keeping the entire amount.
Sequential games are typically represented in so-called “extensive form” using tree diagrams (Figure 1B) that provide information on the sequencing of players’ possible actions, their available actions at every decision node, and their payoffs for all combinations of players’ strategies.
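To make the two representations concrete, the example games from Figure 1 can be encoded as simple data structures. The following is an illustrative Python sketch using the payoff values from Figure 1; note that the trustee’s payoff after “distrust” (set to 3, i.e., the trustee’s assumed endowment) is an assumption, as the text only fixes the trustor’s payoff for that branch.

```python
# Normal-form Prisoner's Dilemma (Figure 1A): each strategy profile
# (player 1's choice, player 2's choice) maps to a pair of payoffs.
PRISONERS_DILEMMA = {
    ("C", "C"): (5, 5),   # mutual cooperation
    ("C", "D"): (1, 7),   # unilateral defection by player 2
    ("D", "C"): (7, 1),   # unilateral defection by player 1
    ("D", "D"): (3, 3),   # mutual defection
}

# Extensive-form binary Trust Game (Figure 1B) as a nested tree:
# the trustor moves first; "trust" hands the move to the trustee.
# Tuples are terminal payoffs (trustor, trustee); the trustee's payoff
# after "distrust" is assumed to equal their initial endowment of 3.
BINARY_TRUST_GAME = {
    "distrust": (3, 3),
    "trust": {
        "return": (6, 6),
        "keep":   (0, 12),
    },
}
```

The normal form enumerates strategy profiles directly, whereas the extensive form mirrors the tree diagram: nesting depth corresponds to the sequence of decision nodes.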

Figure 1. (A) Normal form representation of a Prisoner’s Dilemma. The first value in each cell represents the payoff of player 1, the second the payoff of player 2. (B) Extensive-form representation of the binary Trust Game. The first value in each payoff vector represents the payoff of the trustor, the second the payoff of the trustee.

Note. C = cooperation, D = defection. The absolute payoffs are exemplary.


Another way to classify games refers to whether sets of strategies and corresponding payoffs vary between players. In symmetric games, all players are interchangeable: they have the same sets of strategies available and have the same associated payoffs. An example of a symmetric game is once more the Prisoner’s Dilemma in Figure 1A: Both players can choose between cooperation (C) and defection (D) and all strategy combinations yield symmetric payoffs. That is, player 1’s payoff from choosing C when player 2 chooses D is the same as player 2’s payoff from choosing C when player 1 chooses D. By contrast, in asymmetric games, players have different sets of strategies available and/or the strategy combinations induce different payoffs. A classic example of an asymmetric game is the binary Trust Game in Figure 1B: Whereas the trustor’s set of strategies is comprised of sending or keeping their endowment, the trustee’s set of strategies is comprised of keeping or returning half of the multiplied amount. Similarly, we could adapt the Prisoner’s Dilemma in Figure 1A such that mutual defection is, for instance, associated with payoff 4 for player 1 while maintaining payoff 3 for player 2, thereby creating an asymmetric Prisoner’s Dilemma through asymmetry in payoffs (despite symmetry in strategy sets). In general, (a)symmetry can thus refer to the set of strategies, the associated payoffs, or both.

Finally, games can be classified according to whether the sum of players’ payoffs differs depending on players’ strategies. In constant-sum games, the sum of players’ payoffs is the same for any combination of players’ strategies.5 As such, constant-sum games involve perfect negative interdependence between players’ payoffs and are thus strictly competitive: whenever a player prefers one combination of strategies over another, their opponent will prefer the opposite. A typical example of a constant-sum game is the Dictator Game (see also Section II.1 for details). In this game, a dictator can freely decide how to distribute an endowment between him/herself and a recipient, who cannot react to the dictator’s split. Here, a higher payoff for the dictator necessarily leads to a lower payoff for the recipient, and vice versa, and the sum of payoffs generally equals the dictator’s initial endowment. By contrast, in non-constant-sum games the sum of players’ payoffs varies as a function of players’ strategies. Prominent examples of non-constant-sum games are social dilemmas (see Section II.2 for details), such as the Prisoner’s Dilemma introduced previously (Figure 1A). Here, the sum of players’ payoffs depends on the players’ strategies, reaching its maximum for mutual cooperation (e.g., total payoff 5 + 5 = 10 in Figure 1A) while reaching its minimum for mutual defection (e.g., total payoff 3 + 3 = 6 in Figure 1A).
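The constant-sum criterion itself is easy to formalize: the sum of both players’ payoffs must be identical across all strategy combinations. As an illustrative Python sketch (with a hypothetical 10-unit Dictator Game endowment and the Figure 1A payoffs):

```python
def is_constant_sum(payoffs):
    """True if the sum of both players' payoffs is the same for every
    strategy combination."""
    return len({p1 + p2 for (p1, p2) in payoffs.values()}) == 1

# Dictator Game with a 10-unit endowment: every split sums to 10.
dictator = {kept: (kept, 10 - kept) for kept in range(11)}

# Prisoner's Dilemma from Figure 1A: sums range from 6 (D, D) to 10 (C, C).
pd = {("C", "C"): (5, 5), ("C", "D"): (1, 7),
      ("D", "C"): (7, 1), ("D", "D"): (3, 3)}

print(is_constant_sum(dictator))  # True
print(is_constant_sum(pd))        # False
```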

Equilibrium solutions. Equilibria are “solutions” to games resulting from a game-theoretic analysis of the strategic interaction at hand. An equilibrium is a combination of strategies, one for each player, with the property that no player can unilaterally be better off by switching to another strategy. Equilibria can thus be understood as normative predictions (what players ought to do under the assumptions of Game Theory). Notably, given that a player’s “best response” (i.e., the strategy with the highest payoff) typically depends on the other players’ strategies, a game may have multiple equilibria resulting in different predictions regarding players’ behaviors. Furthermore, several solution concepts exist, which differ in how selective they are and to which games they are typically applied. Here, we refer to three of the most prominent solution concepts: the Nash equilibrium, the equilibrium in dominant strategies, and the subgame-perfect equilibrium.

The Nash equilibrium is a strategy combination in which no player can obtain a strictly higher payoff by deviating from the equilibrium (choosing another strategy) unilaterally. That is, players choose mutual best responses given the others’ strategies. In the Prisoner’s Dilemma, for instance, mutual defection is the single Nash equilibrium: If player 1 were to unilaterally deviate from their equilibrium strategy (defection) and opt for cooperation instead (whereas player 2 adheres to defection), player 1 would be worse off (payoff 1 < 3; see Figure 1A). The same holds for player 2 if they were to deviate from the equilibrium strategy (defection) while player 1 sticks to it.
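For small normal-form games, pure-strategy Nash equilibria can be found by brute force: a strategy profile is an equilibrium if neither player gains from a unilateral deviation. A minimal Python sketch, again using the Figure 1A payoffs (not a library implementation):

```python
from itertools import product

def nash_equilibria(payoffs, strategies=("C", "D")):
    """Enumerate all pure-strategy Nash equilibria of a two-player
    normal-form game given as {(s1, s2): (payoff1, payoff2)}."""
    equilibria = []
    for s1, s2 in product(strategies, repeat=2):
        p1, p2 = payoffs[(s1, s2)]
        # no profitable unilateral deviation for player 1 ...
        best1 = all(payoffs[(alt, s2)][0] <= p1 for alt in strategies)
        # ... and none for player 2
        best2 = all(payoffs[(s1, alt)][1] <= p2 for alt in strategies)
        if best1 and best2:
            equilibria.append((s1, s2))
    return equilibria

pd = {("C", "C"): (5, 5), ("C", "D"): (1, 7),
      ("D", "C"): (7, 1), ("D", "D"): (3, 3)}
print(nash_equilibria(pd))  # [('D', 'D')]
```

Running the check confirms that mutual defection is the unique pure-strategy Nash equilibrium of this Prisoner’s Dilemma.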

An equilibrium in dominant strategies is a strategy combination in which all players choose their dominant strategy. A strategy of a player is called (strictly) dominant if it provides a (strictly) higher payoff than any other strategy of this player, irrespective of other players’ strategies. Thus, a dominant strategy is a player’s best strategy in general. As such, an equilibrium in dominant strategies necessarily constitutes a Nash equilibrium, but not vice versa. As is apparent in Figure 1A for the Prisoner’s Dilemma, defection is a dominant strategy because it always yields a higher payoff, regardless of whether the other player cooperates (i.e., payoff 7 vs. 5) or defects (i.e., payoff 3 vs. 1). Thus, mutual defection constitutes an equilibrium in dominant strategies. However, in many games, players do not have a dominant strategy; by implication, these games also lack an equilibrium in dominant strategies. For example, in the Trust Game in Figure 1B, the trustor’s best response depends on the trustee’s strategy: If the trustee chooses “return”, the trustor’s best choice is “trust” (because payoff 6 > 3), but if the trustee chooses “keep”, the trustor’s best choice is “distrust” (because payoff 3 > 0).
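Strict dominance can likewise be verified mechanically: a strategy is strictly dominant if it beats every alternative against every possible choice of the opponent. An illustrative Python sketch with the Figure 1A payoffs:

```python
def strictly_dominant(payoffs, player, strategy, strategies=("C", "D")):
    """True if `strategy` yields a strictly higher payoff for `player`
    (1 or 2) than every alternative, whatever the opponent chooses."""
    i = player - 1
    # build a strategy profile with `own` in this player's slot
    profile = ((lambda own, opp: (own, opp)) if player == 1
               else (lambda own, opp: (opp, own)))
    return all(payoffs[profile(strategy, opp)][i] > payoffs[profile(alt, opp)][i]
               for opp in strategies
               for alt in strategies if alt != strategy)

pd = {("C", "C"): (5, 5), ("C", "D"): (1, 7),
      ("D", "C"): (7, 1), ("D", "D"): (3, 3)}

print(strictly_dominant(pd, player=1, strategy="D"))  # True
print(strictly_dominant(pd, player=2, strategy="D"))  # True
print(strictly_dominant(pd, player=1, strategy="C"))  # False
```

Since defection is strictly dominant for both players, (D, D) is the equilibrium in dominant strategies, in line with the analysis above.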

Finally, the subgame-perfect equilibrium is a refinement of the Nash equilibrium that is particularly suited for the analysis of sequential games with perfect information (in which players observe the actions of all preceding players).6 A strategy combination is a subgame-perfect equilibrium if it induces a Nash equilibrium in each subgame of the game. A subgame starts at each decision node in the game tree and contains all parts of the tree that follow after this decision node. For example, in the binary Trust Game (Figure 1B), one subgame starts at the trustor’s choice and one at the trustee’s choice: The first subgame corresponds to the complete game; the second subgame contains the trustee’s node, the trustee’s set of strategies, and the corresponding payoffs for both players. To identify a subgame-perfect equilibrium of a game with a predefined (finite) number of stages, backward induction over subgames is the method of choice: One starts with the decision(s) at the final subgame of the game (e.g., the trustee’s choice in the Trust Game) and determines the best response for the involved player. These decisions are “fixed” before continuing with the previous stage of the game. At this stage, again, one determines the player’s best response, now conditional on the (fixed) best response in the subsequent stage. This procedure is continued until the first decision is reached. In the binary Trust Game, backward induction identifies (distrust | keep) as the unique subgame-perfect equilibrium: At the final subgame of the game, the trustee’s best response is to keep the trustor’s transfer, yielding payoff 12 (keep) as compared to payoff 6 (return). Fixing this decision as given, the trustor’s best response is to distrust so as to ensure payoff 3 instead of payoff 0 that would result from trust. Overall, the subgame-perfect equilibrium identifies a player’s best strategy given that the other player(s) will choose their best strategy once it is their turn.
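The backward-induction procedure just described can be sketched in Python for the binary Trust Game. The payoffs follow Figure 1B, with two assumptions flagged in the comments: the trustee’s payoff after “distrust” is set to 3 (their assumed endowment), and players are assumed to alternate moves down the tree.

```python
def backward_induct(node, mover):
    """Resolve a game tree by backward induction. `node` is either a
    terminal payoff tuple or a dict mapping actions to subtrees;
    `mover` indexes the player deciding here (0 = trustor, 1 = trustee).
    Players are assumed to alternate at successive decision nodes."""
    if isinstance(node, tuple):          # terminal node: nothing to decide
        return node, None
    # first resolve ("fix") every subgame, then pick the mover's best response
    resolved = {action: backward_induct(subtree, mover + 1)[0]
                for action, subtree in node.items()}
    best = max(resolved, key=lambda action: resolved[action][mover])
    return resolved[best], best

# Figure 1B; the trustee's payoff after "distrust" (3) is an assumption.
trust_game = {"distrust": (3, 3),
              "trust": {"return": (6, 6), "keep": (0, 12)}}

payoffs, first_move = backward_induct(trust_game, mover=0)
print(first_move, payoffs)  # distrust (3, 3)
```

As in the text, the trustee’s subgame resolves to “keep” (12 > 6), and the trustor, anticipating this, chooses “distrust” (3 > 0), yielding the subgame-perfect equilibrium (distrust | keep).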

Efficiency. To judge payoffs from the perspective of the society (i.e., the set of players as a group), one can use the criterion of social efficiency or social welfare, respectively. The social welfare associated with a payoff combination corresponds to the sum of all individual payoffs. Thus, applying this criterion rests on the assumption that players’ payoffs are directly comparable and that summing them is a valid operation (which is, traditionally, non-standard in Game Theory). In the Prisoner’s Dilemma in Figure 1A, only the payoff combination resulting from mutual cooperation (5, 5) is efficient according to this criterion because 5 + 5 > max{7 + 1, 3 + 3, 1 + 7}.7
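Under this criterion, the welfare computation amounts to summing payoffs per strategy profile and taking the maximum, as this short Python sketch (Figure 1A payoffs) illustrates:

```python
pd = {("C", "C"): (5, 5), ("C", "D"): (1, 7),
      ("D", "C"): (7, 1), ("D", "D"): (3, 3)}

# Social welfare of each strategy profile is simply the payoff sum.
welfare = {profile: sum(p) for profile, p in pd.items()}
best = max(welfare, key=welfare.get)

print(best, welfare[best])  # ('C', 'C') 10
```

Note that the welfare-maximizing profile (C, C) differs from the Nash equilibrium (D, D) derived above, which is precisely the defining tension of the Prisoner’s Dilemma.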

### I.2 Interdependence Theory

Interdependence Theory (Kelley et al., 2003; Kelley & Thibaut, 1978; Thibaut & Kelley, 1959) provides a comprehensive analysis and typology of interpersonal interactions as, for example, modelled in games. It is based on the assumption that any interaction I can be described in terms of the interacting individuals A and B (“players” in game-theoretic terms) with their needs, thoughts, and concerns for each other in a situation S. Formally speaking, I = f (S, A, B). The choice options and corresponding outcomes are traditionally represented by a classical tool from economic Game Theory, namely payoff matrices. However, note that the “payoff” entries in the matrix have a different meaning because they reflect players’ (monetary) outcomes rather than utilities. Thus, to maintain consistent terminology, we will continue to speak of “outcomes” in what follows, except when referring to a game-theoretic analysis (in which case we will rely on the “payoff” terminology). In general, Interdependence Theory proposes different indices relating the outcomes in the matrix to each other so as to precisely describe the pattern of interdependence inherent in the situation (see paragraph on the Prisoner’s Dilemma in Section II.2).

Interdependence structure. The situation characterizing an interaction between players is a key element in Interdependence Theory. In essence, each situation S can be described in terms of six structural dimensions: First, the (i) level of dependence designates the degree to which a player’s outcome is dependent on their own actions (actor control), the other player’s actions (partner control), and the players’ joint actions (joint control). Closely related, (ii) mutuality of dependence describes the degree to which players are equally dependent on one another or, stated differently, the extent to which one player has more or less power. The (iii) basis of dependence describes how players influence each other, that is, via partner control or joint control, and it thus describes the degree of coordination a situation requires. Moreover, Interdependence Theory considers the (iv) covariation of interests (i.e., the degree to which players’ outcomes conflict vs. correspond) as well as the (v) temporal structure (i.e., the degree to which the situation involves dynamic and sequential processes, and thus whether there is the potential for future interdependence). Finally, (vi) information certainty describes the extent to which players have information about the other’s outcomes, each player’s impact on the other’s outcome, the other’s goals and motives, and the opportunities for future interaction. Altogether, these structural dimensions characterize the given situation.

Interdependence processes. Based on the evaluation of the given situation (S), players are assumed to engage in a psychological transformation, which eventually results in an effective situation. In essence, transformation describes the process of considering the consequences of one’s own actions for the partner’s outcomes and behaviors as well as for one’s own long-term goals. That is, outcomes are evaluated based on the perception of emotional, social, instrumental, and opportunity rewards and costs. As a consequence, players might not play the game we think they are playing based on the assumption that outcomes directly relate to preferences, but rather rely on their mental construal of the situation when making a decision and order outcomes in a different way than implied by their material value (e.g., Columbus et al., 2019; Gerpott et al., 2018; Halevy et al., 2012; Halevy & Chou, 2014). Stated differently, the transformation of outcomes to payoffs (in a game-theoretic sense) may be non-monotonic according to Interdependence Theory. Indeed, this very feature of assuming a transformation from a given to an effective situation makes Interdependence Theory go one (psychological) step beyond Game Theory: Whereas Interdependence Theory starts with outcomes and addresses how these are translated to payoffs or utilities, respectively, Game Theory rests on utilities. Thus, whereas in Interdependence Theory the starting point of any consideration is the given situation, in Game Theory it is the effective situation.

Importantly, the transformation process calls basic social motives into play: Whereas the given situation exclusively involves consideration of one’s own personal outcome, the effective situation involves (positive and negative) other-regarding preferences and relationship-relevant concerns. In general, a motive denotes whatever a player seeks to accomplish or maximize in a situation, and it may depend on a player’s stable dispositions, on situationally activated (state-like) goals (e.g., norms, reputational concerns; e.g., Kelley et al., 2003), and on any interaction between the two. Social motives were considered early on as drivers of behavior in economic games (e.g., Kuhlman & Marshello, 1975; McClintock, 1972; Messick & McClintock, 1968). Building on this work, Kelley and Thibaut (1978) proposed the following social motives as guiding principles of the psychological transformation from the given to the effective situation: MaxOwn (maximizing one’s own outcome), MaxOther (maximizing others’ outcomes), MaxJoint (maximizing the sum of own and others’ outcomes), MinDiff (minimizing the absolute difference between own and others’ outcomes), MaxRel (maximizing the relative difference between own and others’ outcomes), and MinOther (minimizing others’ outcomes). In our motivational analysis of games in Section II, we refer to these motives as greed (MaxOwn), competitiveness (MaxRel), and spite (MinOther) – all denoting selfish motives – and altruism (MaxOther), social welfare concerns (MaxJoint), and fairness (MinDiff) – all denoting prosocial motives. In general, social motives shape players’ effective situation by affecting the weights assigned to the potential outcomes.
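The six transformation rules can be written down directly. The following sketch (our illustration, not part of the original framework) applies each motive to a set of hypothetical options described by (own, other) monetary outcomes and returns the option that motive prefers:

```python
# Sketch (our illustration): the six transformation rules proposed by
# Kelley and Thibaut (1978), each mapping an (own, other) outcome pair
# to the quantity the motive seeks to maximize.
MOTIVES = {
    "MaxOwn":   lambda own, other: own,                # greed
    "MaxOther": lambda own, other: other,              # altruism
    "MaxJoint": lambda own, other: own + other,        # social welfare
    "MinDiff":  lambda own, other: -abs(own - other),  # fairness
    "MaxRel":   lambda own, other: own - other,        # competitiveness
    "MinOther": lambda own, other: -other,             # spite
}

def preferred(options, motive):
    """Return the option whose (own, other) outcomes best serve the motive."""
    return max(options, key=lambda opt: MOTIVES[motive](*options[opt]))

# Hypothetical options with (own, other) monetary outcomes:
options = {"keep": (10, 0), "share": (6, 6), "give": (0, 10)}
print(preferred(options, "MaxOwn"))    # keep
print(preferred(options, "MinDiff"))   # share
print(preferred(options, "MaxOther"))  # give
```

Note how the same three options yield three different preference orders depending on the motive applied, which is precisely the sense in which motives shape the effective situation.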

To illustrate this, again consider the Prisoner’s Dilemma in Figure 1A, now assuming that the values in the matrix denote (monetary) outcomes (rather than payoffs, as would be the case in a game-theoretic analysis). The given matrix implies that the best individual outcome results from unilateral defection (7), followed by mutual cooperation (5). However, transformation may yield a different (effective) matrix: If, for instance, player 1 has social welfare concerns and thus assigns a high weight to both players’ joint outcomes, the player might prefer mutual cooperation over unilateral defection (in game-theoretic terms, this would suggest that the player assigns a higher payoff to mutual cooperation than to unilateral defection). As a consequence, the player’s preferences will follow a different rank order than the one implied by the monetary outcomes in the matrix. Stated differently, players’ preferences depend on their social motives in a situation, suggesting that their representation of the situation may differ from the mere game structure. Interdependence Theory thus provides an explanation for apparent deviations of players’ behavior from game-theoretic predictions (i.e., equilibrium solutions) based on the given situation (assuming that monetary outcomes directly translate into payoffs): Any “gap between the ‘given’ situation and the observed behavior always indicates that ‘person factors’ are at work” (Kelley et al., 2003, p. 77).
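This transformation can be illustrated numerically. In the sketch below (our illustration), the outcomes 7 (unilateral defection) and 5 (mutual cooperation) come from the text, whereas the remaining cells (1 for unilateral cooperation, 2 for mutual defection) are assumed for the example; a player with social welfare concerns is modelled by adding a weighted fraction of the partner’s outcome to their own:

```python
# Illustration (not from the article): 7 and 5 are the outcomes named in the
# text; 1 (unilateral cooperation) and 2 (mutual defection) are assumed.
given = {  # (my move, partner's move) -> (own outcome, partner's outcome)
    ("C", "C"): (5, 5),
    ("C", "D"): (1, 7),
    ("D", "C"): (7, 1),
    ("D", "D"): (2, 2),
}

def effective(matrix, w):
    """Transform given outcomes into effective payoffs: own + w * partner."""
    return {cell: own + w * other for cell, (own, other) in matrix.items()}

eff = effective(given, w=0.6)  # w > 0 models social welfare concerns
# Against a cooperating partner, mutual cooperation now outranks defection:
print(eff[("C", "C")], eff[("D", "C")])  # 8.0 7.6
```

With the partner’s outcome weighted at 0.6, mutual cooperation (5 + 0.6 × 5 = 8.0) effectively outranks unilateral defection (7 + 0.6 × 1 = 7.6), reversing the rank order implied by the monetary outcomes alone.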

### I.3 An Affordance-Based Framework of Prosocial Behavior

Integrating concepts from Game Theory and Interdependence Theory, Thielmann, Spadaro, and Balliet (2020) recently presented a unifying theoretical framework of prosocial behavior, proposing that interdependent situations as modelled in games can be understood in terms of situational affordances that allow different person factors to become expressed in prosocial versus selfish behavior. Generally speaking, affordances describe properties of situations that “provide a context for the expression of motives, goals, values, and preferences” (Reis, 2008, p. 316). Several theories in psychology – including Interdependence Theory – consider affordances as key aspects of situations that allow certain dispositional factors to influence behavior (e.g., De Vries et al., 2016; Holmes, 2004; Kelley et al., 2003; Mischel & Shoda, 1995). Thus, considering games through the lens of affordances provides a unique perspective on corresponding behavior and allows for a thorough understanding of the psychological processes at play.

In games – and interdependent situations more generally – four broad affordances are particularly prominent, namely (i) a possibility for exploitation, (ii) a possibility for reciprocity, (iii) a temporal conflict between short- and long-term interests, and (iv) dependence on others under uncertainty (Thielmann et al., 2020). Depending on the structural features of a game, one or more of these affordances will be present to guide behavior. Thus, affordances denote exogenous factors inherent in a (game) situation. Each affordance is, in turn, uniquely tied to a psychological process,8 that is, an endogenous entity within the person that may be activated and ultimately become expressed in the person’s behavior. Table 1 gives an overview of the four key affordances involved in interdependent situations and corresponding psychological processes afforded. Moreover, Table 1 delineates how each of these affordances is linked to the structural features of games as specified in Game Theory (Section I.1) and to basic dimensions of interdependence as specified in Interdependence Theory (Section I.2; see also Gerpott et al., in press; Thielmann et al., 2020).

Table 1. Situational affordances in interdependent situations and corresponding psychological processes afforded to guide behavior. Features of the game structure as described by Game Theory and dimensions of interdependence as proposed by Interdependence Theory provide the affordances in the first place.
| Situational affordance | Psychological process afforded | Game structure | Dimensions of interdependence |
| --- | --- | --- | --- |
| (i) Possibility for exploitation | Unconditional concern for others’ welfare | One-shot; simultaneous; sequential, as player acting last; asymmetric (high power) | (Low) coordination; (low) conflict of interests; (high) power |
| (ii) Possibility for reciprocity | Conditional concern for others’ welfare | Repeated; sequential, as player not acting first | – |
| (iii) Temporal conflict | Self-regulation of immediate impulse gratification | Repeated; sequential, as player not acting last | (High) future interdependence |
| (iv) Dependence under uncertainty | Beliefs about others’ prosociality | Simultaneous; sequential, as player not acting last; asymmetric (low power) | (High) level of dependence; (high) information uncertainty |

First, the possibility for exploitation affordance is present when a player can increase their outcome at others’ costs, and particularly so if the player does not need to fear any sanctions by the other player(s) for selfish (exploitative) behavior. In terms of structural features of games, this applies to one-shot games in which either all players act simultaneously or to the player acting last in a sequential interaction. Moreover, asymmetric games characterized by asymmetry in power between players involve a possibility for exploitation for the player having more power than the other(s). In terms of basic dimensions of interdependence, a possibility for exploitation is present in situations of low coordination (in which increasing one’s own outcome is necessarily associated with decreasing others’ outcomes), high conflict of interests (in which a player can gain much more by selfish as compared to prosocial behavior), and high power (in which a player has strong or even complete power over the final outcome distribution). Whenever a game involves a possibility for exploitation, it allows the expression of unconditional concern for others’ welfare in (prosocial vs. selfish) behavior.

Second, the possibility for reciprocity affordance is present when a player can react to other players’ previous behavior. This applies when games are played repeatedly, or for the player not acting first (and thus reacting to another) in a sequential interaction. Interestingly, the reciprocity affordance has no clear conceptual link to any of the basic dimensions of interdependence. Whenever a game involves a possibility for reciprocity, it allows the expression of conditional concern for others’ welfare in (prosocial vs. selfish) behavior.

Third, the temporal conflict affordance is present when the potential long-term consequences of players’ behavior conflict with players’ short-term interests. This holds in repeated games (in which prosocial behavior may help establish a mutually beneficial relationship, at the cost of short-term personal gains) as well as in sequential games for players not acting last (because they have to take into account potential negative reactions to their selfish behavior by others). In terms of basic dimensions of interdependence, temporal conflict is present when there is high future interdependence, that is, when players’ behavior likely affects their outcomes in the future. Whenever a game involves a temporal conflict, it allows the expression of self-regulation of immediate impulse gratification in (prosocial vs. selfish) behavior.

Fourth, the dependence under uncertainty affordance is present when a player’s outcome is (at least to some extent) dependent on others’ behavior, but the player does not know the others’ strategies. This applies in simultaneous games as well as for players not acting last in sequential games. Moreover, dependence is higher in asymmetric games in which one player has more power than the other (due to asymmetry in outcomes and/or sets of strategies). Correspondingly, in terms of basic dimensions of interdependence, dependence under uncertainty is present when there is a high level of dependence and high information uncertainty. Whenever a game involves dependence under uncertainty, it allows the expression of beliefs about others’ prosociality in (prosocial vs. selfish) behavior.

In addition to these four broad affordances, social situations may also vary on more specific sub-affordances that allow the expression of certain social motives in behavior (Thielmann et al., 2020). Specifically, the social motives specified in Interdependence Theory have direct conceptual links to two of the psychological processes afforded in interdependent situations – unconditional and conditional concern for others – and they may therefore be expressed in the presence of certain sub-affordances of the exploitation and reciprocity affordances. These sub-affordances specifically relate to the outcomes that can be achieved by prosocial versus selfish behavior in a situation. For example, in non-constant-sum games (e.g., social dilemmas), prosocial behavior may increase players’ joint outcomes and thus allow for the expression of the motive of social welfare concerns in (prosocial) behavior. Likewise, if a game makes it possible to maximize equality in players’ outcomes (unconditionally, as, e.g., in the Dictator Game, or conditionally on another’s actions, as, e.g., in the Trust Game as trustee), it affords the expression of a fairness motive. Thus, depending on the game structure and the potential outcomes that can be achieved, different social motives may ultimately be afforded to guide behavior (Thielmann et al., 2015).

Crucially, a game may involve several of said (sub-)affordances. Thus, simply knowing the affordances in a game may not be sufficient to draw conclusive inferences about the psychological processes (including social motives) underlying a player’s choice. This is also because any one behavior need not be driven by a single psychological process or social motive; multiple (more or less conflicting) processes may work together to produce a certain behavior. Thus, to isolate a certain psychological process and/or social motive, it may be necessary to either (i) use games that provide strategies uniquely tied to a specific motive (which is rarely the case) or (ii) combine different games or variants of the same game. Following an overview of which games exist, we will detail below how such combinations may be implemented based on a theoretical analysis of the affordances present in a game.

In what follows, we provide an overview of several economic games that have been commonly used in research on interindividual, intragroup, and intergroup interactions. Besides describing the structural features and theoretical underpinnings of the games, we derive the (sub-)affordances each game involves. This yields clear predictions about which psychological processes (including social motives) should be afforded in a game to guide behavior, and it also forms the basis for the tailored selection (and combination) of games in research. Building on this, we outline how the games can be adapted to manipulate the affordances involved (including a summary of game variants), and we close with a brief review of typical empirical findings for each game.

As an organizing framework, we broadly distinguish between sequential resource-allocation games and social dilemmas given their structural differences and, thus, inherent differences in affordances. Table 2 provides an overview of the games we discuss – including the (sub-)affordances present in each game – as well as of common game variants, which are described in more detail in the online supplement on the Open Science Framework (OSF; https://osf.io/t8m7x/). Although the list of game variants we introduce cannot be comprehensive, it demonstrates the flexibility of games to model specific features of social situations and shows potential adaptations in terms of affordances present. In the OSF online supplement, we also provide standardized instructions for all (basic) games (in English and German)9 as well as further guidelines for the practical implementation of games in empirical studies.

Again, note that we base all game-theoretic analyses and corresponding behavioral predictions on the given situation, assuming that (monetary) outcomes directly translate to payoffs – that is, that the given situation basically represents the effective situation. Thus, as a necessary consequence, the game-theoretic analyses are based on the assumption that players aim to maximize their individual outcome, meaning that they are purely selfish. Any empirical deviation from the behavioral predictions derived from a game-theoretic analysis thus implies either irrational responding (e.g., players do not behave according to their preferences) or the involvement of other-regarding preferences. Moreover, note that in experimental research, payoffs are usually represented – and manipulated – by (monetary or non-monetary) outcomes, again suggesting that purely selfish players should choose the outcome-maximizing (selfish-rational) strategy.

### II.1 Sequential resource-allocation games

Sequential resource-allocation games are sequential games in which one player is initially asked to allocate a given endowment between themselves and one or several others who may then – in most games – react to this allocation. The games discussed here are the Dictator Game, the Ultimatum Game, and the Trust Game as well as variants thereof.

#### Dictator Game

Game structure. The Dictator Game (Forsythe et al., 1994; Kahneman et al., 1986) is a sequential constant-sum game with asymmetric player roles. The game involves two players, the dictator (D) and the recipient (R). The dictator is endowed with a fixed amount of e tokens and decides how many tokens g (with 0 ≤ g ≤ e) to allocate to the recipient. The recipient has no opportunity to react to the dictator’s allocation.10 Thus, the dictator has full power over the final outcome distribution, and the dictator’s payoff (or utility) is simply $u_D = e - g$ whereas the recipient’s payoff is $u_R = g$. From a game-theoretic viewpoint, the dominant strategy for the dictator is to give nothing (g = 0) such that $u_D = e$ and $u_R = 0$. In terms of affordances involved, the Dictator Game thus provides a possibility for exploitation, allowing unconditional concern for others’ welfare to guide behavior (Table 2). Low concern for others may be expressed in (selfish) behavior through the social motives of greed, competitiveness, and/or spite, all suggesting g = 0. High concern for others may be expressed in (prosocial) behavior through the social motives of fairness, suggesting g = e/2, and/or altruism, suggesting g = e.
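The payoff rules just described, and the allocations suggested by each social motive, can be summarized in a few lines (a minimal sketch using the notation above; the endowment of 10 tokens is hypothetical):

```python
# Minimal sketch of the Dictator Game payoffs (e, g, u_D, u_R as in the text).
def dictator_payoffs(e, g):
    """Return the payoffs u_D = e - g and u_R = g for a gift of g tokens."""
    assert 0 <= g <= e, "the gift must satisfy 0 <= g <= e"
    return {"dictator": e - g, "recipient": g}

e = 10  # hypothetical endowment
# Allocations suggested by the social motives discussed above:
for motive, g in [("greed/competitiveness/spite", 0),
                  ("fairness", e // 2),
                  ("altruism", e)]:
    print(motive, dictator_payoffs(e, g))
```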

Table 2. Summary of commonly used economic games and game variants thereof
| Game | Structure | Payoffs (u) | Broad affordance(s) | Social motive(s)a |
| --- | --- | --- | --- | --- |
| **Sequential resource-allocation games** | | | | |
| Dictator Game | A dictator (D) receives an endowment (e) and freely decides how much to keep versus give (g) to a recipient (R). | $u_D = e - g$; $u_R = g$ | Exploitation | altruism, fairness vs. greed, competitiveness, spite |
| Triple Dictator Game | … D’s transfer to R is tripled. | $u_D = e - g$; $u_R = 3g$ | Exploitation | altruism, fairness, social welfare vs. greed, competitiveness, spite |
| Generosity Game | … D’s payoff is fixed and D simply decides on R’s payoff. | $u_D = e$; $u_R = g$ | Exploitation | altruism, fairness, social welfare vs. competitiveness, spite |
| Solidarity Game | … D and two recipients R1 and R2 each can win e in a lottery. D decides in advance how much of e to give (g) to R1 and/or R2 if D wins e in the lottery and either or both of R1 and R2 lose. | if either R1 or R2 loses: $u_D = e - g$, $u_{R1} = g$ or $u_{R2} = g$, $u_{R2} = e$ or $u_{R1} = e$; if both lose: $u_D = e - 2g$, $u_{R1} = u_{R2} = g$ | Exploitation | altruism, fairness vs. greed, competitiveness, spite |
| Faith Game | … R can choose whether to receive g or a fixed amount $f < \frac{e}{2}$. | $u_D = e - g$; $u_R \in \{g, f\}$ | Dependence | N/A |
| Ultimatum Game | A proposer (P) receives an endowment (e) and decides how much to keep versus give (g) to a recipient (R). R is empowered to accept versus reject P’s offer, which affects both players’ payoffs. | if R accepts: $u_P = e - g$, $u_R = g$; if R rejects: $u_P = u_R = 0$ | P: (Exploitation), Temporal conflict, Dependence; R: Reciprocity, Temporal conflict | P: altruism, fairness vs. greed, competitiveness, spite; R: altruism, greed, social welfare vs. competitiveness, spite |
| Impunity Game | … R’s decision only affects R’s own payoff. | if R accepts: $u_P = e - g$, $u_R = g$; if R rejects: $u_P = e - g$, $u_R = 0$ | P: Exploitation; R: Reciprocity, Temporal conflict | P: altruism, fairness vs. greed, competitiveness, spite; R: greed, social welfare vs. (none) |
| Spite Game | … R’s decision only affects P’s payoff. | if R accepts: $u_P = e - g$, $u_R = g$; if R rejects: $u_P = 0$, $u_R = g$ | P: (Exploitation), Temporal conflict, Dependence; R: Reciprocity | P: altruism, fairness vs. greed, competitiveness, spite; R: altruism, social welfare vs. competitiveness, spite |
| Three-person Ultimatum Game | … P divides e between R and a passive bystander (B). R’s decision affects all three players. | if R accepts: $u_P = e - g_R - g_B$, $u_R = g_R$, $u_B = g_B$; if R rejects: $u_P = u_R = u_B = 0$ | P: (Exploitation), Temporal conflict, Dependence; R: Reciprocity, Temporal conflict | P: altruism, fairness vs. greed, competitiveness, spite; R: altruism, greed, social welfare vs. competitiveness, spite |
| Rubinstein Bargaining Game | … if R rejects P’s offer, bargaining continues with R making a new offer to P, and so on; the game ends once an offer is accepted by either player. Bargaining time is costly, with discounting factor 0 < t < 1 decreasing in each round. | $u_P = (e - g) \cdot t$; $u_R = g \cdot t$ | (Exploitation), Reciprocity, Temporal conflict, Dependence | altruism, social welfare vs. greed, competitiveness, spite |
| Power-to-Take Game | … both P and R have to earn their endowments ($e_P$ and $e_R$, respectively) in an effortful task. P then decides on the proportion (t) to take from $e_R$. Before the corresponding amount is transferred to P, R can destroy any proportion (d) of $e_R$. | $u_P = e_P + t(1 - d)e_R$; $u_R = (1 - t)(1 - d)e_R$ | P: Exploitation, Temporal conflict, Dependence; R: Reciprocity, Temporal conflict | P: altruism, fairness vs. greed, competitiveness, spite; R: altruism, greed, social welfare vs. competitiveness, spite |
| Trust Game | A trustor (I) divides an endowment (e) between herself and a trustee (T). The transferred amount (g) is multiplied by a constant (m) and added to T’s endowment. T can return any amount (r) of $g \cdot m$ to I. | $u_I = e - g + r$; $u_T = e + m \cdot g - r$ | I: (Exploitation), Dependence; T: Exploitation, (Reciprocity) | I: altruism, social welfare vs. greed, competitiveness, spite; T: altruism, fairness vs. greed, competitiveness, spite |
| Moonlighting Game | … I cannot only send $g \le \frac{e}{2}$ tokens to T but can alternatively take $t \le \frac{e}{2}$ tokens from T. In turn, T cannot only return $r \le m \cdot \frac{e}{2}$ tokens but can alternatively reduce I’s payoff by $m \cdot p$ tokens at cost p. | $u_I = e - g + t + r - m \cdot p$; $u_T = e + m \cdot g - t - r - p$ | I: Exploitation, Temporal conflict, Dependence; T: Exploitation, (Reciprocity) | I: altruism, social welfare vs. greed, competitiveness, spite; T: altruism, fairness vs. greed, competitiveness, spite |
| Distrust Game | … I does not receive an initial endowment, but T is endowed with $e_T = m \cdot e + e$. I decides how much to take (t) from $e_T$, with t being divided by m. T then decides how much to give (r) to I. | $u_I = \frac{t}{m} + r$; $u_T = e_T - t - r$ | I: (Exploitation), Dependence; T: Exploitation, (Reciprocity) | I: altruism, social welfare vs. competitiveness, spite; T: altruism, fairness vs. greed, competitiveness, spite |
| **Social dilemmas** | | | | |
| Prisoner’s Dilemma | Two players decide independently whether to cooperate (C) or defect (D). Their payoffs depend on the combination of players’ strategies. The maximum individual payoff results from unilateral defection, the minimum from unilateral cooperation. | (row player, column player): C/C: (R, R); C/D: (S, T); D/C: (T, S); D/D: (P, P); with T > R > P > S | Exploitation, Dependence | altruism, social welfare vs. greed, competitiveness, spite |
| Prisoner’s Dilemma-Alt | … a third “withdrawal” option W is added that realizes a fixed payoff E for both players. | as in the Prisoner’s Dilemma, plus (E, E) for every combination involving W; with T > R > E > P > S | Exploitation, (Dependence) | altruism, social welfare (C) vs. greed, competitiveness, spite (D) |
| Prisoner’s Dilemma-R | … a third (defective) option $D_{rel}$ is added that realizes a lower absolute payoff for the selecting player, but a higher relative payoff in comparison to the other player. | (row player, column player): C/C: (R, R); C/D: (S, T); C/$D_{rel}$: (E, R); D/C: (T, S); D/D: (P, P); D/$D_{rel}$: (F, S); $D_{rel}$/C: (R, E); $D_{rel}$/D: (S, F); $D_{rel}$/$D_{rel}$: (E, E); with T > R > P > S > F > E; in addition, R/E > T/S and S/F > 1 | Exploitation, Dependence | altruism, social welfare (C) vs. greed (D) vs. competitiveness, spite ($D_{rel}$) |
| Prisoner’s Dilemma with variable dependence | … each player independently chooses their dependence on the other player before deciding whether to cooperate or defect. | similar to the Prisoner’s Dilemma, with T – R and P – S becoming larger (smaller) for high (low) dependence | Exploitation, (Dependence) | altruism, social welfare vs. greed, competitiveness, spite |
| Public Goods Game | Each member i of a group of size N decides how much ($g_i$) of an individual endowment (e) to contribute to a group account. Contributions are multiplied by a constant m (with 1 < m < N) and shared equally across all group members. | $u_i = e - g_i + \frac{m \sum_{j=1}^{N} g_j}{N}$ | Exploitation, Dependence | altruism, social welfare vs. greed, competitiveness, spite |
| Step-level Public Goods Game | … resources in the group account are only shared equally across group members if a contribution threshold t is reached. | $u_i = e - g_i + x \cdot \frac{m \sum_{j=1}^{N} g_j}{N}$, with x = 1 if $\sum_{j=1}^{N} g_j \ge t$ and x = 0 otherwise | (Exploitation), Temporal conflict, Dependence | altruism, social welfare vs. greed, competitiveness, spite |
| Commons Dilemma | … group members decide how much ($t_i^z$) to take from a common resource $C_z$ (with $t_i^z \le C_z$) in round z. Following each round, $C_z$ recovers with reproduction rate r > 1: $C_{z+1} = (C_z - \sum_{j=1}^{N} t_j^z) \cdot r$. The game ends once the amount $C_{z+x}$ available in round z + x is depleted, that is, once the collective consumption $\sum_{j=1}^{N} t_j^{z+x} \ge C_{z+x}$. | $u_i = \sum_z t_i^z$ | Exploitation, Temporal conflict (assuming z > 1), Dependence | altruism, social welfare vs. greed, competitiveness, spite |
| Volunteer’s Dilemma | … group members decide between cooperation (volunteering) and defection (somebody else should do the job). If at least one player cooperates, a public good of value v is provided. Cooperation comes with cost c < v. | $u_i = v - c$ if i cooperates; $u_i = v$ if i defects but at least one other player cooperates; $u_i = 0$ otherwise | Exploitation, Temporal conflict, Dependence | altruism, social welfare vs. competitiveness, spite |
| Intergroup Prisoner’s Dilemma | Each player i (of N > 3 players) is assigned to one of two groups with n = $\frac{N}{2}$ members and decides how much ($g_{Bi}$) of their individual endowment (e) to contribute to a group account (between-group pool B). Contributions are multiplied by a constant m (with 1 < m < n). Every token contributed increases the payoff of players from i’s in-group I and decreases the payoff of players from i’s out-group O by $\frac{m}{n}$. | $u_i = e - g_{Bi} + \frac{m}{n}\sum_{j=1}^{n} g_{Bj} - \frac{m}{n}\sum_{k=n+1}^{N} g_{Bk}$ | Exploitation, Dependence | in-group altruism, in-group welfare, out-group competitiveness, out-group spite vs. greed, collective altruism, collective welfare, in-group competitiveness, in-group spite |
| Intergroup Prisoner’s Dilemma–Maximizing Difference | … a second group account (within-group pool W) is added. Every token $g_{Wi}$ contributed to this pool increases the payoff of players from the in-group I by $\frac{m}{n}$ without affecting the payoff of players from the out-group O. | $u_i = e - g_{Wi} - g_{Bi} + \frac{m}{n}\sum_{j=1}^{n} g_{Wj} + \frac{m}{n}\sum_{j=1}^{n} g_{Bj} - \frac{m}{n}\sum_{k=n+1}^{N} g_{Bk}$ | Exploitation, Dependence | in-group altruism, in-group welfare, out-group competitiveness, out-group spite (B) vs. in-group altruism, collective welfare (W) vs. greed, in-group competitiveness, in-group spite (keep) |
| Positive Intergroup Prisoner’s Dilemma–Maximizing Difference | … the group account is replaced by two different group accounts. In the within-group pool W, every token contributed increases the payoff of players from the in-group I by $\frac{m}{n}$ without affecting the payoff of players from the out-group O. In the between-group pool B, every token contributed increases the payoff of players from both the in-group I and the out-group O by $\frac{m}{n}$. | $u_i = e - g_{Wi} - g_{Bi} + \frac{m}{n}\sum_{j=1}^{n} g_{Wj} + \frac{m}{n}\sum_{j=1}^{n} g_{Bj} + \frac{m}{n}\sum_{k=n+1}^{N} g_{Bk}$ | Exploitation, Dependence | in-group altruism, in-group welfare, out-group spite, out-group competitiveness (W) vs. collective altruism, collective welfare, in-group altruism (B) vs. greed, collective competitiveness, collective spite, in-group competitiveness, in-group spite, out-group spite (keep) |
| Intergroup Parochial and Universal Cooperation Game | … adds another group account (universal pool U) to the Intergroup Prisoner’s Dilemma–Maximizing Difference Game. Each token contributed to this pool increases the payoff of players from both the in-group I and the out-group O by $\frac{q}{N}$, with N > q > m and $\frac{q}{N} < \frac{m}{n}$. | $u_i = e - g_{Wi} - g_{Bi} - g_{Ui} + \frac{m}{n}\sum_{j=1}^{n} g_{Wj} + \frac{m}{n}\sum_{j=1}^{n} g_{Bj} + \frac{q}{N}\sum_{j=1}^{n} g_{Uj} + \frac{q}{N}\sum_{k=n+1}^{N} g_{Uk} - \frac{m}{n}\sum_{k=n+1}^{N} g_{Bk}$ | Exploitation, Dependence | in-group altruism, in-group welfare, out-group competitiveness, out-group spite (B) vs. in-group altruism, in-group welfare (W) vs. social welfare (U) vs. greed, in-group competitiveness, in-group spite (keep) |
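To make the Public Goods Game payoff formula concrete, the following sketch (our illustration; the endowment, contributions, and multiplier are hypothetical) computes each member's payoff and exposes the dilemma between collective gain and individual free-riding:

```python
# Illustration of the Public Goods Game payoff formula:
# u_i = e - g_i + m * sum(g_j) / N, with 1 < m < N.
def public_goods_payoffs(e, contributions, m):
    """Return every member's payoff given each member's contribution g_i."""
    N = len(contributions)
    assert 1 < m < N, "the multiplier must satisfy 1 < m < N"
    share = m * sum(contributions) / N  # everyone's equal share of the pool
    return [e - g + share for g in contributions]

# Four players, endowment 10, multiplier 2: full contribution doubles every
# payoff, yet a lone free-rider earns even more at the others' expense.
print(public_goods_payoffs(10, [10, 10, 10, 10], 2))  # [20.0, 20.0, 20.0, 20.0]
print(public_goods_payoffs(10, [0, 10, 10, 10], 2))   # [25.0, 15.0, 15.0, 15.0]
```

Because m < N, each contributed token returns only m/N < 1 to the contributor, so keeping one's endowment is individually optimal even though full contribution maximizes the group's total payoff.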
The game ends once the amount Cz+x available in round z + x is depleted, that is, if the collective consumption $$\sum_{j=1}^N {t}_{j}^{z+x}$$ ≥ Cz+x. $${u}_{i}=$$ $$\sum_{z}^{}{t}_{i}^{z}$$ Exploitation, Temporal conflict (assuming z > 1) Dependence altruism, social welfare vs. greed, competitiveness, spite Volunteer’s Dilemma … group members decide between cooperation (volunteering) and defection (somebody else should do the job). If at least one player cooperates, a public good of value v is provided. Cooperation comes with costs c < v. $$u_{i}=v-c$$ if $$i$$ cooperates; $$u_{j}=v$$ if $$i$$ defects but at least one other player cooperates; $$u_{i}=0$$ otherwise Exploitation, Temporal conflict, Dependence altruism, social welfare vs. competitiveness, spite Intergroup Prisoner’s Dilemma Each player i (of N > 3 players) is assigned to one of two group with n = $$\frac{N}{2}$$ members and decides how much (gBi) of their individual endowment (e) to contribute to a group account (between-group pool B). Contributions are multiplied by a constant m (with 1 < m < n). Every token contributed increases the payoff of players from i’s in-group I and decreases the payoff of players from i’s out-group O by $$\frac{m}{n}$$. $${u}_{i}$$ $$=$$ $$e$$$$- {g}_{Bi}$$$$+ \frac{m}{n}\sum_{j=1}^{n}{g}_{Bj}$$$$- \frac{m}{n}\sum_{k=n+1}^{N}{g}_{Bk}$$ Exploitation, Dependence in-group altruism, in-group welfare, out-group competitiveness, out-group spite vs. greed, collective altruism, collective welfare, in-group competitiveness, in-group spite Intergroup Prisoner’s Dilemma–Maximizing Difference …a second group account (within-group pool W) is added. Every token gWi contributed to this pool increases the payoff of players from the in-group I by $$\frac{m}{n}$$ without affecting the payoff of players from the out-group O. 
$${u}_{i}$$ $$=$$ $$e$$ $$-$$ $${g}_{Wi}$$$$- {g}_{Bi}$$$$+ \frac{m}{n}\sum_{j=1}^{n}{g}_{Wj}$$$${+}\frac{m}{n}\sum_{j=1}^{n}{g}_{Bj}$$$$- \frac{m}{n}\sum_{k=n+1}^{N}{g}_{Bk}$$ Exploitation, Dependence in-group altruism, in-group welfare, out-group competitiveness, out-group spite (B) vs. in-group altruism, collective welfare (W) vs. greed, in-group competitiveness, in-group spite (keep) Positive Intergroup Prisoner’s Dilemma–Maximizing Difference …the group account is replaced by two different group accounts. In the within-group pool W, every token contributed increases the payoff of players from the in-group I by $$\frac{m}{n}$$ without affecting the payoff of players from the out-group O. In the between-group pool B, every token contributed increases the payoff of players from the in-group I and from the out-group O by $$\frac{m}{n}$$. $${u}_{i}$$ $$=$$ $${e}$$ $$-$$ $${g}_{Wi}$$$$- {g}_{Bi}$$$$+ \frac{m}{n}\sum_{j=1}^{n}{g}_{Wj}$$$${+}\frac{m}{n}\sum_{j=1}^{n}{g}_{Bj}$$$$+ \frac{m}{n}\sum_{k=n+1}^{N}{g}_{Bk}$$ Exploitation, Dependence in-group altruism, in-group welfare, out-group spite, out-group competitiveness (W) vs. collective altruism, collective welfare, in-group altruism (B) vs. greed, collective competitiveness, collective spite, in-group competitiveness, in-group spite, out-group spite (keep) Intergroup Parochial and Universal Cooperation Game …adds another group account (universal pool U) to the Intergroup Prisoner’s Dilemma – Maximizing Difference Game. Each token contributed to this pool increases the payoff of players from both the in-group I and the out-group O by $$\frac{q}{N}$$, with N > q > m and $$\frac{q}{N}$$ < $$\frac{m}{n}$$. 
$${u}_{i}$$ $$=$$ $$e$$ $$-$$ $${g}_{Wi}$$$$- {g}_{Bi}-$$ $${g}_{Ui}$$$$+ \frac{m}{n}\sum_{j=1}^{n}{g}_{Wj}$$$${+}\frac{m}{n}\sum_{j=1}^{n}{g}_{Bj}{+}\frac{q}{N}\sum_{j=1}^{n}{g}_{Uj}$$ $${+}\frac{q}{N}\sum_{k=n+1}^{n}{g}_{Uk}$$$$-$$ $$\frac{m}{n}\sum_{k=n+1}^{N}{g}_{Bk}$$ Exploitation, Dependence in-group altruism, in-group welfare, out-group competitiveness, out-group spite (B) vs. in-group altruism, in-group welfare (W) vs. social welfare (U) vs. greed, in-group competitiveness, in-group spite (keep)

Note. N/A = not applicable given that the sub-affordances and corresponding social motives are only relevant in games involving a possibility for exploitation and/or reciprocity. Affordances put in parentheses are present to a relatively weaker degree (see main text for details). In the payoff matrices (Prisoner’s Dilemma and variants thereof), T = temptation, R = reward, P = punishment, S = sucker (see main text for details). For a detailed description of all game variants, see the online supplement https://osf.io/t8m7x/. ᵃ Motives listed first (before “vs.”) are afforded to be expressed in prosocial behavior; motives listed second (after “vs.”) are afforded to be expressed in selfish behavior. We only specify motives that are afforded in general, that is, irrespective of the other player’s decision (e.g., for the responder in the Ultimatum Game, we do not mention the motive of fairness because this motive is only afforded if the proposer did not realize a fair split). According to the framework relied on here (Thielmann et al., 2020), social motives are afforded by specific sub-affordances of the exploitation and reciprocity affordances.

Real-life examples. The Dictator Game models a situation in which one individual has full power over a resource which the individual can keep to maximize their outcome at no personal cost. A real-life example of this situation might be the decision to donate blood or other valuable resources (e.g., money). Importantly, the game can also be adapted to more closely resemble certain real-life situations while maintaining the structure of and affordances in the game. For instance, if the recipient is a charity (e.g., Eckel & Grossman, 1996; Grossman & Eckel, 2015) or a needy other (e.g., Aguiar et al., 2008; Brañas-Garza, 2006), the game aligns more closely with donation behavior. Likewise, if dictators must earn their endowment in an effortful task (e.g., Cherry et al., 2002; Mittone & Ploner, 2012), the game more closely mirrors everyday situations in which money is earned through effort rather than given as windfall.

Typical findings. Dictators transfer 30% of their endowment to the recipient on average (Engel, 2011). Specifically, around a third of individuals decide to keep their entire endowment (g = 0), a third give something but less than 50% (0 < g < $\frac{e}{2}$), and another third give 50% or more (g ≥ $\frac{e}{2}$; see also Camerer, 2003). However, giving decreases substantially once dictators have to earn their endowment through individual effort (Engel, 2011). Nonetheless, neither self-earned endowments nor double-blind procedures reduce giving to zero. In addition, meta-analytic evidence shows that individuals systematically differ in their willingness to share, as implied by consistent, medium-sized associations of giving with stable prosocial tendencies, such as Social Value Orientation and Honesty-Humility (Thielmann et al., 2020; Zhao & Smillie, 2015). In fact, the pattern of correlations between personality traits and giving matches the idea that the Dictator Game specifically affords the expression of unconditional concern for others’ welfare due to providing a possibility for exploitation.

Game variants. Several variants of the Dictator Game have been proposed, changing the (sub-)affordances present in the situation (Table 2). In the Triple Dictator Game (Ashraf et al., 2006), the dictator’s allocation g is tripled before being transferred to the recipient (see also Andreoni & Miller, 2002, for more general variants). Thus, the Triple Dictator Game converts the Dictator Game into a non-constant sum game and thereby additionally affords the motive of social welfare concerns to potentially guide prosocial behavior. The same applies to the Generosity Game (Güth et al., 2012) in which the dictator can maximize the recipient’s (and the collective) outcome at no personal cost because the dictator’s outcome is fixed a priori. In turn, selfish behavior may no longer be motivated by greed – since the dictator’s decision does not affect the dictator’s own outcome – but it may still be motivated by spite or competitiveness. Comparing a player’s behavior in the Dictator Game and the Generosity Game thus allows for conclusions about the social motive of greed: A player who maximizes their own outcome in the Dictator Game but who maximizes the other’s outcome in the Generosity Game arguably behaved selfishly in the former game based on greed, but not based on spite or competitiveness.
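The payoff logic of these variants is simple enough to state in a few lines of code. The following Python sketch (function names are ours, purely illustrative) contrasts the three allocation rules:

```python
# Payoff rules for the Dictator Game and two of its variants, as
# described in the text (e = endowment, g = amount given; the function
# names are illustrative, not taken from the original studies).

def dictator(e, g):
    """Standard Dictator Game: dictator keeps e - g, recipient gets g."""
    return e - g, g

def triple_dictator(e, g):
    """Triple Dictator Game: the transfer g is tripled for the recipient."""
    return e - g, 3 * g

def generosity(e, g):
    """Generosity Game: the dictator's payoff is fixed at e."""
    return e, g

# With e = 10 and g = 4:
print(dictator(10, 4))         # (6, 4)
print(triple_dictator(10, 4))  # (6, 12)
print(generosity(10, 4))       # (10, 4)
```

Comparing the dictator’s marginal cost of giving across the three rules (1, 1, and 0 tokens per token given, respectively) makes explicit why the Generosity Game removes greed as a possible motive for selfish behavior.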

The Solidarity Game (Selten & Ockenfels, 1998), in turn, involves the exact same (sub-)affordances as the Dictator Game, but the situation may be perceived differently given that the process producing inequality in players’ initial endowments differs (i.e., in the Solidarity Game, inequality between players results from getting lucky in a previous game of chance). Accordingly, evidence suggests that dictators feel more entitled to keep their endowment in the Solidarity Game and, thus, give less than in the Dictator Game (Stavrova & Schlösser, 2015). Finally, the Faith Game (Kiyonari et al., 2006; Kiyonari & Yamagishi, 1999) focuses on the recipient side, asking the recipient to choose whether to play the Dictator Game or to opt-out. As such, the situation modelled in the Faith Game is not characterized by a possibility for exploitation; instead, the focus is on measuring players’ willingness to self-select into a situation that involves dependence under uncertainty and thus affords beliefs about others’ prosociality to guide behavior.

#### Ultimatum Game

Game structure. The Ultimatum Game (Güth et al., 1982) is the predecessor and a variant of the Dictator Game with a more balanced power relation between players. Specifically, the proposer (P) is endowed with e tokens and asked to decide how many tokens g (with 0 ≤ g ≤ e) to transfer to the responder (R). Unlike the Dictator Game, however, the responder can react to the proposer’s decision (accordingly, the players’ names differ between games), thus restricting the possibility for exploitation on the proposer’s part. That is, g is an offer that the responder can either accept or reject. If the responder accepts, the endowment is split as proposed, resulting in uP = e − g and uR = g. However, if the responder rejects, both players receive nothing (i.e., uP = uR = 0). The responder’s strategy thus entails their decision (accept vs. reject) for each possible offer g and is simply expressed by the smallest offer the responder is willing to accept, the minimum acceptance level (r). This minimum acceptance level is frequently assessed using the strategy method (Selten, 1967) which asks responding players (e.g., responders in the Ultimatum Game) to indicate their entire strategy vector, that is, their responses to all potential first player moves (e.g., all possible proposer offers; for details, see the practical guidelines in the online supplement on the OSF).

The Ultimatum Game is a sequential game of perfect information because the responder “observes” the proposer’s offer (albeit hypothetically in the strategy method) before deciding whether to accept or reject it. For the sake of simplicity, let us assume that offers g must be integers between 0 and e. In this case, the game has two subgame-perfect equilibria: (1) g = 0 and r = 0, and (2) g = 1 and r = 1. To identify these equilibria via backwards induction, we first determine the responder’s best response to each possible offer g. If g = 0, both accept and reject are optimal choices because they result in identical payoffs (i.e., uR = 0). If g > 0, the responder’s optimal choice is to set r equal to or below g and to therefore accept any offer g. Thus, the two possibilities at the final (responder) stage of the game are that the responder accepts either any g (i.e., r = 0) or any g > 0 and resolves indifference in case of g = 0 in favor of rejection (i.e., r = 1). In the former case, the best response for the proposer is to offer g = 0, whereas in the latter case it is to offer g = 1, that is, the minimum amount exceeding zero.
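The backward-induction argument can be verified mechanically. The following Python sketch (a minimal illustration, not code from the original studies) enumerates all minimum acceptance levels r that are sequentially rational and pairs each with the proposer’s best response:

```python
# Backward induction in the Ultimatum Game with integer offers.
# e = endowment, g = offer, r = responder's minimum acceptance level.

def payoffs(e, g, r):
    """Offer g is accepted iff g >= r; rejection yields (0, 0)."""
    return (e - g, g) if g >= r else (0, 0)

def best_offer(e, r):
    """Proposer's best response to a given minimum acceptance level r."""
    return max(range(e + 1), key=lambda g: payoffs(e, g, r)[0])

def responder_optimal(e, r):
    """r is sequentially rational if, at every possible offer g, the
    action it prescribes is a best response in that subgame."""
    for g in range(e + 1):
        accept, reject = g, 0               # responder payoffs
        chosen = accept if g >= r else reject
        if chosen < max(accept, reject):
            return False
    return True

e = 10
equilibria = [(best_offer(e, r), r)
              for r in range(e + 1) if responder_optimal(e, r)]
print(equilibria)  # [(0, 0), (1, 1)]
```

The enumeration recovers exactly the two subgame-perfect equilibria derived in the text: (g = 0, r = 0) and (g = 1, r = 1).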

However, the proposer is usually naive about the responder’s minimum acceptance level (r) and might deem a minimum acceptance level of r ≥ 1 likely. Thus, from an affordance perspective, the situation the proposer faces does not only involve a possibility for exploitation (which is arguably weaker than in the Dictator Game due to the decrease in power on the proposer’s part), but also a temporal conflict (because the proposer has to anticipate that the responder might reject a small, i.e., selfish, offer, thus requiring the proposer to potentially suppress the selfish impulse to give nothing) and dependence under uncertainty (because the proposer’s outcome ultimately depends on the responder’s reaction). Nonetheless, assuming a selfish responder who is willing to accept any offer g > 0, giving more than the minimum amount (i.e., g > 1) may be an expression of high unconditional concern for others, motivated by altruism and/or fairness. Conversely, offers of g ≤ 1 indicate low unconditional concern for others as motivated by greed, competitiveness, and/or spite. On the responder’s part, an affordance-based analysis shows that the game involves a possibility for reciprocity as well as a temporal conflict because accepting the proposer’s offer yields a higher outcome (if g > 0) than rejecting it, but it may require suppressing one’s immediate impulse to punish the proposer for a very low offer. Thus, accepting an offer g might be an expression of high conditional concern for others as motivated by altruism if g < e (i.e., the responder wants the proposer to earn something) and/or social welfare concerns (i.e., the responder shies away from “burning” the payoff). If g ≥ 1, however, accepting g may also be an expression of low conditional concern for others as motivated by greed, conditional on the ability to self-regulate the impulse to reject an unfair offer.
Conversely, rejecting g might not only be an expression of a competitiveness and/or a spite motive (if g < $\frac{e}{2}$), but also of a fairness motive (if g ≥ $\frac{e}{2}$) in the sense of realizing equal (zero) outcomes for both players (Table 2).

Real-life examples. In essence, the Ultimatum Game models any kind of bargaining situation with “take it or leave it” offers. For example, a job applicant negotiating a higher salary faces a situation comparable to the proposer in the Ultimatum Game: If the applicant demands an excessive salary, the employer might refrain from hiring the applicant. The same applies to a seller haggling over the price of a product with a customer. But also beyond economic transactions, the Ultimatum Game provides valuable information on individuals’ (strategic) concern for others on the one hand (see, e.g., Debove et al., 2016, for a detailed discussion) and reactions to violations of fairness norms on the other (e.g., Güth & Kocher, 2014).

Typical findings. As sketched above, the game-theoretic solution of the Ultimatum Game implies that responders should accept any offer larger than zero and proposers should, in turn, send the smallest non-zero amount possible. In contrast to these predictions, however, meta-analytic evidence suggests that proposers offer about 40% of their endowment on average, with modal and median offers even approaching the equal split (Camerer, 2003; Oosterbeek et al., 2004). Thus, proposers in the Ultimatum Game give considerably more than dictators in the Dictator Game, arguably due to anticipating that responders might otherwise reject the offer and the correspondingly reduced possibility for exploitation. This is also apparent in correlations of personality traits with proposer behavior: Whereas traits related to unconditional concern for others show relatively strong relations with behavior in the Dictator Game, their relations with proposer behavior in the Ultimatum Game are considerably weaker (Thielmann et al., 2020; see also Barends et al., 2019; Hilbig & Zettler, 2009). In turn, responders’ willingness to reject indeed strongly depends on the proposer’s offer: Whereas offers of 40% or more are hardly ever rejected, offers of less than 10% are almost always rejected. Average rejection rates amount to 15–20% (Camerer, 2003; Cooper & Dutcher, 2011). However, proposer and responder behaviors may certainly vary as a function of different factors. For example, proposers typically make smaller relative offers with larger endowment sizes (Oosterbeek et al., 2004) and when they “earned” their role as a proposer (Bearden, 2001). Responders, in turn, are more likely to accept when the direct-response method (rather than the strategy method) is used (Oosterbeek et al., 2004) and the higher their level of dispositional forgivingness (Thielmann et al., 2020; Zhao & Smillie, 2015).
This latter finding is in line with the idea that the situation responders face in the Ultimatum Game specifically affords the expression of conditional concern for others due to providing a possibility for reciprocity.

Game variants. The power relation between players denotes a key parameter in the Ultimatum Game that determines the affordances the situation provides. In the standard game, the responder has high power: If the responder rejects the proposer’s offer, both players receive nothing. Suleiman (1996) described this feature in terms of a discounting factor δ (with 0 ≤ δ ≤ 1) by which payoffs are multiplied in case the responder rejects (i.e., uP = (e − g) * δ and uR = g * δ) and which equals 0 in the standard game. Increasing δ decreases the responder’s veto power while increasing the proposer’s distribution power. Thus, from an affordance perspective, increasing δ increases the proposer’s possibility for exploitation while decreasing the proposer’s dependence on the responder, and it decreases the temporal conflict for both proposer and responder given that the players’ final outcomes decreasingly depend on the responder’s reaction; in the extreme (δ = 1), the game constitutes a Dictator Game in which there is a clear possibility for exploitation on the part of the dictator but no dependence under uncertainty or temporal conflict (and also no possibility for reciprocity on the part of the responder). Varying δ thus changes the affordances in a systematic way, thereby, for example, allowing researchers to investigate the effects of altering the possibility for exploitation (through shifts in power asymmetry) on prosocial behavior of proposers and of altering the possibility for reciprocity on effective versus ineffective rejection decisions of responders.
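Suleiman’s δ parameterization can be summarized in a few lines. The sketch below (illustrative code; parameter names follow the text) shows how δ = 0 recovers the standard Ultimatum Game and δ = 1 the Dictator Game:

```python
# Suleiman's (1996) delta variant of the Ultimatum Game: rejection
# multiplies both payoffs by delta instead of destroying them outright.

def delta_ultimatum(e, g, accepted, delta):
    """Return (proposer payoff, responder payoff)."""
    if accepted:
        return e - g, g
    return (e - g) * delta, g * delta

# delta = 0: rejection destroys both payoffs (standard Ultimatum Game).
print(delta_ultimatum(10, 3, False, 0.0))  # (0.0, 0.0)
# delta = 1: rejection is inconsequential, so the proposer is
# effectively a dictator.
print(delta_ultimatum(10, 3, False, 1.0))  # (7.0, 3.0)
```

Intermediate values of δ interpolate between the two games, which is precisely what makes the parameter useful for manipulating the exploitation and dependence affordances continuously.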

A similar approach is implemented in game variants modifying the consequences of responders’ rejections. In the Impunity Game (Bolton & Zwick, 1995), rejecting only obliterates the responder’s outcome. Thus, the proposer does not need to fear rejection because the proposer’s outcome no longer depends on the responder’s behavior, rendering the possibility for exploitation relatively strong while dispensing with the affordances of temporal conflict and dependence under uncertainty. Conversely, in the Spite Game (Güth & Huck, 1997), rejecting exclusively obliterates the proposer’s outcome, thus rendering the possibility for exploitation relatively weak and no longer affording a greed motive on the responder’s part, while emphasizing the temporal conflict and dependence affordances on the proposer’s part. Other variants of the standard game are the Three-Person Ultimatum Game (Güth & van Damme, 1998), the Rubinstein Bargaining Game (Rubinstein, 1982), and the Power-to-Take Game (Bosman & van Winden, 2002). Whereas the Three-Person Ultimatum Game and the Power-to-Take Game involve the exact same (sub-)affordances as the standard game (although the affordances may be more or less salient), the Rubinstein Bargaining Game provides different (sub-)affordances, with both players essentially encountering the same situation involving all four affordances focused on here (see Table 2 and the OSF supplement for details; see also Güth & Kocher, 2014, for further variants).

#### Trust Game

Game structure. The Trust Game or Investment Game (Berg et al., 1995; for similar games, see also Camerer & Weigelt, 1988; Kreps, 1990) is a sequential game with two players in asymmetric roles, called trustor (or investor; I) and trustee (T). Initially, trustor and trustee are endowed with eI and eT tokens, respectively (typically eI = eT). The trustor first decides how many tokens g (with 0 ≤ g ≤ eI) to transfer to the trustee. The transfer is multiplied by a constant m (with m > 1; typically, m = 3) and added to the trustee’s endowment. The trustee can then decide how many tokens r (with 0 ≤ r ≤ m * g) of the inflated amount to return to the trustor. The trustor’s payoff thus equals uI = eI − g + r whereas the trustee’s payoff equals uT = eT + m * g − r. As sketched in Section I.1 for the binary variant of the game, the game-theoretic analysis implies that the trustor should send nothing to the trustee (g = 0) due to anticipating that the trustee has no incentive to return anything in a one-shot game. Thus, the unique subgame-perfect equilibrium is a zero transfer by the trustor and, for any given amount the trustor transfers, a zero return by the trustee.
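The payoff structure and the subgame-perfect prediction can be checked directly. The following Python sketch (illustrative; parameter names follow the text) confirms that a payoff-maximizing trustee returns nothing for every transfer, and that the trustor’s best response to this is a zero transfer:

```python
# Trust Game payoffs and the subgame-perfect prediction
# (e = common endowment, g = transfer, r = return, m = multiplier).

def trust_payoffs(e, g, r, m=3):
    u_I = e - g + r        # trustor: endowment minus transfer plus return
    u_T = e + m * g - r    # trustee: endowment plus multiplied transfer minus return
    return u_I, u_T

e, m = 10, 3

# For any transfer g, the trustee's payoff decreases in r,
# so a payoff-maximizing trustee returns r = 0.
best_r = {g: max(range(m * g + 1),
                 key=lambda r: trust_payoffs(e, g, r, m)[1])
          for g in range(e + 1)}
assert all(r == 0 for r in best_r.values())

# Anticipating r = 0, the trustor's best transfer is g = 0.
best_g = max(range(e + 1), key=lambda g: trust_payoffs(e, g, 0, m)[0])
print(best_g)  # 0
```

Any positive transfer observed empirically therefore deviates from the selfish benchmark, which is what makes the game informative about trust and trustworthiness.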

From an affordance-based perspective, the situation the trustor faces in the Trust Game is primarily characterized by dependence under uncertainty and – albeit to a lesser extent – a possibility for exploitation, thus allowing the expression of beliefs about others’ prosociality and unconditional concern for others (through the motives of altruism and/or social welfare concerns, since the transferred amount is multiplied by m > 1). Moreover, giving may also be an expression of low unconditional concern for others as motivated by greed, conditional on the belief that the trustee will return more than the trustor has sent (i.e., r > g). Of note, as long as both players are initially endowed with the same amount (as is the case in the classic version of the game), the motive of fairness will not be afforded since equality in outcomes is established by default. The situation the trustee faces involves possibilities for exploitation and reciprocity and sub-affordances for the motives of altruism and fairness versus greed, competitiveness, and spite. In general, the trustee in the Trust Game faces a situation highly similar to the one the dictator faces in the Dictator Game, with the critical difference that the amount at stake is determined by the trustor’s initial behavior – thus additionally involving a possibility for reciprocity on the trustee’s part (Table 2).

Real-life examples. The Trust Game incorporates a simplified model of various interactions in which one individual is unilaterally dependent on – and thus vulnerable to – another individual (Thielmann & Hilbig, 2015). Examples are transactions via an online purchase system such as eBay or hiring a babysitter to take care of one’s child. Certainly, the Trust Game “excludes a number of aspects that would characterize transactions in real-life such as communication, word-of-mouth, face-to-face interaction, handshakes, promises and such. But that is the beauty of this game. It tries to measure trust in a purely abstract way” (Chaudhuri, 2009, p. 88). Also, elements such as communication, acquaintanceship, and repeated interaction can easily be implemented to more closely approximate corresponding real-life situations (see, e.g., Anderhub et al., 2002; Bicchieri et al., 2011; Ho, 2012).

Typical findings. Trustors typically send half of their endowment to trustees who, in response, return roughly a third (i.e., 37%) of the tripled transfer (Johnson & Mislin, 2011; see also Camerer, 2003). However, there is substantial variation in trustor and trustee behavior across studies and samples. The amount sent by trustors, for example, typically increases with more optimistic beliefs in the trustee’s trustworthiness as well as with individuals’ general willingness to take risks (although results for the latter are mixed; Thielmann & Hilbig, 2015). This is also apparent in the association of personality traits with trustor behavior, showing positive relations with trust propensity and risk-taking, as well as with dispositional tendencies associated with unconditional concern for others (Thielmann et al., 2020). In turn, the amount returned by trustees (i.e., the relative share of the received amount m * g) typically increases with higher shares sent by the trustor and decreases with a higher multiplier m (Johnson & Mislin, 2011). Additionally, the strongest links of personality with trustee behavior are found for dispositional tendencies related to unconditional and conditional concern for others’ welfare (Thielmann et al., 2020), supporting the notion that these person aspects are particularly afforded in the situation the trustee faces in this game.

Game variants. The Moonlighting Game (Abbink et al., 2000) adds one option to both the trustor’s and the trustee’s strategy vector and thereby changes the affordances in the game for both players. Specifically, the trustor can also take tokens from the trustee and the trustee can also invest tokens to reduce the trustor’s outcome. The Moonlighting Game thus provides the trustor with a greater possibility for exploitation than the Trust Game, and it additionally involves a temporal conflict given that the trustor needs to consider the potential negative consequences of taking something from the trustee. Taking, in turn, may be an expression of (low) unconditional concern for others through the motives of greed, competitiveness, and/or spite. For the trustee, the Moonlighting Game provides a strong possibility for reciprocity, both in positive terms (trustworthiness) and in negative terms (punishment), in addition to a possibility for exploitation (Table 2). By contrast, the Distrust Game (Bohnet & Meier, 2005) arguably involves the exact same (sub-)affordances as the Trust Game, although “full trust” is defined as the default option. Thus, the Distrust Game measures a trustor’s willingness to actively distrust the trustee by reducing one’s transfer. Compared to the Trust Game, transfers in the Distrust Game have been shown to be higher (Bohnet & Meier, 2005), suggesting that changing the default option affects trustors’ perception of the situation.

### II.2 Social dilemmas

In essence, “social dilemmas are everywhere” (Weber et al., 2004, p. 281). They refer to “situations in which a non-cooperative course of action is (at times) tempting for each individual in that it yields superior (often short-term) outcomes for self, and if all pursue this non-cooperative course of action, all are (often in the longer-term) worse off than if all had cooperated” (Van Lange et al., 2013, p. 126). As such, a key characteristic of social dilemmas is that players are collectively worse off if all behave in a selfish manner than if all behave in a prosocial manner. However, each individual is best off if they behave in a selfish manner while all others behave in a prosocial manner. In what follows, we introduce some of the most common social dilemmas at the interindividual, intragroup, and intergroup level of interaction: the Prisoner’s Dilemma, the Public Goods Game, and the Intergroup Prisoner’s Dilemma (as well as variants thereof).

#### Prisoner’s Dilemma

Game structure. The Prisoner’s Dilemma is a simultaneous game modeling an interpersonal (two-person) social dilemma (Figure 1A). Each of two players can independently choose between cooperation (C) and defection (D). The players’ payoffs are determined by the combination of choices (own choice, other’s choice) as follows: reward R for mutual cooperation (C, C), punishment P for mutual defection (D, D), temptation T for unilateral defection (D, C), and sucker S for unilateral cooperation (C, D). By definition, T > R > P > S. Thus, since T > R and P > S, defection strictly dominates cooperation: It maximizes each player’s payoff irrespective of the other player’s strategy. Accordingly, mutual defection constitutes the equilibrium in dominant strategies and thus the unique Nash equilibrium (see Section I.1). However, given that R > P and, typically, R > $\frac{T+S}{2}$ (e.g., Kollock, 1998), mutual cooperation maximizes social welfare.
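The dominance argument can be made concrete with a small enumeration. In the Python sketch below, the payoff values are hypothetical, chosen only to satisfy T > R > P > S:

```python
# Enumerating the pure-strategy Nash equilibria of a symmetric
# Prisoner's Dilemma (illustrative payoffs with T > R > P > S).

T, R, P, S = 5, 3, 1, 0
payoff = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def is_nash(a, b):
    """No player gains by unilaterally switching strategy."""
    u1, u2 = payoff[(a, b)]
    return (all(payoff[(x, b)][0] <= u1 for x in 'CD')
            and all(payoff[(a, y)][1] <= u2 for y in 'CD'))

equilibria = [(a, b) for a in 'CD' for b in 'CD' if is_nash(a, b)]
print(equilibria)  # [('D', 'D')]

# Mutual cooperation nonetheless yields the higher joint payoff:
assert 2 * R > T + S > 2 * P
```

Mutual defection emerges as the unique equilibrium even though both players would prefer the mutual-cooperation outcome, which is exactly the dilemma structure described above.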

Regarding the affordances in the game, the Prisoner’s Dilemma first of all provides a possibility for exploitation. Since each individual can obtain their maximum (absolute and relative) outcome through unilateral defection, low unconditional concern for others may be expressed in defection through the motives of greed and/or competitiveness. Also, defection incurs the minimum outcome for the other player (if the other player cooperates) and it may thus also be motivated by spite. Cooperation, in turn, may indicate high unconditional concern for others, expressed through the motives of altruism and/or social welfare concerns (since cooperation maximizes players’ joint outcomes). A fairness motive, by contrast, may not be afforded in the Prisoner’s Dilemma – at least if the game is symmetric – since both mutual cooperation and mutual defection lead to equal outcomes for both players. In addition to a possibility for exploitation, the game involves dependence under uncertainty, thus affording beliefs about others’ prosociality to guide behavior (Table 2).

A key characteristic of the Prisoner’s Dilemma is the relation between players’ potential outcomes, which ultimately determines the affordances in the game by affecting how tempting exploitation is and/or how strongly a player’s outcome depends on the other’s choice. This relation between players’ outcomes can be expressed via different indices. Most prominently, the index K = (R − P)/(T − S) (Anatol Rapoport & Chammah, 1965; Vlaev & Chater, 2006) describes the extent of conflict between cooperation and defection. Specifically, it incorporates the idea that the willingness to cooperate should increase the higher the outcomes associated with cooperation (i.e., R and S) and the lower the outcomes associated with defection (i.e., T and P). As such, K also measures the degree to which exploitation is possible, with higher values indicating a lower possibility to exploit. In our example from above (Figure 1A), K = 0.33, thus indicating a relatively high temptation to defect. Overall, indices like the K index are well suited to describe properties of the game matrix and, thus, of the affordances present. Experimental research can, in turn, use these indices to manipulate the situation (and affordances) at hand by changing the relation of outcomes and to thereby investigate corresponding behavioral expressions while allowing inferences on the psychological processes at play (e.g., Schopler et al., 2001; Van Lange & Visser, 1999; Zettler et al., 2013).
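As a quick illustration of the K index, the sketch below (not from the article) computes K for hypothetical payoffs consistent with the value of 0.33 mentioned above:

```python
# Hypothetical sketch: Rapoport & Chammah's conflict index,
# K = (R - P) / (T - S). The payoffs T=7, R=5, P=3, S=1 are assumed
# example values yielding K = 0.33.

def k_index(T, R, P, S):
    """Degree of conflict between cooperation and defection (0 < K < 1)."""
    return (R - P) / (T - S)

k = k_index(T=7, R=5, P=3, S=1)
print(round(k, 2))  # → 0.33
```

Raising R or S (the cooperation-related outcomes) or lowering T or P (the defection-related outcomes) increases K, which is why experimenters can use the index as a single dial for the temptation to defect.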

Real-life examples. In general, the Prisoner’s Dilemma is a paramount example for measuring “the essence of cooperation […] that two (or more) individuals assist each other to reach the same end” (Hinde & Groebel, 1991, p. 4). For instance, consider the decision to use performance-enhancing drugs in elite sports. Given that drugs should have a similar impact on each athlete, it is to all athletes’ collective advantage not to take the drugs (cooperation) and to thereby maintain a fair competition while avoiding negative side effects (R outcome). However, if any one athlete takes drugs unilaterally (defection), the athlete will have an advantage over other athletes not taking the drugs because the increase in the athlete’s chances of winning arguably outweighs the negative side effects of the drugs (T outcome). In turn, all athletes may eventually end up taking drugs, which gives none of them an advantage but all will suffer the negative side effects (P outcome).

Typical findings. Meta-analytic evidence suggests that about half of players (47%) cooperate in the Prisoner’s Dilemma whereas the other half defects (Sally, 1995). However, there is substantial heterogeneity in cooperation rates, ranging from 5% up to 97% across studies. For example, players become more cooperative with a lower temptation to defect (as, e.g., indicated by a higher K index) and when they can communicate with each other (Balliet, 2010; Sally, 1995). Whereas the former can be interpreted in terms of a lower possibility for exploitation when the temptation to defect decreases, the latter can be interpreted in terms of a lower (dependence under) uncertainty when communication is implemented. Moreover, dispositional tendencies featuring unconditional concern for others, such as Social Value Orientation and Honesty-Humility, and beliefs about others’ prosociality positively relate to cooperation in the Prisoner’s Dilemma (Balliet et al., 2009; Pletzer et al., 2018; Thielmann et al., 2020; Zhao & Smillie, 2015), again supporting that the expression of these psychological processes is afforded. In general, this evidence is once more at odds with the prediction that individuals are simply interested in maximizing their individual outcomes, demonstrating the existence of other-regarding preferences.

Game variants. A key characteristic of the Prisoner’s Dilemma is the rank order of outcomes resulting from the combination of players’ choices (i.e., T > R > P > S). Changing this order results in 78 different games (excluding games with ties between payoffs), 12 of which yield symmetric outcome matrices (Anatol Rapoport & Guyer, 1966; for a broader overview, see Kelley & Thibaut, 1978). Comparing behavior across these structurally similar games, which provide different (sub-)affordances – including coordination games such as the Stag Hunt (Skyrms, 2001) and the Chicken Game (Anatol Rapoport & Chammah, 1966) – can provide unique insights into the psychological processes (including social motives) driving behavior in any one situation (for an empirical example, see, e.g., Hilbig et al., 2018). The Prisoner’s Dilemma-R (Insko et al., 1992), in turn, adds a third option to the strategy set that particularly affords the expression of competitiveness and spite motives by allowing players to decrease the other’s outcome in absolute and relative terms. The Prisoner’s Dilemma-Alt (Insko et al., 1990; Schopler & Insko, 1992) adds a third “withdrawal” option, which allows players to choose the affordances in the situation by selecting whether they want to be dependent on the other or not. A similar idea is implemented in the Prisoner’s Dilemma with variable dependence (Yamagishi et al., 2005) in which the level of dependence on the other player can vary continuously based on the player’s choice (Table 2; see also the online supplement for details).

#### Public Goods Game

Game structure. The Public Goods Game (Samuelson, 1954) is a symmetric, simultaneous game modeling an intragroup social dilemma of N ≥ 2 players. Each player is endowed with e tokens and decides in private how many tokens g (with 0 ≤ g ≤ e) to contribute to a group account (i.e., a public good). The Public Goods Game thus implements a voluntary contribution mechanism; it is therefore often referred to as the Give-Some Dilemma (Dawes, 1980). Tokens contributed to the group account are multiplied by a constant m (with 1 < m < N) and then equally distributed across all group members, irrespective of each player’s contribution. A critical aspect of the game is thus the individual benefit associated with each token contributed, that is, the ratio of the multiplier m to the group size N. This factor, called the marginal per capita return (MPCR; Isaac et al., 1984), determines the net personal cost of contributing to the group account and thus how tempting exploitation is. Formally speaking, player i receives $u_i = e - g_i + \frac{m}{N} \sum_{j=1}^{N} g_j$. The dominant strategy for a selfish player thus is to contribute nothing, and the unique Nash equilibrium is zero contributions by all players (i.e., $g_i = 0$ for every player i). However, given that contributions are multiplied by m > 1, increasing one’s contribution increases social welfare, and full contribution ($g_i = e$) by all players maximizes social welfare. From an affordance perspective, the Public Goods Game bears high resemblance to the Prisoner’s Dilemma; in fact, the Prisoner’s Dilemma is identical to a two-player Public Goods Game with a binary strategy set. Thus, the game involves a possibility for exploitation (and sub-affordances for the motives of altruism and social welfare concerns vs. greed, competitiveness, and spite) as well as dependence under uncertainty (Table 2).
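The payoff function above can be made concrete with a short sketch (not part of the article); the endowment, multiplier, and group size below are assumed example values:

```python
# Illustrative sketch: payoff of player i in a linear Public Goods Game,
#   u_i = e - g_i + (m/N) * sum_j g_j.
# Parameter values (e=20, m=1.6, N=4) are hypothetical.

def pgg_payoff(i, contributions, e=20, m=1.6):
    """Payoff of player i given all players' contributions g_j."""
    N = len(contributions)
    assert 1 < m < N  # ensures MPCR = m/N < 1, so contributing is costly
    return e - contributions[i] + (m / N) * sum(contributions)

# Four players with endowment 20 and multiplier 1.6 (MPCR = 0.4):
g = [20, 20, 20, 0]  # player 3 free-rides on the others' contributions
print(round(pgg_payoff(3, g), 2))  # free-rider: 20 - 0 + 0.4 * 60 = 44.0
print(round(pgg_payoff(0, g), 2))  # contributor: 20 - 20 + 0.4 * 60 = 24.0
```

The example shows the free-rider problem directly: the non-contributor earns more than any contributor, yet universal contribution (everyone earning 0.4 × 80 + 0 = 32) would beat universal defection (everyone keeping 20).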

Real-life examples. The structure of the Public Goods Game mirrors various everyday situations in which consumption of a public resource is non-rival and non-excludable (see e.g., Kollock, 1998). Non-rival means that the consumption of the good by one individual does not reduce the amount of the good available to other individuals. Non-excludable means that no one can be excluded from consuming the good once it is provided. In other words, even those who contribute nothing will benefit from the good, an issue leading to the free-rider problem. Take, for example, the decision to pay taxes. Taxes are used to provide or support various public goods such as clean air, public parks, street lighting, or national security. However, irrespective of whether an individual pays their taxes, all citizens have equal access to these goods, and each citizen derives the same utility from them regardless of whether others consume them, too.

Typical findings. Average contributions in the Public Goods Game amount to roughly 40% of players’ endowment (Zelmer, 2003). However, several factors influence contributions. For example, increasing the MPCR – and thus decreasing the personal costs of contributions and the temptation to defect – typically increases contributions, as does the opportunity to communicate with group members (Balliet, 2010; Ledyard, 1995; Zelmer, 2003). This mirrors findings from the Prisoner’s Dilemma and is also fully compatible with the idea that the possibility for exploitation and dependence under uncertainty are relevant affordances in the Public Goods Game that affect players’ behavior. Likewise, the implementation of potential sanctioning mechanisms has been found to increase contributions (Balliet et al., 2011; Balliet & Van Lange, 2013a), again showing that decreasing the possibility for exploitation may influence behavior. Finally, there is once more consistent evidence for individual differences in contributions that can be accounted for by personality traits related to unconditional concern for others and beliefs about others’ prosociality (Balliet et al., 2009; Balliet & Van Lange, 2013b; Pletzer et al., 2018; Thielmann et al., 2020; Zhao & Smillie, 2015), supporting the relevance of corresponding affordances and related psychological processes for behavior.

Game variants. A common variant of the standard (linear) Public Goods Game is the Step-level Public Goods Game (van de Kragt et al., 1983) in which the inflated group account is only distributed across group members if the sum of contributions reaches a certain contribution threshold. Thus, in terms of affordances involved, dependence under uncertainty is emphasized because contributions will only benefit others (including oneself) if others contribute, too, and there is also a temporal conflict involved given that contributing may ultimately result in a higher outcome than not contributing (because one’s own contribution may be decisive to reach the threshold). In the Commons Dilemma (Hardin, 1968), in turn, players no longer decide how much to give to a common resource, but rather how much to take from an existing common resource. As such, the (sub-)affordances involved in a one-shot game are still the same as in the Public Goods Game, although the situation may be perceived differently due to the different framing (i.e., give vs. take). However, since the game is usually played repeatedly – in which case players should avoid overconsuming, that is, depleting the common resource –, the Commons Dilemma also often involves a temporal conflict between short-term interests (i.e., maximize one’s immediate outcome) and long-term interests (i.e., maximize one’s outcome in the long run by protecting the common resource from being depleted). Finally, the Volunteer’s Dilemma (Diekmann, 1985) is an adaptation of the (Step-level) Public Goods Game which models situations of helping and volunteering and thus particularly affords altruism and social welfare concerns (vis-à-vis greed) as motives for (unconditional) prosocial behavior (see Table 2 and the OSF online supplement for details).

#### Intergroup Prisoner’s Dilemma

Game structure. The Intergroup Prisoner’s Dilemma (Bornstein, 1992) is an intergroup social dilemma involving N players (with N ≥ 4) who are assigned to one of two groups. Typically, groups are equal in size, that is, each group has n = N/2 players, with I = {1, …, n} denoting the set of players in the in-group and O = {n + 1, …, N} denoting the set of players in the out-group. As in the Public Goods Game, each player is endowed with e tokens and decides in private how many tokens $g_B$ (with 0 ≤ $g_B$ ≤ e) to contribute to a group account (called between-group pool B). Contributions are multiplied by a constant m (with 1 < m < n) and equally distributed among all in-group players. Importantly, contributions to the between-group pool also have a negative externality on out-group players. Specifically, each token contributed reduces each out-group player’s payoff by the amount m/n. The individual payoff for each player i thus equals $u_i = e - g_{B_i} + \frac{m}{n} \sum_{j=1}^{n} g_{B_j} - \frac{m}{n} \sum_{k=n+1}^{N} g_{B_k}$. This implies that each player has a dominant strategy to contribute nothing (given that m < n), and zero contributions by all players thus constitutes the unique Nash equilibrium. However, contributions increase the in-group’s joint payoff in absolute terms and relative to the out-group, with full contributions by all in-group players maximizing in-group social welfare. For the out-group, by contrast, contributions decrease social welfare because they reduce out-group players’ payoffs. Maximum social welfare of all players is therefore achieved if everyone refrains from contributing. Thus, the Intergroup Prisoner’s Dilemma not only models a conflict between self-interest and collective interest at the intragroup level (as, e.g., the Public Goods Game) but also a conflict between in-group and collective (including out-group) interest at the intergroup level.
Accordingly, the psychological processes – including social motives – afforded in the Intergroup Prisoner’s Dilemma and related intergroup games may apply to all these different levels. In our motivational analysis, we therefore differentiate between social motives afforded at the collective, in-group, and out-group level (see Table 2).
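The payoff structure just described can be sketched in a few lines of code (an illustration, not from the article); group size, endowment, and multiplier below are assumed example values:

```python
# Illustrative sketch: payoff of an in-group player i in the Intergroup
# Prisoner's Dilemma,
#   u_i = e - g_Bi + (m/n) * sum(in-group g_B) - (m/n) * sum(out-group g_B).
# Parameter values (e=10, m=2, n=3) are hypothetical.

def ipd_payoff(i, in_group, out_group, e=10, m=2):
    """Payoff of in-group player i; contributions also harm the out-group."""
    n = len(in_group)
    assert 1 < m < n  # contributing is individually costly within the group
    return (e - in_group[i]
            + (m / n) * sum(in_group)
            - (m / n) * sum(out_group))

# Three players per group, endowment 10, multiplier 2 (m/n = 2/3):
in_g, out_g = [10, 10, 0], [5, 5, 5]
print(round(ipd_payoff(2, in_g, out_g), 2))  # free-rider: 10 + (2/3)*(20 - 15)
```

Note the double-edged role of contributions: each token raises every in-group payoff by m/n but lowers every out-group payoff by the same amount, which is precisely why contributing confounds in-group love with out-group hate in the standard game.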

As follows from the structure of the Intergroup Prisoner’s Dilemma, the game involves a possibility for exploitation, affording unconditional concern for both in-group and out-group members’ welfare to guide behavior. Specifically, contributions to the group account might be motivated by a willingness to benefit the in-group (sometimes referred to as in-group love) or by a willingness to competitively outperform or spitefully harm the out-group (sometimes referred to as out-group hate). Zero contributions, in turn, may either result from low concern for others in general (and the motives of greed and/or collective competitiveness), or from high concern for others in general (and the motive of collective welfare concerns). Moreover, the game involves dependence under uncertainty, affording beliefs about in- and out-group members’ prosociality to guide behavior.

Real-life examples. Historical wars constitute a real-life example of the situation modelled in the Intergroup Prisoner’s Dilemma: The party mobilizing more members to contribute to the conflict by becoming a fighting soldier is more likely to win. Outcomes (e.g., losses/gains of territory) are largely shared collectively among in-group members, irrespective of each member’s individual contribution. Thus, a selfish individual should not contribute because individual contributions are costly (e.g., risk of injury or even death).

Typical findings. Contributions in the Intergroup Prisoner’s Dilemma typically amount to 30-60% of a player’s endowment (e.g., Halevy et al., 2008; Weisel & Böhm, 2015). Thus, contributions are larger than in structurally equivalent Public Goods Games (Bornstein & Ben-Yossef, 1994). This implies that embedding an intragroup conflict within an intergroup conflict reduces selfishness (and the free-rider problem) within the group, albeit at the cost of collective welfare. Moreover, there is initial evidence suggesting that traits featuring unconditional concern for others do not account for contributions in the Intergroup Prisoner’s Dilemma (Thielmann & Böhm, 2016) – unlike in comparable intragroup games. Taken together, these findings suggest that the change in affordances introduced by the negative externality that contributions have on the out-group particularly affects the possibility for exploitation.

Game variants. The Intergroup Prisoner’s Dilemma is the most prominent example of so-called team games modeling intergroup conflicts (for an overview, see Bornstein, 2003). However, any two-person game can, in principle, be transformed to a team game. In team games, intergroup conflict is modelled via a step-level function: Players of the group with the higher sum of contributions receive a reward r; in case of equal group-level contributions, all players receive a (split) reward s. The ordinal relation of endowment e, reward r, and split reward s determines the game’s conflict structure. For instance, in the Intergroup Public Goods Game (Amnon Rapoport & Bornstein, 1987), r > (s + e) and s > e. Thus, group competition is more salient than in the Intergroup Prisoner’s Dilemma and contributions may indeed be motivated by greed if one’s own contribution is (perceived to be) decisive to outperform the out-group. The Intergroup Prisoner’s Dilemma–Maximizing Difference (Halevy et al., 2008), the Positive Intergroup Prisoner’s Dilemma–Maximizing Difference (Weisel & Böhm, 2015), and the Intergroup Parochial and Universal Cooperation Game (Aaldering & Böhm, 2020), in turn, have directly been derived from the Intergroup Prisoner’s Dilemma to overcome the confound of in-group love and out-group hate underlying contributions in the latter. These games thus allow for a more fine-grained analysis of the determinants of intergroup conflict (Böhm et al., 2020) by providing different sub-affordances for the expression of certain social motives than the standard game (see Table 2 and the online supplement for details). For example, in the Intergroup Prisoner’s Dilemma–Maximizing Difference, players can decide between keeping their endowment, contributing it to a within-group pool that exclusively benefits the in-group, or contributing it to a between-group pool that benefits both in-group and out-group members to the same degree.
Thus, the game particularly affords the expression of collective prosocial motives (i.e., altruism, social welfare concerns) in contributions to the between-group pool versus in-group prosocial motives (i.e., altruism, social welfare concerns) in contributions to the within-group pool. In any case, combining different intergroup conflict games may allow drawing conclusions on the psychological processes involved in behavior (for empirical examples, see Aaldering & Böhm, 2020; Thielmann & Böhm, 2016).

### II.3 Concluding remarks

The literature comprises a vast number of economic games, all of which aim at eliciting specific behaviors in well-defined situations of interdependence. Here, we have outlined how these games differ in terms of their game structure and, thus, affordances provided. Such knowledge can help in various ways. First, knowing the affordances involved in a game allows deriving predictions about which psychological processes (including social motives) may become expressed in – and thus drive – behavior. Second, knowing which structural aspects of a situation are linked to the presence (vs. absence) of an affordance (see Table 1) allows tailored manipulation of the affordances at hand, thereby affecting which psychological processes may eventually become relevant for behavior. Third, knowing the affordances present in a game may allow drawing conclusions on players’ subjective representation of the games. Specifically, if two games produce different behaviors even though providing the exact same (sub-)affordances, this suggests that the situations may be perceived in different ways (Columbus et al., 2019, 2020; Gerpott et al., 2018; Halevy et al., 2012; Halevy & Chou, 2014), thus triggering different psychological processes to affect behavior. Indeed, behaviors across different games – even those involving the same (objective) affordances – show far from perfect inter-correlations (Blanco et al., 2011; Galizzi & Navarro-Martinez, 2018; Haesevoets et al., 2015; McAuliffe et al., 2019; Peysakhovich et al., 2014; Yamagishi et al., 2012), suggesting that the subjective representation of a game (and corresponding affordances) may differ from the objective game structure. Future research is needed to systematically investigate how individuals’ subjective perceptions of games affect the (perceived) affordances, and how these perceptions ultimately translate to behavior (Gerpott et al., in press).

Closely related, the affordance-based framework used here provides a theoretical basis for the combination of games (or variants thereof), so as to isolate certain psychological processes (including social motives) as drivers of behavior. As detailed above, different psychological processes often lead to the same behavior in a game. Thus, from observing a player’s behavior in a single game, it is rarely possible to pinpoint any one particular underlying process. However, a specific elegance of games is that they can be combined to isolate certain processes and social motives (e.g., Haesevoets et al., 2018; Hilbig et al., 2018). For example, in the Prisoner’s Dilemma, defection may either be an expression of unconditional concern for others or of negative beliefs about others’ prosociality. To distinguish between these psychological processes, one may combine the Prisoner’s Dilemma with the Prisoner’s Dilemma-Alt, in which negative beliefs can no longer drive defection. By implication, a player who defects in the Prisoner’s Dilemma but chooses the withdrawal option in the Prisoner’s Dilemma-Alt likely defected in the former based on negative beliefs but not based on motives linked to unconditional concern for others. Likewise, one may combine the Ultimatum Game (as proposer) with the Dictator Game (and even add an intermediary game with δ = 0.5, for example) to specifically isolate the social motive of greed. That is, a player who keeps everything in the Dictator Game but who increases their transfer with decreases in δ (and thus power) is arguably motivated by greed rather than by prosocial motives such as altruism or fairness (see, e.g., Barends et al., 2019; Hilbig & Zettler, 2009, for empirical examples).

Moreover, besides combining different games or variants thereof, various experimental manipulations are available that can help delineate the psychological processes and social motives involved in behavior. For instance, adding random noise to players’ choices in a “trembling hand” fashion (Selten, 1975) can provide information about individuals’ (un)conditional concern for others in situations in which another’s choices cannot be deterministically attributed to their intentions due to potential random “mistakes” (for empirical examples, see e.g., Cushman et al., 2009; Klapwijk & Van Lange, 2009). Likewise, implementing information asymmetry between players – in the sense that some players have information that others do not have – allows studying prosocial behavior in situations where defection may be undetectable by interaction partners, thus increasing the possibility for exploitation. For instance, if the proposer’s endowment size in the Ultimatum Game is private knowledge to the proposer, it is difficult (if not impossible) for responders to evaluate whether the proposer’s offer is fair, and proposers may therefore be more prone to exploit the other (Croson, 1996). Overall, combining different games and/or game variants, or using tailored experimental manipulations of certain aspects in the game, will often provide more fine-grained insights into the psychological processes underlying players’ choices and thereby help illuminate the “black box” determining observable behavior. In other words, games can be adapted in very flexible ways to test and advance theory about human prosociality, and we encourage researchers to make use of this flexibility in their own work.

Economic games offer a theory-driven apparatus to study the many aspects of prosocial behavior. A key advantage of games is that they model various interpersonal interactions in a precise and parsimonious way while allowing for measuring actual, consequential behavior in controlled settings. As such, “games not only capture people’s strategies, they also reflect their choices in social interactions in similar social contexts” (Murnighan & Wang, 2016, p. 89). Of note, although games (necessarily) provide an oversimplified model of social situations, they allow for specific adaptation to more closely mirror real-life interactions, for example by having players earn their endowment through exerting effort or by having them interact with acquainted others. Also, research has successfully transferred games to the field to study prosocial behaviors in more naturalistic settings (see Gneezy & Imas, 2017, for a review) – although some evidence has questioned this transferability (Galizzi & Navarro-Martinez, 2018). Likewise, games can be adapted to allow studying prosocial behavior in different populations, including children (e.g., Böhm & Buttelmann, 2017; Buttelmann & Böhm, 2014; Fehr et al., 2008; Grueneisen & Tomasello, 2019; Jordan et al., 2014). Overall, this shows the flexibility of games to examine prosocial behavior across various settings. Moreover, the present work sought to provide a theoretical framework for the study of prosocial behavior using games. Specifically, we proposed that viewing games through the lens of situational affordances can foster theory-based reliance on the flexibility provided by games, thereby ultimately allowing refined insights into the psychological processes involved in prosocial behavior.

Taken together, economic games have become a standard method for studying prosocial behavior across disciplines for good reason. With the current work, we aimed to provide a theory-based introduction to and summary of economic games for researchers, including the basic knowledge needed to select games that are tailored to test specific hypotheses. We thereby hope to encourage optimal use of games in future research in diverse social and behavioral sciences that will fully exploit the power of games to tackle innovative research questions and to thereby enhance our understanding of human prosociality further.

Contributed to conception: IT, RB, MO, BEH

Drafted and/or revised the article: IT, RB, MO, BEH

Approved the submitted version for publication: IT, RB, MO, BEH

None of the authors has any competing interests.

In an extensive online supplement (https://osf.io/t8m7x/), we additionally provide standardized game instructions that researchers might use in their future work, further (practical) guidelines for the implementation of games, and more detailed descriptions of the presented game variants.

### Acknowledgments

We thank Paul van Lange, Stefan Pfattheicher, and Bettina Rockenbach for their helpful comments and input on previous versions of this manuscript. In addition, we thank Kurt Ackermann, Ananish Chaudhuri, Urs Fischbacher, Andreas Glöckner, Nir Halevy, David Rand, and Toshio Yamagishi for providing game instructions which served as a basis for the standardized instructions.

1.

We want to emphasize that there are entire textbooks available that introduce economic games for experimental scientists (e.g., Camerer, 2003; Chaudhuri, 2009; Colman, 1995). Here, we specifically aim at providing a more hands-on and theory-driven overview that also refers to Interdependence Theory as well as to the affordance concept.

2.

By definition, whenever a game is designed to measure prosocial behavior, it involves a certain conflict of interests for players resulting from some level of negative interdependence between the players’ payoffs. Therefore, we herein exclusively refer to economic games involving such (partial or perfect) conflicting interests between players, leaving aside pure coordination games that are characterized by compatible interests between players due to a positive interdependence between payoffs (see, e.g., Abele et al., 2010; Kelley et al., 2003, for further reading). Specifically, in case of perfect negative interdependence, the situation corresponds to a pure conflict (Schelling, 1960) as mostly modelled in constant-sum games (see Section I.1): An increase in one player’s payoff is necessarily associated with a corresponding decrease in another player’s payoff. By contrast, in case of partial negative interdependence, the situation corresponds to a mixed-motive situation (Schelling, 1960; see also Davis et al., 1976; Komorita & Parks, 1995) as modelled in bargaining games and social dilemmas (see Sections II.1 and II.2). The term mixed-motive refers to a situation in which “two or more parties are faced with a conflict between the motives to cooperate and to compete with each other” (Komorita & Parks, 1995, p. 184).

3.

In addition to Interdependence Theory, other theoretical approaches, such as reinforcement learning (e.g., Camerer & Ho, 1999; Roth & Erev, 1995) or generalized reciprocity models (Romano et al., 2017; Yamagishi et al., 1999), have been proposed in an attempt to provide psychological accounts of behavior in games. Here, however, we focus on Interdependence Theory given (i) its broad application to diverse situations of interdependence, (ii) its pioneering role in the field, and (iii) its direct reference to situational features of games and corresponding affordances.

4.

In Game Theory, the terms “payoff” and “utility” have no inherent meaning other than being a numerical representation of preferences; they are typically used interchangeably (as we will do here). So, in a game-theoretic sense, payoffs are not equivalent to monetary outcomes. Consequently, we will use the term “outcome” whenever referring to monetary or other (material or non-material) consequences. As such, our use of the terms “payoff”, “utility”, and “outcome” is in line with classical Game Theory textbooks (e.g., Mas-Colell et al., 1995).

5.

If the sum of players’ payoffs equals zero (i.e., if a positive payoff for one player is associated with the same negative payoff for another), the game converts to a zero-sum game. In general, each constant-sum game can be transformed to a zero-sum game by normalization of the sum of payoffs to zero. Correspondingly, the terms constant-sum and zero-sum are often used interchangeably.

6.

In games of imperfect information, at least one player does not observe a predecessor’s move. Moreover, another important feature is whether a game involves complete information, that is, whether all players know all players’ payoffs.

7.

Another common efficiency criterion is Pareto-efficiency (also called Pareto-optimality). In contrast to social efficiency, Pareto-efficiency does not require that payoffs are comparable across players but uses only within-person payoff comparisons. For instance, in the Prisoner’s Dilemma, (5,5), (7,1), and (1,7) are Pareto-efficient payoff combinations because, for each of them, no other achievable payoff combination makes at least one player strictly better off without making another player worse off.

8.

Psychological processes comprise all kinds of related factors within a person that may become activated by an affordance and, thus, expressed in behavior in a specific situation, such as attitudes, cognitions, emotions, goals, motives, and norms. As such, psychological processes arguably contain both state- and trait-specific variance.

9.

Researchers who translate the standardized game instructions to other languages are very welcome to send these to the corresponding author; we will then include them in the online repository.

10.

Given that the Dictator Game involves only one active player (the dictator), it is equivalent to a single-person decision problem without strategic interaction (e.g., Forsythe et al., 1994).

11.

A proposer assuming that the responder is not selfish – thus being willing to reject offers of g ≥ 1 – might also give g > 1 out of a greed motive. However, for the sake of simplicity, we assume that players expect to interact with selfish others.

Aaldering, H., & Böhm, R. (2020). Parochial vs. universal cooperation: Introducing a novel economic game of within- and between-group interaction. Social Psychological and Personality Science, 11(1), 36–45. https://doi.org/10.1177/1948550619841627
Abbink, K., Irlenbusch, B., & Renner, E. (2000). The moonlighting game: An experimental study on reciprocity and retribution. Journal of Economic Behavior & Organization, 42(2), 265–277. https://doi.org/10.1016/S0167-2681(00)00089-5
Abbink, K., Irlenbusch, B., & Renner, E. (2002). An experimental bribery game. Journal of Law, Economics, and Organization, 18(2), 428–454. https://doi.org/10.1093/jleo/18.2.428
Abele, S., Stasser, G., & Chartier, C. (2010). Conflict and coordination in the provision of public goods: A conceptual analysis of continuous and step-level games. Personality and Social Psychology Review, 14(4), 385–401. https://doi.org/10.1177/1088868310368535
Aguiar, F., Brañas-Garza, P., & Miller, L. M. (2008). Moral distance in dictator games. Judgment and Decision Making, 3(4), 344–354.
Alston, M. (2015). Social work, climate change and global cooperation. International Social Work, 58(3), 355–363. https://doi.org/10.1177/0020872814556824
Anderhub, V., Engelmann, D., & Güth, W. (2002). An experimental study of the repeated trust game with incomplete information. Journal of Economic Behavior and Organization, 48(2), 197–216. https://doi.org/10.1016/S0167-2681(01)00216-5
Andreoni, J., & Miller, J. (2002). Giving according to GARP: An experimental test of the consistency of preferences for altruism. Econometrica, 70(2), 737–753.
Ashraf, N., Bohnet, I., & Piankov, N. (2006). Decomposing trust and trustworthiness. Experimental Economics, 9(3), 193–208. https://doi.org/10.1007/s10683-006-9122-4
Balliet, D. (2010). Communication and cooperation in social dilemmas: A meta-analytic review. Journal of Conflict Resolution, 54(1), 39–57. https://doi.org/10.1177/0022002709352443
Balliet, D., Mulder, L. B., & Van Lange, P. A. M. (2011). Reward, punishment, and cooperation: A meta-analysis. Psychological Bulletin, 137(4), 594–615. https://doi.org/10.1037/a0023489
Balliet, D., Parks, C. D., & Joireman, J. A. (2009). Social value orientation and cooperation in social dilemmas: A meta-analysis. Group Processes & Intergroup Relations, 12(4), 533–547. https://doi.org/10.1177/1368430209105040
Balliet, D., Tybur, J. M., & Van Lange, P. A. M. (2017). Functional Interdependence Theory: An evolutionary account of social situations. Personality and Social Psychology Review, 21(4), 361–388. https://doi.org/10.1177/1088868316657965
Balliet, D., & Van Lange, P. A. M. (2013a). Trust, punishment, and cooperation across 18 societies: A meta-analysis. Perspectives on Psychological Science, 8(4), 363–379. https://doi.org/10.1177/1745691613488533
Balliet, D., & Van Lange, P. A. M. (2013b). Trust, conflict, and cooperation: A meta-analysis. Psychological Bulletin, 139(5), 1090–1112. https://doi.org/10.1037/a0030939
Bardsley, N., Cubitt, R., Loomes, G., Moffatt, P., Starmer, C., & Sugden, R. (2010). Experimental economics: Rethinking the rules. Princeton University Press.
Barends, A. J., de Vries, R. E., & van Vugt, M. (2019). Power influences the expression of Honesty-Humility: The power-exploitation affordances hypothesis. Journal of Research in Personality, 82, 103856. https://doi.org/10.1016/j.jrp.2019.103856
Baron, J. (2001). Purposes and methods [Peer commentary on “Experimental practices in economics: A methodological challenge for psychologists?” by R. Hertwig & A. Ortmann]. Behavioral and Brain Sciences, 24(3), 383–451.
Barrett, S., & Dannenberg, A. (2012). Climate negotiations under scientific uncertainty. Proceedings of the National Academy of Sciences, 109(43), 17372–17376. https://doi.org/10.1073/pnas.1208417109
Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36(1), 59–78. https://doi.org/10.1017/S0140525X11002202
Baumeister, R. F., Vohs, K. D., & Funder, D. C. (2007). Psychology as the science of self-reports and finger movements: Whatever happened to actual behavior? Perspectives on Psychological Science, 2(4), 396–403. https://doi.org/10.1111/j.1745-6916.2007.00051.x
Bearden, J. N. (2001). Ultimatum bargaining game experiments: The state of the art. SSRN Working Paper Series. https://doi.org/10.2139/ssrn.626183
Bell, R., Buchner, A., & Musch, J. (2010). Enhanced old–new recognition and source memory for faces of cooperators and defectors in a social-dilemma game. Cognition, 117(3), 261–275. https://doi.org/10.1016/j.cognition.2010.08.020
Berg, J., Dickhaut, J., & McCabe, K. A. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142. https://doi.org/10.1006/game.1995.1027
Betsch, C., Böhm, R., Korn, L., & Holtmann, C. (2017). On the benefits of explaining herd immunity in vaccine advocacy. Nature Human Behaviour, 1, 56. https://doi.org/10.1038/s41562-017-0056
Bicchieri, C., Xiao, E., & Muldoon, R. (2011). Trustworthiness is a social norm, but trusting is not. Politics, Philosophy & Economics, 10(2), 170–187. https://doi.org/10.1177/1470594X10387260
Blanco, M., Engelmann, D., & Normann, H. T. (2011). A within-subject analysis of other-regarding preferences. Games and Economic Behavior, 72(2), 321–338. https://doi.org/10.1016/j.geb.2010.09.008
Böhm, R., Betsch, C., & Korn, L. (2016). Selfish-rational non-vaccination: Experimental evidence from an interactive vaccination game. Journal of Economic Behavior & Organization, 131, 183–195. https://doi.org/10.1016/j.jebo.2015.11.008
Böhm, R., & Buttelmann, D. (2017). The impact of resource valence on children’s other-regarding preferences. Developmental Psychology, 53(9), 1656–1665. https://doi.org/10.1037/dev0000365
Böhm, R., Rusch, H., & Baron, J. (2020). The psychology of intergroup conflict: A review of theories and measures. Journal of Economic Behavior & Organization, 178, 947–962. https://doi.org/10.1016/j.jebo.2018.01.020
Böhm, R., Theelen, M. M. P., Rusch, H., & Van Lange, P. A. M. (2018). Costs, needs, and integration efforts shape helping behavior toward refugees. Proceedings of the National Academy of Sciences, 115(28), 7284–7289. https://doi.org/10.1073/pnas.1805601115
Bohnet, I., & Meier, S. (2005). Deciding to distrust. KSG Working Paper No. RWP05-049; FRB Boston Working Paper No. 05-4. https://doi.org/10.2139/ssrn.839225
Bolton, G. E. (1991). A comparative model of bargaining: Theory and evidence. American Economic Review, 81(5), 1096–1136.
Bolton, G. E., & Zwick, R. (1995). Anonymity versus punishment in ultimatum bargaining. Games and Economic Behavior, 10(1), 95–121. https://doi.org/10.1006/game.1995.1026
Bornstein, G. (1992). The free-rider problem in intergroup conflicts over step-level and continuous public goods. Journal of Personality and Social Psychology, 62(4), 597–606. https://doi.org/10.1037/0022-3514.62.4.597
Bornstein, G. (2003). Intergroup conflict: Individual, group, and collective interests. Personality and Social Psychology Review, 7(2), 129–145. https://doi.org/10.1207/S15327957PSPR0702_129-145
Bornstein, G., & Ben-Yossef, M. (1994). Cooperation in intergroup and single-group social dilemmas. Journal of Experimental Social Psychology, 30(1), 52–67. https://doi.org/10.1006/jesp.1994.1003
Bosman, R., & van Winden, F. (2002). Emotional hazard in a power-to-take experiment. The Economic Journal, 112(476), 147–169. https://doi.org/10.1111/1468-0297.00677
Brañas-Garza, P. (2006). Poverty in dictator games: Awakening solidarity. Journal of Economic Behavior & Organization, 60(3), 306–320. https://doi.org/10.1016/j.jebo.2004.10.005
Buttelmann, D., & Böhm, R. (2014). The ontogeny of the motivation that underlies in-group bias. Psychological Science, 25(4), 921–927. https://doi.org/10.1177/0956797613516802
Camerer, C. F. (2003). Behavioral game theory: Experiments in strategic interaction. Russell Sage Foundation.
Camerer, C. F., & Ho, T.-H. (1999). Experience-weighted attraction learning in normal form games. Econometrica, 67(4), 827–874. https://doi.org/10.1111/1468-0262.00054
Camerer, C. F., & Hogarth, R. M. (1999). The effects of financial incentives in experiments: A review and capital-labor-production framework. Journal of Risk and Uncertainty, 19(1–3), 7–42. https://doi.org/10.1023/A:1007850605129
Camerer, C. F., & Weigelt, K. (1988). Experimental tests of a sequential equilibrium reputation model. Econometrica, 56(1), 1–36. https://doi.org/10.2307/1911840
Chaudhuri, A. (2009). Experiments in economics: Playing fair with money. Routledge.
Cherry, T. L., Frykblom, P., & Shogren, J. F. (2002). Hardnose the dictator. American Economic Review, 92(4), 1218–1221. https://doi.org/10.1257/00028280260344740
Colman, A. M. (1995). Game theory and its applications in the social and biological sciences. Routledge/Taylor & Francis Group.
Colman, A. M. (2006). The puzzle of cooperation. Nature, 440(7085), 744–745. https://doi.org/10.1038/440744b
Colman, A. M., Krockow, E. M., Chattoe-Brown, E., & Tarrant, C. (2019). Medical prescribing and antibiotic resistance: A game-theoretic analysis of a potentially catastrophic social dilemma. PLOS ONE, 14(4), e0215480. https://doi.org/10.1371/journal.pone.0215480
Columbus, S., Münich, J., & Gerpott, F. H. (2020). Playing a different game: Situation perception mediates framing effects on cooperative behaviour. Journal of Experimental Social Psychology, 90, 104006. https://doi.org/10.1016/j.jesp.2020.104006
Columbus, S., Thielmann, I., & Balliet, D. (2019). Situational affordances for prosocial behaviour: On the interaction between Honesty-Humility and (perceived) interdependence. European Journal of Personality, 33(6), 655–673. https://doi.org/10.1002/per.2224
Cooper, D. J., & Dutcher, E. G. (2011). The dynamics of responder behavior in ultimatum games: A meta-study. Experimental Economics, 14(4), 519–546. https://doi.org/10.1007/s10683-011-9280-x
Croson, R. T. A. (1996). Information in ultimatum games: An experimental study. Journal of Economic Behavior & Organization, 30(2), 197–212. https://doi.org/10.1016/S0167-2681(96)00857-8
Cushman, F. A., Dreber, A., Wang, Y., & Costa, J. (2009). Accidental outcomes guide punishment in a “trembling hand” game. PLoS ONE, 4(8), e6699. https://doi.org/10.1371/journal.pone.0006699
Davis, J. H., Laughlin, P. R., & Komorita, S. S. (1976). The social psychology of small groups: Cooperative and mixed-motive interaction. Annual Review of Psychology, 27(1), 501–541. https://doi.org/10.1146/annurev.ps.27.020176.002441
Dawes, R. M. (1980). Social dilemmas. Annual Review of Psychology, 31, 169–193. https://doi.org/10.1146/annurev.ps.31.020180.001125
De Vries, R. E., Tybur, J. M., Pollet, T. V, & van Vugt, M. (2016). Evolution, situational affordances, and the HEXACO model of personality. Evolution and Human Behavior, 37, 407–421. https://doi.org/10.1016/j.evolhumbehav.2016.04.001
Debove, S., Baumard, N., & André, J.-B. (2016). Models of the evolution of fairness in the ultimatum game: A review and classification. Evolution and Human Behavior, 37(3), 245–254. https://doi.org/10.1016/j.evolhumbehav.2016.01.001
Deutsch, M. (1958). Trust and suspicion. The Journal of Conflict Resolution, 2(4), 265–279. https://doi.org/10.2307/172886
Deutsch, M. (1960). Trust, trustworthiness, and the F scale. The Journal of Abnormal and Social Psychology, 61(1), 138–140. https://doi.org/10.1037/h0046501
Devine, P. G., Plant, E. A., Amodio, D. M., Harmon-Jones, E., & Vance, S. L. (2002). The regulation of explicit and implicit race bias: The role of motivations to respond without prejudice. Journal of Personality and Social Psychology, 82(5), 835–848. https://doi.org/10.1037/0022-3514.82.5.835
Diekmann, A. (1985). Volunteer’s dilemma. Journal of Conflict Resolution, 29(4), 605–610. https://doi.org/10.1177/0022002785029004003
Eckel, C. C., & Grossman, P. J. (1996). Altruism in anonymous dictator games. Games and Economic Behavior, 16(2), 181–191. https://doi.org/10.1006/game.1996.0081
Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14(4), 583–610. https://doi.org/10.1007/s10683-011-9283-7
Fehr, E., Bernhard, H., & Rockenbach, B. (2008). Egalitarianism in young children. Nature, 454(7208), 1079–1083. https://doi.org/10.1038/nature07155
Forsythe, R., Horowitz, J. L., Savin, N. E., & Sefton, M. (1994). Fairness in simple bargaining experiments. Games and Economic Behavior, 6(3), 347–369. https://doi.org/10.1006/game.1994.1021
Freedman, G., & Flanagan, M. (2017). From dictators to avatars: Furthering social and personality psychology through game methods. Social and Personality Psychology Compass, 11(12), e12368. https://doi.org/10.1111/spc3.12368
Fudenberg, D., & Tirole, J. (1991). Game theory. MIT Press.
Galizzi, M. M., & Navarro-Martinez, D. (2018). On the external validity of social preference games: A systematic lab-field study. Management Science. https://doi.org/10.1287/mnsc.2017.2908
Gerpott, F. H., Balliet, D., Columbus, S., Molho, C., & De Vries, R. E. (2018). How do people think about interdependence? A multidimensional model of subjective outcome interdependence. Journal of Personality and Social Psychology, 115(4), 716–742. https://doi.org/10.1037/pspp0000166
Gerpott, F. H., Thielmann, I., & Balliet, D. (in press). Interdependence approaches to the person and the situation. In D. Wood, S. J. Read, P. D. Harms, & A. Slaughter (Eds.), Emerging approaches to measuring and modeling the person and situation. Elsevier.
Gneezy, U., & Imas, A. (2017). Lab in the field: Measuring preferences in the wild. Handbook of Economic Field Experiments, 1, 439–464. https://doi.org/10.1016/bs.hefe.2016.08.003
Grossman, P. J., & Eckel, C. C. (2015). Giving versus taking for a cause. Economics Letters, 132, 28–30. https://doi.org/10.1016/j.econlet.2015.04.002
Grueneisen, S., & Tomasello, M. (2019). Children use rules to coordinate in a social dilemma. Journal of Experimental Child Psychology, 179, 362–374. https://doi.org/10.1016/j.jecp.2018.11.001
Güth, W., & Huck, S. (1997). From ultimatum bargaining to dictatorship—an experimental study of four games varying in veto power. Metroeconomica, 48(3), 262–299. https://doi.org/10.1111/1467-999X.00033
Güth, W., & Kocher, M. G. (2014). More than thirty years of ultimatum bargaining experiments: Motives, variations, and a survey of the recent literature. Journal of Economic Behavior & Organization, 108, 396–409. https://doi.org/10.1016/j.jebo.2014.06.006
Güth, W., Levati, M. V., & Ploner, M. (2012). An experimental study of the generosity game. Theory and Decision, 72(1), 51–63. https://doi.org/10.1007/s11238-011-9253-8
Güth, W., Schmittberger, R., & Schwarze, B. (1982). An experimental analysis of ultimatum bargaining. Journal of Economic Behavior & Organization, 3(4), 367–388. https://doi.org/10.1016/0167-2681(82)90011-7
Güth, W., & van Damme, E. (1998). Information, strategic behavior, and fairness in ultimatum bargaining: An experimental study. Journal of Mathematical Psychology, 42(2–3), 227–247. https://doi.org/10.1006/jmps.1998.1212
Haesevoets, T., Reinders Folmer, C., Bostyn, D. H., & Van Hiel, A. (2018). Behavioural consistency within the Prisoner’s Dilemma Game: The role of personality and situation. European Journal of Personality, 32(4), 405–426. https://doi.org/10.1002/per.2158
Haesevoets, T., Reinders Folmer, C., & Van Hiel, A. (2015). Cooperation in mixed-motive games: The role of individual differences in selfish and social orientation. European Journal of Personality, 29(4), 445–458. https://doi.org/10.1002/per.1992
Halevy, N., Bornstein, G., & Sagiv, L. (2008). “In-group love” and “out-group hate” as motives for individual participation in intergroup conflict: A new game paradigm. Psychological Science, 19(4), 405–411. https://doi.org/10.1111/j.1467-9280.2008.02100.x
Halevy, N., & Chou, E. Y. (2014). How decisions happen: Focal points and blind spots in interdependent decision making. Journal of Personality and Social Psychology, 106(3), 398–417. https://doi.org/10.1037/a0035351
Halevy, N., Chou, E. Y., & Murnighan, J. K. (2012). Mind games: The mental representation of conflict. Journal of Personality and Social Psychology, 102(1), 132–148. https://doi.org/10.1037/a0025389
Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248. https://doi.org/10.1126/science.162.3859.1243
Hilbig, B. E., Kieslich, P. J., Henninger, F., Thielmann, I., & Zettler, I. (2018). Lead us (not) into temptation: Testing the motivational mechanisms linking Honesty-Humility to cooperation. European Journal of Personality, 32(2), 116–127. https://doi.org/10.1002/per.2149
Hilbig, B. E., & Zettler, I. (2009). Pillars of cooperation: Honesty-Humility, social value orientations, and economic behavior. Journal of Research in Personality, 43(3), 516–519. https://doi.org/10.1016/j.jrp.2009.01.003
Hinde, R. A., & Groebel, J. (1991). Introduction. In R. A. Hinde & J. Groebel (Eds.), Cooperation and prosocial behavior (pp. 1–8). Cambridge University Press.
Ho, B. (2012). Apologies as signals: With evidence from a trust game. Management Science, 58(1), 141–158. https://doi.org/10.1287/mnsc.1110.1410
Holmes, J. G. (2004). The benefits of abstract functional analysis in theory construction: The case of interdependence theory. Personality and Social Psychology Review, 8(2), 146–155. https://doi.org/10.1207/s15327957pspr0802_8
Insko, C. A., Schopler, J., Hoyle, R. H., Dardis, G. J., & Graetz, K. A. (1990). Individual-group discontinuity as a function of fear and greed. Journal of Personality and Social Psychology, 58(1), 68–79. https://doi.org/10.1037/0022-3514.58.1.68
Insko, C. A., Schopler, J., Kennedy, J. F., Dahl, K. R., Graetz, K. A., & Drigotas, S. M. (1992). Individual-group discontinuity from the differing perspectives of Campbell’s Realistic Group Conflict Theory and Tajfel and Turner’s Social Identity Theory. Social Psychology Quarterly, 55(3), 272–291. https://doi.org/10.2307/2786796
Isaac, R. M., Walker, J. M., & Thomas, S. H. (1984). Divergent evidence on free riding: An experimental examination of possible explanations. Public Choice, 43(2), 113–149. https://doi.org/10.1007/BF00140829
Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865–889. https://doi.org/10.1016/j.joep.2011.05.007
Jordan, J. J., McAuliffe, K., & Warneken, F. (2014). Development of in-group favoritism in children’s third-party punishment of selfishness. Proceedings of the National Academy of Sciences of the United States of America, 111(35), 12710–12715. https://doi.org/10.1073/pnas.1402280111
Kahneman, D., Knetsch, J. L., & Thaler, R. H. (1986). Fairness and the assumptions of economics. Journal of Business, 59(4), S285–S300.
Kelley, H. H., Holmes, J. G., Kerr, N. L., Reis, H. T., Rusbult, C. E., & Van Lange, P. A. M. (2003). An atlas of interpersonal situations. Cambridge University Press. https://doi.org/10.1017/CBO9780511499845
Kelley, H. H., & Thibaut, J. W. (1978). Interpersonal relations: A theory of interdependence. Wiley.
Kiyonari, T., & Yamagishi, T. (1999). A comparative study of trust and trustworthiness using the game of enthronement. The Japanese Journal of Social Psychology, 15(2), 100–109.
Kiyonari, T., Yamagishi, T., Cook, K. S., & Cheshire, C. (2006). Does trust beget trustworthiness? Trust and trustworthiness in two games and two cultures: A research note. Social Psychology Quarterly, 69(3), 270–283. https://doi.org/10.1177/019027250606900304
Klapwijk, A., & Van Lange, P. A. M. (2009). Promoting cooperation and trust in “noisy” situations: The power of generosity. Journal of Personality and Social Psychology, 96(1), 83–103. https://doi.org/10.1037/a0012823
Klein, S. A., & Hilbig, B. E. (2019). On the lack of real consequences in consumer choice research—and its consequences. Experimental Psychology, 66(1), 68. https://doi.org/10.1027/1618-3169/a000420
Klein, S. A., Hilbig, B. E., & Heck, D. W. (2017). Which is the greater good? A social dilemma paradigm disentangling environmentalism and cooperation. Journal of Environmental Psychology, 53, 40–49. https://doi.org/10.1016/j.jenvp.2017.06.001
Kollock, P. (1998). Social dilemmas: The anatomy of cooperation. Annual Review of Sociology, 24, 183–214. https://doi.org/10.1146/annurev.soc.24.1.183
Komorita, S. S., & Parks, C. D. (1995). Interpersonal relations: Mixed-motive interaction. Annual Review of Psychology, 46(1), 183–207. https://doi.org/10.1146/annurev.ps.46.020195.001151
Kreps, D. M. (1990). Corporate culture and economic theory. In J. E. Alt & K. A. Shepsle (Eds.), Perspectives on positive political economy (pp. 90–143). Oxford University Press.
Kuhlman, D. M., & Marshello, A. F. (1975). Individual differences in game motivation as moderators of preprogrammed strategy effects in prisoner’s dilemma. Journal of Personality and Social Psychology, 32(5), 922–931. https://doi.org/10.1037/0022-3514.32.5.922
Le, B. M., Impett, E. A., Lemay, E. P., Jr., Muise, A., & Tskhay, K. O. (2018). Communal motivation and well-being in interpersonal relationships: An integrative review and meta-analysis. Psychological Bulletin, 144(1), 1–25. https://doi.org/10.1037/bul0000133
Ledyard, J. O. (1995). Public goods: A survey of experimental research. In J. H. Kagel & A. E. Roth (Eds.), The handbook of experimental economics (pp. 111–194). Princeton University Press.
Lodgaard, S. (2010). Nuclear disarmament and non-proliferation: Towards a nuclear-weapon-free world? Taylor & Francis.
Luce, R. D., & Raiffa, H. (1957). Games and decisions: Introduction and critical survey. John Wiley & Sons.
Marx, S. M., & Weber, E. U. (2012). Decision making under climate uncertainty: The power of understanding judgment and decision processes. In Climate change in the Great Lakes region: Navigating an uncertain future. Michigan State University Press.
Mas-Colell, A., Whinston, M. D., & Green, J. R. (1995). Microeconomic theory (Vol. 1). Oxford University Press.
McAuliffe, W. H. B., Forster, D. E., Pedersen, E. J., & McCullough, M. E. (2019). Does cooperation in the laboratory reflect the operation of a broad trait? European Journal of Personality, 33(1), 89–103. https://doi.org/10.1002/per.2180
McClintock, C. G. (1972). Social motivation: A set of propositions. Behavioral Science, 17(5), 438–455. https://doi.org/10.1002/bs.3830170505
Messick, D. M., & McClintock, C. G. (1968). Motivational bases of choice in experimental games. Journal of Experimental Social Psychology, 4(1), 1–25. https://doi.org/10.1016/0022-1031(68)90046-2
Mischel, W., & Shoda, Y. (1995). A cognitive-affective system theory of personality: Reconceptualizing situations, dispositions, dynamics, and invariance in personality structure. Psychological Review, 102(2), 246–268. https://doi.org/10.1037/0033-295X.102.2.246
Mittone, L., & Ploner, M. (2012). Asset legitimacy and distributive justice in the dictator game: An experimental analysis. Journal of Behavioral Decision Making, 25(2), 135–142. https://doi.org/10.1002/bdm.708
Murnighan, J. K., & Wang, L. (2016). The social world as an experimental game. Organizational Behavior and Human Decision Processes, 136, 80–94. https://doi.org/10.1016/j.obhdp.2016.02.003
Myerson, R. B. (1991). Game theory: Analysis of conflict. Harvard University Press.
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560–1563. https://doi.org/10.1126/science.1133755
Oosterbeek, H., Sloof, R., & van de Kuilen, G. (2004). Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics, 7(2), 171–188. https://doi.org/10.1023/B:EXEC.0000026978.14316.74
Osborne, M. J. (2002). An introduction to game theory. Oxford University Press.
Peysakhovich, A., Nowak, M. A., & Rand, D. G. (2014). Humans display a “cooperative phenotype” that is domain general and temporally stable. Nature Communications, 5, 4939. https://doi.org/10.1038/ncomms5939
Pletzer, J. L., Balliet, D., Joireman, J. A., Kuhlman, D. M., Voelpel, S. C., & Van Lange, P. A. M. (2018). Social value orientation, expectations, and cooperation in social dilemmas: A meta‐analysis. European Journal of Personality, 32(1), 62–83. https://doi.org/10.1002/per.2139
Plous, S. (1988). Disarmament, arms control, and peace in the nuclear age: Political objectives and relevant research. Journal of Social Issues, 44(2), 133–154. https://doi.org/10.1111/j.1540-4560.1988.tb02066.x
Podsakoff, P. M., Ahearne, M., & MacKenzie, S. B. (1997). Organizational citizenship behavior and the quantity and quality of work group performance. Journal of Applied Psychology, 82(2), 262–270. https://doi.org/10.1037/0021-9010.82.2.262
Pruitt, D. G., & Kimmel, M. J. (1977). Twenty years of experimental gaming: Critique, synthesis, and suggestions for the future. Annual Review of Psychology, 28, 363–392. https://doi.org/10.1146/annurev.ps.28.020177.002051
Raihani, N. J., & Bell, V. (2018). Conflict and cooperation in paranoia: A large-scale behavioural experiment. Psychological Medicine, 48(9), 1523–1531. https://doi.org/10.1017/S0033291717003075
Rand, D. G., Greene, J. D., & Nowak, M. A. (2012). Spontaneous giving and calculated greed. Nature, 489, 427–430. https://doi.org/10.1038/nature11467
Rapoport, Amnon, & Bornstein, G. (1987). Intergroup competition for the provision of binary public goods. Psychological Review, 94(3), 291–299. https://doi.org/10.1037/0033-295X.94.3.291
Rapoport, Anatol, & Chammah, A. M. (1965). Prisoner’s dilemma: A study in conflict and cooperation. University of Michigan Press.
Rapoport, Anatol, & Chammah, A. M. (1966). The game of chicken. American Behavioral Scientist, 10(3), 10–14, 23–28. https://doi.org/10.1177/000276426601000303
Rapoport, Anatol, & Guyer, M. (1966). A taxonomy of 2x2 games. General Systems, 203–214.
Reis, H. T. (2008). Reinvigorating the concept of situation in social psychology. Personality and Social Psychology Review, 12(4), 311–329. https://doi.org/10.1177/1088868308321721
Rodebaugh, T. L., Heimberg, R. G., Taylor, K. P., & Lenze, E. J. (2016). Clarifying the behavioral economics of social anxiety disorder: Effects of interpersonal problems and symptom severity on generosity. Clinical Psychological Science, 4(1), 107–121. https://doi.org/10.1177/2167702615578128
Romano, A., Balliet, D., & Wu, J. (2017). Unbounded indirect reciprocity: Is reputation-based cooperation bounded by group membership? Journal of Experimental Social Psychology, 71, 59–67. https://doi.org/10.1016/j.jesp.2017.02.008
Roth, A. E., & Erev, I. (1995). Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term. Games and Economic Behavior, 8(1), 164–212. https://doi.org/10.1016/s0899-8256(05)80020-x
Rubinstein, A. (1982). Perfect equilibrium in a bargaining model. Econometrica, 50(1), 97–109. https://doi.org/10.2307/1912531
Rusbult, C. E., & Van Lange, P. A. M. (2003). Interdependence, interaction and relationships. Annual Review of Psychology, 54, 351–375. https://doi.org/10.1146/annurev.psych.54.101601.145059
Sally, D. (1995). Conversation and cooperation in social dilemmas: A meta-analysis of experiments from 1958 to 1992. Rationality and Society, 7(1), 58–92. https://doi.org/10.1177/1043463195007001004
Samuelson, P. A. (1954). The pure theory of public expenditure. Review of Economics and Statistics, 36, 387–389. https://doi.org/10.2307/1925895
Schelling, T. C. (1960). The strategy of conflict. Harvard University Press.
Schopler, J., & Insko, C. A. (1992). The discontinuity effect in interpersonal and intergroup relations: Generality and mediation. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology, Vol. 3. (pp. 121–151). John Wiley & Sons. https://doi.org/10.1080/14792779243000041
Schopler, J., Insko, C. A., Wieselquist, J., Pemberton, M., Witcher, B., Kozar, R., Roddenberry, C., & Wildschut, T. (2001). When groups are more competitive than individuals: The domain of the discontinuity effect. Journal of Personality and Social Psychology, 80(4), 632–644. https://doi.org/10.1037/0022-3514.80.4.632
Selten, R. (1967). Die Strategiemethode zur Erforschung des eingeschränkt rationalen Verhaltens im Rahmen eines Oligopolexperimentes [The strategy method as a tool to analyze bounded rationality in oligopoly experiments]. In H. Sauermann (Ed.), Beiträge zur Experimentellen Wirtschaftsforschung (pp. 136–168). J. C. B. Mohr.
Selten, R. (1975). Reexamination of the perfectness concept for equilibrium points in extensive games. International Journal of Game Theory, 4(1), 25–55. https://doi.org/10.1007/bf01766400
Selten, R., & Ockenfels, A. (1998). An experimental solidarity game. Journal of Economic Behavior & Organization, 34(4), 517–539. https://doi.org/10.1016/S0167-2681(97)00107-8
Skyrms, B. (2001). The stag hunt. Proceedings and Addresses of the American Philosophical Association, 75(2), 31–41. https://doi.org/10.2307/3218711
Stavrova, O., & Schlösser, T. M. (2015). Solidarity and social justice: Effect of individual differences in justice sensitivity on solidarity behaviour. European Journal of Personality, 29(1), 2–16. https://doi.org/10.1002/per.1981
Suleiman, R. (1996). Expectations and fairness in a modified ultimatum game. Journal of Economic Psychology, 17(5), 531–554. https://doi.org/10.1016/S0167-4870(96)00029-3
Tadelis, S. (2013). Game theory: An introduction. Princeton University Press.
Tarrant, C., Colman, A. M., Chattoe-Brown, E., Jenkins, D. R., Mehtar, S., Perera, N., & Krockow, E. M. (2019). Optimizing antibiotic prescribing: collective approaches to managing a common-pool resource. Clinical Microbiology and Infection, 25(11), 1356–1363. https://doi.org/10.1016/j.cmi.2019.03.008
Thibaut, J. W., & Kelley, H. H. (1959). The social psychology of groups. Transaction Publishers.
Thielmann, I., & Böhm, R. (2016). Who does (not) participate in intergroup conflict? Social Psychological and Personality Science, 7(8), 778–787. https://doi.org/10.1177/1948550616660160
Thielmann, I., Böhm, R., & Hilbig, B. E. (2015). Different games for different motives: Comment on Haesevoets, Folmer, and Van Hiel (2015). European Journal of Personality, 29(4), 506–508. https://doi.org/10.1002/per.2007
Thielmann, I., & Hilbig, B. E. (2015). Trust: An integrative review from a person-situation perspective. Review of General Psychology, 19(3), 249–277. https://doi.org/10.1037/gpr0000046
Thielmann, I., Spadaro, G., & Balliet, D. (2020). Personality and prosocial behavior: A theoretical framework and meta-analysis. Psychological Bulletin, 146(1), 30–90. https://doi.org/10.1037/bul0000217
Tomasello, M., & Vaish, A. (2013). Origins of human cooperation and morality. Annual Review of Psychology, 64, 231–255. https://doi.org/10.1146/annurev-psych-113011-143812
van de Kragt, A. J. C., Orbell, J. M., & Dawes, R. M. (1983). The minimal contributing set as a solution to public goods problems. American Political Science Review, 77(1), 112–122. https://doi.org/10.2307/1956014
van Dijk, E., & De Dreu, C. K. W. (2021). Experimental games and social decision making. Annual Review of Psychology, 72, 415–438. https://doi.org/10.1146/annurev-psych-081420-110718
Van Lange, P. A. M., & Balliet, D. (2015). Interdependence theory. In M. Mikulincer, P. R. Shaver, J. A. Simpson, & J. F. Dovidio (Eds.), APA handbook of personality and social psychology, Volume 3: Interpersonal relations (pp. 65–92). American Psychological Association. https://doi.org/10.1037/14344-003
Van Lange, P. A. M., Joireman, J. A., Parks, C. D., & Van Dijk, E. (2013). The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120(2), 125–141. https://doi.org/10.1016/j.obhdp.2012.11.003
Van Lange, P. A. M., & Visser, K. (1999). Locomotion in social dilemmas: How people adapt to cooperative, tit-for-tat, and noncooperative partners. Journal of Personality and Social Psychology, 77(4), 762–773. https://doi.org/10.1037/0022-3514.77.4.762
Vlaev, I., & Chater, N. (2006). Game relativity: How context influences strategic decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(1), 131–149. https://doi.org/10.1037/0278-7393.32.1.131
von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press.
Weber, J. M., Kopelman, S., & Messick, D. M. (2004). A conceptual review of decision making in social dilemmas: Applying a logic of appropriateness. Personality and Social Psychology Review, 8(3), 281–307. https://doi.org/10.1207/s15327957pspr0803_4
Weisel, O., & Böhm, R. (2015). “Ingroup love” and “outgroup hate” in intergroup conflict between natural groups. Journal of Experimental Social Psychology, 60, 110–120. https://doi.org/10.1016/j.jesp.2015.04.008
Yamagishi, T., Horita, Y., Mifune, N., Hashimoto, H., Li, Y., Shinada, M., Miura, A., Inukai, K., Takagishi, H., & Simunovic, D. (2012). Rejection of unfair offers in the ultimatum game is no evidence of strong reciprocity. Proceedings of the National Academy of Sciences, 109(50), 20364–20368. https://doi.org/10.1073/pnas.1212126109
Yamagishi, T., Jin, N., & Kiyonari, T. (1999). Bounded generalized reciprocity: Ingroup boasting and ingroup favoritism. In S. R. Thye (Ed.), Advances in group processes (Vol. 16, pp. 161–197).
Yamagishi, T., Kanazawa, S., Mashima, R., & Terai, S. (2005). Separating trust from cooperation in a dynamic relationship: Prisoner’s dilemma with variable dependence. Rationality and Society, 17(3), 275–308. https://doi.org/10.1177/1043463105055463
Zelmer, J. (2003). Linear public goods experiments: A meta-analysis. Experimental Economics, 6(3), 299–310. https://doi.org/10.1023/A:1026277420119
Zettler, I., Hilbig, B. E., & Heydasch, T. (2013). Two sides of one coin: Honesty-Humility and situational factors mutually shape social dilemma decision making. Journal of Research in Personality, 47(4), 286–295. https://doi.org/10.1016/j.jrp.2013.01.012
Zhao, K., & Smillie, L. D. (2015). The role of interpersonal traits in social decision making: Exploring sources of behavioral heterogeneity in economic games. Personality and Social Psychology Review, 19(3), 277–302. https://doi.org/10.1177/1088868314553709
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.