The identifiable victim effect describes the stronger tendency to help a specific victim than to help a group of unidentified statistical victims. Our reanalysis of a meta-analysis of the effect by Lee and Feeley (2016) using robust Bayesian meta-analysis suggested publication bias in the literature and the need to revisit the phenomenon. We conducted a preregistered far replication and extension of Studies 1 and 3 in Small et al. (2007), a seminal demonstration of the identifiable victim effect, with hypothetical donations. We examined the impact of deliberative thinking on the identifiable victim effect both by directly informing participants of the effect (Study 1) and by providing an identified victim alongside statistical information (Study 3). We found no empirical support for the identifiable victim effect (ηp2 = .000, 95% CI [.000, .003]) and consequently no support for debiasing the phenomenon (ηp2 = .001, 95% CI [.000, .012]). These findings suggest that the identifiable victim effect may be better framed in terms of ‘scope-insensitivity’. In other words, rather than giving more to a single identified victim, participants seem to be insensitive to the number of victims affected. However, our study involved only hypothetical donations rather than a real-effort, real-donation paradigm as in Small et al. (2007). We therefore hope that our results spark motivation for future high-powered replications with real money donations, ideally carried out as registered reports and in collaboration with proponents of the original effect. Materials, data, and code are available on the OSF: https://osf.io/n4jkh/ .

The identifiable victim effect is the tendency to offer more support to an identifiable individual than to a group of unidentified victims who are described using numerical statistics (Jenni & Loewenstein, 1997). This inconsistent valuation results in inefficient resource allocation. Small et al. (2007) showed that the identifiable victim effect could be weakened by deliberative thinking: being informed of the effect and thinking analytically about one’s own altruistic behavior may reduce the motivation to offer assistance to a single beneficiary. Notably, the effect was diminished not because participants gave more to the statistical victims, but because they gave less to an identified victim. In Study 1 of Small et al. (2007), participants in an explicit learning condition were taught about the identifiable victim effect before making a donation decision. Participants briefed about the phenomenon in this way donated less to an identifiable victim compared to the control group. Study 3 found similar results in a condition where the identified victim was presented together with victim statistics: those in the joint presentation condition donated less than those in the identified victim condition, presumably because the statistics reminded them of the many other victims who would not receive help.

We report a replication of Small et al. (2007) with two major goals. Our first goal was to conduct an independent, preregistered, well-powered conceptual replication of the classic identifiable victim effect on hypothetical donations. This included two manipulations aimed at debiasing the effect: an explicit learning technique, which consisted of informing people about the effect, and an implicit learning technique, which consisted of showing the identifiable victim jointly with victim statistics. Our second goal was to examine associations between affective feelings and hypothetical donations, and to examine the impact of identifiability and explicit learning on affective feelings. Additionally, we added an extension examining associations with the perceived impact of the donation.

The phenomenon of disproportional generosity provoked by identifiable in comparison to unidentifiable individuals appears to be supported by substantial empirical research (Bergh & Reinstein, 2021; Caviola et al., 2020; Erlandsson et al., 2014; Friedrich & McGuire, 2010; S. Lee & Feeley, 2016; Loewenstein et al., 2006; Slovic, 2007; Small & Loewenstein, 2003). Kogut and Ritov (2005) demonstrated that this effect was restricted to a single target with their name, age, or face displayed, which resulted in larger hypothetical donations compared to a group of unidentified victims.1 Other moderators were proposed to account for the identifiable victim effect, including the number of identified or unidentified victims, entitativity, cause of plight, perceived responsibility, emotions displayed by the victim, and sense of belonging (Erlandsson et al., 2015; Ritov & Kogut, 2011; Small & Verrochi, 2009; Smith et al., 2013).

There are several possible explanations for the identifiable victim effect. One explanation is proportion dominance, which refers to the phenomenon that people show higher sensitivity to proportions than to absolute values (Baron, 1997). This heuristic suggests that people pay more attention to proportions or percentages than to absolute numbers. Therefore, when evaluating options to save lives, a higher proportion of lives saved seems to result in more helping (Erlandsson et al., 2014; Jenni & Loewenstein, 1997). In the case of an identifiable victim, given that the victim serves as the only reference, the proportion is perceived as 100%. For statistical victims, on the other hand, the reference group may consist of millions of people. Even though the absolute number of lives saved would be higher than a single individual, the proportion of lives saved decreases, thus reducing willingness to help.
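To make the proportion-dominance arithmetic concrete, the sketch below contrasts the two framings. All numbers are our own illustrative assumptions (a single identified victim versus a hypothetical donation helping 100 out of 3 million affected people); they are not figures from the cited studies.

```python
# Illustrative arithmetic for proportion dominance (hypothetical numbers).
def proportion_saved(lives_saved, reference_group):
    """Fraction of the salient reference group that the donation would help."""
    return lives_saved / reference_group

# One identified victim: she herself is the entire reference group.
identified = proportion_saved(1, 1)             # 1.0, i.e., 100%

# Statistical victims: e.g., a donation that helps 100 people out of
# 3 million affected (our hypothetical figure).
statistical = proportion_saved(100, 3_000_000)  # ~3.3e-5, i.e., about 0.003%

# Although 100 > 1 in absolute terms, the proportion saved is far smaller,
# which on this account reduces willingness to help.
print(identified, statistical)
```

On the proportion-dominance account, it is the gap between these two fractions, not the absolute number helped, that drives the difference in giving.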

Another explanation for this effect is the ‘affect heuristic’ (Slovic et al., 2007). This heuristic describes the tendency to rely on emotional and affective states when evaluating a stimulus, and is believed to be activated when evaluating a specific identified victim (S. Lee & Feeley, 2018; Small & Loewenstein, 2003). The personalized information received about a specific victim is argued to elicit stronger affective reactions such as sympathy and distress, which likely motivates the willingness to offer more support to that individual. In contrast, a general number representing statistical unidentified victims may fail to induce any major affective responses, and therefore lead to less willingness for altruistic behavior.

Finally, the effect may also be explained by the perceived impact of the donation (Erlandsson et al., 2015). If participants have a stronger belief in the impact of their donations, they are likely to be more willing to give. Duncan (2004) reported that donations towards identifiable victims were perceived to have stronger impact, likely because it was easier to picture how the money or resources would benefit the individual, compared to statistical victims who were depicted in an abstract manner or as a number. However, findings seem mixed regarding the mediating effect of perceived impact on the identifiable victim effect. For example, Lee and Feeley (2016) and Friedrich and McGuire (2010) did not find support for this factor, reporting no differences in the helping behavior between a personalized individual and an anonymous group of people.

Lee and Feeley (2016) conducted a meta-analysis summarizing 41 effects from 22 experiments on the identifiable victim effect and found a ‘significant yet modest IVE [identifiable victim effect]’ (S. Lee & Feeley, 2016, p. 199), referring to an aggregated effect of r = .05. However, there is reason to believe that this effect might be even weaker once publication bias is accounted for: the three highest-powered studies in the dataset show effects that are almost zero, including one study with 12802 participants (r = 0.004). Lee and Feeley examined the possibility of publication bias using visual inspection of funnel plots. However, this approach does not perform well under some conditions, such as high heterogeneity (Bartoš, Maier, Quintana, et al., 2022; Bartoš, Maier, Wagenmakers, et al., 2022; Carter et al., 2019; Hong & Reed, 2021; Kvarven et al., 2020; Lau et al., 2006; Maier, VanderWeele, et al., 2022), which is present in Lee and Feeley’s meta-analysis (QT [40] = 104.65, p < .001, I2 = 61.8%).
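The heterogeneity figures quoted above are internally consistent: I² can be recovered directly from Cochran's Q statistic and its degrees of freedom via I² = max(0, (Q - df) / Q) × 100. A quick arithmetic check:

```python
# Recover I^2 from Cochran's Q and its degrees of freedom:
# I^2 = max(0, (Q - df) / Q) * 100
def i_squared(q, df):
    return max(0.0, (q - df) / q) * 100

# Values from the Lee and Feeley (2016) meta-analysis: Q(40) = 104.65.
print(round(i_squared(104.65, 40), 1))  # 61.8
```

The recovered value of 61.8% matches the I² reported in the text; when Q is at or below its degrees of freedom, the formula is conventionally truncated at 0.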

Many alternative bias-correction techniques have been proposed (for reviews see Carter et al., 2019; and Renkewitz & Keiner, 2019), but each performs well only under some meta-analytic conditions in terms of effect size, heterogeneity, and publication bias. As it is not possible to know the meta-analytic conditions without having adjusted for publication bias, this poses a Catch-22: in order to adjust for publication bias, one needs to know the data-generating process, but in order to know the data-generating process, one needs to have adjusted for publication bias (Bartoš, Maier, Shanks, et al., 2022). Robust Bayesian Meta-Analysis (RoBMA) is a novel method that aims to overcome this problem using Bayesian model-averaging (Bartoš, Maier, Wagenmakers, et al., 2022; Bartoš & Maier, 2020; Maier, Bartoš, et al., 2022). Instead of selecting a single model, RoBMA applies multiple models simultaneously and lets the data guide the inference, basing it most strongly on the models that predicted the data best. This multi-model inference avoids the Catch-22 problem discussed above. Specifically, RoBMA includes models of selection for significance (Vevea & Hedges, 1995) and models based on the relationship between effect sizes and standard errors (precision effect test and precision effect estimate with standard errors, PET-PEESE). Rather than selecting a single model, Bayesian model-averaging bases the inference on all models (the two publication bias corrections above as well as models assuming no publication bias) and weighs them by how well they predict the data. It is therefore much more robust to model misspecification than previous publication bias adjustment methods.
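The model-averaging logic can be sketched in a few lines: each model's posterior probability is proportional to its prior model probability times its marginal likelihood, and the pooled estimate weights the per-model effect estimates by those posterior probabilities. The sketch below is a minimal illustration with made-up numbers and four stand-in models; it is not the actual 36-model RoBMA ensemble or its prior specification.

```python
# Minimal sketch of Bayesian model averaging (all numbers hypothetical).
# Each model contributes in proportion to prior probability * marginal likelihood.

models = {
    # name: (prior model probability, marginal likelihood, effect estimate)
    "no-bias":   (0.25, 0.02, 0.10),
    "selection": (0.25, 0.20, 0.01),
    "pet-peese": (0.25, 0.15, 0.00),
    "null":      (0.25, 0.18, 0.00),
}

# Posterior model probabilities (Bayes' rule, normalized over the model set).
unnorm = {m: prior * ml for m, (prior, ml, _) in models.items()}
total = sum(unnorm.values())
posterior = {m: w / total for m, w in unnorm.items()}

# Model-averaged effect: weight each model's estimate by its posterior probability.
averaged = sum(posterior[m] * est for m, (_, _, est) in models.items())
print({m: round(p, 3) for m, p in posterior.items()}, round(averaged, 4))
```

In this toy example the data favor the bias-adjusting and null models over the no-bias model, so the pooled estimate is pulled far below the naive no-bias estimate, which is exactly the behavior the reanalysis below relies on.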

RoBMA outperformed other methods for publication bias correction in a large simulation study (Hong & Reed, 2021; reanalysis with RoBMA in Bartoš, Maier, Wagenmakers, et al., 2022), which combined the simulation environments from four previous studies (Alinaghi & Reed, 2018; Bom & Rachinger, 2019; Carter et al., 2019; Stanley et al., 2017). In addition, RoBMA has been shown to perform better than other methods on empirical data, by comparing the estimates of bias-adjusted meta-analyses to registered replication reports (Bartoš, Maier, Wagenmakers, et al., 2022). Here we use the version of RoBMA (also known as RoBMA-PSMA [publication selection model averaging]) from Bartoš, Maier, Wagenmakers, et al. (2022), as it has been vetted extensively in simulation studies and applied examples (in the same paper). For details about the 36 included models and the corresponding prior distributions and prior model probabilities, see Bartoš, Maier, Wagenmakers, et al. (2022). RoBMA quantifies evidence using Bayes factors, which compare the likelihood of the data under competing models (in our case, the alternative hypothesis in comparison to the null hypothesis). In this paper we report BF01; that is, our Bayes factors have the null in the numerator and the alternative in the denominator, and denote evidence in favor of the null hypothesis. As a rule of thumb for Bayes factors with the null in the numerator, values between 1 and 3 are often regarded as weak evidence for the null, values between 3 and 10 as moderate evidence for the null, and values larger than 10 as strong evidence for the null (e.g., Jeffreys, 1939; M. D. Lee & Wagenmakers, 2013, p. 105; Wasserman, 2000). However, we caution that these rules of thumb should merely aid interpretation and not be taken as absolute thresholds. Bayes factors are continuous measures of the strength of evidence, and any discretization inevitably results in loss of information.
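For readers who want the rule of thumb in one place, the mapping below encodes the conventional labels for BF01 quoted above. The thresholds are reading aids rather than hard cutoffs, and the function is our own illustrative helper, not part of any package.

```python
# Conventional rule-of-thumb labels for BF01 (null in the numerator).
# These discretize a continuous quantity and lose information; see text.
def label_bf01(bf01):
    if bf01 > 10:
        return "strong evidence for the null"
    if bf01 > 3:
        return "moderate evidence for the null"
    if bf01 > 1:
        return "weak evidence for the null"
    # BF01 below 1 means the alternative is favored; its reciprocal is BF10.
    return "evidence favors the alternative (BF10 = 1/BF01)"

print(label_bf01(14.93))  # BF01 for the absence of the average effect, reported below
print(label_bf01(0.11))   # BF01 for publication bias, reported below
```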

When applying RoBMA to the data from Lee and Feeley (2016), we found moderate evidence for publication bias (BF01 = 0.11) and strong evidence for the absence of the average effect (BF01 = 14.93), with a model-averaged mean effect size estimate of r = 0.002 (95% CI [0; 0.004]).2 In addition, we found weak evidence against heterogeneity (BF01 = 1.24). We plotted the pattern of bias in Lee and Feeley in Figure 1. The left panel shows the regression line of effect sizes on standard errors; this relationship indicates that studies with smaller standard errors show smaller effects, a pattern indicative of publication bias. The right panel shows the relative publication probabilities for nonsignificant in comparison to significant p-values, indicating that nonsignificant studies (p > .05) are considerably less likely to be published than significant studies. Note that most of the posterior probability among the publication bias models is on the selection models rather than the models assuming a relationship between effect sizes and standard errors (see supplementary materials).

Figure 1.
Footprint of Publication Bias in Lee and Feeley (2016)

Note. The left panel shows the PET-PEESE regression line (i.e., the relationship between effect sizes and standard errors) and the right panel shows the relative publication probabilities based on the selection models. The left panel displays a regression line of effect sizes on standard errors, the intercept of this line indicates the hypothetical estimate of a study with infinite precision; we can see that it is very close to 0. The right panel displays estimates for the relative publication probabilities of nonsignificant studies in comparison to significant studies model averaged across the different selection models included in RoBMA.


We chose Studies 1 and 3 of Small et al. (2007) for replication due to the article’s considerable impact. At the time of writing (April 2023), the target article had 1210 Google Scholar citations. Beyond the direct citation count, Small et al. (2007) has influenced several other highly cited articles (each cited > 1000 times at the time of writing; e.g., Bekkers & Wiepking, 2011; Slovic, 2007) and popular science and philosophy books such as ‘The Life You Can Save’ (Singer, 2019) and ‘Poor Economics’ (Banerjee & Duflo, 2011), which have guided both research and policy. Furthermore, charities often feature pictures of identified victims in advertisements, hoping to employ this effect to increase charitable giving (e.g., https://www.savethechildren.org.uk/), underscoring the applied importance of Small et al.’s findings.

To our knowledge, there has been one direct replication of Small et al. (2007): a Spanish-language unpublished doctoral thesis failed to find support for the results of Study 1 (Charris, 2018). However, Charris (2018) only found weak evidence against the effect in a Bayesian analysis and no evidence for the null using the TOST equivalence-testing procedure (e.g., Lakens et al., 2018). Charris (2018) concluded that his study lacked statistical power and did not allow rejecting the identifiable victim effect; in other words, more evidence is needed from high-powered direct replications. Several other recent studies have also questioned the robustness of the phenomenon, but usually only in conceptual replications. For example, Hart, Lane, and Chinn (2018) failed to find support for differences in people’s prosocial responsiveness between appeals focusing on a single victim and appeals focusing on many individuals. Recently, Moche and Västfjäll (2021) and Moche (2022) also failed to replicate the effect across 6 of 7 well-powered studies. A field experiment also failed to provide evidence for the effect (Lesner & Rasmussen, 2014). These failed replications are surprising given that other high-powered studies did find evidence for the identifiable victim effect (e.g., Caviola et al., 2020; Galak et al., 2011; Sudhir et al., 2016).

However, conceptual replications are limited in their ability to inform about previous findings: when a conceptual replication fails, it can be argued that the differences in methodology explain the different results (Chambers, 2017, p. 16). This may interact with file-drawer and publication bias problems, resulting in a literature with successful conceptual replications but few shared null results.

The combination of the mixed evidence from replications, the meta-analysis reanalysis above, and the impact of Small et al.’s findings suggests that more research is needed to revisit and reassess the identifiable victim effect using high-powered preregistered replications (Isager et al., 2021). We note that we initially set out to conduct a direct close replication, yet decided on first running a far conceptual replication using the same design with one important adjustment of the dependent variable: hypothetical donations rather than real donations. We did this for a number of reasons. First, this project was related to a replication project we conducted in Majumder et al. (2023), in which we failed to replicate the identifiable victim effect demonstrated by Kogut and Ritov (2005), who showed the effect using hypothetical donations, as have many other studies examining the identifiable victim effect. We aimed to make the two replications as similar as possible in their dependent variables so that one replication could inform the other. Second, we acknowledge the differences between hypothetical and real-life behavior, yet thought it best to ensure that the effect holds with simpler hypothetical donations before embarking on a more complex and costly real-donation study. Mean donations are typically higher for hypothetical donations than for real donations (Bekkers, 2006); however, we are not aware of evidence for mechanisms that would produce differences between conditions when switching from real to hypothetical donations.

Given this important adjustment to the dependent variable, we categorized this replication as far and conceptual, even though much of the rest of the study remains the same. We therefore caution against generalizing from this replication to whether the original article’s real-donation effect would replicate, though we hope the community will find it informative regarding the generalizability of the original design to hypothetical scenarios. We discuss this point and its implications in the general discussion.

Small et al. (2007) proposed that thinking analytically about the value of lives reduced giving to an identifiable victim but not to statistical victims. They also suggested that implicitly inducing analytical reasoning about the value of lives reduced donations to an identifiable victim but not to statistical victims. They conducted four experiments, and the current replication focused on Studies 1 and 3.

Study 1 Design and Findings

In Study 1, participants were randomly assigned to one of two conditions: an intervention group that learned about the identifiable victim effect from previous research (explicit learning condition) and a control group. They were further randomly assigned either to the statistical victim condition, in which they read information about the problem of starvation in different African countries, or to the identifiable victim condition, in which they received a brief description of an African girl from the Save the Children website. They were then instructed to donate any of the five one-dollar bills they had received earlier for completing a survey to the victims they had read about in the letter. After their donation, participants rated different affective reactions they experienced towards the described victim(s). These items included feeling upset, touched, sympathetic, and morally responsible, as well as the perceived appropriateness of donating to help the described victims.

To summarize, their Study 1 design was a 2 (Identifiability: identifiable vs. statistical) x 2 (Explicit Learning: intervention vs. control) between-subjects factorial design. Their results showed that in the control condition, without the intervention, donations to the identifiable victim were higher than donations to statistical victims. The pattern differed, however, for participants assigned to the explicit learning intervention, who learned of the identifiable victim effect before being asked to donate: their donations to the identifiable victim were similar to their donations to statistical victims. The explicit learning intervention therefore seemed to have eliminated the additional donations given to an identifiable victim.3 In addition, Small et al. showed that aggregated feelings predicted donation behavior better in the identifiable victim/no intervention condition than in the other conditions.

Study 3 Design and Findings

In Study 3, Small et al. (2007) further studied the effect of implicit learning by adding a third identifiability condition, a joint condition (also referred to as the “implicit learning condition”), which included both a picture of the single victim and general victim statistics, resulting in a three-condition design (identifiable vs. statistical vs. joint). The donation in this joint condition was intended for the described identified victim. The presentation of victim statistics was meant to implicitly eliminate the identifiable victim effect in the joint condition, arguably because providing statistics alongside the victim reminds the potential donor of the many people who would not receive help. Study 3 did not investigate how feelings predicted donations. In summary, the Study 3 design included one factor with three levels/conditions: identifiable victim, statistical victims, and the joint/implicit learning condition.

Small et al. (2007) found support for implicit learning, as donations to the identified victim were lower in the joint condition compared to the identifiable victim condition.

In this replication, we merged Studies 1 and 3 of Small et al. (2007) into a single experimental design to study both the explicit and the implicit ways of debiasing the identifiable victim effect. Our study used a 3 x 2 between-subjects design varying Identifiability of the victim (identifiable victim, statistical victims, and joint: an identifiable victim alongside statistical victims) and Explicit Learning (intervention present or not). We summarized the design in Table 1.

Table 1.
Replication and Extension: Experimental Design

                                   Identifiability (IV1; between-subjects)
Explicit Learning                  Identifiable victim      Statistical victim      Joint condition
(IV2; between-subjects)            condition                condition               (Implicit Learning)
Explicit learning intervention     Identifiable-Explicit    Statistical-Explicit    Joint-Explicit
No intervention (Control)          Identifiable-Control     Statistical-Control     Joint-Control

Note. The joint condition displayed both an identifiable victim and general victim statistics.

The Identifiability factor therefore included the implicit learning intervention from Study 3 in Small et al. (2007). Extending the original studies, the explicit learning intervention was also applied to the joint condition, and participants in the joint condition also rated affective feelings. We note that, in line with Small et al. (2007), the donations in the joint condition went towards the identified victim (rather than the statistical victims who were also described in this condition). Mirroring Study 1 of Small et al. (2007), we assessed aggregated affective feelings as a predictor of hypothetical donations.

We summarized the hypotheses of the current replication in Table 2. To replicate the results of the original study, Hypothesis 1 tests the identifiable victim effect based on the contributions toward different victims. We combined the original Hypotheses 1 and 2 stated in Small et al. (2007) into our Hypothesis 2, which investigates whether being informed about the identifiable victim effect affected donations towards the different victims. In Hypothesis 3, we explored whether learning about the identifiable victim effect affected donations regardless of Identifiability. Hypothesis 4 addresses the main effect of implicit learning (i.e., the joint condition: being presented with victim statistics alongside the identifiable victim) to replicate Study 3 of Small et al. (2007). We proposed Hypothesis 5 to examine the impact of Identifiability and Explicit Learning on affective feelings, and added Hypothesis 6 to test our extension regarding the perceived impact of the donation. For each hypothesis, we included versions serving as replications mirroring the target article’s designs (Study 1 without the joint conditions, and Study 3 without the Explicit Learning conditions), as well as extension versions that make the most of the unified design by using all relevant conditions (with joint, and with the explicit learning intervention).

Table 2.
Replication and Extension: Summary of Hypotheses

Donations

1a (Identifiability main effect, without joint) [S1]
   Identifiable victim effect in donations: people donate more when presented with an identifiable victim than when presented with statistical victims.
   Comparison: Identifiable (Explicit & Control) > Statistical (Explicit & Control)

1b (Identifiability main effect, with joint) [E] *
   Comparison: Identifiable (Explicit & Control) > Statistical (Explicit & Control) ~= Joint (Explicit & Control)

2a (Interaction effect, without joint) [S1]
   Explicit learning reduces the identifiable victim effect in donations: the identifiable victim effect is weaker for people who were explicitly informed about the identifiable victim effect.
   Comparison: Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control

2b (Interaction effect, with joint) [E] *
   Comparison: Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control ~= Joint-Explicit minus Joint-Control

3a (Explicit Learning main effect, without joint) [S1] *
   Explicit learning reduces donations: people who were explicitly informed about the identifiable victim effect tend to donate less than those uninformed of the effect.
   Comparison: Explicit (Identifiable, Statistical) < Control (Identifiable, Statistical)

3b (Explicit Learning main effect, with joint) [E]
   Comparison: Explicit (Identifiable, Statistical, and Joint) < Control (Identifiable, Statistical, and Joint)

4 (Identifiability with Implicit Learning main effect, without Explicit) [S3] *
   Statistical information reduces donations towards the identified victim (implicit learning): people donate less to an identifiable victim when the identifiable victim is presented alongside information about statistical victims (joint condition).
   Comparison: Identifiable (Control) > Statistical (Control) ~= Joint (Control)

Affective feelings

5a (Identifiability main effect, without joint) [S1] *
   Identifiable victim effect in affective feelings: people rate higher affective feelings towards an identifiable victim than towards statistical victims and towards an identifiable victim presented alongside statistical victims.
   Comparison: Identifiable (Explicit & Control) > Statistical (Explicit & Control)

5b (Explicit Learning main effect, without joint) [S1] *
   Explicit learning reduces affective feelings: people who were explicitly informed about the identifiable victim effect tend to report weaker affective feelings than those uninformed of the effect.
   Comparison: Explicit (Identifiable, Statistical) < Control (Identifiable, Statistical)

5c (Interaction effect, without joint) [S1] *
   Explicit learning reduces the identifiable victim effect in affective feelings: the identifiable victim effect in affective feelings is weaker for people who were explicitly informed about the identifiable victim effect.
   Comparison: Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control

5d (Interaction effect, with joint) [E]
   Comparison: Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control ~= Joint-Explicit minus Joint-Control

Perceived impact

6 (Interaction effect, with joint) [E]
   Identifiable victim effect in perceived impact: people rate higher impact for donations to an identifiable victim than towards 1) statistical victims and 2) an identifiable victim presented with victim statistics.
   Comparison: Identifiable (Explicit & Control) > Statistical (Explicit & Control) ~= Joint (Explicit & Control)

Note. For the interaction effects, when the interaction test was significant we visually examined whether the effect was in the predicted direction. The preregistration specified only the tests including the joint condition (i.e., it did not specify H1 and H2); we added these hypotheses to allow a fair comparison to the original article. Donations mentioned in the hypotheses refer to hypothetical donations. [S1] mirrors the target article’s Study 1. [S3] mirrors the target article’s Study 3. [E] indicates an extension. * indicates an analysis that was not pre-registered and was added for completeness of reporting, addressing peer review.

Given the conflicting findings discussed above regarding the influence of the perceived impact of donations (Duncan, 2004; Friedrich & McGuire, 2010), we extended the replication by including an additional measure of the perceived impact of the donation, to investigate whether people consider donations more impactful towards an identifiable victim or towards statistical victims.

We provided all materials, data, and code at: https://osf.io/n4jkh/. We preregistered the study, and the preregistration can be accessed at: https://osf.io/dc9kb/.

We report all measures, manipulations, and exclusions conducted for this investigation. The study was preregistered with power analyses reported in the supplementary materials, and analyses were only conducted after all data had been collected. Deviations from our preregistration are stated in the ‘Deviations from preregistration’ section of the supplementary materials and also at the appropriate places in the methods section of the main text of the manuscript.

Participants

The study received ethics approval from the University of Hong Kong (EA1908020). A total of 1004 Amazon Mechanical Turk (MTurk) participants were recruited from a US sample using CloudResearch/TurkPrime (Litman et al., 2017; Mage = 39.4, SD = 12.4; 465 females, 533 males, 6 prefer not to say). We compared the target article and the replication samples in Table 3.

Table 3.
Samples: Comparison of Original Study and Replication
Demographics: Small et al. (2007) / Replication
Sample size: Study 1: 121; Study 3: 159 / 1004
Geographic origin: US American / US American
Gender: Not specified / 533 males, 465 females, 6 prefer not to say
Median age (years): Not specified / 36
Average age (years): Not specified / 39.4
Age range (years): Not specified / 20-91
Medium (location): Laboratory / Computer (online)
Compensation: US$5 / Around US$1
Year: 2007 / 2020

We collected as many participants as we could afford with the available funding. The full report of power analysis can be found in the supplementary materials under the section ‘Power analysis of the original study effect’ and indicates that for the lowest powered effect (the interaction between Explicit Learning and Identifiability), a sample size of 314 would be sufficient to achieve 95% power for the original effect size. In addition, sensitivity power analyses indicate that our sample size would have 95% power to detect a very small effect size of ηp2 = 0.012 with an alpha level of .05.
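As an illustration of the sensitivity computation, the power for a fixed-effects ANOVA can be sketched with the standard noncentral-F approach; this is our illustrative reconstruction (mirroring the G*Power convention for the noncentrality parameter), not the script behind the preregistered power analyses, and the exact value depends on how the noncentrality is parameterized.

```python
# Illustrative sensitivity power computation for a fixed-effects ANOVA
# (a sketch, not the script used for the preregistered power analyses).
from scipy.stats import f as f_dist, ncf

def anova_power(eta_p2, df1, n_total, n_groups, alpha=0.05):
    """Power to detect a given partial eta squared in a fixed-effects ANOVA."""
    f2 = eta_p2 / (1 - eta_p2)        # Cohen's f^2
    df2 = n_total - n_groups          # error degrees of freedom
    lam = f2 * n_total                # noncentrality parameter (G*Power convention)
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

# A 1-df contrast with eta_p2 = .012 and N = 1004 across six cells:
print(anova_power(0.012, df1=1, n_total=1004, n_groups=6))
```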

Exclusion Criteria

We pre-registered that “We will focus our analyses on the full sample. However, as a supplementary analysis and to examine any potential issues, we will also determine further findings reports with exclusions”, with several exclusion criteria for the supplementary analyses: low English proficiency (scoring lower than 4 on a scale of 0 to 6); not taking the survey seriously (scoring lower than 3 on a scale of 0 to 4); correctly guessing the hypotheses; having seen the survey before; failing to complete the survey or completing it in less than a minute; and not being from the United States.

Fifty-six responses met the exclusion criteria. We found no major differences between the pre- and post-exclusion results. As preregistered, we focused on the full sample for data analysis. We summarized the results after exclusion in the supplementary materials (‘Exclusion based on preregistration criteria’), with a comparison of the findings (‘Pre-exclusions versus post-exclusions’).

We had preregistered using the median absolute deviation (MAD) to detect univariate outliers; however, we realized that this procedure was not applicable to our dataset because all the scales used were bounded.
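For completeness, the preregistered MAD procedure can be sketched as below (illustrative data, not our dataset; the 1.4826 constant makes the MAD a consistent estimator of the standard deviation under normality). The second call shows one failure mode on coarse, bounded scales: when most responses sit at the median, the MAD collapses to zero and no value can be flagged.

```python
import numpy as np

def mad_outliers(x, threshold=3.0):
    """Flag values more than `threshold` robust z-scores from the median."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826   # consistency constant for normal data
    if mad == 0:
        # Degenerate case (common with coarse bounded scales): flag nothing.
        return np.zeros_like(x, dtype=bool)
    return np.abs(x - med) / mad > threshold

print(mad_outliers([2, 3, 2, 4, 3, 50]))   # only the extreme value is flagged
print(mad_outliers([3, 3, 3, 3, 5]))       # MAD = 0: nothing can be flagged
```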

Design and Procedure

We combined Studies 1 and 3 of Small et al. (2007) into a unified 3 (Identifiability: identifiable vs. statistical vs. joint) × 2 (Explicit Learning: intervention vs. control) between-subjects random-assignment design, with donations and feelings as the dependent variables. We provided additional details regarding the procedure in the ‘Procedure’ subsection of the supplementary materials, and the Qualtrics survey is provided with the preregistration in the OSF folder.

Manipulations

Explicit Learning

Participants were randomly assigned to either the explicit learning intervention condition or the control condition (evenly allocated with the Qualtrics randomizer). Participants in the explicit learning intervention condition read a passage, used in the original studies, about prior research findings on the identifiable victim effect. In other words, they were taught about the phenomenon before the donation decision.

Identifiability

Participants were then randomly assigned to one of the three Identifiability conditions. Those in the identifiable victim condition read about a child from Zambia suffering from starvation, accompanied by a black-and-white photograph and a short description. Those in the statistical victim condition read about numerical victim statistics to illustrate the millions of people living in a similar plight to the child described in the identifiable victim condition. The joint condition was a combination of the previous two conditions, with the same Zambian child presented in a photo with a brief description, along with the victim statistics provided in the statistical victim condition; the order of the presentation was randomized (evenly presented with the Qualtrics randomizer).

Forced Manipulation Comprehension Checks

To ensure that participants read and comprehended the scenarios, we added checks that they had to answer correctly before proceeding to the page presenting the dependent measures. This is a noted deviation from the target article’s design, added to address concerns that online participants may not have read, or were inattentive to, the scenario and the manipulation.

Measures

Hypothetical Donation

Participants were then presented with the following continuation of the scenario: “Imagine that you have just earned 5 US dollars and you are given an opportunity to donate any amount of the money to the organization Save the Children.” They then indicated their hypothetical donations from $0 to $5 in increments of $1 ($0, $1, $2, $3, $4, or $5). The donation was to the specific victim in the identifiable and joint conditions and to the anonymous group in the statistical victim condition.

Affective Reactions (with Perceived Impact Extension)

Participants indicated their affective reactions at the time of donation on a 5-point Likert scale, ranging from 1 (Not at all) to 5 (Extremely). The affective measures were: 1) Upset: “How upsetting is the described situation of the victims to you?”, 2) Sympathetic: “How sympathetic did you feel while reading the description of the victims?”, 3) Responsibility: “How much do you feel it is your moral responsibility to help out the victims?”, 4) Touched: “How touched were you by the described situation of the victims?”, 5) Appropriateness: “To what extent do you feel that it is appropriate to give money to aid the victims?”, and 6) Perceived impact (extension): “How confident were you that donating your money to the described victims could have a significant impact?”. In line with Small et al. (2007) we investigated the effect of each feeling individually as well as the effect on aggregated feelings (without the extension perceived impact).

Replication Closeness Evaluation

We summarized our evaluation of replication closeness, based on Lebel et al. (2018), in Table 4, categorizing the replication as ‘far’ given that we changed the dependent variable from a behavioral measure of real-life donations to a hypothetical scenario.

Table 4.
Classification of the Replication, Based on Lebel et al. (2018) 
Design facet Replication Details of deviation 
IV operationalization Same 
DV operationalization Different Hypothetical donations
See “DV stimuli” below in subsection “Hypothetical imaginary donation" for details 
IV stimuli Similar Updated victim information
We presented participants with the most up-to-date victim information retrieved from the Save the Children website, given that the information used in the original study dated from about two decades earlier.
Explicit learning intervention applied to the joint condition
To combine the original studies 1 and 3 into a single study, we applied the explicit learning intervention to the joint condition as an extension. 
DV stimuli Different Question adjustment
We made minor changes to the questions evaluating participants’ feelings to ensure that the reported emotions were linked to the victim information participants had just read.
Hypothetical imaginary donation
We asked the participants to indicate how much they would hypothetically donate, from $0 to $5, to the corresponding victim(s), instead of donating real money to Save the Children after signing a charity request letter.
Feelings variables measurement
We considered affective feelings in the joint condition which was not measured in the original study.
We also added an extension of ‘perceived impact of donation’ into the scale. 
Procedural details Similar No reward acquired from a pre-survey
Participants did not complete an unrelated survey prior to the experiment to earn $5 for the donation. 
Physical settings Different Online survey
Participants conducted an online survey in Qualtrics on the MTurk platform whereas the original study surveyed inside a student center of a university in Pennsylvania. 
Contextual variables Different MTurk workers as participants
We recruited participants on the MTurk platform, whereas the original study recruited participants sitting inside the student center of a university in Pennsylvania. 
Replication classification Far replication  

Note. IV = independent variable. DV = dependent variable.

We followed and extended the analyses conducted by the target article. We provided a comparison of the statistical tests reported in the original study and the replication in the supplementary materials.

Descriptive Statistics

We summarized the descriptive statistics for hypothetical donations, aggregated feelings, and perceived impact of the donation in Tables 5-7 and statistical tests for hypothetical donations in Tables 8 and 9. We provided the results for the individual measures of feelings in the supplementary materials.

Table 5.
Hypothetical Donations: Descriptives
Explicit learning intervention: Identifiable 2.84 [1.89] {1.36}* (170); Statistical 2.74 [1.98] {1.26}* (159); Joint 2.23 [1.91] {N/A} (173); Total 2.60 [1.94] (502)
No intervention: Identifiable 2.58 [1.87] {2.83}* (165); Statistical 2.72 [1.92] {1.17}* (176); Joint 2.48 [1.99] {1.43}** (161); Total 2.60 [1.93] (502)
Total: Identifiable 2.71 [1.88] (335); Statistical 2.73 [1.95] (335); Joint 2.35 [1.95] (334); Total 2.60 [1.93] (1004)

Note. Statistics are presented in the following format: mean [standard deviation] {Small et al. (2007)’s reported means} (condition sample size). *Based on Small et al. (2007) Study 1. **Based on Small et al. (2007) Study 3.

Table 6.
Aggregated Feelings: Descriptives
Explicit learning intervention: Identifiable 3.82 [0.91] (170); Statistical 3.82 [1.02] (159); Joint 3.60 [0.97] (173); Total 3.75 [0.97] (502)
No explicit learning intervention: Identifiable 3.81 [0.90] (165); Statistical 3.84 [1.00] (176); Joint 3.77 [1.05] (161); Total 3.81 [0.98] (502)
Total: Identifiable 3.81 [0.91] (335); Statistical 3.83 [1.01] (335); Joint 3.69 [1.01] (334); Total 3.78 [0.98] (1004)

Note. Statistics are presented in the order of Mean [Standard deviation] (condition sample size). We reported the same information for the non-aggregated feelings in the supplementary materials. Aggregated feelings were calculated following the approach by Small et al. (2007): Upset, sympathetic, touched, responsible, and appropriateness. The Cronbach’s alpha for the five feelings measures was 0.90.

Table 7.
Perceived Impact (Extension): Descriptives
Explicit learning intervention: Identifiable 3.47 [1.23] (170); Statistical 2.91 [1.37] (159); Joint 2.94 [1.35] (173); Total 3.11 [1.34] (502)
No explicit learning intervention: Identifiable 3.31 [1.25] (165); Statistical 2.99 [1.34] (176); Joint 3.11 [1.42] (161); Total 3.14 [1.34] (502)
Total: Identifiable 3.39 [1.24] (335); Statistical 2.96 [1.35] (335); Joint 3.02 [1.39] (334); Total 3.12 [1.34] (1004)

Note. Statistics are presented in the order of Mean [Standard deviation] (condition sample size).

Hypothetical Donations

We plotted hypothetical donations by conditions (including joint condition) in Figure 2. We summarized the inferential tests of our replication in comparison to Small et al. (2007) in Table 8.

Figure 2.
Hypothetical donations: Interaction of Identifiability and Explicit Learning

Note. Created in JASP (2023) version 0.16.

Table 8.
Hypothetical Donations: Statistical Tests for Identifiability and Explicit Learning
 F p BF01 ηp2 95% CI 
H1: Identifiability 
Without joint condition [S1]
H1a: Identifiable (Explicit & Control) vs. Statistical (Explicit & Control) 
Target article 6.75 < .05 N/A .06 [.00, .15] 
Replication 0.01 .923 11.57 .00 [.00, .003] 
With joint condition [E]
H1b: Identifiable (Explicit & Control) vs. Statistical (Explicit & Control) vs. Joint (Explicit & Control) 
Replication 3.91 .020 1.77 .01 [.00, .021] 
H2: Interaction: Identifiability and Explicit Learning 
Without joint condition [S1]
H2a: (Identifiable-Explicit vs. Statistical-Explicit vs. Identifiable-Control vs. Statistical-Control) 
Target article 5.32 < .05 N/A .04 [.00, .14] 
Replication 0.654 .419 6.30 .001 [.000, .011] 
With joint condition [E]
H2b: (Identifiable-Explicit vs. Statistical-Explicit vs. Joint-Explicit vs. Identifiable-Control vs. Statistical-Control vs. Joint-Control) 
Replication 1.48 .228 12.74 .003 [.000, .012] 
H3: Explicit learning intervention 
Without joint condition [S1]
H3a: Explicit (Identifiable & Statistical) vs. Control (Identifiable & Statistical) 
Target article 4.15 < .05 N/A .03 [.00, .12] 
Replication 0.89 .346 7.51 .00 [.000, .012] 
With joint condition [E]
H3b: Explicit (Identifiable-, Statistical, & Joint) vs. Control (Identifiable, Statistical, & Joint) 
Replication 0.005 .943 14.15 .00 [.000, .002] 
H4: Implicit learning and Identifiability 
Without explicit learning [S3] *
H4: Identifiable (Control) > Statistical (Control) ~= Joint (Control) 
Target article 5.67 < .01 N/A .07 [.01, .15] 
Replication 0.61 .541 25.03 .00 [.00, .015] 

Note. ANOVA tests. N = 1004. CI = confidence interval. N/A = could not be recalculated. BF01 denotes the Bayes factor in favor of the null. Bayes factors based on Cauchy prior with rscale = 0.707. ηp2 for original study recalculated based on F-statistics and degrees of freedom. [S1] mirrors the target article’s Study 1. [S3] mirrors the target article’s Study 3. [E] indicates an extension. * indicates analysis was not pre-registered.
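The recalculation of ηp2 from reported F statistics, mentioned in the note above, uses the standard identity ηp2 = F·df1 / (F·df1 + df2). A minimal sketch (the error df is assumed from the reported sample size and design, as the original article did not report it for every test):

```python
def partial_eta_squared(F, df1, df2):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (F * df1) / (F * df1 + df2)

# Target article's Study 3 Identifiability effect: F(2, 156) = 5.67
# (df2 assumed from N = 159 across 3 cells)
print(round(partial_eta_squared(5.67, 2, 156), 2))  # → 0.07, matching Table 8
```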

H1a, H2a, and H3a: Identifiability and Explicit Learning Main Effects and Interaction (without Joint Condition) [Replication]

Following the analyses conducted in Study 1 of Small et al. (2007), we carried out a 2 (Explicit Learning) × 2 (Identifiability) two-way ANOVA (i.e., the Identifiable-Explicit, Statistical-Explicit, Identifiable-Control, and Statistical-Control cells) to examine the following hypotheses: 1) H1a: People donate more when presented with an identifiable victim than when presented with statistical victims; 2) H2a: The identifiable victim effect (H1a) is weaker for people who were explicitly informed about the identifiable victim effect; and 3) H3a: People who were explicitly informed about the identifiable victim effect tend to donate less than those uninformed about the effect.

We supplemented the frequentist analyses with Bayesian analyses to quantify evidence for the null. As the prior distribution on the effect size we used a Cauchy(0, 0.707), a common default in Bayesian analysis. Because the Cauchy distribution has heavy tails, this prior assigns substantial mass to a wide range of plausible effect sizes while only slightly reducing the ability to obtain evidence for smaller effects (Wagenmakers et al., 2020).

We found no support for the main effect of Identifiability (H1a), the main effect of Explicit Learning (H3a), or their interaction (H2a), with similar hypothetical donation amounts in the identifiable victim and statistical victim conditions. We therefore concluded a failure to replicate the identifiable victim effect (H1a) and a failure to replicate the finding that explicitly learning about the effect weakened the effect itself (H2a).

H1b, H2b and H3b: Identifiability and Explicit Learning Main Effects and Interaction (with Joint Condition) [Extension]

We ran an additional, more complex version of the analysis above that included the joint condition (added in the target article’s Study 3); this was possible only because our unified design combined replications of the target article’s Studies 1 and 3. We conducted a 2 (Explicit Learning) × 3 (Identifiability) two-way ANOVA to examine whether providing additional quantitative information together with an identified victim would debias the identifiable victim effect.

We found no support for the main effect of Explicit Learning (H3b), i.e., that explicitly learning about the identifiable victim effect reduces people’s (hypothetical) donations, and no support for the interaction of Identifiability and Explicit Learning. We found some support for the main effect of Identifiability (H1b), F(2, 998) = 3.91, p = .02, ηp2 = .008, 95% CI [.000, .021], though the Bayesian analysis indicated weak support for the null (BF01 = 1.77). The diverging conclusions of the two analyses can be explained by the large sample, which increases the likelihood of significant p-values even when the evidence is weak from a Bayesian perspective (Maier & Lakens, 2022).

To better understand the Identifiability main effect, we examined post-hoc comparisons between the Identifiability conditions with Bonferroni correction. We found no support for a difference between the statistical and identifiable victim conditions, t(998) = 0.097, p = 1.00, BF01 = 11.57, d = 0.01, 95% CI [-0.16, 0.14], and a near-threshold comparison between the identifiable and joint conditions, t(998) = 2.37, p = .053, d = 0.18, 95% CI [0.03, 0.34], with donations slightly lower in the joint condition. We found support for a difference between the statistical and joint conditions, t(998) = 2.46, p = .041, d = 0.19, 95% CI [0.04, 0.34]. Given this weak, near-threshold, and unexpected effect, we caution against over-interpretation of the Identifiability main effect or the contrasts.
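For reference, the effect sizes above are consistent with a pooled-SD Cohen’s d computed from the cell descriptives in Table 5. This is a sketch; the reported post-hoc tests pool the error variance across all six cells, so values can differ in the second decimal.

```python
import numpy as np

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d with a pooled standard deviation."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Statistical (M = 2.73, SD = 1.95, n = 335) vs. joint (M = 2.35, SD = 1.95, n = 334):
print(round(cohens_d(2.73, 1.95, 335, 2.35, 1.95, 334), 2))  # → 0.19
```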

H4: Identifiability with Implicit Learning (Joint Condition) Main Effect (without Explicit Learning) [Replication]

We conducted analyses mirroring those of Study 3 in Small et al. (2007), without the explicit learning intervention conditions (i.e., H4: Identifiable-Control > Statistical-Control ~= Joint-Control). Although this analysis was conducted by the target article, it was not included in our pre-registration, which focused on the unified design including the explicit conditions (see below). We therefore labeled this analysis exploratory. We found no support for an implicit learning effect, F(2, 499) = 0.61, p = .541, ηp2 = .002, 95% CI [.00, .02], with strong evidence against the effect in a complementary Bayesian analysis (BF01 = 25.03). Therefore, we did not conduct any follow-up tests comparing specific cells.

Feelings

The Cronbach’s alpha for the feelings variables was 0.90. We therefore followed the methodology of Small et al. (2007) and aggregated the five feelings into a single measure, combining: 1) feeling upset, 2) feeling sympathetic towards the victim(s), 3) feeling touched by the situation, 4) feeling morally responsible, and 5) feeling that it is appropriate to donate to the cause. We summarized the results of the hypotheses tested on aggregated feelings in Table 9.
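The aggregation step can be sketched as follows (toy data, not our dataset): compute Cronbach’s alpha across the five feelings items and, if internal consistency is adequate, average them into a per-respondent composite.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Toy example: five perfectly consistent items give alpha = 1 (up to floating point).
toy = np.tile(np.array([1.0, 2, 3, 4, 5])[:, None], (1, 5))
print(cronbach_alpha(toy))
aggregated = toy.mean(axis=1)   # per-respondent composite score
```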

Table 9.
Aggregated Feelings: Statistical Tests for Identifiability and Explicit Learning
 df F p BF01 ηp2 95% CI 
Identifiability main effect, without joint [S1] * H5a: Identifiable (Explicit & Control) > Statistical (Explicit & Control) 
Target article 1, 114 1.80 .18 N/A .02 [.00, .09] 
Replication 1, 666 0.09 .764 11.08 .00 [.00, .01] 
Explicit learning intervention main effect, without joint [S1] * H5b: Explicit (Identifiable, Statistical) vs. Control (Identifiable, Statistical) 
Target article 1, 114 0.24 .63 N/A .00 [.00, .05] 
Replication 1, 666 0.01 .940 11.56 .00 [.00, .002] 
Interaction effect, without joint [S1] * H5c: (Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control) 
Target article 1, 114 2.00 .16 N/A .02 [.00, .09] 
Replication 1, 666 0.04 .842 8.13 .00 [.00, .01] 
Interaction effect, with joint [E] H5d: (Identifiable-Explicit minus Identifiable-Control > Statistical-Explicit minus Statistical-Control ~= Joint-Explicit minus Joint-Control) 
Replication 2, 998 0.792 .453 44.85 .002 [.00, .009] 

Note. ANOVA test. N = 1004. CI = confidence interval. Aggregated feelings refer to averaging the feelings of being upset, being sympathetic, being touched, moral responsibility, and donation appropriateness into a single composite. The Cronbach’s alpha for the five feelings measures was 0.90. BF01 denotes the Bayes factor in favor of the null. Bayes factors based on Cauchy prior with rscale = 0.707. [S1] mirrors the target article’s Study 1. [E] indicates an extension. * indicates analysis was not pre-registered.

H5a/b/c: Identifiability and Explicit Learning Main Effects and Interaction on Aggregated Feelings (without Joint Condition) [Replication]

In Small et al. (2007), aggregated feelings were measured and analyzed in Study 1, and the joint condition was introduced in Study 3. We therefore first conducted an analysis matched to their Study 1, without the joint condition. We note that our pre-registration focused on the analyses including the joint condition, which deviate from the target’s Study 1 and are reported in the following section.

We found no support for, and in the Bayesian analyses evidence against, the main effect of Identifiability (H5a: F(1, 666) = 0.09, p = .764, BF01 = 11.08, ηp2 = .00, 95% CI [.00, .01]), the main effect of Explicit Learning (H5b: F(1, 666) = 0.01, p = .940, BF01 = 11.56, ηp2 = .00, 95% CI [.00, .002]), and their interaction (H5c: F(1, 666) = 0.04, p = .842, BF01 = 8.13, ηp2 = .00, 95% CI [.00, .01]).

H5d: Identifiability and Explicit Learning Main Effects and Interaction on Aggregated Feelings (with Joint Condition) [Extension]

We also ran a pre-registered extension analysis with the joint condition that went beyond the target article’s Study 1: a 2 (Explicit Learning) × 3 (Identifiability) two-way ANOVA on aggregated feelings (the mean across all feelings measures apart from ‘perceived impact’). Results were similar to those without the joint condition, with no support for the main effect of Identifiability, the main effect of Explicit Learning, or their interaction. The Bayesian analysis further suggested evidence against an effect. Thus, we concluded that H5a/b/c and H5d were not supported.

Exploratory: Identifiability and Explicit Learning interaction on singular feelings

We also ran a 2 (Explicit Learning) × 3 (Identifiability) two-way ANOVA to determine if separate affective reactions were affected by the Identifiability and Explicit Learning interaction. We summarized our findings in a table in the supplementary materials in the section ‘Descriptive Statistics and Tests for Disaggregated Feelings’.

The target article reported no support for any effects regarding the feeling variables. In this replication, we found some support for a main effect of Identifiability on moral responsibility, F(2, 998) = 3.82, p = .022, ηp2 = .008 [.000, .021] and appropriateness to donate, F(2, 998) = 3.71, p = .025, ηp2 = .007 [.000, .020]. However, a Bayesian analysis suggests weak evidence against an effect on moral responsibility (BF01 = 2.05) and appropriateness of donation (BF01 = 2.24). We therefore conclude that there is not enough evidence to claim an effect on disaggregated feelings (especially given the large sample size and number of statistical tests).

Associations between Aggregated Feelings and Hypothetical Donations

We examined the associations between aggregated feelings (the composite of the five feelings measures described above) and hypothetical donations, summarized in Table 10. The target article found that aggregated feelings predicted donations more strongly in the identifiable victim/no intervention condition than in the other conditions. In our data, there were strong positive relationships between aggregated feelings and hypothetical donations across all six conditions, with each condition’s effect falling within the confidence intervals of the effects in all other conditions, giving no indication of differences. We conclude that these results are inconsistent with the target article’s findings.

Table 10.
Correlations between Aggregated Feelings and Hypothetical Donations across Conditions
                                       Target article     Replication
Condition                              r      p        n     r      95% CI          p
Identifiable / Explicit learning       .34    N/A      170   .63    [0.53, 0.72]    < .001
Identifiable / No explicit learning    .55    < .01    165   .58    [0.47, 0.67]    < .001
Statistical / Explicit learning        .33    N/A      159   .64    [0.54, 0.73]    < .001
Statistical / No explicit learning     .39    N/A      176   .56    [0.45, 0.65]    < .001
Joint / Explicit learning              N/A    N/A      173   .63    [0.53, 0.71]    < .001
Joint / No explicit learning           N/A    N/A      161   .59    [0.58, 0.80]    < .001

Note. CI = confidence interval. N/A = unreported in the original studies. Aggregated feelings is the average of the feelings of being upset, being sympathetic, and being touched, moral responsibility, and donation appropriateness, combined into a single composite. Cronbach’s alpha for the five feelings measures was .90.
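For reference, a Cronbach’s alpha like the one reported in the note can be computed from a respondents-by-items score matrix as follows. This is a generic sketch on simulated scores, not the study data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# illustrative scores for five feelings items (assumed structure, not real data):
rng = np.random.default_rng(42)
trait = rng.normal(size=200)                               # shared component
scores = trait[:, None] + 0.5 * rng.normal(size=(200, 5))  # five noisy items
```

With a strong shared component, as here, alpha lands near the .90 range reported in the note; perfectly redundant items yield alpha = 1.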

Perceived impact (Extension)

H6: Effect of Identifiability and Explicit Learning on Perceived Impact of Donation

We ran a 2 (Explicit Learning) × 3 (Identifiability) two-way ANOVA to determine how the perceived impact of the donation differed depending on these two factors. We found evidence for an effect of Identifiability on perceived impact, F(2, 998) = 10.5, p = .00003, ηp2 = .021 [.006, .040], BF01 = 0.003. However, we found no evidence for an effect of Explicit Learning on perceived impact, F(1, 998) = 0.11, p = .74, ηp2 = .000 [.000, .004], BF01 = 13.69, or for an interaction between Explicit Learning and Identifiability, F(2, 998) = 1.48, p = .229, ηp2 = .002 [.000, .012], BF01 = 11.05.

As we only found evidence for an effect of Identifiability, we followed up with post-hoc tests of this factor. We found support for higher perceived impact in the identifiable victim condition (M = 3.39, SD = 1.24) compared to both the statistical victim condition (M = 2.96, SD = 1.35), t(668) = 4.35, p < .001, d = 0.34, 95% CI [0.18, 0.49], and the joint condition (M = 3.02, SD = 1.39), t(667) = 3.67, p < .001, d = 0.28, 95% CI [0.13, 0.44]. Both of these results also held up in the Bayesian analysis (BF01 = 0.001 and 0.02).
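Pooled-variance t statistics and Cohen’s d values like those above can be reproduced from raw group scores along these lines (a generic sketch; `x` and `y` stand in for two conditions’ ratings and are not the study data):

```python
import numpy as np

def pooled_t_and_d(x, y):
    """Two-sample t statistic (pooled variance) and Cohen's d."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    # pooled variance across the two groups
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    sp = np.sqrt(sp2)
    d = (x.mean() - y.mean()) / sp           # standardized mean difference
    t = d / np.sqrt(1 / nx + 1 / ny)         # t on nx + ny - 2 degrees of freedom
    return t, d
```

For example, `pooled_t_and_d([1, 2, 3], [3, 4, 5])` gives d = -2 with the corresponding t statistic on 4 degrees of freedom.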

Further, the perceived impact of the donation correlated with hypothetical donations, r(1002) = 0.54, p < .001. The correlation was comparable across all cells of our design, ranging from 0.48 in the identifiable victim/no intervention condition to 0.60 in the statistical victim/intervention condition (see the supplementary materials for the correlation in each cell of our design).

We conducted a replication and extension of Small et al.’s (2007) Studies 1 and 3 to examine the discrepancy in hypothetical prosocial donations towards a single identifiable victim compared to a group of anonymous statistical victims. We found no support for the identifiable victim effect in hypothetical donation tasks, and Bayesian analyses indicated evidence in support of no effect. We found further support for this null effect in our reanalysis of a large meta-analysis on the effect by Lee and Feeley (2016) using advanced publication bias adjustment methods. We also failed to demonstrate that either learning about the effect explicitly (through reading prior research) or implicitly (being given statistical victim information along with the personalized victim) weakens the hypothetical donations gap. Thus, we conclude that we failed to find support for the target article’s findings regarding the identifiable victim effect and the interventions that weakened the effect in hypothetical donations. We provide a comparison of the results between the original study and the replication in Table 11.

Table 11.
Replication Results Summary: Comparison between the Target Article and the Replication
H    Independent variable                                              Dependent variable   ηp2 [95% CI], target   ηp2 [95% CI], replication   Replication summary        BF01 (evidence for null)
1a   Identifiability main effect, without joint                        Donation             .06 [.00, .15]         .00 [.00, .003]             No signal – inconsistent   11.57
2a   Identifiability × Explicit Learning interaction, without joint    Donation             .04 [.00, .14]         .001 [.00, .011]            No signal – inconsistent   6.30
3a   Explicit Learning main effect, without joint                      Donation             .03 [.00, .12]         .001 [.00, .01]             No signal – inconsistent   7.51
     Implicit learning main effect, without Explicit                   Donation             .07 [.01, .15]         .00 [.00, .015]             No signal – inconsistent   25.03
5a   Identifiability main effect, without joint                        Affective feelings   .02 [.00, .09]         .00 [.00, .01]              No signal – inconsistent   11.08
5c   Identifiability × Explicit Learning interaction, without joint    Affective feelings   .02 [.00, .09]         .00 [.00, .006]             No signal – inconsistent   8.13

Note. H = hypothesis. The replication summary follows the LeBel et al. (2019) criteria.

In our extension adding a measure of perceived impact, we found that the perceived impact of hypothetical donations was higher for an identifiable victim than for statistical victims, and that perceived impact was associated with hypothetical donations, although this did not translate into an effect of identifiability on the donations themselves. Further research is needed to understand the links between perceived impact, hypothetical donations, intent to donate, and actual donations.

We caution that our results should not be considered a ‘final word’ on this effect but rather a motivation for future replication efforts in the form of high-powered registered reports examining hypothetical donations, donation intent, real money donations, and associated perceptions such as perceived impact. In addition, we see many promising theoretical directions for further work in this area and possibilities for rethinking and reframing the original theory.

Identifiable Victim Effect or Scope Insensitivity?

Majumder et al. (2023) recently reported a failed replication of Kogut and Ritov (2005) and suggested that the identifiable victim effect may be reframed: rather than donating more to an identifiable victim, people donate similarly to an identifiable victim and to a group of unidentified statistical victims, with no donation adjustment for group size. This cognitive phenomenon is usually discussed under the term ‘scope insensitivity’, which describes how people do not value a good (here, helping children in need) in proportion to its scope or size (Baron & Greene, 1996; Desvousges et al., 1993; Kahneman & Knetsch, 1992). Scope insensitivity has also been shown to be a factor in charitable giving (Hsee et al., 2013; Maier, Caviola, et al., 2022; Västfjäll & Slovic, 2020) and has been discussed as a reason for neglecting to help save human lives, for example, in the context of genocides (Cameron & Payne, 2011; Dickert et al., 2012, 2015; Slovic & Västfjäll, 2010). We see much need for research that would help clarify the different aspects of the phenomenon, to disentangle identifiability (whether targets are identified or not) from singularity (one versus a group) and from group size (as in scope insensitivity), and then to revisit the classics and examine each of these factors separately and jointly. Across several replications, we have struggled to find support for seminal articles in this domain (most recently, in Mayiwar et al., 2023), and it would seem that these challenges are also shared by the very scholars who initially reported these phenomena (e.g., Moche & Västfjäll, 2021).

Evidence for Irrational Decision Making?

It is unclear whether this phenomenon can be considered evidence for irrational decision-making in the context of identifiable victims. On the one hand, as Majumder et al. (2023) argued, the larger group of victims should elicit more empathic concern, distress, and consequently, willingness to contribute. Not observing this pattern violates the principle of proportionality (i.e., larger issues should be tackled with more resources). On the other hand, from a cost-effectiveness perspective, it makes sense to contribute more where the donation is most effective rather than where the problem is biggest. According to the theory of impact philanthropy proposed by Duncan (2004), the tendency for people to offer help lies in their perception of the difference they can make with their donations. In our study, we found that participants did not necessarily perceive a hypothetical donation to the larger group as more impactful, but rather that they may in fact consider donations to the identifiable victim more impactful, in line with Duncan (2004). People might also perceive donating to the identified victim as more impactful due to proportion dominance. In other words, they may donate less to statistical victims, given that they perceive a lower impact of their contribution when they can only help a smaller proportion of affected individuals (e.g., Erlandsson et al., 2014).

Therefore, effectiveness-based reasoning would imply the opposite compared to the principle of proportionality – donating more to the identified victim. A potential explanation of the null effects in our study would be that participants apply both reasoning based on proportionality and based on effectiveness, and the two cancel each other out, resulting in an overall null effect. Future research may measure participants’ effectiveness focus and tendency to allocate resources based on proportionality to directly investigate how these two factors affect donations to the identifiable victim.

Limitations and More Future Directions

A core limitation that may explain the discrepancy between the results of the original studies and our replication is our adjustment from real to hypothetical donations. In Small et al. (2007), participants received money as a reward after filling in an unrelated survey about the use of various technology products. Participants then received a blank envelope and a charity request letter to decide how much they would be willing to donate. Answering the unrelated technology survey allowed participants to assess how much effort they invested to earn money, making it easier to grasp the subjective value of the money than in our study. Second, given this cover story, participants may not have realized that the experimenters were investigating their donation behavior. Third, participants might donate differently with real in comparison to imaginary money, as they would, for instance, likely deliberate more when making choices involving real donations.

In our replication, we asked the participants to imagine they had just earned $5 and to indicate how much of this they would like to give to the corresponding victims. Generosity in hypothetical donations is usually higher than in donations of real money (Bekkers, 2006). Though a direct comparison between the two studies is problematic given the passage of time and the very different measures, the raw numbers show that people in our replication indicated higher hypothetical donations (Table 5 in the ‘Results’ section) than the real donations reported in the target article. However, we note that our conclusions do not depend on average donations but on the differences between conditions. We are not aware of any evidence suggesting that these effects stand a better chance of emerging in real-life settings than in hypothetical scenarios. Nevertheless, a replication in a field setting or an experiment with real donations would be valuable in the future, though we recommend adjusting expectations and taking into account that observed effects might be much weaker than initially thought.

Second, we made additional adjustments and also added forced comprehension checks to ensure that participants read and understood the hypothetical donation situation and choice. It is possible that these checks impacted participants’ responses, since they might disrupt feelings of empathy. In addition, participants may have believed that the information about the identifiable victim effect was supplied to them in order to answer the comprehension checks rather than to use in the subsequent donation task. However, we note that if the effect was indeed affected by such factors, this may indicate that the initial demonstrations were at least partially driven by socially desirable responding (McKenzie et al., 2018), and/or that the effect is more contextual, weaker, and less robust than initially thought.

Third, our study was conducted online rather than in person (as in Small et al., 2007). On the one hand, this difference may also be considered a strength, as the online data collection allowed us to collect a larger and broader sample than would have been possible in a lab study. On the other hand, the increased anonymity in online settings could reduce participants’ willingness to donate, although it is less clear how this would affect the differences between conditions. This research was also conducted during the Covid-19 pandemic, which might have affected participants’ financial status and their psyche more broadly. These two factors might have left our participants with little money for donations or preoccupied with financial and existential concerns. Hypothetical donations, therefore, may have been limited by resource constraints or by participants’ ‘mental account’ of how much they are willing to contribute to donation tasks (Sussman et al., 2015; Thaler, 1985, 1999).

We conducted a replication and extension of Small et al. (2007) with a modified setting and using hypothetical donations. Contrary to the target article’s findings, we did not find support for the identifiable victim effect, nor for explicit or implicit interventions weakening the effect. We emphasize that our paper should not be considered conclusive evidence against the identifiable victim effect, given the differences in the experimental setup. Instead, we believe that the failure to find the effect on hypothetical donations, in combination with the publication bias-adjusted meta-analysis, constitutes a cautionary note. We therefore conclude that our paper shows a pressing need for more replications with real donations in the form of registered replication reports (Chambers, 2013), ideally conducted as adversarial collaborations between proponents and critics of the identifiable victim effect.

The author(s) declared no potential conflicts of interest with respect to the authorship and/or publication of this article.

The author(s) received no financial support for the research and/or authorship of this article.

Maximilian Maier built on the thesis work by Yik Chun, verified all analyses, added additional analyses (Bayesian), the RoBMA reanalysis of Lee and Feeley (2016), and new visualizations, and wrote the initial journal submission manuscript.

Yik Chun Wong conducted the replication as part of her dissertation.

Gilad was the advisor for the dissertation. Gilad supervised each step in the project, conducted the preregistrations, and ran data collection.

Maximilian and Gilad finalized the journal submissions, revised and responded to peer review.

We thank František Bartoš for conducting an independent verification report of the RoBMA analysis.

We thank Vanessa Cheung for helpful feedback on a previous draft of this manuscript.

Contributor Roles Taxonomy
Role Maximilian Maier Yik Chun Wong Gilad Feldman 
Conceptualization   
Pre-registration  
Data curation   
Formal analysis  
Funding acquisition   
Investigation  
Pre-registration peer review / verification   
Data analysis peer review/verification   
Methodology 
Project administration   
Resources   
Software  
Supervision   
Validation   
Visualization  
Writing-original draft  
Writing-review and editing  
1.

Though we note a recent failed replication of Kogut and Ritov (2005) by Majumder et al. (2023).

2.

Due to the lack of publication bias correction methods that can accommodate a three-level structure, we accounted for the dependency by using only the most precise estimate within each experiment. Often there were multiple estimates with the same precision within a study; in these cases, we selected one at random and bootstrapped this selection 500 times. Using the median of these bootstraps, this analysis reaches the same conclusions regarding evidence for publication bias and evidence for an effect. Unlike the main analysis, we find moderate rather than weak evidence against heterogeneity. In addition, as funnel-plot-based methods are sometimes criticized for finding bias for reasons other than publication bias (Lau et al., 2006; Maier, VanderWeele, et al., 2022), we also reanalysed the meta-analysis using only the selection models in RoBMA, which led to the same conclusions. As only one of the authors is familiar with RoBMA, we also requested an independent verification to double-check our analysis; the corresponding R script is available in the supplementary materials.
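The selection-and-bootstrap step can be sketched as follows. The per-study estimates below are invented, and a simple mean stands in for the actual RoBMA fit on each selected set; only the selection logic (most precise estimate per study, ties broken at random, 500 bootstraps, then the median) mirrors the procedure described:

```python
import numpy as np

rng = np.random.default_rng(2024)

# hypothetical per-study (effect, standard error) pairs; NOT the real meta-analytic data
studies = [np.array([[0.30, 0.10], [0.25, 0.10], [0.40, 0.15]]),
           np.array([[0.10, 0.08]]),
           np.array([[0.50, 0.12], [0.45, 0.12]])]

def one_draw(studies, rng):
    """Pick one most-precise estimate per study (ties broken at random),
    then pool; np.mean is a stand-in for the RoBMA fit."""
    picks = []
    for est in studies:
        se_min = est[:, 1].min()
        candidates = est[est[:, 1] == se_min]          # most precise estimates
        picks.append(candidates[rng.integers(len(candidates)), 0])
    return np.mean(picks)

draws = [one_draw(studies, rng) for _ in range(500)]   # 500 bootstrapped selections
pooled = float(np.median(draws))                        # median across bootstraps
```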

3.

We note that this differed from our expectations, given that in the charitable giving literature interventions are typically meant to increase donations, and therefore we had expected that such an intervention would increase donations towards statistical victims to the level of donations towards the identifiable victim.

4.

We had preregistered to check the normality and kurtosis of the dependent variables. However, we realized that given our large sample size, the central limit theorem implies that the sample means would be approximately normally distributed even if the raw data are not, and we therefore did not conduct these tests.
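A quick simulation illustrates this reasoning: even for heavily skewed raw responses, means of samples of a few hundred observations are close to normal. The exponential distribution and the sample size here are illustrative assumptions, not the study data:

```python
import numpy as np

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x, float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

rng = np.random.default_rng(7)
n = 300  # per-condition sample size of roughly this order (assumed)

raw = rng.exponential(scale=2.0, size=10_000)               # heavily skewed raw scores
means = rng.exponential(scale=2.0, size=(5_000, n)).mean(axis=1)  # 5,000 sample means
```

The raw draws have skewness near 2, while the sample means have skewness near 2/sqrt(n), i.e., close to zero, as the central limit theorem predicts.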

Alinaghi, N., & Reed, W. R. (2018). Meta-analysis and publication bias: How well does the FAT-PET-PEESE procedure work? Research Synthesis Methods, 9(2), 285–311. https://doi.org/10.1002/jrsm.1298
Banerjee, A., & Duflo, E. (2011). Poor economics: A radical rethinking of the way to fight global poverty. Public Affairs.
Baron, J. (1997). Confusion of relative and absolute risk in valuation. Journal of Risk and Uncertainty, 14(3), 301–309. https://doi.org/10.1023/a:1007796310463
Baron, J., & Greene, J. (1996). Determinants of insensitivity to quantity in valuation of public goods: Contribution, warm glow, budget constraints, availability, and prominence. Journal of Experimental Psychology: Applied, 2(2), 107–125. https://doi.org/10.1037/1076-898x.2.2.107
Bartoš, F., & Maier, M. (2020). RoBMA: An R package for robust Bayesian meta-analyses. R package version 2.1.1. https://CRAN.R-project.org/package=RoBMA
Bartoš, F., Maier, M., Quintana, D. S., & Wagenmakers, E.-J. (2022). Adjusting for publication bias in JASP and r — Selection models, PET-PEESE, and robust Bayesian meta-analysis. Advances in Methods and Practices in Psychological Sciences. In press. https://doi.org/10.31234/osf.io/75bqn
Bartoš, F., Maier, M., Shanks, D., Stanley, T. D., Sladekova, M., & Wagenmakers, E.-J. (2022). Meta-Analyses in Psychology Often Overestimate Evidence for and Size of Effects. https://doi.org/10.31234/osf.io/tkmpc
Bartoš, F., Maier, M., Wagenmakers, E.-J., Doucouliagos, H., & Stanley, T. D. (2022). Robust Bayesian meta-analysis: Model-averaging across complementary publication bias adjustment methods. Research Synthesis Methods. Advance online publication. https://doi.org/10.1002/jrsm.1594
Bekkers, R. (2006). Words and Deeds of Generosity: Are Decisions About Real and Hypothetical Money Really Different. Working paper, Department of Sociology, Utrecht University. https://renebekkers.files.wordpress.com/2015/12/15_12_01_words_and_deeds.pdf
Bekkers, R., & Wiepking, P. (2011). A literature review of empirical studies of philanthropy: Eight mechanisms that drive charitable giving. Nonprofit and Voluntary Sector Quarterly, 40(5), 924–973. https://doi.org/10.1177/0899764010380927
Bergh, R., & Reinstein, D. (2021). Empathic and numerate giving: The joint effects of victim images and charity evaluations. Social Psychological and Personality Science, 12(3), 407–416. https://doi.org/10.1177/1948550619893968
Bom, P. R., & Rachinger, H. (2019). A kinked meta-regression model for publication bias correction. Research Synthesis Methods, 10(4), 497–514. https://doi.org/10.1002/jrsm.1352
Cameron, C. D., & Payne, B. K. (2011). Escaping affect: How motivated emotion regulation creates insensitivity to mass suffering. Journal of Personality and Social Psychology, 100(1), 7–42. https://doi.org/10.1023/a:1007850605129
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. https://doi.org/10.1177/2515245919847196
Caviola, L., Schubert, S., & Nemirow, J. (2020). The many obstacles to effective giving. Judgment and Decision Making, 15(2), 159–172. https://doi.org/10.1017/s1930297500007312
Chambers, C. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. https://doi.org/10.1016/j.cortex.2012.12.016
Chambers, C. (2017). The seven deadly sins of psychology. Princeton University Press.
Charris, R. A. (2018). A systematic replication of the identifiable victim effect (Small, Loewenstein, & Slovic, 2007) [Unpublished doctoral dissertation, University of Los Andes (Colombia)]. https://repositorio.uniandes.edu.co/bitstream/handle/1992/34740/u808458.pdf?sequence=1
Desvousges, W. H., Johnson, F. R., Dunford, R. W., Hudson, S. P., Wilson, K. N., Boyle, K. J. (1993). Measuring natural resource damages with contingent valuation: Tests of validity and reliability. In B. H. Baltagi F. Moscore (Eds.), Contributions to Economic Analysis (pp. 91–164). Elsevier. https://doi.org/10.1016/b978-0-444-81469-2.50009-2
Dickert, S., Västfjäll, D., Kleber, J., Slovic, P. (2012). Valuations of human lives: Normative expectations and psychological mechanisms of (ir)rationality. Synthese, 189(S1), 95–105. https://doi.org/10.1007/s11229-012-0137-4
Dickert, S., Västfjäll, D., Kleber, J., Slovic, P. (2015). Scope insensitivity: The limits of intuitive valuation of human lives in public policy. Journal of Applied Research in Memory and Cognition, 4(3), 248–255. https://doi.org/10.1016/j.jarmac.2014.09.002
Duncan, B. (2004). A theory of impact philanthropy. Journal of Public Economics, 88(9–10), 2159–2180. https://doi.org/10.1016/s0047-2727(03)00037-9
Erlandsson, A., Björklund, F., Bäckström, M. (2014). Perceived utility (not sympathy) mediates the proportion dominance effect in helping decisions. Journal of Behavioral Decision Making, 27(1), 37–47. https://doi.org/10.1002/bdm.1789
Erlandsson, A., Björklund, F., Bäckström, M. (2015). Emotional reactions, perceived impact and perceived responsibility mediate the identifiable victim effect, proportion dominance effect and in-group effect respectively. Organizational Behavior and Human Decision Processes, 127, 1–14. https://doi.org/10.1016/j.obhdp.2014.11.003
Friedrich, J., McGuire, A. (2010). Individual differences in reasoning style as a moderator of the identifiable victim effect. Social Influence, 5(3), 182–201. https://doi.org/10.1080/15534511003707352
Galak, J., Small, D., Stephen, A. T. (2011). Microfinance Decision Making: A Field Study of Prosocial Lending. Journal of Marketing Research, 48(SPL), S130–S137. https://doi.org/10.1509/jmkr.48.spl.s130
Hart, P. S., Lane, D., Chinn, S. (2018). The elusive power of the individual victim: Failure to find a difference in the effectiveness of charitable appeals focused on one compared to many victims. Plos One, 13(7), e0199535. https://doi.org/10.1371/journal.pone.0199535
Hong, S., Reed, W. R. (2021). Using Monte Carlo experiments to select meta-analytic estimators. Research Synthesis Methods, 12(2), 192–215. https://doi.org/10.1002/jrsm.1467
Hsee, C. K., Zhang, J., Lu, Z. Y., Xu, F. (2013). Unit asking: A method to boost donations and beyond. Psychological Science, 24(9), 1801–1808. https://doi.org/10.1177/0956797613482947
Isager, P. M., van Aert, R. C. M., Bahník, Š., Brandt, M. J., DeSoto, K. A., Giner-Sorolla, R., Krueger, J. I., Perugini, M., Ropovik, I., van’t Veer, A. E., Vranka, M., Lakens, D. (2021). Deciding what to replicate: A decision model for replication study selection under resource and knowledge constraints. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000438
JASP Team. (2023). JASP (Version 0.16).
Jeffreys, H. (1939). Theory of probability (1st ed.). Oxford University Press.
Jenni, K., Loewenstein, G. (1997). Explaining the identifiable victim effect. Journal of Risk and Uncertainty, 14(3), 235–257. https://doi.org/10.1023/a:1007740225484
Kahneman, D., Knetsch, J. L. (1992). Valuing public goods: The purchase of moral satisfaction. Journal of Environmental Economics and Management, 22(1), 57–70. https://doi.org/10.1016/0095-0696(92)90019-s
Kogut, T., Ritov, I. (2005). The “identified victim” effect: An identified group, or just a single individual? Journal of Behavioral Decision Making, 18(3), 157–167. https://doi.org/10.1002/bdm.492
Kvarven, A., Strømland, E., Johannesson, M. (2020). Comparing meta-analyses and preregistered multiple-laboratory replication projects. Nature Human Behaviour, 4(4), 423–434. https://doi.org/10.1038/s41562-019-0787-z
Lakens, D., Scheel, A. M., Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/2515245918770963
Lau, J., Ioannidis, J. P. A., Terrin, N., Schmid, C. H., Olkin, I. (2006). The case of the misleading funnel plot. BMJ, 333(7568), 597–600. https://doi.org/10.1136/bmj.333.7568.597
Lebel, E. P., McCarthy, R. J., Earp, B. D., Elson, M., Vanpaemel, W. (2018). A unified framework to quantify the credibility of scientific findings. Advances in Methods and Practices in Psychological Science, 1(3), 389–402. https://doi.org/10.1177/2515245918787489
Lebel, E. P., Vanpaemel, W., Cheung, I., Campbell, L. (2019). A brief guide to evaluate replications. Meta-Psychology, 3, 1–9. https://doi.org/10.15626/mp.2018.843
Lee, M. D., Wagenmakers, E.-J. (2013). Bayesian cognitive modeling: A practical course. Cambridge University Press.
Lee, S., Feeley, T. H. (2016). The identifiable victim effect: a meta-analytic review. Social Influence, 11(3), 199–215. https://doi.org/10.1080/15534510.2016.1216891
Lee, S., Feeley, T. H. (2018). The identifiable victim effect: Using an experimental-causal-chain design to test for mediation. Current Psychology, 37(4), 875–885. https://doi.org/10.1007/s12144-017-9570-3
Lesner, T. H., Rasmussen, O. D. (2014). The identifiable victim effect in charitable giving: evidence from a natural field experiment. Applied Economics, 46(36), 4409–4430. https://doi.org/10.1080/00036846.2014.962226
Litman, L., Robinson, J., Abberbock, T. (2017). TurkPrime.com: A versatile crowdsourcing data acquisition platform for the behavioral sciences. Behavior Research Methods, 49(2), 433–442. https://doi.org/10.3758/s13428-016-0727-z
Loewenstein, G. F., Small, D. A., & Strnad, J. F. (2006). Statistical, identifiable, and iconic victims. Behavioral Public Finance, 32–46. https://doi.org/10.2139/ssrn.678281
Maier, M., Bartoš, F., Wagenmakers, E.-J. (2022). Robust Bayesian meta-analysis: Addressing publication bias with model-averaging. Psychological Methods. Advance online publication. https://doi.org/10.1037/met0000405
Maier, M., Caviola, L., Schubert, S., Harris, A. J. L. (2022). Investigating (Sequential) Unit Asking: An Unsuccessful Quest for Scope Sensitivity in Willingness to Donate Judgements. https://doi.org/10.31234/osf.io/ps34b
Maier, M., Lakens, D. (2022). Justify Your Alpha: A Primer on Two Practical Approaches. Advances in Methods and Practices in Psychological Science. In press. https://doi.org/10.31234/osf.io/ts4r6
Maier, M., VanderWeele, T. J., Mathur, M. B. (2022). Using selection models to assess sensitivity to publication bias: A tutorial and call for more routine use. Campbell Systematic Reviews, 18(3), e1256. https://doi.org/10.1002/cl2.1256
Majumder, R., Tai, Y. L. C., Ziano, I., Feldman, G. (2023). Revisiting the impact of singularity on the Identified Victim Effect: An unsuccessful replication and extension of Kogut and Ritov (2005a) Study 2. https://osf.io/9qcpj/
Mayiwar, L., Rudko, I., Jeong, Y., Rekdal, R., Do, T. V., Yu, S., Dorotic, M., Feldman, G. (2023). Replication and extensions of Experiments 1a and 3 in Vastfall et al. (2014). Open Science Framework. https://doi.org/10.17605/OSF.IO/A2TGB
McKenzie, C. R., Sher, S., Leong, L. M., Müller-Trede, J. (2018). Constructed preferences, rationality, and choice architecture. Review of Behavioral Economics, 5(3–4), 337–370. https://doi.org/10.1561/105.00000091
Moche, H. (2022). Unequal Valuations of Lives and What to Do About It: The Role of Identifiability, Numbers, and Age in Charitable Giving [Doctoral dissertation, Linköping University Electronic Press]. https://doi.org/10.3384/9789179295226
Moche, H., & Västfjäll, D. (2021). Helping the child or the adult? Systematically testing the identifiable victim effect for child and adult victims. Social Influence, 16(1), 78–92. https://doi.org/10.1080/15534510.2021.1995482
Renkewitz, F., & Keiner, M. (2019). How to detect publication bias in psychological research: A comparative evaluation of six statistical methods. Zeitschrift für Psychologie, 227(4), 261–279. https://doi.org/10.1027/2151-2604/a000386
Ritov, I., & Kogut, T. (2011). Ally or adversary: The effect of identifiability in inter-group conflict situations. Organizational Behavior and Human Decision Processes, 116(1), 96–103. https://doi.org/10.1016/j.obhdp.2011.05.005
Singer, P. (2019). The life you can save: How to do your part to end world poverty. The Life You Can Save.org.
Slovic, P. (2007). “If I look at the mass I will never act”: Psychic numbing and genocide. Judgment and Decision Making, 2(2), 79–95. https://doi.org/10.1017/s1930297500000061
Slovic, P., Finucane, M. L., Peters, E., & MacGregor, D. G. (2007). The affect heuristic. European Journal of Operational Research, 177(3), 1333–1352. https://doi.org/10.1016/j.ejor.2005.04.006
Slovic, P., & Västfjäll, D. (2010). Affect, moral intuition, and risk. Psychological Inquiry, 21(4), 387–398. https://doi.org/10.1080/1047840x.2010.521119
Small, D. A., & Loewenstein, G. (2003). Helping a victim or helping the victim: Altruism and identifiability. Journal of Risk and Uncertainty, 26(1), 5–16. https://doi.org/10.1023/a:1022299422219
Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102(2), 143–153. https://doi.org/10.1016/j.obhdp.2006.01.005
Small, D. A., & Verrochi, N. M. (2009). The face of need: Facial emotion expression on charity advertisements. Journal of Marketing Research, 46(6), 777–787. https://doi.org/10.1509/jmkr.46.6.777
Smith, R. W., Faro, D., & Burson, K. A. (2013). More for the many: The influence of entitativity on charitable giving. Journal of Consumer Research, 39(5), 961–976. https://doi.org/10.1086/666470
Sudhir, K., Roy, S., & Cherian, M. (2016). Do sympathy biases induce charitable giving? The effects of advertising content. Marketing Science, 35(6), 849–869. https://doi.org/10.1287/mksc.2016.0989
Sussman, A. B., Sharma, E., & Alter, A. L. (2015). Framing charitable donations as exceptional expenses increases giving. Journal of Experimental Psychology: Applied, 21(2), 130–139. https://doi.org/10.1037/xap0000047
Thaler, R. (1985). Mental accounting and consumer choice. Marketing Science, 4(3), 199–214. https://doi.org/10.1287/mksc.4.3.199
Thaler, R. (1999). Mental accounting matters. Journal of Behavioral Decision Making, 12(3), 183–206. https://doi.org/10.1002/(sici)1099-0771(199909)12:3
Västfjäll, D., & Slovic, P. (2020). A psychological perspective on charitable giving and monetary donations: The role of affect. In T. Zaleskiewicz & J. Traczyk (Eds.), Psychological perspectives on financial decision making (pp. 331–345). Springer. https://doi.org/10.1007/978-3-030-45500-2_14
Vevea, J. L., & Hedges, L. V. (1995). A general linear model for estimating effect size in the presence of publication bias. Psychometrika, 60(3), 419–435. https://doi.org/10.1007/bf02294384
Wagenmakers, E. J., van Ravenzwaaij, D., & Ron, J. (2020, May 14). Concerns about the default Cauchy are often exaggerated: A demonstration with JASP 0.12. Bayesian Spectacles. https://www.bayesianspectacles.org/concerns-about-the-default-cauchy-are-often-exaggerated-a-demonstration-with-jasp-0-12/
Wasserman, L. (2000). Bayesian model selection and model averaging. Journal of Mathematical Psychology, 44(1), 92–107. https://doi.org/10.1006/jmps.1999.1278
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material