People display systematic priorities to self-related stimuli. As the self is not a unified entity, however, it remains unclear which aspects of the self are crucial to producing this stimulus prioritization. To explore this issue, we manipulated the valence of the self-concept (good me vs. bad me) — a core identity-based facet of the self — using a standard shape-label association task. Participants first learned the associations (e.g., circle/good-self, triangle/good-other, diamond/bad-self, square/bad-other), after which they completed shape-label matching and shape-categorization tasks that directed attention to different aspects of the stimuli (i.e., self-relevance and valence). The results revealed that responses were more efficient to the good-self shape than to the other shapes, regardless of the task that was undertaken. A hierarchical drift diffusion model (HDDM) analysis indicated that this good-self prioritization effect was underpinned by differences in the rate of information uptake. These findings demonstrate that activation of the good-self representation exclusively facilitates perceptual decision-making, thereby furthering understanding of the self-prioritization effect.
Introduction
To optimize social-cognitive functioning, people need to prioritize processing so that stimuli relevant to their goals are selected for action. As such, a crucial stimulus property is self-relevance. People prioritize information related to themselves over information related to others, as illustrated by the cocktail party effect (Moray, 1959) and the self-referential advantage in memory (Rogers, Kuiper, & Kirker, 1977). These self-prioritization effects on stimulus processing even extend to arbitrary stimuli. For example, when people learned associations between neutral shapes (of equal familiarity) and personal labels (You, Friend, Stranger) representing themselves, a close other, or a stranger, and then reported whether shape-label pairs matched these associations, there was an immediate and highly robust advantage for the self-pair (e.g., square-you) compared to other pairs (e.g., circle-friend; see Sui, He, & Humphreys, 2012). This self-prioritization effect during perceptual matching is maintained throughout the life span (Sui & Humphreys, 2017). Additional findings indicate that self-association modulates access to visual awareness under continuous flash suppression (Macrae, Visokomogilski, Golubickis, Cunningham, & Sahraie, 2017), and that the effects are more pronounced in explicit (e.g., self-relevant) than implicit tasks (e.g., self-irrelevant, such as judging the orientation of stimuli; Falbén et al., 2019; Reuther & Chakravarthi, 2017). Evidence from mathematical modeling has further shown that self-association changes particular functional processes (Golubickis et al., 2017; Sui, Enock, Ralph, & Humphreys, 2015). In one perceptual matching study, people first learned associations between one personal label and two shapes (e.g., self-triangle, self-square), after which they were asked to identify single shapes or pairs of shapes as referring to the self or a close other. When the shapes referred to the self, there was a substantial benefit from presenting two shapes rather than one. This enhanced redundancy gain suggests that self-associations are integrated into a single representation, so that people respond to an integrated self during perceptual processing (Sui & Humphreys, 2015). Using a hierarchical drift diffusion model (HDDM), researchers have demonstrated that self-relevance influences both the perceptual and decisional processes that underlie visual processing (Golubickis et al., 2019; Macrae et al., 2017).
Despite long-standing interest in these self-prioritization effects, most evidence comes from studies in which participants are instructed to refer a stimulus to the global self (e.g., a triangle represents the self). One possibility is that these self-referential tasks activate a default (currently accessible) self-concept that modulates stimulus processing and subsequently produces the self-prioritization effect. Although the default self-concept varies across individuals, one would expect its effect on performance to be greater than that of other aspects of the self. The self, after all, is an inherently multifaceted, dynamic construct that is influenced by current goals, temporary contexts, chronic experiences, and established self-knowledge (Higgins, 1987; C. Hu et al., 2016; Reich, Kessel, & Bernieri, 2013). Key but controversial questions therefore remain regarding which aspects of the self are crucial to producing self-prioritization effects and at which level(s) self-relevance affects performance.
A recent dominant explanation for self-prioritization effects is that they reflect the intrinsic positive valence of self-related stimuli. Supporting evidence comes from the elimination of self-prioritization in face perception when people are required to evaluate unfavorable personality traits in relation to themselves (Ma & Han, 2010) and from reduced self-prioritization in perceptual matching when people's mood is low (Sui, Ohrling, & Humphreys, 2016). Relatedly, researchers have reported that the attentional benefits of self-relevance are greater when targets are probed by positive identity-based (vs. irrelevant) cues (Macrae, Visokomogilski, Golubickis, & Sahraie, 2018). This converging evidence implies that positively valenced, identity-related self-concepts (e.g., good me) may be crucial to the presence of self-prioritization. Recent studies suggest that the morally good self is the core component of the self (De Freitas, Cikara, Grossmann, & Schlegel, 2017, 2018; De Freitas, Sarkissian, et al., 2018; Strohminger, Knobe, & Newman, 2017). The self-prioritization effects in cognitive psychology are also consistent with the theory of self-enhancement, in that individuals are motivated to ignore or downplay negative information, thereby protecting the self-concept from challenge and enabling people to maintain an unrealistically positive conception of themselves (Sedikides & Strube, 1997). Work in social psychology has repeatedly demonstrated the effects of valence on self-referential processing and self-evaluation (i.e., good-self vs. bad-self; Greenwald, 1980; Pronin, 2008; Sedikides & Strube, 1997). Researchers have reported that participants spend more time reading positive than negative information about themselves (Baumeister & Cairns, 1992), that unfavorable self-related events are more likely to be forgotten than their favorable counterparts (X. Hu, Bergström, Bodenhausen, & Rosenfeld, 2015), and that positive outcomes are more likely to be ascribed to the self than to others, with negative outcomes exhibiting the opposite attributional pattern (Pronin, 2008). Notwithstanding these consistent findings, there is little direct evidence of whether the self-prioritization effect on stimulus processing results from a core identity-centered aspect of the self (e.g., good me); that is, whether the positive (vs. negative) valence of the self-concept is crucial to the emergence of self-bias, and at which level(s) this identity-centered aspect of the self influences task performance.
We set out to address these issues using a standard shape-label association task that has been used to explore self-prioritization during perceptual decision-making (Sui, Rotshtein, & Humphreys, 2013). Participants first associated good and bad aspects of the self (and of a stranger) with different geometric shapes, then judged whether a subsequent series of shape-label pairings matched or mismatched the previously learned associations (Sui et al., 2012). Previous studies have consistently shown that people reliably favor self-related shapes over shapes associated with others (e.g., Sui et al., 2012). We therefore examined whether this self-prioritization effect in perceptual matching is modulated by valence, such that self-bias is sensitive to the identity-based aspect of the self (i.e., good me) with which stimuli are associated. In addition, participants carried out a shape-categorization task in which they classified briefly presented shape stimuli according to valence, self-relevance, or importance.¹ This task probed stimulus prioritization in a context in which the self-relevance of the material was orthogonal to the dimension of interest. Based on the previous findings, we predicted that self-prioritization would be greater when geometric shapes were paired with the good-self than with the bad-self (or good-other), regardless of the task undertaken on the stimuli.
Data from the perceptual matching and categorization tasks were fitted with a hierarchical drift diffusion model (HDDM), an analysis that has been widely used to decompose the processes underpinning task performance (Ratcliff, 1978; Ratcliff, Smith, Brown, & McKoon, 2016; Wiecki, Sofer, & Frank, 2013), in order to examine the level(s) at which the identity-centered aspect of the self influences stimulus processing. The DDM assumes that, in a speeded decision-making task (e.g., perceptual matching, item classification), people make decisions by gradually accumulating evidence sampled from a noisy environment until a threshold is reached. Typically, four parameters model decisional processing. Drift rate (v) estimates the rate of information acquisition, an index of task difficulty or stimulus quality. Threshold separation (a, also called boundary separation) represents the level of caution; increasing threshold separation results in fewer errors at the cost of slower responding. The starting value (z) represents an a priori bias or preference for one or the other response, and the non-decision time (t0) captures all non-decisional processes (e.g., stimulus encoding, response execution). Human and animal studies have linked these parameters to different neural and psychological processes in speeded binary forced-choice tasks (Forstmann, Ratcliff, & Wagenmakers, 2016; Johnson, Hopwood, Cesario, & Pleskac, 2017; Voss, Rothermund, & Voss, 2004). For example, Golubickis et al. (2017) manipulated the temporal construal of the self and found that only stimuli associated with the current self (vs. future self) were prioritized and that the effect originated in the drift rate. Accordingly, we expected identity-based self-prioritization (good me) to be underpinned by a stimulus bias (i.e., rate of evidence accumulation) during decisional processing.
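To make these parameters concrete, the following sketch simulates single DDM trials by Euler discretization of the diffusion process. It illustrates the model described above rather than the estimation procedure used here, and the parameter values are arbitrary.

```python
import numpy as np

def simulate_ddm_trial(v, a, z, t0, dt=0.001, sigma=1.0, max_t=5.0, rng=None):
    """Simulate one drift diffusion trial via Euler discretization.

    v: drift rate; a: threshold separation; z: starting point expressed
    as a proportion of a (0.5 = unbiased); t0: non-decision time (s).
    Returns (response, rt), where response is 1 for the upper boundary
    and 0 for the lower boundary.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = z * a          # evidence starts between the boundaries 0 and a
    t = 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t0 + t

# Raising the drift rate (faster information uptake) yields faster and
# more accurate decisions with the same threshold, bias, and t0.
rng = np.random.default_rng(1)
for v in (1.0, 2.5):
    trials = [simulate_ddm_trial(v, a=2.0, z=0.5, t0=0.3, rng=rng)
              for _ in range(2000)]
    acc = np.mean([resp for resp, _ in trials])
    mean_rt = np.mean([rt for _, rt in trials])
    print(f"v = {v}: accuracy = {acc:.2f}, mean RT = {mean_rt:.2f} s")
```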
Disclosures
The pilot study was pre-registered at https://osf.io/324up/; the confirmatory study was pre-registered at https://osf.io/abf6q/. All deviations from the original plans are reported (see Deviations from pre-registration in the Supplementary Materials).
All stimuli and scripts used for experimental presentation and data collection are available at the Open Science Framework (OSF; https://osf.io/4zvkm/) and GitHub (https://github.com/hcp4715/moralSelf_ddm). All the raw data (in CSV format), summary results (in JASP format), and related R scripts are also available.
We report all the main results in the text. Participants in both studies completed a series of questionnaires after the experimental tasks; the questionnaire data are not reported in the current study (but see Liu et al., 2020). Additional methodological details, results, and plots can be found in the Supplementary Materials (see Table 1).
Table 1. Overview of the Supplementary Materials.

| Content | Experiments | Short title | Information |
| --- | --- | --- | --- |
| 1 | Pilot | Deviations | Deviations from pre-registration. |
| 2 | Pilot | Fig. S1 | Robustness check of Bayesian t-tests (matching task). |
|  |  | Fig. S2 | Robustness check of Bayesian t-tests (categorization task). |
| 3 | Pilot | ex-Gaussian | Details of the ex-Gaussian analysis. |
|  |  | Fig. S3 | Results of the ex-Gaussian analysis. |
| 4 | Confirmatory | Deviations | Deviations from pre-registration. |
| 5 | Confirmatory | Fig. S4 | Robustness check of Bayesian t-tests (matching task). |
|  |  | Fig. S5 | Robustness check of Bayesian t-tests (categorization task). |
| 6 | Both | Table S1 | Results of the cross-task correlations (pilot and confirmatory studies separately). |
| 7 | Confirmatory | Table S2 | Model comparison in the DDM analysis. |
|  |  | Fig. S6 | Drift rate for mismatching trials in the matching task. |
|  |  | Table S3 | Comparison of initial bias (z) and non-decision time (t) from the DDM. |
| 8 | Both | Suppl. Results | Results of comparing bad-other and good-self in all tasks. |
Pilot Study
Materials and Methods
Participants
Thirty-five college students (14 females; age: 21.65 ± 2.03 years) from Tsinghua University were recruited via an advertisement on campus and compensated ¥60 (~$8.7) per hour. All participants were right-handed and had normal or corrected-to-normal visual acuity. Informed consent was obtained from all participants prior to the experiment, and the protocol was reviewed and approved by the Ethics Committee of the Department of Psychology, Tsinghua University.
An a priori power analysis was conducted with G*Power 3.1.9.2 (Faul, Erdfelder, Buchner, & Lang, 2009; Faul, Erdfelder, Lang, & Buchner, 2007), based on a series of experiments with a similar design from an unpublished study (C.-P. Hu, 2017; see the open notebook: https://osf.io/nukwz/). A sample size of 32 participants was determined to be sufficient to detect a similar effect size (Cohen's d = 0.6) with a desired power of .90 and α = .05 for the critical comparison in reaction times (RTs) between the good-self and bad-self conditions.
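For reference, the same determination can be approximated programmatically; the sketch below uses statsmodels rather than G*Power, treating the paired comparison as a one-sample t-test on the difference scores.

```python
from statsmodels.stats.power import TTestPower

# Paired-samples t-test on good-self vs. bad-self RTs: G*Power's
# "difference between two dependent means" corresponds to a one-sample
# t-test on the paired differences.
n = TTestPower().solve_power(effect_size=0.6, alpha=0.05, power=0.90,
                             alternative='two-sided')
print(round(n))  # approximately 32 participants
```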
Data from six participants were excluded from data analysis, four of them due to a procedural error during data collection, and two because of chance levels of performance in the matching task. Thus, data from 29 participants (13 females, age: 21.55 ± 1.99 years) were included in the analysis.
Stimuli and tasks
The experiment was conducted on a PC with a 22-in. CRT monitor (1024 × 768, 100 Hz) using MATLAB 2016a and PsychToolbox-3 (Brainard, 1997). All stimuli were displayed in white against a grey background. Participants carried out the experiment individually in a quiet testing room. They first completed 48 practice trials of the perceptual-matching task, followed by two blocks of the perceptual-matching task, each with 120 experimental trials. After that, they completed six blocks of the shape-categorization task, each with 144 trials, interleaved with five short perceptual-matching blocks of 48 trials each (see Figure 1A). To prevent participants from forming shape-key associations in the categorization task, a different pair of response buttons was used for each type of categorization. The assignment of buttons to categories (Good/Bad, Self/Other, Important/Unimportant) was counterbalanced across participants (see below).
Perceptual-Matching Task
Prior to the task, participants selected a gender-matched forename from a list of common names to denote a person they did not know personally (i.e., the stranger condition). They then learned associations between four geometric shapes and four labels. The four geometric shapes (square, diamond, trapezoid, circle) were randomly assigned to the good and bad aspects of the participant and the stranger (good-self, bad-self, good-other, bad-other). For example, a participant might be instructed: "A square represents the good-self, the morally good aspect of yourself; a diamond represents the bad-self, the immoral aspect of yourself; a trapezoid represents the good-other, the morally good aspect of the stranger [replaced with the name the participant had chosen]; and a circle represents the bad-other, the immoral aspect of the stranger." The shape-label assignment and the order of presentation of the shape-label associations were counterbalanced across participants. The instructions remained on screen until participants pressed the space bar to begin the practice phase.
The shape-label learning phase took approximately 1 minute to complete. Immediately afterwards, participants carried out the shape-label matching task, judging whether a shape-label pair, presented for 100 ms at the center of the screen, matched or mismatched the previously learned associations (see Figure 1B). The same four shapes were used throughout all the experimental trials. The shapes (3.7° × 3.7° of visual angle) and labels (3.6° × 1.6°) were presented above and below a central fixation cross (0.8° × 0.8°), respectively, with the center of each shape or label located 3.5° of visual angle from the fixation cross. There were 60 experimental trials per condition (match good-self, match bad-self, match good-other, match bad-other, mismatch good-self, mismatch bad-self, mismatch good-other, mismatch bad-other).
Shape-Categorization Task
Following the perceptual-matching task, participants immediately carried out the shape-categorization task, in which a shape (3.7° × 3.7° of visual angle) was presented for 200 ms at the center of the screen. In different blocks, participants discriminated the stimulus based on its identity (self vs. other), its valence (good vs. bad), or its relative importance (important vs. unimportant) (see Figure 1C). The order of the blocks was counterbalanced across participants. There were 72 trials per condition in total (with four person-valence combinations for the self-relevance and valence tasks).
Data Analyses
All raw data were first preprocessed in R 3.5.3 (R Core Team, 2018) to remove participants with chance levels of performance, practice trials, and trials with RTs faster than 200 ms. For the ANOVAs, trials with no response were re-coded as incorrect (3.98% of matching-task trials, 2.95% of categorization-task trials); these trials were excluded from the HDDM analysis.
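The original cleaning pipeline was scripted in R 3.5.3; the sketch below mirrors the same rules in Python, with hypothetical column names ('subj', 'practice', 'resp', 'rt', 'acc').

```python
import pandas as pd

# Minimal sketch of the trial-level cleaning; column names are hypothetical.
df = pd.read_csv("matching_raw.csv")
df = df[~df["practice"]]                       # drop practice trials
df = df[df["rt"].isna() | (df["rt"] >= 0.2)]   # drop RTs faster than 200 ms

# For the ANOVAs, recode no-response trials as incorrect ...
df_anova = df.copy()
df_anova.loc[df_anova["resp"].isna(), "acc"] = 0

# ... but drop them entirely for the HDDM analysis.
df_hddm = df[df["resp"].notna()]

# Exclude participants performing at chance (simplified criterion here;
# e.g., a binomial test against 0.5 could be used instead).
acc_by_subj = df_anova.groupby("subj")["acc"].mean()
keep = acc_by_subj[acc_by_subj > 0.5].index
df_anova = df_anova[df_anova["subj"].isin(keep)]
```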
The sensitivity (d prime) to the shape stimuli in the matching task was measured using a signal detection approach, in which performance in each matching condition was combined with performance in the mismatching condition containing the same shape to form a measure of d prime (Sui et al., 2012). Following previous research, mismatching trials were excluded from the RT analyses (C.-P. Hu, 2017; Sui et al., 2012). Although the raw RTs were not normally distributed, a recent simulation study showed that transforming RT data does not necessarily improve statistical power (Schramm & Rouder, 2019). Therefore, the mean RTs of matching trials were used for analysis (Sui et al., 2012).
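Concretely, each shape's hit rate on matching trials is paired with the false-alarm rate on mismatching trials containing the same shape. The sketch below illustrates the computation; the correction for extreme rates is our assumption, as the exact correction is not specified in the text.

```python
from scipy.stats import norm

def d_prime(hits, n_match, false_alarms, n_mismatch):
    """d prime for one shape: hit rate on matching trials combined with
    the false-alarm rate on mismatching trials showing the same shape.
    The log-linear correction for rates of 0 or 1 is an assumption."""
    hit_rate = (hits + 0.5) / (n_match + 1)
    fa_rate = (false_alarms + 0.5) / (n_mismatch + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 55 hits on 60 matching trials and 10 false alarms on
# 60 mismatching trials gives a d prime of about 2.3.
print(round(d_prime(55, 60, 10, 60), 2))
```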
ANOVAs for the Matching Task and the Categorization Task
The summary data (d prime, accuracy, and mean RT for each condition and participant) were analyzed using JASP 0.10.0.0 (C.-P. Hu, Kong, Wagenmakers, Ly, & Peng, 2018; Love et al., 2019; Wagenmakers et al., 2018). We tested the self-prioritization effect and the valence effect using both Frequentist repeated-measures ANOVAs (rmANOVA) and their Bayes factor (BF) counterparts. We also conducted planned one-tailed t-tests. Specifically, we were interested in four contrasts: good-self vs. bad-self, good-other vs. bad-other, good-self vs. good-other, and bad-self vs. bad-other. The first two comparisons reveal the valence effect in the self and other conditions, while the latter two reveal the self-relevance effect in the positive and negative conditions. Effect sizes are reported as omega-squared (ω2) for the repeated-measures ANOVAs and as Cohen's d, with its 95% confidence interval, for the t-tests. Note that the Cohen's d estimated by JASP is Cohen's dz (for different indices of Cohen's d, see Lakens (2013)).
Bayes factors were calculated using the default priors in JASP 0.10.0.0 (C.-P. Hu et al., 2018; Wagenmakers et al., 2018). That is, for the repeated-measures ANOVA, the prior for fixed effects is a Cauchy distribution with scale parameter γ = 0.5, the prior for random effects is a Cauchy distribution with γ = 1, and the prior for covariates is a Cauchy distribution with γ = 0.354. For the t-tests, we used a Cauchy distribution with scale parameter γ = 0.707. Criteria for interpreting BFs were based on Jeffreys (Jeffreys, 1961; Wagenmakers et al., 2018): 0 < BF < 3 indicates anecdotal evidence, 3 < BF < 6 weak evidence, 6 < BF < 10 moderate evidence, BF > 10 strong evidence, and BF > 100 overwhelming evidence. For the Bayesian ANOVAs, we also report the results of Bayesian model averaging (indexed by BFincl; Etz & Wagenmakers, 2017), which retains model-selection uncertainty by averaging the conclusions of each candidate model, weighted by that model's posterior plausibility.
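JASP computes these Bayes factors via its graphical interface; for a scripted equivalent, the default JZS t-test Bayes factor with Cauchy scale 0.707 can be reproduced with, for example, the pingouin package (a sketch using simulated data, not the study data):

```python
import numpy as np
import pingouin as pg

# Illustrative data only (not the study data).
rng = np.random.default_rng(0)
rt_good_self = rng.normal(0.655, 0.085, 29)
rt_bad_self = rng.normal(0.744, 0.070, 29)

# Paired t-test with the default JZS Bayes factor (Cauchy scale r = 0.707),
# matching JASP's default t-test prior.
res = pg.ttest(rt_good_self, rt_bad_self, paired=True, r=0.707)
print(res[["T", "p-val", "cohen-d", "BF10"]])
```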
Modelling
In our pre-registration, we planned to model the data using both an ex-Gaussian distribution and a drift diffusion model (DDM). The DDM analysis of the pilot data is not reported (see Deviations from pre-registration).
Results
Perceptual-Matching Task
ANOVAs
The repeated-measures ANOVA on RTs showed strong evidence for the main effect of Valence (Good vs. Bad), F(1, 28) = 38.126, p < 0.001, ω2 = 0.181, BF10 = 1.326e+6, BFincl = 1.1e+6, but no clear evidence for the main effect of Self-relevance, F(1, 28) = 0.078, p = 0.782, ω2 < 0.0001, BF10 = 0.198, BFincl = 0.349. Only very weak evidence for the Self-relevance × Valence interaction in the perceptual matching task was observed, F(1, 28) = 3.939, p = 0.057, ω2 = 0.0161, BF10 = 1.41, BFincl = 0.99.
Planned exploratory contrasts showed faster responses to the good-self than the bad-self association in the perceptual matching task (mean ± SD: 655 ± 85 ms vs. 744 ± 70 ms), t(28) = –5.669, p < .001, Cohen's dz = –1.053, 95% CI [–1.503, –0.591], BF10 = 4443, as well as faster responses to the good-other than the bad-other association (678 ± 93 ms vs. 725 ± 76 ms), t(28) = –3.164, p = .0037, Cohen's dz = –0.587, 95% CI [–0.978, –0.188], BF10 = 10.6 (see Figure 2A left). These effects were robust across different priors (see Supplementary Materials). There was no evidence of a difference for the other two contrasts (i.e., good-self vs. good-other or bad-self vs. bad-other).
There was strong evidence for the main effect of Valence (good vs. bad) on d prime, F(1, 28) = 10.74, p = 0.0028, ω2 = 0.0324, BF10 = 11.1, BFincl = 8.44, but no evidence for the main effect of Self-relevance, F(1, 28) = 0.813, p = 0.375, ω2 < 0.0001, BF10 = 0.284, BFincl = 0.297, or for the Self-relevance × Valence interaction, F(1, 28) = 1.89, p = 0.18, ω2 = 0.0033, BF10 = 0.59, BFincl = 0.197.
Planned exploratory contrasts showed that d prime was larger for the good-self than the bad-self in the perceptual matching task (mean ± SD: 1.749 ± 0.936 vs. 1.233 ± 1), t(28) = 3.26, p = 0.0029, Cohen's d = 0.606, 95% CI [0.204, 0.998], BF10 = 13.1 (see Figure 2A right). This effect was robust across different priors (see Supplementary Materials). There was no evidence of differences for the other contrasts (i.e., good-other vs. bad-other, good-self vs. good-other, or bad-self vs. bad-other).
Shape-Categorization Task
ANOVAs
The three-way repeated-measures ANOVA on RTs showed weak evidence for the Self-relevance × Valence interaction in the categorization task, F(1, 28) = 3.553, p = 0.0699, ω2 = 0.006, BF10 = 1.34, BFincl = 1.69, but no clear evidence for the main effects of Valence, Self-relevance, or Task Type, the other two-way interactions, or the three-way interaction. Planned exploratory contrasts, in which we collapsed data across the two tasks (valence-based and identity-based categorization), showed that responses to the good-self were faster than to the bad-self (mean ± SD: 513 ± 52 ms vs. 535 ± 58 ms), t(28) = –3, p = 0.0056, Cohen's d = –0.558, 95% CI [–0.946, –0.162], BF10 = 7.5, but there was no evidence of a difference for the other contrasts of interest (see Figure 2B left).
The three-way repeated-measures ANOVA on accuracy in the categorization task found weak evidence for the Self-relevance × Valence interaction, F(1, 28) = 4.043, p = 0.0541, ω2 = 0.0109, BF10 = 2.29, BFincl = 2, but no evidence for the other main effects or interactions. Planned exploratory contrasts, after collapsing data across tasks, revealed more accurate responses for the good-self than the bad-self (mean ± SD: 0.923 ± 0.08 vs. 0.884 ± 0.098), t(28) = 3.591, p = 0.0012, Cohen's d = 0.667, 95% CI [0.259, 1.065], BF10 = 27.6. The good-self conditions were also more accurate than the good-other conditions (0.923 ± 0.08 vs. 0.883 ± 0.097), t(28) = 3.43, p = 0.0019, Cohen's d = 0.637, 95% CI [0.233, 1.033], BF10 = 19.2 (see Figure 2B right), but there was no evidence of differences for the other contrasts of interest.
Confirmatory Study
Materials & Methods
Participants
The sample size was determined adaptively using sequential Bayes factor testing (Schönbrodt, Wagenmakers, Zehetleitner, & Perugini, 2017). Specifically, we kept collecting data and monitoring the strength of evidence for the critical hypotheses, namely the Self-relevance × Valence interaction on the RT data and two Bayes factor paired t-tests (good-self vs. bad-self, good-self vs. good-other). We stopped recruiting new participants when both paired t-tests reached BF10 ≤ 0.1 or BF10 ≥ 10; participants already recruited at that point completed the experiment. See https://osf.io/w6hrj/ for the evolution of the Bayes factors during data collection. In total, 46 college students (27 females; age: 20.91 ± 2.58 years) were recruited. Four participants were excluded from data analysis because of procedural failures.
Stimuli and Tasks
The data were collected using the same settings as in the pilot study, with several differences:
In the shape-categorization task, the shapes were presented for 100 ms instead of 200 ms, and feedback was given as the Chinese words for 'Correct' or 'Incorrect' instead of happy or sad symbolic faces.
There were only two types of blocks in the categorization task (valence-based and identity-based) because the importance judgments had resulted in unbalanced trial numbers across participants in the pilot study.
There were more trials per condition: 72 experimental trials for the matching task and 90 trials for the categorization task.
The questionnaires were different from the pilot study.
Data Analyses
As in the pilot study, the data were cleaned and then analyzed using both Frequentist hypothesis tests (i.e., ANOVAs and t-tests) and their Bayes factor counterparts.
Diffusion Modelling
To examine the processes underpinning task performance, we used a drift diffusion model (DDM) to jointly decompose the RT and accuracy data. As described above, the diffusion process is characterized by four parameters: drift rate (v), starting value (z), threshold separation (a), and non-decision time (t0).
We estimated the DDM parameters using the hierarchical Bayesian implementation HDDM (http://ski.clps.brown.edu/hddm_docs; Wiecki et al., 2013), with default group priors roughly matching the parameter values reported by Matzke and Wagenmakers (2009). Following previous research (e.g., Golubickis et al., 2017), we fixed the threshold (a), because the threshold should remain constant throughout a task when the luminance of the stimuli does not vary across trials. Note that we used the response-coded approach (see Deviations from pre-registration in the Supplementary Materials).
All model parameters were estimated using four Markov chain Monte Carlo (MCMC) chains of 10,000 samples each, with 1,000 burn-in samples to allow the chains to converge (see our openly available scripts). The four chains were used to calculate the Gelman–Rubin convergence statistic for all model parameters; this statistic was close to 1, indicating that 10,000 samples were sufficient for the MCMC chains to converge (Wiecki et al., 2013). Each HDDM parameter for each participant and condition was modeled as normally (or truncated-normally, depending on the parameter's bounds) distributed around the group mean with group variance.
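A sketch of this model specification and sampling scheme in the HDDM package is shown below; the file name, column names, and condition labels are illustrative assumptions (the exact pre-registered scripts are available in our online repositories).

```python
import hddm

# Hypothetical input: response-coded data with columns 'rt', 'response',
# and 'condition' (good-self, bad-self, good-other, bad-other).
data = hddm.load_csv('matching_hddm.csv')

# Drift rate (v) and non-decision time (t) vary by condition; the starting
# point (z) is estimated as a bias parameter; threshold (a) is fixed.
models = []
for _ in range(4):  # four chains for the Gelman-Rubin diagnostic
    m = hddm.HDDM(data,
                  include=('z',),
                  depends_on={'v': 'condition', 't': 'condition'})
    m.find_starting_values()
    m.sample(10000, burn=1000)
    models.append(m)

print(hddm.analyze.gelman_rubin(models))  # values near 1 indicate convergence
```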
To select the best-fitting model, we compared the pre-registered model against additional models in which the parameters v and z were free to vary, using the Deviance Information Criterion (DIC) and posterior predictive checks (PPC). The DIC is a widely used index for comparing hierarchical models: lower DIC values favor models that combine high likelihood with few effective degrees of freedom. However, the DIC should not be the only criterion for deciding which model is best (see http://ski.clps.brown.edu/hddm_docs/howto.html#perform-model-comparison). Hence, we also computed the mean squared error (MSE) of the PPC to quantify the discrepancy between data generated by the model and the observed data; the smaller the MSE of the PPC, the better the model fit.
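Continuing the sketch above, both criteria are available directly in HDDM; the candidate model set shown here (v free; v and z free; v, z, and t free) is our reading of the comparison, and the names are illustrative.

```python
import hddm
from hddm.utils import post_pred_gen, post_pred_stats

# Candidate models, reusing `data` from the sketch above.
candidates = {
    'v':     hddm.HDDM(data, depends_on={'v': 'condition'}),
    'v_z':   hddm.HDDM(data, include=('z',), depends_on={'v': 'condition'}),
    'v_z_t': hddm.HDDM(data, include=('z',),
                       depends_on={'v': 'condition', 't': 'condition'}),
}
for name, m in candidates.items():
    m.sample(10000, burn=1000)
    print(name, m.dic)              # relative fit: lower DIC is better

# Absolute fit: simulate data from the posterior and compare summary
# statistics of the observed and simulated data; the MSE reported in the
# text can be computed from this summary table.
ppc_data = post_pred_gen(candidates['v_z_t'])
print(post_pred_stats(data, ppc_data))
```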
Model comparison showed that the three-parameter model fit best (see Supplementary Materials), consistent with prior research (Golubickis, Falben, Cunningham, & Macrae, 2018; Golubickis et al., 2017). We then extracted the parameters for each condition and tested our hypotheses by analyzing the posterior probability densities of the parameters across conditions. When comparing a parameter between two conditions, or against a fixed value, we report the proportion of the posterior distribution that was greater or less than the other posterior or the fixed value.
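These directional posterior tests follow the approach described in the HDDM documentation: compare the MCMC traces of the relevant group-level nodes. A sketch, continuing from the fitted three-parameter model above (node names depend on the condition labels in the data):

```python
# Posterior probability that the drift rate is higher for good-self
# than for bad-self.
m = candidates['v_z_t']
v_gs, v_bs = m.nodes_db.node[['v(good-self)', 'v(bad-self)']]
print((v_gs.trace() > v_bs.trace()).mean())

# Comparing a parameter against a fixed value, e.g., starting point vs. 0.5.
z = m.nodes_db.node['z']
print((z.trace() > 0.5).mean())
```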
Cross-Task Analysis
To test the cross-task robustness of the self-relevance and valence effects, we further estimated the cross-task correlations of the self-relevance effect (good-self vs. good-other) and the valence effect (good-self vs. bad-self).²
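Operationally, each effect is a per-participant difference score computed in one task and correlated with the corresponding difference score in the other task. A sketch with hypothetical file and column names, using pingouin's correlation with a default-prior Bayes factor (analogous to Table 2):

```python
import pandas as pd
import pingouin as pg

# `condition_means_wide.csv` stands for a per-participant table of
# condition-mean RTs; file and column names are hypothetical.
wide = pd.read_csv("condition_means_wide.csv")

# Self-relevance effect (good-self vs. good-other) in each task.
eff_match = wide["match_good_self_rt"] - wide["match_good_other_rt"]
eff_categ = wide["categ_good_self_rt"] - wide["categ_good_other_rt"]

# Pearson correlation with a default-prior Bayes factor.
print(pg.corr(eff_match, eff_categ)[["r", "CI95%", "BF10"]])
```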
Results
Perceptual-Matching Task
ANOVAs
As described in our pre-registration, we focused on the matching trials for the RT analyses. Repeated-measures ANOVAs on RTs showed overwhelming evidence for the main effect of Valence, F(1, 41) = 57.88, p < 0.001, ω2 = 0.16, BF10 = 8.26e+6, BFincl = 2.3e+8, but no strong evidence for the main effect of Self-relevance. There was also strong evidence for the Self-relevance × Valence interaction, F(1, 41) = 14.65, p < 0.001, ω2 = 0.05, BF10 = 68.04, BFincl = 133.9 (Figure 3A left). Planned contrasts showed that good-self responses (637 ± 63 ms) were faster than bad-self responses (720 ± 70 ms), t(41) = –8.42, p < .001, Cohen's dz = –1.299, 95% CI [–1.708, –0.883], BF–0 = 1.19e+8, and good-other responses (681 ± 81 ms) were faster than bad-other responses (707 ± 70 ms), t(41) = –2.37, p = 0.011, Cohen's d = –0.367, 95% CI [–0.677, –0.052], BF–0 = 4.01. In addition, good-self responses were faster than good-other responses, t(41) = –3.34, p < 0.001, Cohen's d = –0.515, 95% CI [–0.834, –0.190], BF–0 = 35.50, but there was no evidence of a difference between the bad-self and bad-other conditions.
The d prime results resembled the RT data. Evidence for the main effect of Valence was mixed, F(1, 41) = 5.71, p = 0.022, ω2 = 0.02, BF10 = 0.90, BFincl = 8.5, and no evidence for the main effect of Self-relevance was observed. Evidence for the Self-relevance × Valence interaction was strong, F(1, 41) = 12.03, p = 0.0012, ω2 = 0.01, BF10 = 70.90, BFincl = 24.7 (Figure 3A right). Planned contrasts showed a larger d prime for the good-self (2.33 ± 0.71) than either the bad-self (1.80 ± 0.66), t(41) = 4.30, p < 0.001, Cohen's d = 0.664, 95% CI [0.326, 0.995], BF+0 = 472.80, or the good-other (1.91 ± 0.75), t(41) = 2.67, p = 0.0055, Cohen's d = 0.411, 95% CI [0.094, 0.74], BF+0 = 7.38. No evidence emerged for the other two contrasts of interest.
Diffusion Modelling
The posterior distributions showed evidence of a stimulus bias indexed by the drift rate (v) on matching trials, such that information uptake was faster for the good-self than for both the bad-self (Pposterior(match-good-self > match-bad-self) = 1) and the good-other (Pposterior(match-good-self > match-good-other) = 1) (Figure 4A). These effects were not observed on mismatching trials (see Supplementary Materials). The analysis of the starting point (z) showed an a priori bias toward matching responses (i.e., z above the unbiased value of 0.5), Pposterior(z > 0.5) = 1. Analyses of the non-decision time (t0) yielded no differences between conditions.
Shape-Categorization Task
ANOVAs
The three-way rmANOVA on RTs revealed main effects of Valence, F(1, 40) = 15.3, p < 0.001, ω2 = 0.009, BF10 = 4.28, BFincl = 407, and Self-relevance, F(1, 40) = 41.6, p < 0.001, ω2 = 0.028, BF10 = 12971, BFincl = 879324, as well as a Self-relevance × Valence interaction, F(1, 40) = 41.6, p = 0.047, ω2 = 0.008, BF10 = 19.60, BFincl = 54.5 (Figure 3B left and Figure 3C left). There were also interactions between Task Type and Valence and between Task Type and Self-relevance, but no evidence for the main effect of Task Type or the three-way interaction (see the online JASP files).
We therefore collapsed the data across Task Type and compared the pairs of conditions of interest, given the purpose of the current study. Responses to the good-self (484 ± 62 ms) were faster than to both the good-other (519 ± 70 ms), t(40) = –4.37, p < 0.001, Cohen's d = –0.684, 95% CI [–1.021, –0.339], BF–0 = 57, and the bad-self (509 ± 68 ms), t(40) = –3.18, p = 0.0014, Cohen's d = –0.496, 95% CI [–0.819, –0.169], BF–0 = 23.90. No other differences were observed.
The three-way rmANOVA on accuracy showed mixed evidence for the main effect of Self-relevance and no evidence for the main effect of Valence, but strong evidence for the Self-relevance × Valence interaction, F(1, 40) = 8.76, p = .0052, ω2 = 0.06, BF10 = 256.50, BFincl = 14.7 (Figure 3B right and Figure 3C right). There was no evidence for any other interaction (see the online JASP files).
After collapsing data across the two categorization tasks, planned contrasts revealed that responses to the good-self (0.947 ± 0.037) were more accurate than to either the bad-self (0.902 ± 0.1), t(40) = 3.06, p = 0.002, Cohen's d = 0.478, 95% CI [0.152, 0.798], BF+0 = 17.9, or the good-other (0.89 ± 0.088), t(40) = 3.77, p < 0.001, Cohen's d = 0.589, 95% CI [0.253, 0.917], BF+0 = 106.70. No evidence of other differences was found.
Diffusion Modelling
The HDDM analysis of the shape-categorization tasks revealed that the drift rate (v) was higher for the good-self than the good-other in both the valence-based task (Pposterior(good-self > good-other) > 0.994) and the identity-based task (Pposterior(good-self > good-other) > 0.996). The drift rate was also higher for the good-self than the bad-self in both the valence-based (Pposterior(good-self > bad-self) = 0.99) and identity-based tasks (Pposterior(good-self > bad-self) = 0.99) (see Figure 4B, Figure 4C, and the Supplementary Materials).
For the starting point, there was no strong evidence of a bias toward positive or negative valence in the valence-based task (Pposterior(bias > 0.5) = 0.69), but there was strong evidence of a bias toward the self compared to the other in the identity-based task (Pposterior(bias > 0.5) = 1.00). Analyses of the non-decision time (t0) showed that non-decisional processes were longer for the good-self than the good-other, Pposterior(good-self > good-other) = 0.99 (see the Supplementary Materials for details).
Cross-Task Analysis
The cross-task analysis combined the data from the pilot and confirmatory studies to increase statistical power (Table 2). The valence effect in RT was robust across the perceptual-matching task and the identity-based categorization task for both the self (good-self vs. bad-self) and the other (good-other vs. bad-other), r = .454 and r = .398, respectively. The positive self-relevance effect (good-self vs. good-other) was also stable across the perceptual-matching and valence-based categorization tasks, r = .621, and across the perceptual-matching and identity-based categorization tasks, r = .575.
Table 2. Cross-task correlations of the valence and self-relevance effects.

| Contrast | Task 1 | Task 2 | d prime-ACC (r, 95% CI, BF10) | RT (r, 95% CI, BF10) |
| --- | --- | --- | --- | --- |
| Good-self vs. Bad-self | matching | valence-based | .194, [–.043, .410], 0.53 | .164, [–.074, .384], 0.164 |
|  | matching | id-based | .051, [–.186, .282], 0.16 | .454, [.245, .623], 306.7 |
| Good-other vs. Bad-other | matching | valence-based | .129, [–.109, .354], 0.26 | .273, [.041, .477], 1.94 |
|  | matching | id-based | .138, [–.099, .362], 0.28 | .398, [.179, .578], 44 |
| Good-self vs. Good-other | matching | valence-based | .271, [.038, .476], 1.86 | .621, [.452, .747], 1.5e+6 |
|  | matching | id-based | .326, [.099, .521], 6.2 | .575, [.394, .714], 8.8e+4 |
| Bad-self vs. Bad-other | matching | valence-based | –.082, [–.311, .156], 0.19 | .067, [–.171, .297], 0.17 |
|  | matching | id-based | –.117, [–.343, .121], 0.24 | .268, [.035, .473], 1.74 |
* matching task = the perceptual matching task; valence-based task = the valence-based categorization task; id-based task = the identity-based categorization task. r = correlation coefficient; 95%CI = the 95% confidence intervals of the correlation coefficient; BF10 = the Bayes factor results of hypothesis testing of the correlation (H0: no correlation).
General Discussion
Here we manipulated stimulus properties based on identity-relevant valence (i.e., good me, bad me, good other, bad other) to examine which facet of the self is crucial to the emergence of the self-prioritization effect. The results demonstrated a robust ‘good-self’ prioritization effect in perceptual decision-making, regardless of task type (i.e., perceptual-matching or shape-classification). Specifically, compared to other shape-label stimulus combinations, the good-self yielded the most potent benefits during decisional processing. An HDDM analysis further revealed that the good-self association facilitated performance by improving the efficiency of visual processing. These findings suggest that, as a core identity-related component, stimuli associated with the good-self are prioritized during perceptual decision-making (Sedikides & Strube, 1997).
One candidate explanation for the 'good-self' prioritization effect lies in the 'integrative self' view (Sui & Humphreys, 2015), whereby activation of the self-concept facilitates the binding of external stimuli (e.g., shapes) to established self-representations and subsequently leads to prioritized responses to self-related stimuli. It has remained unclear, however, which aspect of the self is critical to the emergence of this effect. In this respect, recent studies have suggested that the positive aspect of the self-concept may comprise the core self-representation (i.e., the true self; see De Freitas et al., 2017; Strohminger et al., 2017), consistent with the traditional positive self-bias account (Greenwald, 1980). Corroborating this viewpoint, the current results confirm that self-prioritization during perceptual matching is greater when stimuli are paired with the good-self than the bad-self.
A competing explanation is that the results reflect a congruency effect (i.e., positive valence is more congruent with the self, and negative valence with non-self). If this were the case, however, we would have observed faster responses to the congruent pairs (the good-self and the bad-other) than to the incongruent pairs (the bad-self and the good-other). This was not the case in our results (see the Supplementary Materials).
The results of the present study extend previous work on self-prioritization in a number of interesting ways. First, information uptake was faster when stimuli were paired with the good-self (vs. the bad-self or good-other), an effect that emerged in both the perceptual-matching and shape-categorization tasks. Going beyond perceptual matching, in which the self-relevance of stimuli must be considered to perform the task successfully (Sui et al., 2012), a good-self prioritization effect emerged even when only the shape of the stimuli was task-relevant. Second, the magnitude of self-prioritization was modulated by the aspect of the self-concept with which information was associated. That is, rather than the self-concept exerting a uniform facilitatory effect on stimulus processing, performance was enhanced when information was tagged with an identity-based aspect of the self, the good-self. Third, to date, the effects of self-enhancement have largely been confined to higher-level cognition, such as attributions (Pronin, 2008; Sedikides & Strube, 1997), social evaluation, and memory (X. Hu et al., 2015). In contrast, the current results provide evidence that self-enhancement also occurs during early stages of processing, notably perceptual decision-making. Finally, using computational modelling, we demonstrated that this self-prioritization (i.e., good-self prioritization) is underpinned by differences in the efficiency of visual processing (i.e., the rate of information uptake) during decision-making (Sui & Humphreys, 2015).
Implications and Limitations
The current study examined social associations using computer-based tasks from cognitive psychology, with the processes underlying performance decomposed using a mathematical model. These results have broad implications for understanding social behavior. For example, they may help to explain why healthy people become more sensitive to the positive identity-based self-concept during decision-making, since cognitive biases toward these self-concepts would be reinforced by enhanced perceptual processing. The effect may also be driven by people's conscious expectancies, in line with work in social psychology on self-enhancement and self-protection (Alicke & Sedikides, 2009; Dunning, Leuenberger, & Sherman, 1995; Trope & Pomerantz, 1998). That is, healthy people tend to view themselves more positively, or less negatively, to maintain psychological wellbeing. The present results may thus reflect different motivational constructs, being positive or being less negative, although these may lead to similar social behavior.
The generalizability of the current findings may be limited by the sample (young, healthy Chinese college students) that was tested (Yarkoni, 2019). Also, we assumed that participants had a positive self-concept (Hepper, Sedikides, & Cai, 2011), resulting in better performance for stimuli tagged with the good-self compared to the bad-self. This assumption should be examined in future research, particularly focusing on individuals with a negative self-concept. In addition, the present study demonstrated a spontaneous or natural self-prioritization effect in a laboratory setting. The ecological validity (e.g., contextually-relevant aspects of the self) of the findings should be tested in the future.
Conclusions
In two pre-registered studies, we found that geometric shapes associated with the good-self label were prioritized, but not shapes associated with the bad-self or good-other labels. These results indicate that only activation of the positive aspect of the self-concept enhanced perceptual decision-making, thereby providing evidence for a positive self-bias during early stages of information processing.
Data Accessibility Statement
We embrace the values of openness and transparency in science (www.researchtransparency.org/). We report how we determined the sample size, data exclusions (if any), manipulations, and all measures in the study, and refer to the project documentation in the OSF and GitHub (https://osf.io/4zvkm/, https://github.com/hcp4715/moralSelf_ddm). All raw data and the scripts for data analyses are also available (see the additional materials in the OSF and GitHub).
Footnotes
1. Importance judgments were included in the design because importance is a crucial variable in decision-making, alongside self-relevance and valence. To customize the relative importance of each shape, participants judged the importance of each of the four shapes at the beginning of the block, with the constraint that at least one shape was selected for the (un)important condition. This customized procedure, however, resulted in an unequal number of responses to the important and unimportant stimuli across individuals. The importance variable was therefore excluded from the data analysis to avoid response biases (see Deviations from pre-registration; the data are available at https://osf.io/4zvkm/).
2. A recent simulation suggests that correlations can be unstable when the sample size is small (see the blog post by Guillaume A. Rousselet: https://garstats.wordpress.com/2018/06/01/smallncorr/). We therefore report the results from the combined pilot and confirmatory samples; the two studies are also analyzed separately in the Supplementary Materials.
Acknowledgments
We thank Dr. Yinan Cao and Dr. Qiyang Nie for their help on the DDM analysis, and Mengdi Song and Yuqing Cai for data-collection assistance. This work was supported by a grant from the Leverhulme Trust (RPG-2019-010) to J.S.
Competing Interests
The authors have no competing interests to declare.
Author Contributions
C.-P. Hu and J. Sui designed the study; C.-P. Hu collected and analyzed the data of the pilot study; C.-P. Hu and Y. Lan collected and analyzed the data of the confirmatory study; C.-P. Hu drafted the manuscript; all authors read and approved the current manuscript.
Peer Review Comments
The author(s) of this paper chose the Open Review option, and the peer review comments can be downloaded at: http://doi.org/10.1525/collabra.301.pr