Do individual differences in attention to one’s own emotion relate to the way individuals interpret emotion in other people? For example, although the accuracy has been debated, people’s facial expressions are often perceived as providing information about their emotional state. Previous research on individual differences in attention to emotion has mostly examined how individuals categorize the emotions they believe other people to have, or the extent to which individuals have dysregulated attention to emotional information. However, less is known about other facets of emotional interpretation, such as perceptions of emotional intensity and genuineness. One hypothesis, suggested by previous literature on categorization, is that higher attention to one’s own emotion may result in greater differentiation of the cues perceived to relate to emotional intensity and genuineness. On the other hand, previous research on dysregulated attention to emotion suggests a second possibility: higher attention to emotion might instead result in heightened weighting of facial expressions that tend to be perceived to indicate emotional intensity and genuineness. Across two studies, participants rated their perception of emotional intensity in facial expressions and their perception of genuineness in Duchenne and non-Duchenne smiles. Contrary to both hypotheses, attention to emotion did not significantly relate to perceived emotional intensity or genuineness (Study 1). Furthermore, although an exploratory test suggested a significant relation between attention to emotion and the average perceived intensity of different emotion categories (Study 1), the relation was not significant in a conceptual replication study (Study 2). The current research highlights the possibility that future research may benefit from exploring an expanded range of biased perceptions of facial expressions in relation to individual differences in attention to emotion.

Does the tendency for individuals to pay attention to their own emotions predict their perceptions of emotion in other people? For example, research has shown that beginning in infancy and throughout adulthood, individuals tend to look to the expressions on other people’s faces for clues as to how they might be feeling (Parkinson & Manstead, 2015; Ruba & Repacholi, 2020; Walle et al., 2017; note that researchers debate whether the muscle movements that contribute to different facial expressions are valid and reliable cues of emotion, see Barrett et al., 2019). Research to date has mainly focused on whether individual differences in attention to emotion are associated with the categorization of emotion in other people’s facial expressions (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; Matthews et al., 2015) or the regulation of attention paid to emotional information (Bujanow et al., 2020; Coffey et al., 2003). However, studies have not yet examined whether attention to emotion relates to other aspects of the interpretation of emotion, such as the perception of emotional intensity in another person’s face or the perception that their facial expression indicates genuine emotion. Although no research has directly examined attention to emotion in relation to perceptions of intensity or genuineness, it is possible to extrapolate hypotheses from previous research on individual differences in attention to emotion and its relation to categorization and dysregulated attention. On the one hand, the previous research on categorization raises the possibility that individuals higher on attention to emotion may show greater differentiation of facial expressions perceived to indicate emotional intensity and genuineness.
On the other hand, the previous research on dysregulated attention to emotional information raises the possibility that individuals higher on attention to emotion may show heightened weighting of the meaning of facial expressions they perceive to indicate emotional intensity and genuineness in other people. The current research builds on previous work by testing hypotheses about the association between attention to emotion and the interpretation of emotional intensity and genuineness from facial expressions.

Attention to emotion is the degree to which one notices and values one’s own feelings (Salovey et al., 1995). Individuals who are higher on attention to emotion monitor their own emotions more frequently (e.g., “I often think about my feelings”, “I pay a lot of attention to how I feel”). Individuals higher on attention to emotion are also more likely to be guided by their emotions (e.g., “feelings give direction to life”) and are significantly more likely to use their mood as a basis for judgment, when deemed relevant (Gasper & Clore, 2000).

Why isn’t more known about the relation of intrapersonal focus on emotion to interpersonal focus on emotion? First, few studies have related individual differences in attention to emotion to interpersonal emotion perception processes. Second, the few studies that have been conducted have primarily focused on just one facet of interpersonal emotional interpretation: emotion categorization (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; Matthews et al., 2015). Yet individuals attend to a variety of other facets when interpreting other people’s facial expressions as emotional. For example, research indicates that people may look to faces to judge the intensity of someone’s emotion (Edgar et al., 2012; Ekman et al., 1987; Fischer et al., 2018) and to judge whether a target’s facial expression reflects a genuinely felt emotion (i.e., internally felt) rather than a facial expression produced to adhere to social norms (Bernstein et al., 2008; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007; but see Krumhuber & Manstead, 2009). Therefore, more research is needed to understand whether individual differences in attention to emotion extend to the interpersonal interpretation of facets of emotion beyond categorization.

Although no literature has directly examined the relation between individual differences in attention to emotion and perceptions of emotional intensity and genuineness in facial expressions, the existing literature raises two possible hypotheses (see Table 1). On one hand, individuals higher on attention to emotion may show greater differentiation of cues perceived to relate to emotional intensity and genuineness. Previous research has often concluded that individuals higher on attention to emotion are better at categorizing the emotional expressions of others (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; but see Matthews et al., 2015). For example, individuals higher in attention to emotion show more correspondence between their ratings of a target’s emotion and the target’s rating of their own emotion (Eckland & English, 2019). Furthermore, individuals higher in attention to emotion are better at using the correct emotional state to label another person’s non-verbal behavior (Fernández-Abascal & Martín-Díaz, 2019, Study 2). Similarly, individuals higher on attention to emotion act on their emotion in a more judicious manner: they are more likely to base judgments on their emotion unless the emotion has been deemed irrelevant (Gasper & Clore, 2000). The association between higher levels of attention to emotion and greater interest in differentiating emotion as relevant or irrelevant may also operate when individuals are trying to interpret the emotion they perceive in another’s facial expression. That is, individuals higher in attention to emotion may be more likely to differentiate facial expressions they believe reflect genuine emotion from those that they do not. Therefore, one possibility is that higher attention to emotion is related to more differentiated interpretation of emotional intensity and genuineness in another person’s facial expressions.

Table 1. Previous Studies Relating Individual Differences in Attention to Emotion to Emotion Categorization and Dysregulated Attention
| Study Reference | Sample Size | Attention to Emotion Measure(s) | Emotion Perception Measure(s) | Finding for Higher Attention to Emotion | Effect Size |
| --- | --- | --- | --- | --- | --- |
| Eckland & English, 2019 | 106 | Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) | Empathic accuracy task (negative to positive emotion scale) | Significantly greater categorization accuracy (controlling for race) | b = .09, p = .02 |
| Fernández-Abascal & Martín-Díaz, 2019 (Study 2) | 646 | Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) | Mini Profile of Non-Verbal Sensitivity (statements about emotion, e.g., anger, affection, gratitude, and intention) | Significantly greater categorization accuracy | r = .13, p = .001 |
| Matthews et al., 2015 | 129 | Factor analysis using the TMMS (Salovey et al., 1995) and 4 other measures | Micro Expression Training Tool (anger, contempt, disgust, fear, happiness, surprise) | No significant differences in categorization accuracy | r = .09, ns |
| Matthews et al., 2015 (cont.) | | | Visual search task with images of emotional faces (anger, fear, happiness, sadness, surprise) | No significant differences in emotional attention | r = .02, ns |
| Bujanow et al., 2020 | 91 | WEFG (Lischetzke et al., 2001) | Gaze entry time for emotional (distress, comfort, complicity-joy) and neutral images (assessed using eye tracking) | Significantly greater attention to emotional over neutral stimuli | F(1, 82) = 4.02, p < .05 |
| Coffey et al., 2003 | 129 | Factor analysis using the TMMS (Salovey et al., 1995) and 2 other measures | Emotional Stroop task (positive, negative, and neutral words) | Significantly greater attention to emotional over neutral stimuli | r = .15, p < .05 |

Alternatively, previous research raises the possibility that individuals higher on attention to emotion may overweight the emotional intensity and genuineness they believe they see in other people’s facial expressions. Individuals higher on attention to emotion find emotional information especially salient (Bujanow et al., 2020; Coffey et al., 2003; but see Matthews et al., 2015) and salient cues tend to be interpreted as more emotionally intense (Mrkva et al., 2019; Mrkva & Van Boven, 2020). Furthermore, attention to emotion has been positively correlated with greater self-reported emotional intensity (Gohm & Clore, 2000, 2002; Thompson et al., 2009). Individuals higher on attention to emotion also highly value emotional information (Gasper & Clore, 2000; Salovey et al., 1995). If emotional information is deemed important enough to monopolize the attention of individuals with greater attention to emotion, then they may be more likely to perceive any signal they believe to be emotional as relatively intense and genuine. Therefore, another possibility is that individuals higher on attention to emotion may place greater weight on facial cues perceived to signal emotion and overinterpret their intensity and genuineness.

More research is needed to expand current understanding of how attention to one’s own emotions extends (or does not) to the interpretation of emotion in others, and whether this interpretation involves greater differentiation of facial expressions or an overweighting of the facial expressions believed to indicate emotion in others. The current research addresses this gap by testing competing hypotheses on two unexplored facets of the way in which people tend to interpret emotion from facial expressions: emotional intensity and genuineness. If attention to emotion relates to greater differentiation, then individuals higher on attention to emotion should be significantly better at distinguishing between facial expressions that tend to be perceived to differ in emotional intensity and genuineness. Alternatively, if attention to emotion relates to heightened weighting of the emotion believed to be present in others’ facial expressions, then individuals higher on attention to emotion may show increased emotional intensity ratings and less distinction between facial expressions perceived to be genuine or fake.

Study 1 tested whether individual differences in attention to emotion are significantly related to perceptions of emotional intensity and genuineness in other people’s facial expressions. If higher attention to emotion is associated with greater differentiation, then individuals higher on attention to emotion should be significantly more likely to distinguish between manipulated levels of emotional intensity and significantly more likely to perceive Duchenne smiles (i.e., include both lip and upper cheek movement) as more genuine than non-Duchenne smiles (i.e., do not include upper cheek movement) (Bernstein et al., 2008; Ekman, 2006; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007). In contrast, if higher attention to emotion is associated with heightened weighting of the emotion perceived in a facial expression, then individuals higher on attention to emotion should be significantly less likely to distinguish between levels of emotional intensity, perceive facial expressions as significantly more emotionally intense overall, and be significantly less likely to perceive Duchenne smiles as more genuine than non-Duchenne smiles. Finally, a pre-registered exploratory analysis also tested whether attention to emotion relates to differences in perceptions of emotional intensity across facial expressions perceived to be associated with different emotion categories (Ekman et al., 1987; Keltner et al., 2019).

Participants

Pre-registered analyses focused on 256 participants (https://osf.io/6sx48/) (Mage = 25.7 years, SD = 8.1 years, range = 18-72 years; 60.5% male, 37.9% female, 1.2% other, .4% preferred not to say; 75.8% White, 15.2% Hispanic or Latinx, 4.3% Asian, 1.6% Black or African American, 1.6% other, 1.6% multiracial). Participants were compensated with $2.25 for their time. As pre-registered, 46 additional participants were excluded from analyses for failing an attention check or bot check (n = 17), withdrawing consent after completing the study (n = 1), reporting their age as below 18 (n = 1), missing data (n = 20), or falling above or below 3 standard deviations from the sample mean on critical measures or tasks (n = 7). The remaining 256 participants met the target number of participants established from an a priori power analysis (G*Power 3.1; Faul et al., 2009) which aimed for 95% power to detect an effect size of f = .10 at p < .05 in a mixed ANOVA (2-level between subject variable, 4-level within subject variable). The mixed ANOVA was used for estimating power because the 2-level between subject variable has equivalent degrees of freedom to the continuous individual difference variable (attention to emotion) used for analysis in the present study. The effect size was estimated taking into account the small effect sizes common for individual differences (Hall et al., 2009) in the absence of relevant previous research.

Procedure

Overview. Participants completed the study as an online survey created in Qualtrics and administered through the online platform Prolific (Peer et al., 2017). After providing informed consent, participants completed the emotional intensity and emotional genuineness tasks (task order randomized across participants), questionnaires (items from each measure intermixed in random order), demographic questions, and an open-ended check for automated bots. The survey took approximately 20 minutes to complete. For more information on the procedure and a number of additional exploratory measures and analyses, see https://osf.io/6sx48/ and the supplement.

Tasks and Questionnaires

Emotional intensity task. Participants viewed 48 randomly presented facial expression images (16 images each of faces labeled as angry, happy, or sad in the Radboud Faces Database (Langner et al., 2010), at four emotional intensity levels: 0%, 25%, 50%, 75% emotion). For each image, participants rated the perceived emotional intensity of the target emotion (e.g., for angry face morphs, participants were asked “How angry does this person appear?”) using a 5-point scale (1 = not at all, 3 = somewhat, 5 = extremely). Note that for each trial, if participants did not interpret the facial expression to indicate its emotion label from the Radboud Faces Database, they could rate it as “1 = not at all.”

Front-facing facial expression images (Radboud Faces Database: Langner et al., 2010) were used to create morphed images using FantaMorph software (version 5.6.2 Standard, Abrosoft, Beijing, China, http://www.fantamorph.com/). Images at four levels of emotional intensity (0%, 25%, 50%, 75% emotion) were created by morphing each of the three emotional expressions (anger, happiness, sadness) with the neutral expression image for four Caucasian models (two male and two female). Models were matched on attractiveness and physical features (e.g., hair color). The assigned emotion labels included for each face in the database reflect rater consensus (Mean of 98.63%, SD = 2.58 of 276 raters: Langner et al., 2010; supplemental material). This procedure was selected to be consistent with previous studies that have used face morphs to manipulate emotional intensity for facial expressions theorized to be associated with different emotions to study perceived emotion from facial behavior (Calvo et al., 2016; Marneweck et al., 2013; Petrides & Furnham, 2003; Schoth & Liossi, 2017).
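The intensity levels above can be illustrated with a toy linear blend between a neutral and a full-emotion image. This is only a sketch: FantaMorph performs feature-based warping between corresponding facial landmarks rather than a plain pixel average, and the pixel values below are invented for illustration.

```python
# Illustrative sketch only: a pixel-wise linear blend approximating the
# 0%/25%/50%/75% emotion morph levels. All image data here are toy values;
# the study's actual morphs were produced with FantaMorph, which also
# warps facial geometry.

def blend_pixels(neutral, emotional, weight):
    """Mix two equal-length pixel sequences; weight = proportion of emotion."""
    if len(neutral) != len(emotional):
        raise ValueError("images must have the same number of pixels")
    return [round((1 - weight) * n + weight * e) for n, e in zip(neutral, emotional)]

# The four intensity levels used in the study.
INTENSITY_LEVELS = [0.00, 0.25, 0.50, 0.75]

neutral_face = [120, 130, 140]   # toy grayscale pixel values
angry_face = [200, 90, 60]

morphs = {level: blend_pixels(neutral_face, angry_face, level)
          for level in INTENSITY_LEVELS}
print(morphs[0.0])   # [120, 130, 140] -- identical to the neutral image
print(morphs[0.5])   # [160, 110, 100] -- halfway blend
```

At 0% the morph reproduces the neutral image exactly, which is why the 0% condition functions as a no-emotion baseline in the rating task.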

Before the main task began, participants practiced the task using 5 randomly presented images (i.e., two additional Caucasian models, one male and one female, displaying the three target emotions at varying levels of emotional intensity) not included in the main task.

Emotional genuineness task. Genuineness was manipulated by using images of Duchenne and non-Duchenne smiles. The Duchenne smile engages the orbicularis oculi along with the zygomatic major muscle to produce crow’s feet at the corners of the eyes and outwardly pulled lip corners, while the non-Duchenne smile lacks crow’s feet and only pulls the corners of the lips upward (Ekman, 2006; Ekman et al., 1988). The Duchenne smile is perceived to be associated with genuinely felt enjoyment, whereas the non-Duchenne smile appears posed and is instead associated with social motivation (Ekman, 2006; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007). The current study is concerned with understanding people’s interpretation of facial expressions as emotional rather than whether the facial expression accurately reflects emotion in another person (e.g., researchers debate whether Duchenne smiles indicate internally felt enjoyment: Krumhuber & Manstead, 2009). This procedure was selected to be consistent with previous research which has examined the perception of emotion from facial expressions as either genuine or fake (Bernstein et al., 2008; Frank et al., 1993; Miles & Johnston, 2007).

Participants viewed 10 randomly presented images of smiling faces (5 Duchenne and 5 non-Duchenne) and were asked to “Please choose the statement that best characterizes this person’s emotion.” Participants chose between two options: “This person is faking, only pretending to be happy” or “This person is genuine, truly feeling happy.”

The smiling faces were taken from the Radboud Faces Database (Langner et al., 2010), publications (Del Giudice & Colle, 2007; Niedenthal et al., 2010; Perron et al., 2017), and internet sources (BBC science, Wired). Images were sorted into Duchenne and non-Duchenne sets by one of the co-authors (JSB is a certified FACS coder). Across sets, faces were matched on physical features (e.g., hair color). Before the main task began, participants practiced the task using 4 randomly presented images (2 Duchenne and 2 non-Duchenne) not included in the main task.

Attention to emotion. Participants completed the 13-item attention subscale (M = 3.70, SD = .64, α = .87) of the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995). In this scale, participants responded to statements about the extent to which they attend to their emotions (e.g., “I pay a lot of attention to how I feel”) and allow their emotions to guide their actions (e.g., “Feelings give direction to life”). Participants responded using a 5-point Likert scale (1 = strongly disagree, 2 = somewhat disagree, 3 = neither agree nor disagree, 4 = somewhat agree, 5 = strongly agree). Attention to emotion was mean centered in all ANCOVA analyses.

Data quality checks. Participants completed a one-question attention check (“I have not been paying attention during this study. Please select ‘Strongly disagree’”) presented with the personality questionnaire items. Participants also completed an open-ended check to catch automated bots (“Briefly tell us about the last meal you ate”).

Pre-registered Analyses for Hypothesis Testing

Individual differences in attention to emotion are not associated with distinguishing between emotional intensity conditions. A pre-registered ANCOVA (https://osf.io/6sx48/) tested whether individual differences in attention to emotion significantly moderated the effect of emotional intensity condition in the intensity rating task. Previous literature raised the possibilities that higher attention to emotion would relate to either more (i.e., greater differentiation) or less (i.e., overweighting the emotional meaning of facial expressions) distinction in the interpretation of emotional intensity in facial expressions. Contrary to both hypotheses, individual differences in attention to emotion did not significantly interact with emotional intensity level to predict perceived emotional intensity (F(3, 762) = 1.26, p = .29; see Figure 1). That is, individuals higher on attention to emotion were no more or less likely than individuals lower on attention to emotion to distinguish between the experimentally manipulated levels of emotional intensity. The main effect of intensity level on emotional intensity ratings was significant (F(3, 762) = 2338.14, p < .001), but the main effect of attention to emotion on emotional intensity ratings was not (F(1, 254) = 2.86, p = .09).

Figure 1. Emotional Intensity Rating as a Function of Attention to Emotion and Intensity Level

Note. Shaded areas represent 95% confidence intervals.


A post hoc Akaike Information Criterion (AIC) model selection analysis evaluated whether including the hypothesized interaction term improved the model’s fit to the study data. Five models were compared: intensity level alone, attention to emotion alone, both predictors, both predictors plus their interaction, and the null model. The best-fit model (carrying 87% of the cumulative model weight) included only intensity level (AIC = 1283.15). The model that added the hypothesized interaction term (alongside emotional intensity and attention to emotion) yielded AIC = 1302.40; the difference from the best-fit model (∆ = 19.25) is conventionally interpreted as indicating essentially no support for the more complex model (Anderson, 2008; Burnham & Anderson, 2004).
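The ∆AIC and model-weight logic can be sketched from the reported AIC values. Comparing only the two models whose AICs appear in the text (of the five evaluated) is a simplification for illustration; with all five AICs, the weights would be spread across the full candidate set.

```python
# Sketch of delta-AIC and Akaike weights from reported AIC values.
# Akaike weight for model i: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2).
import math

def akaike_weights(aics):
    """Return (deltas, weights): delta-AIC and relative model support."""
    best = min(aics)
    deltas = [a - best for a in aics]
    raw = [math.exp(-d / 2) for d in deltas]
    total = sum(raw)
    return deltas, [r / total for r in raw]

# Reported values: intensity-only model vs. the model adding the interaction.
deltas, weights = akaike_weights([1283.15, 1302.40])
print(deltas[1])    # ~19.25: conventionally "essentially no support"
print(weights[0])   # ~1.0: nearly all weight on the simpler model
```

A ∆AIC of roughly 19 corresponds to a relative likelihood of exp(-19.25/2) ≈ 7 × 10⁻⁵ for the interaction model, which is why it carries effectively no model weight.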

Individual differences in attention to emotion do not significantly predict average emotional intensity ratings. A second set of pre-registered analyses targeted the hypothesis that individual differences in attention to emotion relate to globally inflated perceptions of emotional intensity and also found no significant support. As noted above, the planned ANCOVA did not find a significant main effect of attention to emotion on perceptions of emotional intensity (F(1, 254) = 2.86, p = .09). Furthermore, pre-registered correlational analyses found a significant positive relation between attention to emotion and emotional intensity ratings only for morphs that were 50% emotionally intense (r = .14, p = .02); the correlations at 0% (r = .04, p = .50), 25% (r = .04, p = .51), and 75% (r = .10, p = .12) were not statistically significant.

Individual differences in attention to emotion are not significantly associated with genuineness distinctions between Duchenne and non-Duchenne smiles. A pre-registered ANCOVA tested whether individual differences in attention to emotion significantly moderated the effect of smile type in the genuineness rating task. The results did not support either hypothesis suggested by previous literature: Attention to emotion neither heightened nor blurred distinctions of perceived genuineness between Duchenne and non-Duchenne smiles. That is, attention to emotion did not significantly interact with smile type to predict genuineness ratings (F(1, 254) = .78, p = .38; see Figure 2). Individuals higher on attention to emotion were no more or less likely than individuals lower on attention to emotion to rate Duchenne and non-Duchenne smiles as differing in perceived genuineness.

Figure 2. Genuineness Rating as a Function of Attention to Emotion and Smile Type

Note. Shaded areas represent 95% confidence intervals.


The main effect of smile type on genuineness ratings was significant (F(1, 254) = 531, p < .001), but the main effect of attention to emotion on genuineness ratings was not significant (F(1, 254) = 1.14, p = .29). Furthermore, pre-registered one sample t-tests found that genuineness ratings of both the Duchenne (M = 3.6, SD = .97, t(255) = 18.1, p < .001) and non-Duchenne (M = 1.6, SD = 1.2, t(255) = -12.8, p < .001) image sets significantly differed from chance.
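The one-sample t-tests above can be reconstructed from the reported summary statistics. The chance value of 2.5 is an assumption: with 5 binary genuine/fake judgments per smile type, guessing would yield 2.5 “genuine” responses on average.

```python
# Sketch of a one-sample t-test recomputed from reported summary statistics:
# t = (sample mean - chance value) / (SD / sqrt(n)).
# The chance value mu = 2.5 is an assumption (5 binary judgments per set).
import math

def one_sample_t(mean, sd, n, mu):
    """One-sample t statistic from summary statistics."""
    return (mean - mu) / (sd / math.sqrt(n))

t_duchenne = one_sample_t(3.6, 0.97, 256, 2.5)
t_non_duchenne = one_sample_t(1.6, 1.2, 256, 2.5)
print(round(t_duchenne, 1))       # 18.1, matching the reported value
print(round(t_non_duchenne, 1))   # -12.0 (the reported -12.8 likely
                                  # reflects unrounded means/SDs)
```

That the recomputed Duchenne t matches the reported 18.1 supports the assumed chance value; the small discrepancy for the non-Duchenne set is consistent with rounding in the reported mean and SD.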

A parallel post hoc Akaike Information Criterion (AIC) model selection analysis evaluated whether including the hypothesized interaction term improved the model’s fit to the study data. Five models were compared: smile type alone, attention to emotion alone, both predictors, both predictors plus their interaction, and the null model. The best-fit model (carrying 87% of the cumulative model weight) included only smile type (AIC = 1529.27). The model that added the hypothesized interaction term (alongside smile type and attention to emotion) yielded AIC = 1536.84; the difference from the best-fit model (∆ = 7.57) is conventionally interpreted as indicating considerably less support for the more complex model (Anderson, 2008; Burnham & Anderson, 2004).

Pre-registered Exploratory Analyses

Individual differences in attention to emotion significantly relate to intensity ratings across emotion categories. A pre-registered yet exploratory ANCOVA found that individual differences in attention to emotion significantly interacted with emotion category to predict ratings in the emotional intensity task (F(2, 508) = 9.46, p < .001; see Figure 3). Individuals higher on attention to emotion tended to perceive angry (βangry = .13, t = 2.94, p = .003) and sad (βsad = .10, t = 2.24, p = .03) faces as more emotionally intense than individuals lower on attention to emotion. Perceived emotional intensity in happy faces (βhappy = -.04, t = -.93, p = .35) was not significantly related to individual differences in attention to emotion.

Figure 3. Emotional Intensity Rating as a Function of Attention to Emotion and Emotion in Study 1

Note. Shaded areas represent 95% confidence intervals.


Additionally, the main effect of emotion category on emotional intensity ratings (F(2, 508) = 97.15, p < .001) and the interaction between emotion category and intensity level on intensity ratings (F(6, 1524) = 39.41, p < .001) were significant. Follow-up pairwise t-tests found that the mean emotional intensity ratings for the angry (M = 2.91, SD = .47), sad (M = 3.09, SD = .48), and happy (M = 3.29, SD = .46) faces were all significantly different from one another (happy/angry: t(255) = 12.45, p < .001; sad/angry: t(255) = 7.97, p < .001; happy/sad: t(255) = 6.84, p < .001). That is, participants rated the emotion categories as having significantly different emotional intensities (see supplement for more information on the exploratory ANCOVA).

Study 2 built on Study 1 by conducting a conceptual replication of the pre-registered exploratory finding that higher attention to emotion is associated with perceptions of higher emotional intensity for angry and sad faces. In Study 1, there were significant differences in emotional intensity between emotion categories. Therefore, the stimuli in Study 2 were modified to rule out the possibility that the association between attention to emotion and intensity rating was driven by different average levels of intensity (rather than category). Specifically, the stimuli were selected so that the average level of perceived emotional intensity was equivalent across emotion categories.

Participants

Pre-registered analyses focused on 254 participants (https://osf.io/5guxb/) (Mage = 25.2 years, SD = 8.0 years, range = 18-77 years; 61.8% male, 37.4% female, .4% other, .4% preferred not to say; 87.0% White, 5.5% Hispanic or Latinx, 2.4% Asian, .8% Black or African American, 2.0% other, 2.4% multiracial). Participants were compensated with $1.15 for their time. As pre-registered, 13 additional participants were excluded from analyses for failing the attention check or bot check (n = 9), declining to provide consent (n = 1), or falling above or below 3 standard deviations from the mean on critical measures or tasks (n = 3). The remaining 254 participants met the target number of participants established from an a priori power analysis (G*Power 3.1; Faul et al., 2009) which aimed for 95% power to detect an effect size of f = .10579 at p < .05 in a mixed ANOVA (2-level between subject variable, 3-level within subject variable). The mixed ANOVA was used for estimating power because the 2-level between subject variable has equivalent degrees of freedom to the continuous individual difference variable (attention to emotion) used for analysis in the present study. The interaction effect between attention to emotion and emotion type found in Study 1 was used to estimate effect size.

Procedure

Overview. Participants completed the study as an online survey created in Qualtrics and administered through the online platform Prolific (Peer et al., 2017). After providing informed consent, participants completed the emotional intensity task, questionnaires (items from each measure intermixed in random order), demographic questions, and an open-ended check for automated bots. The survey took approximately 10 minutes to complete. For more information on the procedure and a number of additional exploratory measures and analyses, see the pre-registration (https://osf.io/5guxb/) and the supplement.

Emotional intensity task. As in Study 1, participants rated the emotional intensity of 48 randomly presented facial expression images (16 images each of faces labeled as angry, happy, or sad in the Radboud Faces Database (Langner et al., 2010) at varying emotional intensity levels (0%, 25%, 50%, 75% emotion)). For each image, participants rated the perceived emotional intensity of the target emotion (e.g., for angry face morphs, participants were asked “How angry does this person appear?”) using a 5-point scale (1 = not at all, 3 = somewhat, 5 = extremely). As in Study 1, if participants did not interpret the facial expression to indicate its emotion label from the Radboud Faces Database, they could rate it as a “1 = not at all.”

Front-facing facial expression images (Radboud Faces Database: Langner et al., 2010) were used to create a new set of morphed images using FantaMorph software (version 5.6.2 Standard, Abrosoft, Beijing, China, http://www.fantamorph.com/) in order to control for differences in average perceived emotional intensity between the angry, happy, and sad expression images used in creating the morphs. Images for three emotion categories (anger, happiness, sadness) were created by morphing each emotional image with the neutral-expression image for four Caucasian models (two male and two female, selected from a pool of 8 models) at four levels of emotional intensity (0%, 25%, 50%, 75% emotion). As in Study 1, this procedure was selected to be consistent with previous studies that have used face morphs to manipulate emotional intensity for facial expressions associated with different emotions to study perception of emotion from facial behavior (Calvo et al., 2016; Marneweck et al., 2013; Petrides & Furnham, 2003; Schoth & Liossi, 2017).
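At the pixel level, the intensity manipulation can be thought of as interpolating between a neutral and a full-emotion image. The toy sketch below is only a cross-dissolve; FantaMorph additionally warps facial geometry between matched landmarks, so this illustrates the 0%-75% parameterization rather than the actual morphing algorithm:

```python
import numpy as np

def morph_level(neutral: np.ndarray, emotional: np.ndarray, level: float) -> np.ndarray:
    """Pixel-wise blend at a given emotion level (0.0 = neutral, 1.0 = full emotion).
    Real morphing software also warps geometry; this is a crude illustration."""
    return (1.0 - level) * neutral + level * emotional

# The four intensity levels used in the study
levels = [0.0, 0.25, 0.50, 0.75]
```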

Standardized emotional intensity ratings from n = 276 raters were computed for the original images in each emotion category (Mangry = 4.07, Mhappy = 3.92, Msad = 4.02) (Langner et al., 2010; supplemental material). Paired sample t-tests on these means showed no significant differences in average perceived emotional intensity across emotion categories (angry/sad: t(3) = .21, p = .85; sad/happy: t(3) = .48, p = .66; angry/happy: t(3) = .68, p = .54). Models were matched on attractiveness and physical features (e.g., hair color). A mean of 96.65% (SD = 4.36) of raters (n = 276) reported perceived consensus on the target emotion in the images used to create the morphs (Langner et al., 2010; supplemental material).
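The normative-ratings check above can be reproduced with a paired t-test across the four models, pairing each model's mean rating in one emotion category with its rating in another. The arrays below are illustrative values (not the actual Radboud norms); with four models, the degrees of freedom come out to n - 1 = 3, matching the t(3) statistics in the text:

```python
from scipy import stats

# Hypothetical per-model mean intensity ratings (one value per model, n = 4)
angry = [4.10, 4.00, 4.20, 3.98]
sad = [4.05, 4.12, 3.95, 3.96]

# Pairs ratings within each model; a nonsignificant p suggests the categories
# were matched on average perceived intensity
result = stats.ttest_rel(angry, sad)
```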

Before the main task began, participants practiced the task using 5 randomly presented images (i.e., two additional Caucasian models, one male and one female, displaying the three target emotions at varying levels of emotional intensity) not included in the main task.

Attention to emotion. Participants completed the 13-item attention subscale (M = 3.75, SD = .56, α = .84) of the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995). In this scale, participants responded to statements about the extent to which they attend to their emotions (e.g., “I pay a lot of attention to how I feel”) and allow their emotions to guide their actions (e.g., “Feelings give direction to life”). Participants responded using a 5-point Likert scale (1 = strongly disagree, 2 = somewhat disagree, 3 = neither agree nor disagree, 4 = somewhat agree, 5 = strongly agree). Attention to emotion was mean centered in all ANCOVA analyses.

Data quality checks. The same data quality checks from Study 1 were used for Study 2. That is, participants completed a one-question attention check (“I have not been paying attention during this study. Please select ‘Strongly disagree’”) presented with the personality questionnaire items. Participants also completed an open-ended check to catch automated bots (“Briefly tell us about the last meal you ate”).

Pre-registered Analyses for Hypothesis Testing

Individual differences in attention to emotion are not associated with intensity ratings for different emotions. A pre-registered ANCOVA (https://osf.io/5guxb/) tested whether individual differences in attention to emotion significantly moderated the effect of emotion condition in the intensity rating task. Contrary to the hypothesis that attention to emotion relates to interpretations of emotional intensity in angry and sad faces (and the significant exploratory findings from Study 1), individual differences in attention to emotion did not significantly interact with emotion condition on intensity ratings (F(2, 504) = .58, p = .56; see Figure 4). That is, individuals higher on attention to emotion were no more likely to perceive angry and sad faces as significantly more emotionally intense than individuals lower on attention to emotion.
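The pre-registered ANCOVA can be approximated in open-source tooling with a mixed model: a random intercept per participant stands in for the repeated-measures error structure, and attention to emotion enters as a mean-centered continuous covariate. The data below are synthetic, so this sketches the model specification rather than reproducing the reported F-tests:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 60  # synthetic participants, not the study sample

# Long format: one mean intensity rating per participant x emotion category
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n), 3),
    "emotion": np.tile(["angry", "happy", "sad"], n),
    "attention": np.repeat(rng.normal(3.75, 0.56, n), 3),
})
df["attention_c"] = df["attention"] - df["attention"].mean()  # mean-center, as in the study
df["rating"] = (3.0 + np.repeat(rng.normal(0, 0.3, n), 3)     # participant intercepts
                + rng.normal(0, 0.4, len(df)))                 # trial-level noise

# The emotion x attention interaction is the pre-registered test of moderation
model = smf.mixedlm("rating ~ C(emotion) * attention_c", df, groups="pid").fit()
```

With null synthetic data, the interaction coefficients should hover near zero, mirroring the pattern of results reported for the actual sample.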

Figure 4. Emotional Intensity Rating as a Function of Attention to Emotion and Emotion in Study 2

Note. Shaded areas represent 95% confidence intervals.


The main effect of attention to emotion was not significant (F(1, 252) = .09, p = .77), but the main effect of emotion category on emotional intensity ratings was significant (F(2, 504) = 125, p < .001). Follow-up pairwise t-tests found that the mean emotional intensity ratings for the angry (M = 3.03, SD = .49), happy (M = 3.24, SD = .42), and sad (M = 3.51, SD = .47) faces were all significantly different from one another (sad/angry: t(253) = 18.05, p < .001; sad/happy: t(253) = 8.67, p < .001; happy/angry: t(253) = 6.25, p < .001).

A post hoc Akaike Information Criterion (AIC) model selection analysis was conducted to evaluate the utility of including the hypothesized interaction term in the model to explain the data in the study sample. Models which included emotion category, attention to emotion, their combination, their combination along with the hypothesized interaction term, and the null model were evaluated. The best-fit model (carrying 96% of the cumulative model weight) was the model which only included emotion category (AIC = 866.35). The model that included the hypothesized interaction term (in addition to emotion category and attention to emotion) had an AIC of 884.04 (∆ = 17.69 relative to the best-fit model), a difference conventionally interpreted as providing essentially no support for the more complex model when modeling data from the study sample (Anderson, 2008; Burnham & Anderson, 2004).
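The ΔAIC and Akaike-weight arithmetic is straightforward to reproduce. Only two of the five candidate models' AICs are reported in the text, so the two-model example below illustrates the computation rather than recovering the 96% cumulative weight:

```python
import math

# AIC values reported in the text
aic = {"emotion_only": 866.35, "emotion_attention_interaction": 884.04}

best = min(aic.values())
delta = {k: v - best for k, v in aic.items()}                  # delta-AIC vs. best model
rel = {k: math.exp(-0.5 * d) for k, d in delta.items()}        # relative likelihoods
weights = {k: r / sum(rel.values()) for k, r in rel.items()}   # Akaike weights
```

With all five candidate AICs, the same weights computation would yield the 96% figure reported for the emotion-only model.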

Are individuals who are especially attentive to their own emotion more likely to differentiate the emotion they believe they see in others’ faces or place greater weight on the intensity and genuineness of the emotion they believe they see in others’ faces? The results from the current research were not consistent with either of the hypotheses suggested by prior research on the relation between individual differences in attention to emotion and categorization or dysregulated attention. That is, Study 1 did not find that individual differences in attention to emotion significantly moderated perceived emotional intensity or genuineness ratings. Whereas a pre-registered, exploratory finding from Study 1 raised the possibility that attention to emotion may relate to differences in the perception of emotional intensity across emotion categories, this result was not replicated in Study 2. Taken together, the present and prior research suggests that future research may build a more complete understanding of the association between individual differences in attention to emotion and emotional interpretation of others’ faces by expanding the focus to multiple biases in facial interpretation (Barrett et al., 2019).

The present work leaves open the question of whether individual differences in attention to emotion relate to increased or decreased susceptibility to biased emotional interpretations of facial behavior. Perceptions of emotion in other people’s faces are not always accurate and oftentimes perceivers make systematically biased interpretations depending on target characteristics (Barrett et al., 2019). For example, individuals show greater agreement in emotion categorization of faces belonging to their ingroup than outgroups (Barrett et al., 2019; Elfenbein & Ambady, 2002). Individuals may also believe they see more (Hugenberg & Bodenhausen, 2003; Shapiro et al., 2009) or less (Kommattam et al., 2019; Mende-Siedlecki et al., 2021) emotion in outgroup versus ingroup faces. Future research might therefore investigate whether individual differences in attention to emotion relate to the extent to which group information is factored into emotional interpretations of facial movement.

In addition to investigating relations between individual differences in attention to emotion and the type of information used to form interpretations of others’ faces, future research may also investigate situations in which the facial behavior of others is fleeting. Research on heuristics shows that individuals tend to make more biased judgments under time pressure (e.g., L. Guo et al., 2017). If susceptibility to bias about the meaning of others’ facial behavior is associated with individual differences in attention to emotion, then individual differences in attention to emotion may be more likely to show significant effects on emotional interpretations formed from a glimpse of a face (rather than from unlimited time to study the face, as participants had in the current research). In everyday life, most facial behavior which tends to be interpreted as emotional only lasts between 0.5 and 4 seconds (Ekman, 2006; Ekman & Friesen, 1982; Frank et al., 1993). Therefore, individual differences in attention to emotion may show significant associations with interpretation biases that were not captured in the present work using untimed examination of static faces. Future research will benefit from investigating the role of attention to emotion in emotional interpretation when individuals have more or less time to form interpretations of facial movement.

As future research expands its focus on the ways in which individual differences in attention to emotion may shape the interpretation of facial movement, it might also consider a new hypothesis that was not considered in the pre-registration of the current research. That is, greater attention to emotion may reduce the extent to which people form interpretations of others’ faces as emotional. For example, research on neuroticism suggests that individuals who tend to be preoccupied with their own negative emotions also tend to have reduced perspective taking and empathy (Q. Guo et al., 2018; Kim & Han, 2018). It may be that individuals higher on attention to emotion are consumed by their focus on their own feelings and less likely to form interpretations of others as experiencing emotion. Furthermore, the association between greater attention to emotion and the lower likelihood of forming emotional impressions about others’ faces may be especially evident when interest in the emotions of others is likely to be low (e.g., when targets are from an outgroup: Kommattam et al., 2019) or when low interest results in missing others’ facial movement altogether (which can be fleeting: Ekman, 2006; Ekman & Friesen, 1982; Frank et al., 1993). Therefore, future research should also consider the hypothesis that individuals higher on attention to emotion are so preoccupied with their own emotion that they are less likely to believe they see emotion in other people’s faces (in comparison to individuals relatively lower on attention to emotion).

There are a number of unanswered questions about whether individual differences in attention to one’s own emotions relate to the interpretation of emotions in others. The current research did not find a significant association between individual differences in attention to emotion and perceptions of emotional intensity and genuineness in faces. Future work will benefit from considering a greater range of ways in which individual differences in attention to emotion may shape impressions of others’ faces. For example, do individual differences in attention to emotion influence the extent to which different kinds of information are used to form emotional impressions of others? Does the association between emotional impression formation and individual differences in attention to emotion tend to show stronger effects when the time to form an impression is limited? Additionally, future research should consider the possibility that attention to one’s own emotion crowds out the formation of emotional impressions of others. The current research underscores the need for an expanded focus to address the unresolved question of how attention to one’s own emotion may play a role in interpreting emotion in other people.

Contributed to conception and design: SM, JSB

Contributed to acquisition of data: SM

Contributed to analysis and interpretation of data: SM, JSB

Drafted and/or revised the article: SM, JSB

Approved the submitted version for publication: SM, JSB

The authors did not receive any external financial support for this research.

The authors have no potential conflicts of interest.

All pre-registrations, stimuli, participant data, and analysis scripts can be found on this paper’s repositories at the Open Science Framework: Study 1 (https://osf.io/6sx48/) and Study 2 (https://osf.io/5guxb/).

Anderson, D. R. (2008). Model Based Inference in the Life Sciences: A Primer on Evidence. Springer New York. https://doi.org/10.1007/978-0-387-74075-1
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science in the Public Interest, 20(1), 1–68. https://doi.org/10.1177/1529100619832930
Bernstein, M. J., Young, S. G., Brown, C. M., Sacco, D. F., & Claypool, H. M. (2008). Adaptive responses to social exclusion: Social rejection improves detection of real and fake smiles. Psychological Science, 19(10), 981–983. https://doi.org/10.1111/j.1467-9280.2008.02187.x
Bujanow, A., Bodenschatz, C. M., Szymanska, M., Kersting, A., Vulliez-Coady, L., & Suslow, T. (2020). The relationship between dispositional attention to feelings and visual attention to emotion. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 100, 109882. https://doi.org/10.1016/j.pnpbp.2020.109882
Burnham, K. P., & Anderson, D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304. https://doi.org/10.1177/0049124104268644
Calvo, M. G., Avero, P., Fernández-Martín, A., & Recio, G. (2016). Recognition thresholds for static and dynamic emotional faces. Emotion, 16(8), 1186–1200. https://doi.org/10.1037/emo0000192
Coffey, E., Berenbaum, H., & Kerns, J. (2003). The dimensions of emotional intelligence, alexithymia, and mood awareness: Associations with personality and performance on an emotional Stroop task. Cognition and Emotion, 17(4), 671–679. https://doi.org/10.1080/02699930302304
Del Giudice, M., & Colle, L. (2007). Differences between children and adults in the recognition of enjoyment smiles. Developmental Psychology, 43(3), 796–803. https://doi.org/10.1037/0012-1649.43.3.796
Eckland, N. S., & English, T. (2019). Trait-level emotion regulation and emotional awareness predictors of empathic accuracy. Motivation and Emotion, 43(3), 461–470. https://doi.org/10.1007/s11031-018-9741-z
Edgar, C., McRorie, M., & Sneddon, I. (2012). Emotional intelligence, personality and the decoding of non-verbal expressions of emotion. Personality and Individual Differences, 52(3), 295–300. https://doi.org/10.1016/j.paid.2011.10.024
Ekman, P. (2006). Darwin, deception, and facial expression. Annals of the New York Academy of Sciences, 1000(1), 205–221. https://doi.org/10.1196/annals.1280.010
Ekman, P., & Friesen, W. V. (1982). Felt, false, and miserable smiles. Journal of Nonverbal Behavior, 6(4), 238–252. https://doi.org/10.1007/bf00987191
Ekman, P., Friesen, W. V., & O’Sullivan, M. (1988). Smiles when lying. Journal of Personality and Social Psychology, 54(3), 414–420. https://doi.org/10.1037/0022-3514.54.3.414
Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., Krause, R., LeCompte, W. A., Pitcairn, T., Ricci-Bitti, P. E., Scherer, K., Tomita, M., & Tzavaras, A. (1987). Universals and cultural differences in the judgments of facial expressions of emotion. Journal of Personality and Social Psychology, 53(4), 712–717. https://doi.org/10.1037/0022-3514.53.4.712
Elfenbein, H. A., & Ambady, N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203–235. https://doi.org/10.1037/0033-2909.128.2.203
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4), 1149–1160. https://doi.org/10.3758/brm.41.4.1149
Fernández-Abascal, E. G., & Martín-Díaz, M. D. (2019). Relations between dimensions of emotional intelligence, specific aspects of empathy, and non-verbal sensitivity. Frontiers in Psychology, 10, 1066. https://doi.org/10.3389/fpsyg.2019.01066
Fischer, A. H., Kret, M. E., & Broekens, J. (2018). Gender differences in emotion perception and self-reported emotional intelligence: A test of the emotion sensitivity hypothesis. PLOS ONE, 13(1), e0190712. https://doi.org/10.1371/journal.pone.0190712
Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64(1), 83–93. https://doi.org/10.1037/0022-3514.64.1.83
Gasper, K., & Clore, G. L. (2000). Do you have to pay attention to your feelings to be influenced by them? Personality and Social Psychology Bulletin, 26(6), 698–711. https://doi.org/10.1177/0146167200268005
Gohm, C. L., & Clore, G. L. (2000). Individual differences in emotional experience: Mapping available scales to processes. Personality and Social Psychology Bulletin, 26(6), 679–697. https://doi.org/10.1177/0146167200268004
Gohm, C. L., & Clore, G. L. (2002). Four latent traits of emotional experience and their involvement in well-being, coping, and attributional style. Cognition & Emotion, 16(4), 495–518. https://doi.org/10.1080/02699930143000374
Guo, L., Trueblood, J. S., & Diederich, A. (2017). Thinking fast increases framing effects in risky decision making. Psychological Science, 28(4), 530–543. https://doi.org/10.1177/0956797616689092
Guo, Q., Sun, P., & Li, L. (2018). Why neurotic individuals are less prosocial? A multiple mediation analysis regarding related mechanisms. Personality and Individual Differences, 128, 55–61. https://doi.org/10.1016/j.paid.2018.02.026
Hall, J. A., Andrzejewski, S. A., & Yopchick, J. E. (2009). Psychosocial correlates of interpersonal sensitivity: A meta-analysis. Journal of Nonverbal Behavior, 33(3), 149–180. https://doi.org/10.1007/s10919-009-0070-5
Hugenberg, K., & Bodenhausen, G. V. (2003). Facing prejudice: Implicit prejudice and perception of facial threat. Psychological Science, 14(6), 640–643. https://doi.org/10.1046/j.0956-7976.2003.psci_1478.x
Keltner, D., Sauter, D., Tracy, J., & Cowen, A. (2019). Emotional expression: Advances in basic emotion theory. Journal of Nonverbal Behavior, 43(2), 133–160. https://doi.org/10.1007/s10919-019-00293-3
Kim, H., & Han, S. (2018). Does personal distress enhance empathic interaction or block it? Personality and Individual Differences, 124, 77–83. https://doi.org/10.1016/j.paid.2017.12.005
Kommattam, P., Jonas, K. J., & Fischer, A. H. (2019). Perceived to feel less: Intensity bias in interethnic emotion perception. Journal of Experimental Social Psychology, 84, 103809. https://doi.org/10.1016/j.jesp.2019.04.007
Krumhuber, E. G., & Manstead, A. S. R. (2009). Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion, 9(6), 807–820. https://doi.org/10.1037/a0017844
Langner, O., Dotsch, R., Bijlstra, G., Wigboldus, D. H. J., Hawk, S. T., & van Knippenberg, A. (2010). Presentation and validation of the Radboud Faces Database. Cognition & Emotion, 24(8), 1377–1388. https://doi.org/10.1080/02699930903485076
Lischetzke, T., Eid, M., Wittig, F., & Trierweiler, L. (2001). Perceiving the feelings of oneself and others: Construction and validation of scales assessing the attention to and the clarity of feelings. Diagnostica, 47(4), 167–177. https://doi.org/10.1026//0012-1924.47.4.167
Marneweck, M., Loftus, A., & Hammond, G. (2013). Psychophysical measures of sensitivity to facial expression of emotion. Frontiers in Psychology, 4, 63. https://doi.org/10.3389/fpsyg.2013.00063
Matthews, G., Pérez-González, J.-C., Fellner, A. N., Funke, G. J., Emo, A. K., Zeidner, M., & Roberts, R. D. (2015). Individual differences in facial emotion processing: Trait emotional intelligence, cognitive ability, or transient stress? Journal of Psychoeducational Assessment, 33(1), 68–82. https://doi.org/10.1177/0734282914550386
Mende-Siedlecki, P., Lin, J., Ferron, S., Gibbons, C., Drain, A., & Goharzad, A. (2021). Seeing no pain: Assessing the generalizability of racial bias in pain perception. Emotion, 21(5), 932–950. https://doi.org/10.1037/emo0000953
Miles, L., & Johnston, L. (2007). Detecting happiness: Perceiver sensitivity to enjoyment and non-enjoyment smiles. Journal of Nonverbal Behavior, 31(4), 259–275. https://doi.org/10.1007/s10919-007-0036-4
Mrkva, K., & Van Boven, L. (2020). Salience theory of mere exposure: Relative exposure increases liking, extremity, and emotional intensity. Journal of Personality and Social Psychology, 118(6), 1118–1145. https://doi.org/10.1037/pspa0000184
Mrkva, K., Westfall, J., & Van Boven, L. (2019). Attention drives emotion: Voluntary visual attention increases perceived emotional intensity. Psychological Science, 30(6), 942–954. https://doi.org/10.1177/0956797619844231
Niedenthal, P. M., Mermillod, M., Maringer, M., & Hess, U. (2010). The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behavioral and Brain Sciences, 33(6), 417–433. https://doi.org/10.1017/s0140525x10000865
Parkinson, B., & Manstead, A. S. R. (2015). Current emotion research in social psychology: Thinking about emotions and other people. Emotion Review, 7(4), 371–380. https://doi.org/10.1177/1754073915590624
Peer, E., Brandimarte, L., Samat, S., & Acquisti, A. (2017). Beyond the Turk: Alternative platforms for crowdsourcing behavioral research. Journal of Experimental Social Psychology, 70, 153–163. https://doi.org/10.1016/j.jesp.2017.01.006
Perron, M., Roy-Charland, A., Dickinson, J., LaForge, C., Ryan, R. J., & Pelot, A. (2017). The use of the Duchenne marker and symmetry of the expression in the judgment of smiles in schizophrenia. Psychiatry Research, 252, 126–133. https://doi.org/10.1016/j.psychres.2017.02.052
Petrides, K. V., & Furnham, A. (2003). Trait emotional intelligence: Behavioural validation in two studies of emotion recognition and reactivity to mood induction. European Journal of Personality, 17(1), 39–57. https://doi.org/10.1002/per.466
Ruba, A. L., & Repacholi, B. M. (2020). Do preverbal infants understand discrete facial expressions of emotion? Emotion Review, 12(4), 235–250. https://doi.org/10.1177/1754073919871098
Salovey, P., Mayer, J. D., Goldman, S. L., Turvey, C., & Palfai, T. P. (1995). Emotional attention, clarity, and repair: Exploring emotional intelligence using the Trait Meta-Mood Scale. In J. W. Pennebaker (Ed.), Emotion, Disclosure, & Health (pp. 125–154). American Psychological Association. https://doi.org/10.1037/10182-006
Schoth, D. E., & Liossi, C. (2017). A systematic review of experimental paradigms for exploring biased interpretation of ambiguous information with emotional and neutral associations. Frontiers in Psychology, 8(171). https://doi.org/10.3389/fpsyg.2017.00171
Shapiro, J. R., Ackerman, J. M., Neuberg, S. L., Maner, J. K., Becker, D. V., & Kenrick, D. T. (2009). Following in the wake of anger: When not discriminating is discriminating. Personality and Social Psychology Bulletin, 35(10), 1356–1367. https://doi.org/10.1177/0146167209339627
Thompson, R. J., Dizén, M., & Berenbaum, H. (2009). The unique relations between emotional awareness and facets of affective instability. Journal of Research in Personality, 43(5), 875–879. https://doi.org/10.1016/j.jrp.2009.07.006
Walle, E. A., Reschke, P. J., & Knothe, J. M. (2017). Social referencing: Defining and delineating a basic process of emotion. Emotion Review, 9(3), 245–252. https://doi.org/10.1177/1754073916669594
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.