Do individual differences in attention to one’s own emotion relate to the way individuals interpret emotion in other people? For example, although their accuracy has been debated, people’s facial expressions are often perceived as providing information about their emotional state. Previous research on individual differences in attention to emotion has mostly looked at how individuals categorize the emotions they believe other people to have or the extent to which individuals have dysregulated attention to emotional information. However, less is known about other facets of emotional interpretation such as perceptions of emotional intensity and genuineness. One hypothesis, suggested by previous literature on categorization, is that higher attention to one’s own emotion may result in greater differentiation of the cues perceived to relate to emotional intensity and genuineness. On the other hand, previous research on dysregulated attention to emotion suggests a second possibility: higher attention to emotion might instead result in heightened weighting of facial expressions which tend to be perceived to indicate emotional intensity and genuineness. Across two studies, participants rated their perception of emotional intensity in facial expressions and their perception of genuineness in Duchenne and non-Duchenne smiles. Contrary to both hypotheses, attention to emotion did not significantly relate to perceived emotional intensity or genuineness (Study 1). Furthermore, although an exploratory test suggested a significant relation between attention to emotion and the average perceived intensity of different emotion categories (Study 1), the relation was not significant in a conceptual replication study (Study 2). The current research suggests that future work may benefit from exploring an expanded range of biased perceptions of facial expressions in relation to individual differences in attention to emotion.
Does the tendency for individuals to pay attention to their own emotions predict their perceptions of emotion in other people? For example, research has shown that beginning in infancy and throughout adulthood, individuals tend to look to the expressions on other people’s faces for clues as to how they might be feeling (Parkinson & Manstead, 2015; Ruba & Repacholi, 2020; Walle et al., 2017; note that researchers debate whether the muscle movements which contribute to different facial expressions are valid and reliable cues of emotion, see Barrett et al., 2019). Research to date has mainly focused on whether individual differences in attention to emotion are associated with the categorization of emotion in other people’s facial expressions (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; Matthews et al., 2015) or the regulation of attention paid to emotional information (Bujanow et al., 2020; Coffey et al., 2003). However, studies have not yet examined whether attention to emotion relates to other aspects of interpretation of emotion, such as the perception of emotional intensity in another person’s face or the perception that their facial expression indicates genuine emotion. Although no research has directly examined attention to emotion in relation to perceptions of intensity or genuineness, it is possible to extrapolate hypotheses from previous research on individual differences in attention to emotion and its relation to categorization and dysregulated attention. On the one hand, the previous research on categorization raises the possibility that individuals higher on attention to emotion may show greater differentiation of facial expressions perceived to indicate emotional intensity and genuineness.
On the other hand, the previous research on dysregulated attention to emotional information raises the possibility that individuals higher on attention to emotion may show heightened weighting in the meaning of facial expressions they perceive to indicate emotional intensity and genuineness in other people. The current research builds on previous work by testing hypotheses about the association between attention to emotion and the interpretation of emotional intensity and genuineness from facial expressions.
Do Individual Differences in Attention to Emotion Relate to the Interpretation of Emotion in Other People Beyond Categorization?
Attention to emotion is the degree to which one notices and values one’s own feelings (Salovey et al., 1995). Individuals who are higher on attention to emotion monitor their own emotions more frequently (e.g., “I often think about my feelings”, “I pay a lot of attention to how I feel”). Individuals higher on attention to emotion are also more likely to be guided by their emotions (e.g., “feelings give direction to life”) and are significantly more likely to use their mood as a basis for judgment when they deem it relevant (Gasper & Clore, 2000).
Why isn’t more known about the relation of intrapersonal focus on emotion to interpersonal focus on emotion? First, few studies have related individual differences in attention to emotion to interpersonal emotion perception processes. Second, the few studies that have been conducted have primarily focused on just one factor of interpersonal emotional interpretation: emotion categorization (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; Matthews et al., 2015). Yet individuals look to a variety of other factors when interpreting other people’s facial expressions to be emotional. For example, research indicates that people may look to faces to interpret the intensity of someone’s emotion (Edgar et al., 2012; Ekman et al., 1987; Fischer et al., 2018) and to judge whether a target’s facial expression reflects a genuinely felt emotion (i.e., internally felt) rather than a facial expression produced to adhere to social norms (Bernstein et al., 2008; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007; but see Krumhuber & Manstead, 2009). Therefore, more research is needed to understand whether individual differences in attention to emotion extend to interpersonal interpretation of other facets of emotion beyond categorization.
Attention to Emotion and the Interpretation of Emotional Intensity and Genuineness in Others’ Faces: Greater Differentiation or Heightened Weighting of Facial Movement Perceived to Be Emotional?
Although no literature has directly examined the relation between individual differences in attention to emotion and perceptions of emotional intensity and genuineness in facial expressions, the existing literature raises two possible hypotheses (see Table 1). On one hand, individuals higher on attention to emotion may show greater differentiation of cues perceived to relate to emotional intensity and genuineness. Previous research has often concluded that individuals higher on attention to emotion are better at categorizing the emotional expressions of others (Eckland & English, 2019; Fernández-Abascal & Martín-Díaz, 2019; but see Matthews et al., 2015). For example, individuals higher in attention to emotion show more correspondence between their ratings of a target’s emotion and the target’s rating of their own emotion (Eckland & English, 2019). Furthermore, individuals higher in attention to emotion are better at using the correct emotional state to label another person’s non-verbal behavior (Fernández-Abascal & Martín-Díaz, 2019, Study 2). Similarly, individuals higher on attention to emotion act on their emotion more judiciously: they are more likely to base judgments on their emotion unless it has been deemed irrelevant (Gasper & Clore, 2000). The association between higher levels of attention to emotion and greater interest in differentiating emotion as relevant or irrelevant may also operate when individuals are trying to interpret the emotion they perceive in another’s facial expression. That is, individuals higher in attention to emotion may be more likely to differentiate facial expressions they believe reflect genuine emotion from those that they do not. Therefore, one possibility is that higher attention to emotion is related to more differentiated interpretation of emotional intensity and genuineness in another person’s facial expressions.
Study Reference | Sample Size | Attention to Emotion Measure(s) | Emotion Perception Measure(s) | Finding for Higher Attention to Emotion | Effect Size |
Eckland & English, 2019 | 106 | Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) | Empathic accuracy task (negative to positive emotion scale) | Significantly greater categorization accuracy (controlling for race) | b = .09, p = .02 |
Fernández-Abascal & Martín-Díaz, 2019 (Study 2) | 646 | Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) | Mini Profile of Non-Verbal Sensitivity (statements about emotion (e.g., anger, affection, gratitude) and intention) | Significantly greater categorization accuracy | r = .13, p = .001 |
Matthews et al., 2015 | 129 | Factor analysis using the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) and 4 other measures | Micro Expression Training Tool (anger, contempt, disgust, fear, happiness, surprise) | No significant differences in categorization accuracy | r = .09, p = ns |
Matthews et al., 2015 (cont.) | 129 | (same as above) | Visual search task with images of emotional faces (anger, fear, happiness, sadness, surprise) | No significant differences in emotional attention | r = .02, p = ns
Bujanow et al., 2020 | 91 | WEFG (Lischetzke et al., 2001) | Gaze entry time for emotional (distress, comfort, complicity-joy) and neutral images (assessed using eye tracking) | Significantly greater attention to emotional over neutral stimuli | F(1, 82) = 4.02, p < .05 |
Coffey et al., 2003 | 129 | Factor analysis using the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995) and 2 other measures | Emotional Stroop task (positive, negative, and neutral words) | Significantly greater attention to emotional over neutral stimuli | r = .15, p < .05 |
Alternatively, previous research raises the possibility that individuals higher on attention to emotion may overweight the emotional intensity and genuineness they believe they see in other people’s facial expressions. Individuals higher on attention to emotion find emotional information especially salient (Bujanow et al., 2020; Coffey et al., 2003; but see Matthews et al., 2015), and salient cues tend to be interpreted as more emotionally intense (Mrkva et al., 2019; Mrkva & Van Boven, 2020). Furthermore, attention to emotion has been positively correlated with greater self-reported emotional intensity (Gohm & Clore, 2000, 2002; Thompson et al., 2009). Individuals higher on attention to emotion also highly value emotional information (Gasper & Clore, 2000; Salovey et al., 1995). If emotional information is deemed important enough to monopolize the attention of individuals with greater attention to emotion, then they may be more likely to perceive any signal they believe to be emotional as relatively intense and genuine. Therefore, another possibility is that individuals higher on attention to emotion may place greater weight on facial cues perceived to signal emotion and overinterpret their intensity and genuineness.
Current Research
More research is needed to expand current understanding of how attention to one’s own emotions extends (or does not) to the interpretation of emotion in others, and whether that interpretation involves greater differentiation of facial expressions or an overweighting of the facial expressions believed to indicate emotion in others. The current research addresses this gap by testing competing hypotheses on two unexplored facets of the way in which people interpret emotion from facial expressions: emotional intensity and genuineness. If attention to emotion relates to greater differentiation, then individuals higher on attention to emotion should be significantly better at distinguishing between facial expressions which tend to be perceived to differ in emotional intensity and genuineness. Alternatively, if attention to emotion relates to heightened weighting of the emotion believed to be present in others’ facial expressions, then individuals higher on attention to emotion should show increased emotional intensity ratings and less distinction between facial expressions of emotion perceived to be genuine or fake.
Study 1
Study 1 tested whether individual differences in attention to emotion are significantly related to perceptions of emotional intensity and genuineness in other people’s facial expressions. If higher attention to emotion is associated with greater differentiation, then individuals higher on attention to emotion should be significantly more likely to distinguish between manipulated levels of emotional intensity and significantly more likely to perceive Duchenne smiles (i.e., smiles that include both lip and upper cheek movement) as more genuine than non-Duchenne smiles (i.e., smiles that do not include upper cheek movement) (Bernstein et al., 2008; Ekman, 2006; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007). In contrast, if higher attention to emotion is associated with heightened weighting of the emotion perceived in a facial expression, then individuals higher on attention to emotion should be significantly less likely to distinguish between levels of emotional intensity, perceive facial expressions as significantly more emotionally intense overall, and be significantly less likely to perceive Duchenne smiles as more genuine than non-Duchenne smiles. Finally, a pre-registered exploratory analysis also tested whether attention to emotion relates to differences in perceptions of emotional intensity across facial expressions perceived to be associated with different emotion categories (Ekman et al., 1987; Keltner et al., 2019).
Study 1 Method
Participants
Pre-registered analyses focused on 256 participants (https://osf.io/6sx48/) (Mage = 25.7 years, SD = 8.1 years, range = 18-72 years; 60.5% male, 37.9% female, 1.2% other, .4% preferred not to say; 75.8% White, 15.2% Hispanic or Latinx, 4.3% Asian, 1.6% Black or African American, 1.6% other, 1.6% multiracial). Participants were compensated with $2.25 for their time. As pre-registered, 46 additional participants were excluded from analyses for failing an attention check or bot check (n = 17), withdrawing consent after completing the study (n = 1), reporting their age as below 18 (n = 1), missing data (n = 20), or falling above or below 3 standard deviations from the sample mean on critical measures or tasks (n = 7). The remaining 256 participants met the target number of participants established from an a priori power analysis (G*Power 3.1; Faul et al., 2009) which aimed for 95% power to detect an effect size of f = .10 at p < .05 in a mixed ANOVA (2-level between subject variable, 4-level within subject variable). The mixed ANOVA was used for estimating power because the 2-level between subject variable has equivalent degrees of freedom to the continuous individual difference variable (attention to emotion) used for analysis in the present study. In the absence of relevant previous research, the effect size was estimated to reflect the small effect sizes common for individual difference measures (Hall et al., 2009).
Procedure
Overview. Participants completed the study as an online survey created in Qualtrics and administered through the online platform Prolific (Peer et al., 2017). After providing informed consent, participants completed the emotional intensity and emotional genuineness tasks (task order was randomized across participants), questionnaires (items from each measure intermixed in random order), demographic questions, and an open-ended check for automated bots. The survey took approximately 20 minutes to complete. For more information on the procedure and a number of additional exploratory measures and analyses, see https://osf.io/6sx48/ and the supplement.
Tasks and Questionnaires
Emotional intensity task. Participants viewed 48 randomly presented facial expression images (12 images each of faces labeled as angry, happy, or sad expressions in the Radboud Faces Database (Langner et al., 2010) at four emotional intensity levels (0%, 25%, 50%, 75% emotion)). For each image, participants rated the perceived emotional intensity of the target emotion (e.g., for angry face morphs, participants were asked “How angry does this person appear?”) using a 5-point scale (1 = not at all, 3 = somewhat, 5 = extremely). Note that for each trial, if participants did not interpret the facial expression to indicate its emotion label from the Radboud Faces Database, they could rate it as a “1 = not at all.”
Front-facing facial expression images (Radboud Faces Database: Langner et al., 2010) were used to create morphed images using FantaMorph software (version 5.6.2 Standard, Abrosoft, Beijing, China, http://www.fantamorph.com/). Images at four levels of emotional intensity (0%, 25%, 50%, 75% emotion) were created by morphing each of the three emotional expressions (anger, happiness, sadness) with the neutral expression image for four Caucasian models (two male and two female). Models were matched on attractiveness and physical features (e.g., hair color). The assigned emotion label for each face in the database reflects rater consensus (mean agreement = 98.63%, SD = 2.58, across 276 raters; Langner et al., 2010, supplemental material). This procedure was selected to be consistent with previous studies that have used face morphs to manipulate emotional intensity for facial expressions theorized to be associated with different emotions to study perceived emotion from facial behavior (Calvo et al., 2016; Marneweck et al., 2013; Petrides & Furnham, 2003; Schoth & Liossi, 2017).
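FantaMorph performs feature-based morphing between two photographs. As a rough illustration of how the intensity levels scale between a neutral and a fully emotional expression, the blending logic can be sketched as a simple pixel-level cross-dissolve (a crude stand-in for true morphing; the function and arrays below are hypothetical, not part of the study materials):

```python
import numpy as np

def cross_dissolve(neutral, emotional, weight):
    """Blend a neutral and an emotional face image.

    weight is the proportion of the emotional image (e.g., 0.25 for a
    25%-intensity morph). A pixel cross-fade is a simplified stand-in
    for the feature-based morphing that FantaMorph performs.
    """
    return (1 - weight) * neutral + weight * emotional

# Toy 2x2 grayscale "images" standing in for face photographs.
neutral = np.zeros((2, 2))
angry = np.full((2, 2), 100.0)

# The four intensity levels used in the study: 0%, 25%, 50%, 75%.
morph_levels = [0.0, 0.25, 0.50, 0.75]
morphs = [cross_dissolve(neutral, angry, w) for w in morph_levels]
```

At weight 0.0 the morph is the neutral image; increasing weight moves each pixel proportionally toward the emotional image, mirroring how the 0%-75% morph continuum is constructed.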
Before the main task began, participants practiced the task using 5 randomly presented images (i.e., two additional Caucasian models, one male and one female, displaying the three target emotions at varying levels of emotional intensity) not included in the main task.
Emotional genuineness task. Genuineness was manipulated by using images of Duchenne and non-Duchenne smiles. The Duchenne smile engages the orbicularis oculi with the zygomatic major muscle to produce crow’s feet at the corner of the eyes and outwardly pulled lip corners, while the non-Duchenne smile lacks crow’s feet and pulls the corners of the lips upward (Ekman, 2006; Ekman et al., 1988). The Duchenne smile is perceived to be associated with genuinely felt enjoyment, but the non-Duchenne smile appears posed and is instead associated with social motivation (Ekman, 2006; Ekman et al., 1988; Frank et al., 1993; Miles & Johnston, 2007). The current study is concerned with understanding people’s interpretation of facial expressions as emotional rather than whether the facial expression accurately reflects emotion in another person (e.g., researchers debate whether Duchenne smiles indicate internally felt enjoyment; Krumhuber & Manstead, 2009). This procedure was selected to be consistent with previous research which has examined the perception of emotion from facial expressions as either genuine or fake (Bernstein et al., 2008; Frank et al., 1993; Miles & Johnston, 2007).
Participants viewed 10 randomly presented images of smiling faces (5 Duchenne and 5 non-Duchenne) and were asked to “Please choose the statement that best characterizes this person’s emotion.” Participants chose between two options: “This person is faking, only pretending to be happy” or “This person is genuine, truly feeling happy.”
The smiling faces were taken from the Radboud Faces Database (Langner et al., 2010), publications (Del Giudice & Colle, 2007; Niedenthal et al., 2010; Perron et al., 2017), and internet sources (BBC science, Wired). Images were sorted into Duchenne and non-Duchenne sets by one of the co-authors (JSB is a certified FACS coder). Across sets, faces were matched on physical features (e.g., hair color). Before the main task began, participants practiced the task using 4 randomly presented images (2 Duchenne and 2 non-Duchenne) not included in the main task.
Attention to emotion. Participants completed the 13-item attention subscale (M = 3.70, SD = .64, α = .87) of the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995). In this scale, participants responded to statements about the extent to which they attend to their emotions (e.g., “I pay a lot of attention to how I feel”) and allow their emotions to guide their actions (e.g., “Feelings give direction to life”). Participants responded using a 5-point Likert scale (1 = strongly disagree, 2 = somewhat disagree, 3 = neither agree nor disagree, 4 = somewhat agree, 5 = strongly agree). Attention to emotion was mean centered in all ANCOVA analyses.
Data quality checks. Participants completed a one-question attention check (“I have not been paying attention during this study. Please select ‘Strongly disagree’”) presented with the personality questionnaire items. Participants also completed an open-ended check to catch automated bots (“Briefly tell us about the last meal you ate”).
Study 1 Results
Pre-registered Analyses for Hypothesis Testing
Individual differences in attention to emotion are not associated with distinguishing between emotional intensity conditions. A pre-registered ANCOVA (https://osf.io/6sx48/) tested whether individual differences in attention to emotion significantly moderated the effect of emotional intensity condition in the intensity rating task. Previous literature raised the possibilities that higher attention to emotion would relate to either more (i.e., greater differentiation) or less (i.e., overweighting of the emotional meaning of facial expressions) distinction in the interpretation of emotional intensity in facial expressions. Contrary to both hypotheses, individual differences in attention to emotion did not significantly interact with emotional intensity level to predict perceived emotional intensity (F(3, 762) = 1.26, p = .29; see Figure 1). That is, individuals higher on attention to emotion were no more or less likely (than individuals lower on attention to emotion) to significantly distinguish between the experimentally manipulated levels of emotional intensity. The main effect of intensity level on emotional intensity ratings was significant (F(3, 762) = 2338.14, p < .001), but the main effect of attention to emotion on emotional intensity ratings was not statistically significant (F(1, 254) = 2.86, p = .09).
Note. Shaded areas represent 95% confidence intervals.
A post hoc Akaike Information Criterion (AIC) model selection analysis was conducted to evaluate the utility of including the hypothesized interaction term in the model to explain the data in the study sample. Models which included emotional intensity, attention to emotion, their combination, their combination along with the hypothesized interaction term, and the null model were evaluated. The best-fit model (carrying 87% of the cumulative model weight) was the model which only included intensity level (AIC = 1283.15). The model that included the hypothesized interaction term (in addition to emotional intensity and attention to emotion) had an AIC of 1302.40, a difference from the best-fit model (∆ = 19.25) conventionally interpreted as providing essentially no support for the added term (Anderson, 2008; Burnham & Anderson, 2004).
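The ∆AIC and Akaike-weight computations underlying this comparison can be sketched from the reported values. Note that AICs for the three other candidate models are not reported above, so the weights below are computed over the two reported models only (the model names are placeholders, not from the study code):

```python
import math

# AIC values reported in the text; the intensity-only model was the
# best fit. The other candidate models' AICs are not reported here.
aic = {
    "intensity_only": 1283.15,
    "intensity_attention_interaction": 1302.40,
}

best = min(aic.values())
delta = {name: value - best for name, value in aic.items()}

# Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
raw = {name: math.exp(-d / 2) for name, d in delta.items()}
total = sum(raw.values())
weights = {name: r / total for name, r in raw.items()}
```

With ∆ = 19.25, the interaction model receives a vanishingly small weight relative to the intensity-only model, consistent with the conventional reading that ∆ values above roughly 10 indicate essentially no support.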
Individual differences in attention to emotion do not significantly predict average emotional intensity ratings. A second set of pre-registered analyses targeted the hypothesis that individual differences in attention to emotion significantly relate to globally inflated perceptions of emotional intensity and found no significant support. As noted above, the planned ANCOVA did not find a significant main effect of attention to emotion on perceptions of emotional intensity (F(1, 254) = 2.86, p = .09). Furthermore, pre-registered correlational analyses found a significant positive relation between attention to emotion and emotional intensity ratings only for morphs that were 50% emotionally intense (r = .14, p = .02); all other correlations were not statistically significant (0%: r = .04, p = .50; 25%: r = .04, p = .51; 75%: r = .10, p = .12).
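As a worked check on how these correlational tests map r and sample size onto significance, the t statistic for a Pearson correlation against zero can be computed directly (a sketch; the helper function is illustrative, and 1.97 is the approximate two-tailed .05 critical value for df = 254):

```python
import math

def corr_t_stat(r, n):
    """t statistic for testing a Pearson correlation against zero,
    with df = n - 2."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)

# r = .14 with n = 256 participants (the 50%-intensity morphs above).
t_50 = corr_t_stat(0.14, 256)
# t_50 exceeds ~1.97, the approximate two-tailed .05 critical value
# for df = 254, consistent with the reported p = .02.
```

The same formula applied to the non-significant correlations (e.g., r = .04) yields t values well below the critical value, matching the reported pattern.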
Individual differences in attention to emotion are not significantly associated with genuineness distinctions between Duchenne and non-Duchenne smiles. A pre-registered ANCOVA tested whether individual differences in attention to emotion significantly moderated the effect of smile type in the genuineness rating task. The results did not support either of the hypotheses suggested by previous literature: Attention to emotion neither significantly heightened differentiation nor blurred distinctions of perceived genuineness between Duchenne and non-Duchenne smiles. That is, attention to emotion did not significantly interact with smile type to predict genuineness ratings (F(1, 254) = .78, p = .38; see Figure 2). Individuals higher on attention to emotion were no more or less likely to rate Duchenne and non-Duchenne smiles as differing in perceived genuineness than individuals lower on attention to emotion.
Note. Shaded areas represent 95% confidence intervals.
The main effect of smile type on genuineness ratings was significant (F(1, 254) = 531, p < .001), but the main effect of attention to emotion on genuineness ratings was not significant (F(1, 254) = 1.14, p = .29). Furthermore, pre-registered one sample t-tests found that genuineness ratings of both the Duchenne (M = 3.6, SD = .97, t(255) = 18.1, p < .001) and non-Duchenne (M = 1.6, SD = 1.2, t(255) = -12.8, p < .001) image sets significantly differed from chance.
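The one-sample t statistics can be reconstructed from the reported means and SDs. This sketch assumes the genuineness score is the number of smiles (out of 5 per set) judged genuine, so that chance responding corresponds to 2.5; that chance value is an inference from the reported statistics, not stated explicitly above:

```python
import math

def one_sample_t(mean, sd, n, chance):
    """One-sample t statistic comparing a sample mean to a fixed
    chance value: t = (M - chance) / (SD / sqrt(n))."""
    return (mean - chance) / (sd / math.sqrt(n))

# Assumed chance level: 2.5 of 5 smiles judged genuine (an inference).
t_duchenne = one_sample_t(3.6, 0.97, 256, 2.5)
t_non_duchenne = one_sample_t(1.6, 1.2, 256, 2.5)
# t_duchenne recovers the reported 18.1; t_non_duchenne comes out
# near -12, close to the reported -12.8 given that M and SD are
# rounded in the text.
```

That both reported t values are recoverable (to rounding) under this assumption supports reading the genuineness score as a count of "genuine" judgments per smile set.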
A post hoc Akaike Information Criterion (AIC) model selection analysis was conducted to evaluate the utility of including the hypothesized interaction term in the model to explain the data in the study sample. Models which included smile type, attention to emotion, their combination, their combination along with the hypothesized interaction term, and the null model were evaluated. The best-fit model (carrying 87% of the cumulative model weight) was the model which only included smile type (AIC = 1529.27). The model that included the hypothesized interaction term (in addition to smile type and attention to emotion) had an AIC of 1536.84, a difference from the best-fit model (∆ = 7.57) conventionally interpreted as providing substantially less support for the added term (Anderson, 2008; Burnham & Anderson, 2004).
Pre-registered Exploratory Analyses
Individual differences in attention to emotion significantly impact intensity ratings across emotion categories. A pre-registered yet exploratory ANCOVA found that individual differences in attention to emotion significantly interacted with emotion category on the emotional intensity rating task (F(2, 508) = 9.46, p < .001; see Figure 3). Individuals higher on attention to emotion tended to perceive angry (β = .13, t = 2.94, p = .003) and sad (β = .10, t = 2.24, p = .03) faces as more emotionally intense than individuals lower on attention to emotion. Perceived emotional intensity in happy faces (β = -.04, t = -.93, p = .35) was not significantly related to individual differences in attention to emotion.
Note. Shaded areas represent 95% confidence intervals.
Additionally, the main effect of emotion category on emotional intensity ratings (F(2, 508) = 97.15, p < .001) and the interaction between emotion category and intensity level on intensity ratings (F(6, 1524) = 39.41, p < .001) were significant. Follow-up pairwise t-tests found that the mean emotional intensity ratings for the angry (M = 2.91, SD = .47), sad (M = 3.09, SD = .48), and happy (M = 3.29, SD = .46) faces were all significantly different from one another (happy/angry: t(255) = 12.45, p < .001; sad/angry: t(255) = 7.97, p < .001; happy/sad: t(255) = 6.84, p < .001). That is, participants rated the emotion categories as having significantly different emotional intensities (see supplement for more information on the exploratory ANCOVA).
Study 2
Study 2 built on Study 1 by conducting a conceptual replication of the pre-registered exploratory finding that higher attention to emotion is associated with perceptions of higher emotional intensity for angry and sad faces. In Study 1, there were significant differences in emotional intensity between emotion categories. Therefore, the stimuli in Study 2 were modified to rule out the possibility that the association between attention to emotion and intensity rating was driven by different average levels of intensity (rather than category). Specifically, the average level of perceived emotional intensity was selected to be equivalent across emotion categories.
Study 2 Method
Participants
Pre-registered analyses focused on 254 participants (https://osf.io/5guxb/) (Mage = 25.2 years, SD = 8.0 years, range = 18-77 years; 61.8% male, 37.4% female, .4% other, .4% preferred not to say; 87.0% White, 5.5% Hispanic or Latinx, 2.4% Asian, .8% Black or African American, 2.0% other, 2.4% multiracial). Participants were compensated with $1.15 for their time. As pre-registered, 13 additional participants were excluded from analyses for failing the attention check or bot check (n = 9), declining to provide consent (n = 1), or falling above or below 3 standard deviations from the mean on critical measures or tasks (n = 3). The remaining 254 participants met the target number of participants established from an a priori power analysis (G*Power 3.1; Faul et al., 2009) which aimed for 95% power to detect an effect size of f = .10579 at p < .05 in a mixed ANOVA (2-level between subject variable, 3-level within subject variable). The mixed ANOVA was used for estimating power because the 2-level between subject variable has equivalent degrees of freedom to the continuous individual difference variable (attention to emotion) used for analysis in the present study. The interaction effect between attention to emotion and emotion type found in Study 1 was used to estimate effect size.
Procedure
Overview. Participants completed the study as an online survey created in Qualtrics and administered through the online platform Prolific (Peer et al., 2017). After providing informed consent, participants completed the emotional intensity task, questionnaires (with items from each measure intermixed in random order), demographic questions, and an open-ended check for automated bots. The survey took approximately 10 minutes to complete. For more information on the procedure and a number of additional exploratory measures and analyses, see the pre-registration (https://osf.io/5guxb/) and the supplement.
Emotional intensity task. As in Study 1, participants rated the emotional intensity of 48 randomly presented facial expression images: 16 images each of faces labeled as angry, happy, or sad in the Radboud Faces Database (Langner et al., 2010), morphed at varying emotional intensity levels (0%, 25%, 50%, 75% emotion). For each image, participants rated the perceived emotional intensity of the target emotion (e.g., for angry face morphs, participants were asked “How angry does this person appear?”) using a 5-point scale (1 = not at all, 3 = somewhat, 5 = extremely). As in Study 1, if participants did not interpret the facial expression to indicate its emotion label from the Radboud Faces Database, they could rate it “1 = not at all.”
Front-facing facial expression images (Radboud Faces Database: Langner et al., 2010) were used to create a new set of morphed images using FantaMorph software (version 5.6.2 Standard, Abrosoft, Beijing, China, http://www.fantamorph.com/) in order to control for differences in average perceived emotional intensity between the angry, happy, and sad expression images used in creating the morphs. Images at three levels of emotional expression (anger, happiness, sadness) were created by morphing each emotional image with the neutral expression image for four Caucasian models (two male and two female, selected from a pool of 8 models) at four levels of emotional intensity (0%, 25%, 50%, 75% emotion). As in Study 1, this procedure was selected to be consistent with previous studies that have used face morphs to manipulate emotional intensity for facial expressions associated with different emotions to study perception of emotion from facial behavior (Calvo et al., 2016; Marneweck et al., 2013; Petrides & Furnham, 2003; Schoth & Liossi, 2017).
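To illustrate the intensity manipulation conceptually, morphing a neutral image with a full-emotion image at a given percentage can be approximated as a simple cross-dissolve. This is a minimal sketch only: FantaMorph performs feature-based warping in addition to blending, and all pixel values and names below are hypothetical.

```python
# Toy illustration of intensity morphing as linear pixel blending.
# FantaMorph additionally warps facial features; this sketch shows only
# the cross-dissolve component. All values are hypothetical.

def blend(neutral, emotional, proportion):
    """Linearly interpolate pixel intensities at a given emotion proportion."""
    return [(1 - proportion) * n + proportion * e
            for n, e in zip(neutral, emotional)]

neutral_face = [120, 130, 125, 118]  # toy grayscale pixel values
angry_face = [90, 160, 140, 100]

# Morph levels used in the study: 0%, 25%, 50%, 75% emotion
morphs = {p: blend(neutral_face, angry_face, p)
          for p in (0.0, 0.25, 0.50, 0.75)}
```

At 0% emotion the morph reduces to the neutral image, which is why participants could rate such images "not at all" emotional.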
Standardized emotional intensity ratings from n = 276 raters were computed for the original images in each emotion category (Mangry = 4.07, Mhappy = 3.92, Msad = 4.02) (Langner et al., 2010; supplemental material). Paired sample t-tests on these means showed no significant differences in average perceived emotional intensity across emotion categories (angry/sad: t(3) = .21, p = .85; sad/happy: t(3) = .48, p = .66; angry/happy: t(3) = .68, p = .54). Models were matched on attractiveness and physical features (e.g., hair color). On average, 96.65% (SD = 4.36) of raters (n = 276) agreed on the target emotion in the images used to create the morphs (Langner et al., 2010; supplemental material).
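The paired-sample comparisons above (with four models per emotion category, hence df = 3) can be sketched as follows. This is a minimal illustration in which the per-model rating vectors are hypothetical; the actual validation ratings come from Langner et al. (2010).

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired-samples t statistic and degrees of freedom for matched lists."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    # t = mean difference divided by its standard error (sample stdev / sqrt(n))
    t = mean(diffs) / (stdev(diffs) / math.sqrt(n))
    return t, n - 1

# Hypothetical per-model mean intensity ratings (four models per category)
angry = [4.10, 4.00, 4.20, 3.95]
sad = [4.00, 4.05, 4.10, 3.93]

t, df = paired_t(angry, sad)  # with four models per category, df = 3
```

A nonsignificant t here would indicate that the categories were matched on average perceived intensity, the property the Study 2 stimuli were constructed to have.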
Before the main task began, participants practiced the task using 5 randomly presented images (i.e., two additional Caucasian models, one male and one female, displaying the three target emotions at varying levels of emotional intensity) not included in the main task.
Attention to emotion. Participants completed the 13-item attention subscale (M = 3.75, SD = .56, α = .84) of the Trait Meta-Mood Scale (TMMS; Salovey et al., 1995). In this scale, participants responded to statements about the extent to which they attend to their emotions (e.g., “I pay a lot of attention to how I feel”) and allow their emotions to guide their actions (e.g., “Feelings give direction to life”). Participants responded using a 5-point Likert scale (1 = strongly disagree, 2 = somewhat disagree, 3 = neither agree nor disagree, 4 = somewhat agree, 5 = strongly agree). Attention to emotion was mean centered in all ANCOVA analyses.
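The internal consistency reported for the subscale (α = .84) refers to Cronbach's alpha. A minimal sketch of the standard formula, using hypothetical responses rather than the study's data, might look like:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response lists,
    each ordered by the same respondents."""
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical responses (rows = items, columns = respondents) on a 1-5
# Likert scale; the real subscale has 13 items (Salovey et al., 1995).
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(responses)
```

Higher values (closer to 1) indicate that the items covary strongly and plausibly measure the same construct.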
Data quality checks. The same data quality checks from Study 1 were used for Study 2. That is, participants completed a one-question attention check (“I have not been paying attention during this study. Please select ‘Strongly disagree’”) presented with the personality questionnaire items. Participants also completed an open-ended check to catch automated bots (“Briefly tell us about the last meal you ate”).
Study 2 Results
Pre-registered Analyses for Hypothesis Testing
Individual differences in attention to emotion are not associated with intensity ratings for different emotions. A pre-registered ANCOVA (https://osf.io/5guxb/) tested whether individual differences in attention to emotion significantly moderated the effect of emotion condition in the intensity rating task. Contrary to the hypothesis that attention to emotion relates to interpretations of emotional intensity in angry and sad faces (and contrary to the significant exploratory findings from Study 1), individual differences in attention to emotion did not significantly interact with emotion condition on intensity ratings (F(2, 504) = .58, p = .56; see Figure 4). That is, individuals higher on attention to emotion did not perceive angry and sad faces as more emotionally intense than did individuals lower on attention to emotion.
Note. Shaded areas represent 95% confidence intervals.
The main effect of attention to emotion was not significant (F(1, 252) = .09, p = .77), but the main effect of emotion category on emotional intensity ratings was significant (F(2, 504) = 125, p < .001). Follow-up pairwise t-tests showed that the mean emotional intensity ratings for the angry (M = 3.03, SD = .49), happy (M = 3.24, SD = .42), and sad (M = 3.51, SD = .47) faces all differed significantly from one another (sad/angry: t(253) = 18.05, p < .001; sad/happy: t(253) = 8.67, p < .001; happy/angry: t(253) = 6.25, p < .001).
A post hoc Akaike Information Criterion (AIC) model selection analysis was conducted to evaluate the utility of including the hypothesized interaction term in the model to explain the data in the study sample. The candidate set comprised the null model, a model with emotion category alone, a model with attention to emotion alone, a model with both main effects, and a model with both main effects plus the hypothesized interaction term. The best-fit model (carrying 96% of the cumulative model weight) was the model which only included emotion category (AIC = 866.35). The model that included the hypothesized interaction term (in addition to both main effects) had an AIC of 884.04 (∆ = 17.69 relative to the best-fit model), a difference conventionally interpreted as indicating that the interaction term does not provide a useful improvement in modeling the data from the study sample (Anderson, 2008; Burnham & Anderson, 2004).
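The model comparison above rests on ΔAIC values and Akaike weights. A minimal sketch of that computation follows; only the two AIC values reported in the text (866.35 and 884.04) are taken from the study, and the remaining values are hypothetical placeholders for the other candidate models.

```python
import math

def akaike_weights(aics):
    """Compute deltas (AIC minus best AIC) and normalized Akaike weights."""
    best = min(aics)
    deltas = [a - best for a in aics]
    # Relative likelihood of each model: exp(-delta / 2)
    rel = [math.exp(-d / 2) for d in deltas]
    total = sum(rel)
    return deltas, [r / total for r in rel]

# 866.35 (emotion category only) and 884.04 (interaction model) are reported
# in the text; the other values are hypothetical placeholders.
aics = [866.35, 884.04, 890.0, 905.0, 910.0]
deltas, weights = akaike_weights(aics)
```

With a gap of roughly 18 AIC units, the best-fit model absorbs nearly all of the model weight, consistent with the interpretation that the interaction term adds little.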
General Discussion
Are individuals who are especially attentive to their own emotion more likely to differentiate the emotion they believe they see in others’ faces or place greater weight on the intensity and genuineness of the emotion they believe they see in others’ faces? The results from the current research were not consistent with either of the hypotheses suggested by prior research on the relation between individual differences in attention to emotion and categorization or dysregulated attention. That is, Study 1 did not find that individual differences in attention to emotion significantly moderated perceived emotional intensity or genuineness ratings. Whereas a pre-registered, exploratory finding from Study 1 raised the possibility that attention to emotion may relate to differences in the perception of emotional intensity across emotion categories, this result was not replicated in Study 2. Taken together, the present and prior research suggests that future research may build a more complete understanding of the association between individual differences in attention to emotion and emotional interpretation of others’ faces by expanding the focus to multiple biases in facial interpretation (Barrett et al., 2019).
The present work leaves open the question of whether individual differences in attention to emotion relate to increased or decreased susceptibility to biased emotional interpretations of facial behavior. Perceptions of emotion in other people’s faces are not always accurate and oftentimes perceivers make systematically biased interpretations depending on target characteristics (Barrett et al., 2019). For example, individuals show greater agreement in emotion categorization of faces belonging to their ingroup than outgroups (Barrett et al., 2019; Elfenbein & Ambady, 2002). Individuals may also believe they see more (Hugenberg & Bodenhausen, 2003; Shapiro et al., 2009) or less (Kommattam et al., 2019; Mende-Siedlecki et al., 2021) emotion in outgroup versus ingroup faces. Future research might therefore investigate whether individual differences in attention to emotion relate to the extent to which group information is factored into emotional interpretations of facial movement.
In addition to investigating relations between individual differences in attention to emotion and the type of information used to form interpretations of others’ faces, future research may also investigate situations in which the facial behavior of others is fleeting. Research on heuristics shows that individuals tend to make more biased judgments under time pressure (e.g., L. Guo et al., 2017). If susceptibility to bias about the meaning of others’ facial behavior is associated with individual differences in attention to emotion, then individual differences in attention to emotion may be more likely to show significant effects on emotional interpretations formed from a glimpse of a face (rather than with the unlimited time to study the face that participants had in the current research). In everyday life, most facial behavior which tends to be interpreted as emotional only lasts between 0.5 and 4 seconds (Ekman, 2006; Ekman & Friesen, 1982; Frank et al., 1993). Therefore, individual differences in attention to emotion may show significant associations with interpretation biases that were not captured in the present work, which used untimed examination of static faces. Future research will benefit from investigating the role of attention to emotion in emotional interpretation when individuals have more or less time to form interpretations of facial movement.
As future research expands its focus on the ways in which individual differences in attention to emotion may shape the interpretation of facial movement, it might also consider a new hypothesis that was not considered in the pre-registration of the current research. That is, greater attention to emotion may reduce the extent to which people form interpretations of others’ faces as emotional. For example, research on neuroticism suggests that individuals who tend to be preoccupied with their own negative emotions also tend to have reduced perspective taking and empathy (Q. Guo et al., 2018; Kim & Han, 2018). It may be that individuals higher on attention to emotion are consumed by their focus on their own feelings and less likely to form interpretations of others as experiencing emotion. Furthermore, the association between greater attention to emotion and a lower likelihood of forming emotional impressions about others’ faces may be especially evident when interest in the emotions of others is likely to be low (e.g., when targets are from an outgroup: Kommattam et al., 2019) or when low interest results in missing the facial movement of others altogether (which can be fleeting: Ekman, 2006; Ekman & Friesen, 1982; Frank et al., 1993). Therefore, future research should also consider the hypothesis that individuals higher on attention to emotion are so preoccupied with their own emotion that they are less likely to believe they see emotion in other people’s faces (in comparison to individuals relatively lower on attention to emotion).
There are a number of unanswered questions about whether individual differences in attention to one’s own emotions relate to the interpretation of emotions in others. The current research did not find a significant association between individual differences in attention to emotion and perceptions of emotional intensity and genuineness in faces. Future work will benefit from considering a greater range of ways in which individual differences in attention to emotion may shape impressions of others’ faces. For example, do individual differences in attention to emotion influence the extent to which different kinds of information are used to form emotional impressions of others? Does the association between emotional impression formation and individual differences in attention to emotion tend to show stronger effects when the time to form an impression is limited? Additionally, future research should consider the possibility that attention to one’s own emotion crowds out the formation of emotional impressions of others. The current research underscores the need for an expanded focus to address the unresolved question of how attention to one’s own emotion may play a role in interpreting emotion in other people.
Contributions
Contributed to conception and design: SM, JSB
Contributed to acquisition of data: SM
Contributed to analysis and interpretation of data: SM, JSB
Drafted and/or revised the article: SM, JSB
Approved the submitted version for publication: SM, JSB
Funding Information
The authors did not receive any external financial support for this research.
Competing Interests
The authors have no potential conflicts of interest.
Data Accessibility Statement
All pre-registrations, stimuli, participant data, and analysis scripts can be found on this paper’s repositories at the Open Science Framework: Study 1 (https://osf.io/6sx48/) and Study 2 (https://osf.io/5guxb/).