The good and bad impacts of social media on individuals and societies remain poorly understood and highly debated. An often-discussed, yet little-studied worry about social media usage is that it may breed diminished social and emotional abilities. Here, we tested this assumption across three studies with adult samples (N = 316, 1,879, 903). We used different indicators of emotional prowess (i.e., emotional intelligence, emotion recognition) and a broad set of social media usage measures, and adopted a three-pronged analysis approach featuring zero-order correlations, multiple regressions, and conditional random forests. Our findings do not provide consistent evidence for associations between social media usage and emotional prowess. Instead, we find conflicting evidence for passive social media usage (related to lower overall emotional intelligence but better emotion recognition) and active social media usage (related to higher overall emotional intelligence but worse emotion recognition). We find some evidence for positive associations between emotional prowess and the duration of smartphone usage (in years) and text messaging. Further, we find largely inconsistent and/or null effects for social media addiction, general social media usage, general smartphone usage, video gaming, and media sharing. In the absence of consistent effects of social media usage, we find strong, robust, and replicable associations between age and emotional prowess.

Social media are a ubiquitous part of most people’s lives and – in all likelihood – they are here to stay. As of spring 2024, 5.07 billion people around the globe use social media (i.e., 63.2% of the world’s total population; Statista, 2024) and this number is only growing, with around 150 million new users joining social media each year (Datareportal, 2023). As with other sweeping technological innovations before – from novels to the radio (Orben, 2020) – the question of whether social media bring promises or perils to our lives has sparked spirited debates within academia and society at large and has spurred on much research activity (e.g., Cingel et al., 2022; Rozgonjuk et al., 2020; Valkenburg et al., 2022; Vandenbosch et al., 2022; Wolfers & Utz, 2022). Currently, scientific opinion on the potential harmful impacts of social media is frequently divided and driven by applied, exploratory cross-sectional surveys rather than theory-based studies (which may – at least in part – be due to the recent advent of social media and thus the relative lack of time for tailored theoretical frameworks to evolve). This is especially true with regard to mental health and well-being. Here, different studies report different findings and different researchers differ in their assessment of the practical relevance of those effects that have been reported (for different accounts see: Orben et al., 2019; Orben & Przybylski, 2019; Vuorre et al., 2021; as well as: Haidt & Allen, 2020; Twenge, 2019, 2020; Twenge et al., 2018, 2020).

One prominent notion, which could reconcile these opposing results, is the rich-get-richer and poor-get-poorer hypothesis (Cheng et al., 2019): some people profit from social media use whereas others do not. For example, people with strong friendship support and few feelings of loneliness profit from social media use because it strengthens and reinforces their existing positive relationships with their friends (i.e., rich-get-richer). Conversely, people who try to compensate for personal deficits (e.g., feeling depressed) by using social media might realize in the long term that using social media does not help them in their daily (offline, face-to-face) life, making them even more depressed (i.e., poor-get-poorer; Pouwels et al., 2021). Indeed, an emerging line of research highlights the pivotal role of individual differences as moderators of the impact of social media use on health and well-being outcomes (Beyens et al., 2020; Valkenburg & Peter, 2013).

One relatively under-researched aspect of this broader question – and the focus of the current paper – is the often-voiced, yet little-studied fear that social media might in fact be anti-social media that diminish interpersonal skills and lower emotional abilities (Waytz & Gray, 2018). The research that does exist on this question paints a complex, multifaceted, and incomplete picture. For example, while initial work seemed to suggest that higher social media use is associated with increased shyness, recent meta-analytical evidence questions this and points to much smaller effect sizes and less clear links (e.g., Appel & Gnambs, 2019; Gnambs & Appel, 2018). At the same time, interactions in online communities are often more hostile, toxic, and verbally violent than interactions in face-to-face communities (e.g., Sibai et al., 2024). This could plausibly be driven by toxic online disinhibition (i.e., anonymity, invisibility and lack of eye-contact giving rise to an online sense of unidentifiability which in turn emboldens bullies and enables animosities that go unsanctioned; Lapidot-Lefler & Barak, 2012), but it could also reflect a broader erosion of social communication skills in the digital age wherein people collectively lose their emotional prowess as their lives are increasingly playing out online (i.e., rich-get-poorer and poor-get-poorer). Aside from yielding divergent findings, the existing research – while generally concerned with the impact of modern communication technologies on sociability and emotional intelligence – is often not specific to social media usage but rather considers internet access (Uhls et al., 2014) or general screen time (Skalická et al., 2019).

With this in mind, we set out to shed new light on the specific association between social media usage and emotional prowess, by testing it comprehensively across three studies (Ntotal = 3,098). We define emotional prowess as an umbrella term that subsumes emotion recognition (i.e., accurate perception of emotional signals from others and conscious awareness of the emotion expressed; Tracy & Robins, 2008) and emotional intelligence (i.e., ability to perceive and use emotional information to guide one’s goal-directed thinking and behaviour; Mayer et al., 2016). Across our three studies, we (a) consider a diverse range of aspects of social media usage (i.e., average daily passive and active social media use, social media addiction), (b) examine multiple indicators of emotional prowess (i.e., different measures of emotion recognition and emotional intelligence), and (c) analyse the impact of other forms of smartphone usage to gauge the specificity of social media usage effects. Given the lack of prior research in this specific area and the inconsistent findings in the broader literature on the psychological impact of social media usage, we adopt an exploratory research approach and refrain from postulating any a priori hypotheses.

In Study 1, we built on the materials and study approach used by Uhls et al. (2014). Instead of the Diagnostic Analysis of Nonverbal Behavior (DANVA2) – a measure of the ability to read emotions in the facial expressions of others – we used the Reading the Mind in the Eyes Test (RMET, also often called the eyes test; Baron-Cohen et al., 2001; Oakley et al., 2016), a widely established measure that is frequently used with adult populations (such as our samples) rather than with children (such as the samples studied in Uhls et al., 2014).

To allow a nuanced assessment of social media usage, we included several measures of active, as well as passive social media usage and also measured social media addiction (Wartberg et al., 2017). We differentiated between active and passive use because past research suggested that active use may be associated with positive outcomes of social media usage, whereas the opposite may be true for passive use (e.g., Escobar-Viera et al., 2018; Thorisdottir et al., 2019).

Based on the effect sizes previously reported in Uhls et al. (2014; d = .33/.66), we assumed an expected effect size of d = .33 (converted equivalent: r = .163). Based on this estimate, we ran a power analysis, which suggested a minimum sample size of n = 293 to reach a power of 80% (α = 5%, two-tailed).
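For transparency, this calculation can be approximated with a standard Fisher z approximation. The Python sketch below (illustrative only; the authors' original power software is not reported, and dedicated tools may differ by a participant or two) converts d to r and solves for the required sample size.

```python
# Minimal sketch (not the original analysis code): a priori power analysis for
# a correlation test via the Fisher z approximation, plus the d-to-r conversion.
from math import atanh, ceil, sqrt
from scipy.stats import norm

def d_to_r(d: float) -> float:
    """Convert Cohen's d (assuming roughly equal group sizes) to r."""
    return d / sqrt(d**2 + 4)

def n_for_correlation(r: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n for a two-tailed test of H0: rho = 0."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(d_to_r(0.33))              # ~0.163, the converted effect size
print(n_for_correlation(0.163))  # 294; close to the reported n = 293 (tools differ slightly)
print(n_for_correlation(0.10))   # 783; close to the N = 782 used for Studies 2 and 3
```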

2.1 Method – Study 1

2.1.1 Participants

Overall, 353 people from German-speaking countries (predominantly Austria) finished the study between April 2019 and June 2021, but 37 (10.5%) stated that they did not want their data analysed. Therefore, these people were excluded from further analyses. In the final community-based sample (N = 316), participants were M = 28.6 years old (SD = 11.42, range = 18 to 75) and predominantly identified as female (76.6%; 23.4% male; 0.0% non-binary). Most had completed secondary school (48.4%) or had a tertiary degree (30.4%) as their highest level of completed education (0.6% none; 7.9% primary school; 12.7% apprenticeship). Participants were treated in accordance with the World Medical Association Declaration of Helsinki and with local ethical guidelines. All gave their informed consent prior to participating. Participation was voluntary and no financial compensation was offered.

2.1.2 Materials

2.1.2.1 Demographics. Participants reported their age, gender (female/male/other), and highest level of completed education (categories).

2.1.2.2 Social Media Addiction (SMA; Young, 1998). We adapted the German 8-item version (Wartberg et al., 2017) of the Young Diagnostic Questionnaire assessing pathological Internet use (Young, 1998) by replacing the word ‘Internet’ with ‘social media’. In the instructions, we asked participants to think back to the previous 6 months as the relevant time-frame for answering the questions. We used a 6-point Likert-type scale (1 = never or very seldom, 6 = very often or always; Cronbach α = .84; McDonald ω = .85).
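As a brief illustration of how such reliability coefficients are obtained, the following Python sketch (assumed, not the authors' code; the responses are simulated) computes Cronbach's α from a participants × items response matrix. McDonald's ω additionally requires a factor model and is typically obtained from dedicated software.

```python
# Minimal sketch (simulated data): Cronbach's alpha for a participants x items
# matrix, e.g. the 8 adapted SMA items rated on a 6-point scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = participants, columns = items."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated Likert-type responses (1-6), 300 participants, 8 correlated items
rng = np.random.default_rng(1)
base = rng.integers(1, 7, size=(300, 1))
sim = np.clip(base + rng.integers(-1, 2, size=(300, 8)), 1, 6)
print(round(cronbach_alpha(sim), 2))
```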

2.1.2.3 Social Media Usage Behaviour. We asked two questions about how much time (in minutes) social media was used on average each day. We differentiated between active and passive use (“On average per day, how much time do you typically spend actively on social media (e.g., creating Facebook post, writing Twitter message, sending WhatsApp message)?”; “On average per day, how much time do you typically spend passively on social media (e.g., watching Youtube videos, reading Facebook posts, viewing Snapchat pictures)?” [underlining appeared in original]).

2.1.2.4 Reading the Mind in the Eyes Test (RMET; Baron-Cohen et al., 2001). In the RMET (German version: Bölte, 2005), participants are presented with 36 pictures of the eye region of different male and female faces. Participants are instructed to pick the one out of four presented mental state terms that best describes the emotion expressed in the particular face. For each picture, there is only one correct mental state. The sum of all correctly classified pictures forms the RMET score (0 to 36). Internal consistency was rather low (Cronbach α = .55) but in line with past research (Vellante et al., 2013: α = .61; Voracek & Dressler, 2006: α = .60/.63).

2.1.2.5 Experience with Emotion Recognition Tests and Adjective Comprehension. As the RMET is a frequently used test, we also asked about any previous experience with the test itself as well as experience with emotion recognition tests in general (Stieger & Reips, 2016). Furthermore, we asked if participants understood the emotion descriptors used in the test.

Overall, 30 participants (9.5%) had experience with this particular test, and 67 participants (21.2%) with a similar one (also using pictures of faces/facial parts). Furthermore, the emotion descriptors were generally well understood (M = 92.1, SD = 12.8, on a scale from 1 to 101), although the range was rather large (36 to 101).

2.1.3 Procedure

Participants were recruited through different channels, including word-of-mouth, recruitment of friends and relatives of several study administrators and a student research participation pool (i.e., students studying at the first and second author’s university). To boost participant engagement and survey reach, the data collection was embedded in a pop-culturally themed quiz. While previous research has successfully used similar approaches – consistently producing good data quality – through linking surveys to the Harry Potter universe (Ebert et al., 2019; Götz, Bleidorn, et al., 2020) or the Star Wars saga (Du et al., 2023; Gosling et al., 2004; Stieger et al., 2022), in the current project we aligned our survey with “The Big Bang Theory.” This highly successful TV show, which ran from 2007 to 2019, offers a natural fit, as the main character, Sheldon, famously struggles with interpersonal skills and emotion recognition. As such, participants were offered personalized feedback on their own emotion recognition skills that would rank them relative to various characters from the show.

To participate in the study, participants first had to provide informed consent. Thereafter, participants filled in their demographics and social media usage measures, and subsequently completed the RMET.1 The test started with a sample item to familiarize participants with the procedure. After that, all 36 items were presented in a pre-randomized but fixed order for each participant. At the end of the questionnaire, participants were asked whether they had experience with these sorts of tests (i.e., judgment of pictures of eyes; yes/no), had done this test before (yes/no), and whether they understood all adjectives used in the visual analogue scale (“Have you always understood the meaning of the adjectives?” from 1 = never to 101 = always). Finally, participants were presented with their individual Big Bang Theory character match and were thanked and debriefed. Importantly, participants were free to either donate their data to the current research project or withhold it. Regardless of that decision, after having completed the RMET, participants received personalized feedback about how comparable their score was to the characters in the series (Amy, Bernadette, Howard, Leonard, Penny, Raj, Sheldon). Reference scores of the characters were determined in a pre-study using 25 raters.

2.1.4 Statistical Analyses

We adopted a three-pronged analysis strategy. First, we computed zero-order correlations between the dependent measure and each predictor (Pearson as well as Spearman correlations). Second, we conducted multiple linear regressions, and third, we ran conditional random forests (Fokkema & Strobl, 2020; Strobl et al., 2009). In all analyses, we controlled for participants’ gender and age.
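To make the first two analysis steps concrete, a minimal Python sketch is shown below (the original analyses were presumably run in other statistical software; the data frame and variable names are hypothetical and the values are toy data).

```python
# Minimal sketch (assumed tooling, toy data): zero-order correlations and a
# multiple linear regression for one outcome, controlling for gender and age.
import pandas as pd
from scipy.stats import pearsonr, spearmanr
import statsmodels.api as sm

# Hypothetical data frame with Study 1-style variables
df = pd.DataFrame({
    "rmet": [24, 27, 22, 30, 25, 28, 23, 26],
    "age": [19, 34, 52, 23, 41, 29, 61, 22],
    "gender": [1, 1, 2, 1, 2, 1, 2, 1],           # 1 = female, 2 = male
    "sm_active": [30, 10, 5, 60, 15, 45, 5, 90],  # minutes per day
})

# Step 1: zero-order correlations between the outcome and each predictor
for pred in ["gender", "age", "sm_active"]:
    r, p_r = pearsonr(df["rmet"], df[pred])
    rho, p_rho = spearmanr(df["rmet"], df[pred])
    print(f"{pred}: r = {r:.3f} (p = {p_r:.3f}), r_sp = {rho:.3f} (p = {p_rho:.3f})")

# Step 2: multiple regression with all predictors entered simultaneously
X = sm.add_constant(df[["gender", "age", "sm_active"]])
print(sm.OLS(df["rmet"], X).fit().summary())
```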

Conditional random forest models are based on machine learning algorithms representing a non-parametric, data-driven procedure, which is robust to overfitting, nonlinearity, higher-order interactions, correlated predictors, and heterogeneity (Joel et al., 2020). As such, conditional random forests are accurate in identifying meaningful predictors of the outcome variable at hand. Applying a process called recursive partitioning (IJzerman et al., 2018), they assess the relative importance of each predictor by examining all possible relationships between predictors and the outcome measure. More specifically, conditional random forest models draw random subsets of predictors and participants to examine the predictive power of each selected predictor within the respective subset. This procedure is repeated over hundreds (or thousands) of bootstrap samples. Finally, the predictive power of each predictor across all iterations is averaged, resulting in the final overall importance score. The result of a conditional random forest model is usually visualized with a dotplot where all predictors are presented in descending order based on their importance score. Of note, importance values should not be interpreted in absolute terms (akin to interval or ratio scaling) but rather convey differences in relative importance of predictors (e.g., predictor A has larger importance values than predictor B; akin to ordinal scaling). Furthermore, in dotplots a red dashed vertical line is presented, which is based on the smallest positive importance score or the absolute value of the largest negative importance score. Predictors exceeding this vertical line are considered highly unlikely to be noise (Strobl et al., 2009). Conditional random forest models are frequently used in genetics (e.g., Brieuc et al., 2018) and increasingly also in psychology (Ebert et al., 2021; Götz, Stieger, et al., 2020; Wei et al., 2017).

When calculating conditional random forest models, three parameters need to be specified (Strobl et al., 2009): mtry, ntree, and seed. The parameter mtry represents the number of predictors that are randomly selected from all predictors for each tree that is fitted. It is recommended that mtry be set to one third of the total number of predictors (Kuperman et al., 2018; Liaw & Wiener, 2002) or the square root of the total number of predictors (Breiman, 2001; IJzerman et al., 2016, 2018).

The second parameter is ntree, which represents the overall number of trees that the conditional random forest model consists of. Usually, ntree is set to 1,000, which has led to robust results in the past (IJzerman et al., 2016; Kuperman et al., 2018; Latinne et al., 2001). The third parameter – seed – provides a starting point from which the conditional random forest trees grow and is important to obtain reproducible results (IJzerman et al., 2016). It is suggested to calculate the model with a series of different seeds and to calculate Spearman rank-order correlation coefficients to quantify the robustness of the results (i.e., the order of predictors; IJzerman et al., 2016). The Spearman rank-order correlations should exceed .7 as a general rule of thumb (IJzerman et al., 2016).

Following the above-mentioned rules, we set mtry = 3 for Study 1 and used the following seeds: 1, 2, 666. Spearman correlations indicated a high model stability across seeds (range: .881 – 1.000). For detailed plots, see Figures S2–S4 in the online supplement.
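As an illustration of this tuning and stability check, the sketch below uses scikit-learn's permutation importances as a rough analogue (the actual analyses rely on conditional inference forests, typically R's party/partykit; the data and predictor names here are simulated). The forest is refit with the reported seeds and the resulting importance orderings are correlated.

```python
# Minimal sketch (an analogue with simulated data, not the conditional inference
# forests used in the paper): permutation importances over several seeds, with
# Spearman correlations between importance vectors as a stability check.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n, predictors = 300, ["gender", "age", "sma", "sm_active", "sm_passive"]
X = rng.normal(size=(n, len(predictors)))   # hypothetical predictor matrix
y = -0.3 * X[:, 1] + rng.normal(size=n)     # outcome mainly driven by "age"

importances = []
for seed in (1, 2, 666):                    # mirror the reported seeds
    forest = RandomForestRegressor(
        n_estimators=1000,                  # ntree = 1000
        max_features=3,                     # mtry = 3
        random_state=seed,
    ).fit(X, y)
    result = permutation_importance(forest, X, y, n_repeats=10, random_state=seed)
    importances.append(result.importances_mean)

# Stability of the predictor ordering across seeds (rule of thumb: rho > .7)
for i in range(len(importances)):
    for j in range(i + 1, len(importances)):
        rho, _ = spearmanr(importances[i], importances[j])
        print(f"seed pair {i}-{j}: Spearman rho = {rho:.3f}")
```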

2.2 Results and Discussion – Study 1

Study 1 evinced few robust effects of social media usage. Most notably, active social media usage was identified as a practically relevant predictor in the conditional random forests (see Figure 1) and was also significantly negatively associated with emotion recognition in the linear regression. It approached – but did not reach – statistical significance in the zero-order correlations (see Table 1). Meanwhile, passive social media usage was not statistically significantly associated with emotion recognition in the zero-order correlations or the linear regression and was also not identified as a practically relevant predictor in the conditional random forests. Lastly, social media addiction was identified as a relevant predictor in the conditional random forests (see Figure 1) but was not significantly associated with emotion recognition in either the zero-order correlations or the linear regression (see Table 1).

A more consistent picture emerged with regard to age: the older the participants, the lower their emotion recognition scores, which is in line with past research (e.g., Kynast et al., 2021). Age was even the most important predictor of the emotion recognition score (see Figure 1). Interestingly, we found no statistically significant female superiority in emotion recognition, although this has been found in past research (e.g., Schlegel et al., 2019). As meta-analytic evidence suggests that this effect might be rather small (Kirkland et al., 2013), Study 1 may not have been sufficiently well powered to detect an effect. On an explorative/descriptive level, women indeed had slightly higher scores than men (female: M = 25.2, SD = 3.83; male: M = 24.6, SD = 3.97; t [314] = 1.10, p = .274, Cohen d = 0.15).
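For reference, the reported effect size can be approximately reproduced from the summary statistics alone. In the sketch below the group sizes are approximated from the reported gender percentages, so the values differ slightly from the reported test statistics.

```python
# Minimal sketch (group ns approximated from the reported 76.6% / 23.4% split):
# Cohen's d and Student's t from summary statistics.
from math import sqrt

m_f, sd_f, n_f = 25.2, 3.83, 242   # women (approx. 76.6% of N = 316)
m_m, sd_m, n_m = 24.6, 3.97, 74    # men

pooled_sd = sqrt(((n_f - 1) * sd_f**2 + (n_m - 1) * sd_m**2) / (n_f + n_m - 2))
d = (m_f - m_m) / pooled_sd
t = (m_f - m_m) / (pooled_sd * sqrt(1 / n_f + 1 / n_m))
# Prints roughly d = 0.16 and t = 1.17; close to the reported d = 0.15 and
# t = 1.10 (differences reflect approximated ns and rounded summary statistics).
print(round(d, 2), round(t, 2))
```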

Finally, as can be seen from Table 1 and Figure 1, the comprehension of the adjectives used in the RMET had a significant influence on the emotion recognition score, such that individuals with better emotion descriptor comprehension obtained higher RMET scores (although this only appeared to affect participants with a score lower than 70, see Figure S1).

Table 1.
Zero-Order Correlations and Multiple Linear Regression with Emotion Recognition as the Dependent Variable
Predictor | Zero-order: r, rsp | Linear regression: B, β, t
Gender -.062 -.044 -0.551 -.061 -1.087 
Age -.203*** -.113* -0.066 -.192 -3.221** 
Social Media Addiction SMA .084 .095† 0.055 .097 1.542 
Social media – active use -.109† -.031 -0.012 -.170 -2.712** 
Social media – passive use .021 .024 -0.001 -.012 -0.189 
Experience with test – specific -.083 -.086 -0.693 -.053 -0.823 
Experience with test – general -.098† -.091 -0.397 -.042 -0.665 
Comprehension adjectives .141* .076 0.039 .128 2.305* 

Note. All variance inflation factors (VIFs) – as a measure of multicollinearity – were between 1.042 and 1.388 (values above 5 are often considered problematic). Adjusted R2 = 7.1%. Gender: 1 = female, 2 = male.

*** p < .001, ** p < .01, * p < .05, † p < .10
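As a side note on the multicollinearity check mentioned in the table note, variance inflation factors can be obtained as in the following sketch (assumed tooling, simulated data, hypothetical variable names; not the original code).

```python
# Minimal sketch (simulated data): variance inflation factors for the
# regression predictors; values above 5 are often considered problematic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
predictors = pd.DataFrame(
    rng.normal(size=(300, 4)),
    columns=["age", "sma", "sm_active", "sm_passive"],  # hypothetical predictors
)
X = sm.add_constant(predictors)
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 3))
```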

Figure 1.
Variable Importance of Predictors from Study 1.

Note. Conditional random forest parameters: mtry = 3, seed = 666, ntree = 1000.


Taken together, the findings from Study 1 neatly replicated previous research suggesting a robust relationship between age and decreased emotion recognition, but offered less clarity with respect to linkages between social media usage and emotion recognition.

It is important to acknowledge a number of limitations of Study 1. One such limitation regards the assessment of emotion recognition. Although the RMET is a well-established and comprehensive measure of emotion perception, featuring 36 items, its reliability in the current study was rather low (Cronbach α = .55). This is consistent with prior research (Vellante et al., 2013: α = .61; Voracek & Dressler, 2006: α = .60/.63) and a recent meta-analysis (Kittel et al., 2022) finding a mean reliability of α = .73 with a large range (.45 to .96). Furthermore, its item structure does not yet seem settled (see, e.g., Olderbak et al., 2015, who found a very poor model fit for the proposed single-factor solution). Another important limitation concerns the statistical power. Although we conducted an a priori power analysis – based on previous research (Uhls et al., 2014) – to inform our target sample size, the effect sizes that were actually observed in our correlation and regression analyses were far smaller than those on which we had based our estimations. As such, it is possible and perhaps even likely that the current study may have been underpowered to detect certain effects. Against this backdrop, we designed Study 2 as a follow-up study to address these design limitations.

As mentioned above, because the effect sizes observed in Study 1 were smaller than expected based on the ones described in the study by Uhls et al. (2014), we conducted a well-powered follow-up study. In light of the small-to-medium effect sizes observed in Study 1, we based our power analysis on a small effect size (r = .1; Funder & Ozer, 2019). The power analysis suggested a required minimum sample size of N = 782 to reach a power of 80% (α = 5%, two-tailed). Furthermore, because the reliability of the RMET was rather low, we decided to use a different measure of emotional prowess – the TEIQue-SF (Petrides & Furnham, 2003) – which assesses emotional intelligence.

Moreover, in Study 2 we also explicitly set out to examine – and model – the possibility that the effects of modern technology on social and emotional skills might be broader in nature. That is, such effects may exist, but they may not be about social media usage per se, but rather about spending time on one’s phone – as opposed to with people (e.g., Przybylski & Weinstein, 2013; Skalická et al., 2019; Uhls et al., 2014). Therefore, we additionally asked about overall personal smartphone usage (in years) and average daily personal smartphone usage.

3.1 Method – Study 2

3.1.1 Participants

A total of 2,103 people from German-speaking countries (predominantly Austria) participated in the study between December 2019 and May 2021. Of these, 224 (10.7%) stated that they did not want their data analysed and were thus excluded from further analyses. In the final community-based sample (N = 1,879), participants were M = 30.1 years old (SD = 13.96, range = 18 to 88). 56.3% identified as women, 43.4% as men, and 0.4% as non-binary. Most had completed secondary school (55.7%) or held a tertiary degree (25.4%) (0.5% none; 5.0% primary school; 13.4% apprenticeship). 38.0% were single, 38.7% were in a relationship, 21.2% were married or in a partnership, 1.5% were divorced, and 0.6% widowed.

As in Study 1, participants were treated in accordance with the World Medical Association Declaration of Helsinki and with local ethical guidelines. They gave informed consent prior to participating. Participation was voluntary and no financial compensation was offered.

3.1.2 Materials

3.1.2.1 Demographics. Participants reported their age, gender (female/male/other), highest level of completed education, and relationship status.2

3.1.2.2 Social Media Usage Behaviour. As in Study 1, we asked the same two questions regarding average daily social media usage (in minutes), both passively and actively.

3.1.2.3 Smartphone Usage Behaviour. We asked how old participants were when they first owned a smartphone. We also inquired about participants’ typical daily phone usage (“How much time do you spend using your smartphone per day on average?”).

3.1.2.4 Trait Emotional Intelligence Questionnaire – Short Form (TEIQue-SF). We used the German version (Freudenthaler et al., 2008) of the short form of the TEIQue (Petrides & Furnham, 2003; TEIQue-SF: Petrides & Furnham, 2006). The TEIQue-SF consists of 30 items using a 7-point Likert scale (1 = do not agree at all, 7 = absolutely agree). Reliability in our sample was very good (α = .86; McDonald ω = .86).

3.1.3 Procedure

As in Study 1, a quiz-based approach was used to foster participant engagement and survey reach. This time – in line with previous work (Du et al., 2023; Gosling et al., 2004) – we used characters from the Star Wars universe, who are widely seen as differing in their emotional intelligence. Once again, participants were offered personalized feedback on their own emotional intelligence that would rank them relative to various characters from the Star Wars saga.

Mirroring Study 1, in order to participate in the study, participants first had to provide informed consent. Then participants filled in the TEIQue-SF, which was followed by demographic questions, and questions about social media and smartphone usage. Finally, participants were presented with their individual Star Wars Character match and were thanked and debriefed. As before, participants were free to donate their data to the current research project or to decline that invitation. Regardless of their decision, following survey completion, participants got personalized feedback about how their score compared to various Star Wars main characters (Luke, Leia, Han Solo, The Emperor, C3PO, Darth Vader, Jabba, General Tarkin). Reference scores of the characters were determined in a pre-study using 24 raters.

3.1.4 Statistical Analyses

As only seven participants reported identifying as non-binary, we excluded them from further analyses because this group was too small for separate analyses. Analogous to Study 1, we applied the same three-pronged procedure consisting of zero-order correlations, linear regressions, and conditional random forest models.

3.2 Results and Discussion – Study 2

Study 2 yielded a nuanced, yet heterogeneous picture of the relationship between social media usage and emotional intelligence. Passive social media usage produced a consistent pattern, wherein greater passive social media usage was negatively associated with emotional intelligence in both the zero-order correlations and the linear regression (see Table 2). Moreover, passive social media usage was ranked as the second most important predictor of emotional intelligence in the conditional random forest (after age; Figure 2). The results for active social media usage were less clear (i.e., a negative but not statistically significant zero-order correlation, a small positive and statistically significant regression coefficient, and practical predictive relevance according to the conditional random forest analyses).

Table 2.
Zero-Order Correlations and Multiple Linear Regression with Emotional Intelligence as the Dependent Variable
Predictor | Zero-order: r, rsp | Linear regression: B, β, t
Gender -.011 -.008 -0.058 -.042 -1.406 
Age .157*** .167*** 0.004 .081 2.192* 
Social media – active use -.015 -.012 0.001 .065 1.974* 
Social media – passive use -.171*** -.162*** -0.001 -.159 -4.287*** 
Daily Smartphone usage -.089** -.107*** 0.006 .018 0.499 
General Smartphone usage in years .138*** .138*** 0.011 .078 2.298* 

Note. All VIFs between 1.025 and 1.571. Adjusted R2 = 4.3%. Pairwise case exclusion. Gender: 1 = female, 2 = male. *** p < .001, ** p < .01, * p < .05

Figure 2.
Variable Importance of Predictors from Study 2.

Note. Conditional random forest parameters: mtry = 2, seed = 666, ntree = 1000.


This set of results is intriguing and stands – at least partially – in contrast to the results observed in Study 1. That is, in Study 1, higher active social media usage was associated with lower emotion recognition ability, whereas in Study 2 higher passive social media usage was associated with lower emotional intelligence. Thus, in both studies, aspects of social media usage were negatively associated with emotional prowess, but in Study 1 the effect pertained to active social media usage and in Study 2 to passive social media usage.

Going beyond social media usage, limited empirical support was found for the role of general smartphone usage. Even though general smartphone usage – measured in years – was positively associated with emotional intelligence in both the zero-order correlations and the multiple regression, it fell just below the practical relevance cutoff in the random forest models. Similarly, general daily smartphone usage was negatively associated with emotional intelligence in the zero-order correlation analyses, but did not receive empirical support in either the multiple regression or the conditional random forest. Taken together, these findings speak against the possibility that general smartphone usage may independently and uniquely exert a strong impact on social and emotional skills.

Meanwhile, participant age once again emerged as the strongest predictor of emotional prowess – although this time it did so in the opposite direction. This association – the older, the higher one’s emotional intelligence – is, however, in line with past research (e.g., Cabello et al., 2016; Petrides & Furnham, 2006).3 In conjunction with the findings from Study 1, this highlights that – though conceptually overlapping and both components of emotional prowess – emotion recognition and emotional intelligence are not interchangeable concepts. In fact, emotional intelligence and emotion recognition are situated at different hierarchical levels. That is, emotional intelligence is defined as the ability to perceive, use, understand, manage, and handle emotions (Mayer et al., 2001), whereas emotion recognition is just one component of emotional intelligence (Mayer et al., 1990). Thus, all emotion recognition reflects emotional intelligence but not all emotional intelligence manifests as emotion recognition. This differentiation is also empirically supported. A recent study found only around 9% shared variance between the RMET and emotional intelligence measured with the Mayer–Salovey–Caruso Emotional Intelligence Test (Megías-Robles et al., 2020).

As before, it is important to note limitations of Study 2. While Study 2 was well-powered and used a widely established measure of emotional intelligence, the TEIQue-SF is a self-report measure and, as such, has its own drawbacks (e.g., susceptibility to social desirability, concept clarity). Thus, in keeping with our multi-angle assessment approach, in Study 3 we used a third, different way of assessing emotional prowess.

Though widely used and well-established, both the RMET (to measure emotion recognition) and the TEIQue (to measure emotional intelligence) have their shortcomings. The RMET assesses only a small aspect of our everyday emotion recognition ability by focusing entirely on pictures of the eye region of the face. This is critical, as in real life, humans routinely use much more information, such as hand gestures, body posture, facial expressions, as well as the pitch and loudness of a person’s voice, to infer their emotions (e.g., D’mello & Kory, 2015; Kessous et al., 2010). While not subject to this specific bias, as a self-report measure of emotional intelligence, the TEIQue has its own issues to grapple with (e.g., potential distortion due to social desirability, concept clarity, etc.).

Against this backdrop, we decided to use a measure of emotion recognition again, but one which integrates all the senses with which we usually judge the emotional state of others. We used the Geneva Emotion Recognition Test (GERT; Schlegel et al., 2014), which features short video clips of 10 different actors depicting 14 different emotions (6 positive, 8 negative) by actively using body gestures, facial expressions, and voice characteristics (e.g., pitch, loudness, tone). Furthermore, to remove the impact of language (and thus linguistic ability as a potential confound), pseudo-linguistic sentences are used.

Moreover, to further probe specificity and to get a better understanding of which – if any – concrete aspects of engagement with social media may be driving associations with social and emotional skills, in Study 3 we also assessed – and analysed – overall social media usage duration (in years). Furthermore, to get a more comprehensive picture of technology use per se, we assessed specific media and technology usage in everyday life (Rosen et al., 2013). The rationale for this decision is that the absence of a clear association between emotional prowess and social media use does not necessarily imply that there is no link between technology-based communication and emotional prowess on a broader level. For example, Uhls and colleagues (2014) found that adolescents staying at an overnight camp for several weeks without any access to computers, mobile phones, and television had better social skills (i.e., emotion recognition) than a control group who continued to use the same technologies as before. Another experiment by Przybylski and Weinstein (2013) found that even the mere presence of a mobile phone led to reduced sociability (e.g., feeling less connected to a conversation partner, less trust, less perceived empathy from the conversation partner) compared to a control group without the presence of a mobile phone. Such negative effects were even observed in children as young as 4 to 8 years of age (Skalická et al., 2019) when analyzing screen time in general. In this large longitudinal study (N = 960), Skalická and colleagues (2019) observed that more screen time at the age of 4 years led to reduced levels of emotional intelligence at the age of 6. Similarly, more TV watching in the child’s bedroom at age 6 was related to lower levels of emotional intelligence two years later (at age 8). To sum up, possible negative effects of modern communication technologies might not be rooted in the specific use of social media, but could be due to specific usage forms (e.g., video gaming, texting) or the presence of – and engagement with – electronic devices per se (e.g., smartphone, computer, TV).

Aside from using social media (General social media usage subscale), people also do other things on their smartphones (e.g., reading e-mails, listening to music and audiobooks, watching videos or pictures; Smartphone usage subscale). Furthermore, people also use different devices, such as desktop computers, to watch or share video and picture content (Media sharing subscale), or use all sorts of devices (e.g., smartphone, desktop, console) to play games (Video gaming subscale). Finally, text messaging is still one of the most frequently used services on smartphones (Text messaging subscale). Therefore, to account for all these diverse forms of technology engagement, we additionally assessed technology use in general via selected subscales of the Media and Technology Usage and Attitudes Scale (MTUAS; Rosen et al., 2013).

With respect to statistical power, as we had previously found rather small effects in both Study 1 and Study 2, in Study 3, we once again based our power analysis on a small effect size (r = .1; Funder & Ozer, 2019). This resulted in a required minimum sample size of N = 782 to achieve a statistical power of 80%. To account for potential dropout – which we estimated at up to 20%, in keeping with prior research (Götz et al., 2023) – due to the elevated technological demands of the study that required participants to watch videos online, we aimed for at least 900 participants.

4.1 Method – Study 3

4.1.1 Participants

We recruited 930 German-speaking participants from Austria using a crowd-working platform (Respondi). For technical reasons (e.g., participation via smartphone instead of a desktop computer; issues with video playback), we had to exclude 27 participants.

Participants in the final sample (N = 903) were M = 49.5 years old (SD = 15.7, range = 18 to 82). 54% identified as male, 46% as female, and 0% as non-binary. Most had completed an apprenticeship (36.2%), had a secondary school degree (31.0%), or a tertiary degree (26.8%; 0.2% no formal education, 5.8% primary school). Participants were treated in accordance with the World Medical Association Declaration of Helsinki and with local ethical guidelines. They gave informed consent prior to participating. As an incentive, participants were paid 2.25 Euro for their participation.

4.1.2 Materials

4.1.2.1 Demographics. Participants reported their age, gender (female/male/non-binary), and highest level of completed education (categories). Furthermore, we asked what kind of device the study was taken on (i.e., PC, tablet, other).

4.1.2.2 Social Media Usage Behaviour. We asked the same two questions as in Study 1 and 2.

4.1.2.3 Social Media Addiction (SMA). We used the same measure as in Study 1. Reliabilities were Cronbach α = .87 and McDonald ω = .85.

4.1.2.4 Media and Technology Usage and Attitudes Scale (MTUAS; Rosen et al., 2013). The MTUAS is a 46-item measure of technology usage and attitudes towards technologies. It has 10 subscales, of which we used the following five because of their fit with our research question (see introduction of Study 3): Smartphone usage scale (SUS; 9 items; α = .90, ω = .91); General social media usage scale (GSMUS; 9 items; α = .93, ω = .93); Media sharing scale (MSS; 4 items; α = .78, ω = .77); Text messaging scale (TMS; 3 items; α = .75, ω = .76); Video gaming scale (VGS; 3 items; α = .71, ω = .72). All subscales asked about the frequency of certain online behaviours using a 10-point Likert scale from 1 = never to 10 = all the time.

4.1.2.5 Social Media Overall Usage. We asked how long participants had been actively using social media (“For how many years have you been actively using social media (e.g., Facebook, Twitter, Instagram)?”).

4.1.2.6 Geneva Emotion Recognition Test (GERT; Schlegel et al., 2014). In the GERT, 14 different emotions (joy, amusement, pride, pleasure, relief, interest, anger, fear, despair, irritation, anxiety, sadness, disgust, and surprise) are presented in 83 different video clips (duration 1-3 seconds; same single-coloured background) of 10 actors (5 women, 5 men). In the present study the short version with 42 video clips was used (Schlegel & Scherer, 2016). Each clip can only be viewed once. After viewing the clip, participants had to choose the emotion shown in the video clip (out of 14 pre-defined emotions). All clips had the following characteristics: (1) multimodal (i.e., facial, vocal, and gestural/postural information); (2) actor’s upper body including the head (facing forward) is shown; and (3) actor’s voice is audible. Instead of a real language, for each emotion one of two pseudo-linguistic sentences was used (i.e., sentences without meaning, which should make it easier to focus on the tone of the voice instead of the content of the sentence).4 The reliability was α = .77 and ω = .76.

4.1.3 Procedure

First, participants were informed about the study background and the participation prerequisites (i.e., minimum age 18 years; usage of desktop PC, laptop, or tablet with the ability to play videos and sound). Participants were also told that they would have the possibility to get their individual emotion recognition score at the end of the study.

Then, participants provided informed consent. The following pages provided detailed information about the GERT (e.g., procedure; definitions of the emotions) and presented three sample items to test video quality and sound volume. After that, the 42 video clips of the GERT were presented. After viewing each video clip – which could only be played once – participants had to select the emotion depicted in the video clip (out of 14 pre-defined emotions). Subsequently, demographic questions and questions about social media and smartphone usage were asked. Finally, participants received their individual emotion recognition score, were thanked, and debriefed. The whole online questionnaire used a forced-response design.

4.1.4 Statistical Analyses

We used the same three-pronged analytical approach as in Study 1 and 2.

4.2 Results and Discussion – Study 3

With respect to social media usage, once again, a nuanced pattern was obtained. Active social media usage was not statistically significantly associated with emotion recognition in either the correlation or the regression analyses (see Table 3) and also ranked last in terms of practical relevance as indicated by the conditional random forest output (see Figure 3). Passive social media usage was positively associated with emotion recognition in both the correlation and regression analyses and was also identified as a practically relevant predictor by the conditional random forest. Social media addiction was negatively associated with emotion recognition in both the correlation and regression analyses and was also identified as a practically relevant predictor in the conditional random forest model. Among the newly added media and technology usage subscales, many showed tenuous relationships with emotion recognition, but the only subscale that demonstrated a robust link was the text messaging subscale (TMS), which was strongly positively associated with emotion recognition in the zero-order correlations and the multiple regression and also ranked second-highest among all predictors in the conditional random forest. Overall social media usage showed a positive zero-order correlation with emotion recognition but did not receive empirical support in either the linear regression or the conditional random forest analyses.

Concordant with the previous two studies, participant age once again emerged as the strongest predictor of emotion recognition across all three analysis methods (showing the highest coefficients and ranking first in the conditional random forest models). The higher participants’ age, the lower their ability to correctly recognize the emotions expressed in the short videos (and vice versa). This is in line with past research (Schlegel et al., 2019) and the results observed in Study 1. Participants’ gender was also affirmed as a meaningful predictor across models, with women – versus men – being better at correctly identifying emotions. This finding also dovetails well with prior research (e.g., Schlegel et al., 2019).

Table 3.
Zero-Order Correlations and Multiple Linear Regression with Emotion Recognition as the Dependent Variable
Predictor | Zero-order: r, rsp | Linear regression: B, β, t
Gender .226*** .220*** 2.453 .204 6.646*** 
Age -.308*** -.327*** -0.102 -.267 -7.400*** 
Social media – active use .053 .049 -0.004 -.045 -1.237 
Social media – passive use .186*** .232*** 0.011 .134 3.688*** 
Social Media Addiction -.073* -.005 -0.218 -.242 -6.475*** 
Media and Technology Usage – SUS .161*** .172*** -0.129 -.036 -0.781 
Media and Technology Usage – GSMUS .091** .131*** -0.042 -.013 -0.286 
Media and Technology Usage – MSS .012 .094** 0.028 .007 0.184 
Media and Technology Usage – TMS .278*** .294*** 0.671 .223 5.508*** 
Media and Technology Usage – VGS -.036 .037 -0.252 -.070 -2.079* 
Media and Technology Usage – Social media overall usage .070* .120*** 0.041 .042 1.346 

Note. SUS = Smartphone Usage Scale, GSMUS = General Social Media Usage Scale, MSS = Media Sharing Scale, TMS = Text Messaging Scale, VGS = Video Gaming Scale. All VIFs between 1.093 and 2.444. Adjusted R2 = 22.0%. Listwise case exclusion. Gender: 1 = men, 2 = women. *** p < .001, ** p < .01, * p < .05.

Figure 3.
Variable Importance of Predictors from Study 3.

Note. Conditional random forest parameters: mtry = 4, seed = 666, ntree = 1000.


The addition of social media to our lives is as pervasive as it is recent. This means most of us engage with social media, yet we continue to know very little about what this may mean for individuals and the societies they live in. One often-discussed, yet rarely-studied question in this realm concerns the worry that social media usage may reduce humans’ emotional and social abilities, with prior research that used proxies for social media usage lending some credence to this concern (e.g., Przybylski & Weinstein, 2013; Skalická et al., 2019; Uhls et al., 2014). To test this more directly, and in greater detail, here we pursued a multi-study approach. Across three German-speaking adult samples (Noverall = 3,098), we analysed a broad range of social media and general smartphone usage indicators in relation to two aspects of emotional prowess (i.e., emotion recognition, general emotional intelligence) across different operationalisations (i.e., picture-based ability test, self-report measure, video-based ability test) and through multiple statistical methods (i.e., zero-order correlations, multiple regressions, conditional random forest machine learning algorithms). A comprehensive summary of all our analyses is provided in Table 4.

Table 4.
Comprehensive Result Overview Table
Predictor | Study 1 (DV = RMET): ZOC, MRA, CRF | Study 2 (DV = TEIQue-SF): ZOC, MRA, CRF | Study 3 (DV = GERT): ZOC, MRA, CRF
Gender | 0 0 6 (npr) | 0 0 6 (npr) | +*** +*** 3 (pr)
Age | -*** -** 1 (pr) | +*** +* 1 (pr) | -*** -*** 1 (pr)
Social Media Addiction | 0 0 4 (pr) | NA NA NA | -* -*** 4 (pr)
Social Media Active Use | 0 -** 3 (pr) | 0 +* 3 (pr) | 0 0 11 (npr)
Social Media Passive Use | 0 0 8 (npr) | -*** -*** 2 (pr) | +*** +*** 5 (pr)
Social Media Overall Usage (in years) | NA NA NA | NA NA NA | +* 0 10 (pr)
Daily Smartphone Usage | NA NA NA | -*** 0 5 (npr) | NA NA NA
Duration of General Smartphone Usage (in years) | NA NA NA | +*** +* 4 (npr) | NA NA NA
Media and Technology Usage – Smartphone Usage | NA NA NA | NA NA NA | +*** 0 9 (pr)
Media and Technology Usage – General Social Media Usage | NA NA NA | NA NA NA | +*** 0 7 (pr)
Media and Technology Usage – Media Sharing | NA NA NA | NA NA NA | 0 0 8 (pr)
Media and Technology Usage – Text Messaging | NA NA NA | NA NA NA | +*** +*** 2 (pr)
Video Gaming | NA NA NA | NA NA NA | 0 -* 6 (pr)

Note. ZOC = zero-order correlation, MRA = multiple regression analysis, CRF = conditional random forest, RMET = Reading the Mind in the Eyes test, TEIQue-SF = Trait Emotional Intelligence Questionnaire – Short Form, GERT = Geneva Emotion Recognition Test, - = statistically significant negative association, + = statistically significant positive association, 0 = statistically non-significant association, * p < .05, ** p < .01, *** p < .001, pr = practically relevant, npr = not practically relevant, for conditional random forests ranks are reported, Gender: 0 = male, 1 = female, statistically significant and practically relevant associations are highlighted in bold font.

When considered in its totality, we do not find a clear link between social media usage and emotional prowess. What we do find is conflicting evidence for active social media use (sometimes, though not always, related to higher overall emotional intelligence but worse emotion recognition) and passive social media use (sometimes, though not always, related to lower overall emotional intelligence but better emotion recognition). We further find largely inconsistent and/or null effects for social media addiction, general social media usage, general smartphone usage, video gaming, and media sharing. We also find some evidence for aspects of smartphone usage that are not directly tied to social media usage. That is, we find some support for longer smartphone usage being associated with higher overall emotional intelligence, and some support for more intense text messaging being associated with better emotion recognition. This opens the door to intriguing hypotheses. For example, experience may act as a buffer (i.e., the longer someone has been using a smartphone, the less likely they are to be affected by its downsides). Similarly, there might be emotional benefits to be derived from digital bilateral 1-to-1 text communication that may not apply to other uses such as posting pictures, videos, or tweets in 1-to-many or many-to-many settings on social media platforms (e.g., Instagram, X, TikTok). However, we hasten to add that the current research merely invites such speculation, instead of providing substantive evidence for either of these claims. Thus, if they are to be substantiated, this would need to be done in future research.

All in all, then, even the strongest of these – generally tenuous – links are eclipsed by age, which consistently emerged as the most potent and most robust factor, predicting emotional prowess in different ways that are consistent (a) across studies and (b) with previous research (i.e., lower emotion recognition but higher emotional intelligence; Cabello et al., 2016; Petrides & Furnham, 2006; Schlegel et al., 2019).

In light of these findings, it might be tempting to readily infer that social media usage has no association with emotional prowess. However – precisely because there is a lot at stake and the pressure on researchers to deliver actionable insights for societal institutions, parents, and policymakers is mounting (Orben et al., 2024) – we encourage an especially critical evaluation of the evidence before making any firm inferences.

That is, on the one hand there are good reasons why a true null finding is a reasonable conclusion to draw from the current data. First, with the meagre exceptions of passive social media use (in Studies 2 and 3, but not in Study 1), social media addiction (in Study 3, but not in Studies 1 and 2) and text messaging (which was only measured in Study 3), technology-use predictors were not only inconsistent between studies, but also within studies. That is, aside from the rare exceptions mentioned above, in all studies all technology-use predictors failed to provide converging evidence across the three analysis methods at the same time (i.e., zero-order correlations, multiple regressions, conditional random forest importance ranking). Second, for many years, conscious and unconscious research practices and skewed incentive structures pervading psychological science have stacked the deck against null effects (for an excellent review see Nelson et al., 2018). That is – while reform is happening (Nosek et al., 2022) – in a world in which people are motivated to find differences, it may be more sensible to trust null results than purported differences. Third, as mentioned in the introduction, theoretical maturity is low (cf., Muthukrishna & Henrich, 2019; Nosek et al., 2022). That is, there is – as yet – no good theory of how social media usage and emotional prowess are causally connected and as such the notion of them not being causally connected is not per se unimaginable.

On the other hand, there are also many good reasons to caution against drawing clear-cut conclusions too quickly. First, absence of evidence does not equal evidence of absence (Aczel et al., 2018). Second, a lack of consistent effects (across different samples, different aspects of emotional prowess and different ways of measuring these aspects) does not mean that there are no effects at all. Rather, the effects of social media usage may be highly specific and may even only pertain to certain groups or only occur under certain circumstances, but could still be very real and relevant if and when they do occur (Beyens et al., 2020; Orben et al., 2024; Valkenburg & Peter, 2013). Third, while many small effects may not matter, many may – especially when they accumulate across large populations and over time (Funder & Ozer, 2019; Götz et al., 2022, 2024). Future research can employ a range of tailored techniques to determine whether they do and if so, what their longevity and practical consequences look like (Anvari et al., 2023). Fourth, effects may vary across cultures (Ghai et al., 2022) and the fact that there were no consistent effects in our German-speaking samples of Central Europeans does not mean that there may be no consistent effects in other cultures. Fifth, effects may vary across developmental stages (Orben et al., 2022, 2024; Orben & Blakemore, 2023) and the fact that we find no consistent effects across adult populations does not rule out the possibility that social media usage may detrimentally affect the emotional prowess of children and adolescents, who have been the main focus of prior research (e.g., Orben et al., 2019; Robertson et al., 2022; Twenge et al., 2018). Sixth, our understanding and measurement of the relevant aspects of social media usage – and general smartphone behaviours – may not yet allow us to study these associations as comprehensively and as effectively as needed. That is, while we have done our best to approach the measurement of social media usage in a cautious, inclusive, and thoughtful way and have measured various important aspects across different settings, our study – like the field at large – may still suffer from imperfect measurement. For example, scholars have highlighted that while it is laudable and conceptually relevant to distinguish between active and passive social media usage, that distinction alone may be too coarse – prompting the development of the recently published “extended active-passive model of social networking sites (SNS) use” (Verduyn et al., 2022). Others have pointed out pervasive measurement issues in the field that range from non-standardised measures to an overreliance on self-reports (Parry et al., 2021) to issues with measuring screen time and a lack of context assessment (Kaye et al., 2020). Yet others have stressed the need for longitudinal within-person analyses to get a more individual-centric understanding of personal vulnerabilities and their unfolding over time (Cingel et al., 2022; Orben et al., 2024), which would also allow for more tailored and direct tests of the role of individual differences in general (Beyens et al., 2020; Valkenburg & Peter, 2013) and the rich-get-richer and poor-get-poorer hypothesis in particular (Cheng et al., 2019; Pouwels et al., 2021).

In short, much work remains to be done and the debate is far from settled. For now, however, we do not find consistent evidence for diminished emotional prowess as a result of social media usage.

Data Accessibility Statement

De-identified data, analysis scripts, and all materials are posted on OSF: https://osf.io/jegbn/

Funding

Study 3 was financially supported by a fundraising initiative. The following organizations funded the project: Generali Versicherung AG Geschäftsstelle St. Pölten, Astoria Steuerberatung GmbH & Co KG, Uni Credit Bank Austria AG Krems an der Donau, Kremser Bank und Sparkassen AG, HYPO NOE Landesbank, and Niederösterreichische Vorsorgekasse AG St. Pölten. Furthermore, Elke Pichler, Dieter Windischbaur, Elisabeth Wendt, Petra Grob, Karin Riedel, Samuel Wundsam, and 10 anonymous donors supported the project financially. None of the supporters had any influence on the design or analysis of the study or on the write-up of the manuscript.

Author Contributions

Conceptualization: S.S., S.V., F.M.G.; Formal analysis: S.S.; Funding acquisition: S.S.; Investigation: S.S., S.V.; Methodology: S.S., S.V., F.M.G.; Project administration: S.S.; Supervision: S.S.; Visualization: S.S.; Writing – Original draft preparation: S.S., F.M.G.; Writing – Review & editing: S.S., S.V., F.M.G.

Acknowledgements

We acknowledge the support of the Open Access Publishing Fund of Karl Landsteiner University of Health Sciences, Krems, Austria.

We thank the students of the experimental classes in the 2019 and 2020 terms for their help with recruitment (Study 1 and Study 2). We thank Ingrid Brunner for her support in organizing and leading the crowdfunding project (Study 3).

Competing Interests

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

1. Questions about friends were also included in the questionnaire but were not part of this study (e.g., number of Facebook friends, offline friends, close friends).

2. Several other demographic and lifestyle variables were assessed that are not part of this study (e.g., work-related questions, music taste, eating behavior, sport behavior, number of Facebook friends, offline friends, close friends; ownership of pets).

3. We also checked for an inverted U-shaped association, as suggested by Cabello et al. (2016). A scatterplot revealed a slightly U-shaped curve, but a quadratic association fit the data only slightly better than a linear one (R² = 2.5% for the linear model vs. 3.2% for the quadratic model).
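For readers who want to see what this check involves, the comparison amounts to fitting a linear and a quadratic regression of emotional intelligence on age and comparing their R² values. A minimal Python sketch with simulated data (not the study's data) follows:

```python
# Minimal sketch of the linear vs. quadratic comparison described in footnote 3,
# using simulated data rather than the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
age = rng.uniform(18, 80, 500)                                # simulated ages
ei = 90 + 0.4 * age - 0.004 * age**2 + rng.normal(0, 8, 500)  # simulated EI scores

linear = sm.OLS(ei, sm.add_constant(age)).fit()
quadratic = sm.OLS(ei, sm.add_constant(np.column_stack([age, age**2]))).fit()
print(f"R² linear = {linear.rsquared:.3f}, R² quadratic = {quadratic.rsquared:.3f}")
```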

Aczel, B., Palfi, B., Szollosi, A., Kovacs, M., Szaszi, B., Szecsi, P., Zrubka, M., Gronau, Q. F., Van Den Bergh, D., & Wagenmakers, E. (2018). Quantifying support for the null hypothesis in psychology: An empirical investigation. Advances in Methods and Practices in Psychological Science, 1(3), 357–366. https://doi.org/10.1177/2515245918773742
Anvari, F., Kievit, R., Lakens, D., Pennington, C. R., Przybylski, A. K., Tiokhin, L., Wiernik, B. M., & Orben, A. (2023). Not all effects are indispensable: Psychological science requires verifiable lines of reasoning for whether an effect matters. Perspectives on Psychological Science, 18(2), 503–507. https://doi.org/10.1177/17456916221091565
Appel, M., & Gnambs, T. (2019). Shyness and social media use: A meta-analytic summary of moderating and mediating effects. Computers in Human Behavior, 98, 294–301. https://doi.org/10.1016/j.chb.2019.04.018
Baron-Cohen, S., Wheelwright, S., Hill, J., Raste, Y., & Plumb, I. (2001). The “reading the mind in the eyes” test revised version: A study with normal adults, and adults with Asperger syndrome or high-functioning autism. The Journal of Child Psychology and Psychiatry and Allied Disciplines, 42(2), 241–251. https://doi.org/10.1017/S0021963001006643
Beyens, I., Pouwels, J. L., Van Driel, I. I., Keijsers, L., & Valkenburg, P. M. (2020). The effect of social media on well-being differs from adolescent to adolescent. Scientific Reports, 10(1). https://doi.org/10.1038/s41598-020-67727-7
Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324
Brieuc, M. S. O., Waters, C. D., Drinan, D. P., & Naish, K. A. (2018). A practical introduction to Random Forest for genetic association studies in ecology and evolution. Molecular Ecology Resources, 18(4), 755–766. https://doi.org/10.1111/1755-0998.12773
Cabello, R., Sorrel, M. A., Fernández-Pinto, I., Extremera, N., & Fernández-Berrocal, P. (2016). Age and gender differences in ability emotional intelligence in adults: A cross-sectional study. Developmental Psychology, 52(9), 1486–1492. https://doi.org/10.1037/dev0000191
Cheng, C., Wang, H. Y., Sigerson, L., & Chau, C. L. (2019). Do the socially rich get richer? A nuanced perspective on social network site use and online social capital accrual. Psychological Bulletin, 145(7), 734–764. https://doi.org/10.1037/bul0000198
Cingel, D. P., Carter, M. C., & Krause, H.-V. (2022). Social media and self-esteem. Current Opinion in Psychology, 45, 101304. https://doi.org/10.1016/j.copsyc.2022.101304
Datareportal. (2023). Global social media statistics. DataReportal – Global Digital Insights. https://datareportal.com/social-media-users
D’mello, S. K., & Kory, J. (2015). A review and meta-analysis of multimodal affect detection systems. ACM Computing Surveys, 47(3). https://doi.org/10.1145/2682899
Du, H., Götz, F. M., Chen, A., & Rentfrow, P. J. (2023). Revisiting values and self-esteem: A large-scale study in the United States. European Journal of Personality, 37(1), 3–19. https://doi.org/10.1177/08902070211038805
Ebert, T., Götz, F. M., Gladstone, J. J., Müller, S. R., & Matz, S. C. (2021). Spending reflects not only who we are but also who we are around: The joint effects of individual and geographic personality on consumption. Journal of Personality and Social Psychology, 121(2), 378–393. https://doi.org/10.1037/pspp0000344
Ebert, T., Götz, F. M., Obschonka, M., Zmigrod, L., & Rentfrow, P. J. (2019). Regional variation in courage and entrepreneurship: The contrasting role of courage for the emergence and survival of start-ups in the United States. Journal of Personality, 87(5), 1039–1055. https://doi.org/10.1111/jopy.12454
Escobar-Viera, C. G., Shensa, A., Bowman, N. D., Sidani, J. E., Knight, J., James, A. E., & Primack, B. A. (2018). Passive and active social media use and depressive symptoms among United States adults. Cyberpsychology, Behavior, and Social Networking, 21(7), 437–443. https://doi.org/10.1089/cyber.2017.0668
Fokkema, M., & Strobl, C. (2020). Fitting prediction rule ensembles to psychological research data: An introduction and tutorial. Psychological Methods, 25(5), 636–652. https://doi.org/10.1037/met0000256
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. https://doi.org/10.1177/2515245919847202
Ghai, S., Magis-Weinberg, L., Stoilova, M., Livingstone, S., & Orben, A. (2022). Social media and adolescent well-being in the Global South. Current Opinion in Psychology, 46, 101318. https://doi.org/10.1016/j.copsyc.2022.101318
Gnambs, T., & Appel, M. (2018). Narcissism and social networking behavior: A meta-analysis. Journal of Personality, 86(2), 200–212. https://doi.org/10.1111/jopy.12305
Gosling, S. D., Vazire, S., Srivastava, S., & John, O. P. (2004). Should we trust web-based studies? A comparative analysis of six preconceptions about internet questionnaires. American Psychologist, 59(2), 93–104. https://doi.org/10.1037/0003-066X.59.2.93
Götz, F. M., Bleidorn, W., & Rentfrow, P. J. (2020). Age differences in Machiavellianism across the life span: Evidence from a large-scale cross-sectional study. Journal of Personality, 88(5), 978–992. https://doi.org/10.1111/jopy.12545
Götz, F. M., Gosling, S. D., & Rentfrow, P. J. (2022). Small effects: The indispensable foundation for a cumulative psychological science. Perspectives on Psychological Science, 17(1), 205–215. https://doi.org/10.1177/1745691620984483
Götz, F. M., Gosling, S. D., & Rentfrow, P. J. (2024). Effect sizes and what to make of them. Nature Human Behaviour, 8, 798–800. https://doi.org/10.1038/s41562-024-01858-z
Götz, F. M., Maertens, R., Loomba, S., & van der Linden, S. (2023). Let the algorithm speak: How to use neural networks for automatic item generation in psychological scale development. Psychological Methods. https://doi.org/10.1037/met0000540
Götz, F. M., Stieger, S., Gosling, S. D., Potter, J., & Rentfrow, P. J. (2020). Physical topography is associated with human personality. Nature Human Behaviour, 4, 1135–1144. https://doi.org/10.1038/s41562-020-0930-x
Haidt, J., & Allen, N. (2020). Scrutinizing the effects of digital technology on mental health. Nature, 578(7794), 226–227. https://doi.org/10.1038/d41586-020-00296-x
IJzerman, H., Lindenberg, S., Dalğar, İ., Weissgerber, S. S. C., Vergara, R. C., Cairo, A. H., Čolić, M. V., Dursun, P., Frankowska, N., Hadi, R., Hall, C. J., Hong, Y., Hu, C.-P., Joy-Gaba, J., Lazarević, D., Lazarević, L. B., Parzuchowski, M., Ratner, K. G., Rothman, D., & Zickfeld, J. H. (2018). The human penguin project: Climate, social integration, and core body temperature. Collabra: Psychology, 4(1), 37. https://doi.org/10.1525/collabra.165
IJzerman, H., Pollet, T., Ebersole, C., & Kun, D. (2016). What predicts Stroop performance? A conditional random forest approach. SSRN Scholarly Paper 2805205. https://doi.org/10.2139/ssrn.2805205
Joel, S., Eastwick, P. W., Allison, C. J., Arriaga, X. B., Baker, Z. G., Bar-Kalifa, E., Bergeron, S., Birnbaum, G. E., Brock, R. L., Brumbaugh, C. C., Carmichael, C. L., Chen, S., Clarke, J., Cobb, R. J., Coolsen, M. K., Davis, J., de Jong, D. C., Debrot, A., DeHaas, E. C., … Wolf, S. (2020). Machine learning uncovers the most robust self-report predictors of relationship quality across 43 longitudinal couples studies. Proceedings of the National Academy of Sciences, 117(32), 19061–19071. https://doi.org/10.1073/pnas.1917036117
Kaye, L. K., Orben, A., Ellis, D. A., Hunter, S. C., & Houghton, S. (2020). The conceptual and methodological mayhem of “screen time.” International Journal of Environmental Research and Public Health, 17(10), 3661. https://doi.org/10.3390/ijerph17103661
Kessous, L., Castellano, G., & Caridakis, G. (2010). Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis. Journal on Multimodal User Interfaces, 3(1), 33–48. https://doi.org/10.1007/s12193-009-0025-5
Kirkland, R. A., Peterson, E., Baker, C. A., Miller, S., & Pulos, S. (2013). Meta-analysis reveals adult female superiority in “Reading the Mind in the Eyes” Test. North American Journal of Psychology, 15(1), 121–146.
Kittel, A. F. D., Olderbak, S., & Wilhelm, O. (2022). Sty in the mind’s eye: A meta-analytic investigation of the nomological network and internal consistency of the “Reading the Mind in the Eyes” Test. Assessment, 29(5), 872–895. https://doi.org/10.1177/1073191121996469
Kuperman, V., Matsuki, K., & Van Dyke, J. A. (2018). Contributions of reader- and text-level characteristics to eye-movement patterns during passage reading. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(11), 1687–1713. https://doi.org/10.1037/xlm0000547
Kynast, J., Polyakova, M., Quinque, E. M., Hinz, A., Villringer, A., & Schroeter, M. L. (2021). Age- and sex-specific standard scores for the Reading the Mind in the Eyes Test. Frontiers in Aging Neuroscience, 12. https://doi.org/10.3389/fnagi.2020.607107
Lapidot-Lefler, N., & Barak, A. (2012). Effects of anonymity, invisibility, and lack of eye-contact on toxic online disinhibition. Computers in Human Behavior, 28(2), 434–443. https://doi.org/10.1016/j.chb.2011.10.014
Latinne, P., Debeir, O., & Decaestecker, C. (2001). Limiting the number of trees in random forests. In J. Kittler & F. Roli (Eds.), Multiple Classifier Systems (pp. 178–187). Springer. https://doi.org/10.1007/3-540-48219-9_18
Liaw, A., & Wiener, M. (2002). Classification and regression by randomForest. R News, 2(3), 18–22.
Mayer, J. D., Caruso, D. R., & Salovey, P. (2016). The ability model of emotional intelligence: Principles and updates. Emotion Review, 8(4), 290–300. https://doi.org/10.1177/1754073916639667
Mayer, J. D., DiPaolo, M., & Salovey, P. (1990). Perceiving affective content in ambiguous visual stimuli: A component of emotional intelligence. Journal of Personality Assessment, 54(3–4), 772–781. https://doi.org/10.1080/00223891.1990.9674037
Mayer, J. D., Salovey, P., Caruso, D. R., & Sitarenios, G. (2001). Emotional intelligence as a standard intelligence. Emotion, 1(3), 232–242. https://doi.org/10.1037/1528-3542.1.3.232
Megías-Robles, A., Gutiérrez-Cobo, M. J., Cabello, R., Gómez-Leal, R., Baron-Cohen, S., & Fernández-Berrocal, P. (2020). The ‘Reading the mind in the Eyes’ test and emotional intelligence. Royal Society Open Science, 7(9), 201305. https://doi.org/10.1098/rsos.201305
Muthukrishna, M., & Henrich, J. (2019). A problem in theory. Nature Human Behaviour, 3(3), 221–229. https://doi.org/10.1038/s41562-018-0522-1
Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s renaissance. Annual Review of Psychology, 69(1), 511–534. https://doi.org/10.1146/annurev-psych-122216-011836
Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., Fidler, F., Hilgard, J., Struhl, M. K., Nuijten, M. B., Rohrer, J. M., Romero, F., Scheel, A. M., Scherer, L. D., Schönbrodt, F. D., & Vazire, S. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology, 73(1), 719–748. https://doi.org/10.1146/annurev-psych-020821-114157
Oakley, B. F. M., Brewer, R., Bird, G., & Catmur, C. (2016). Theory of mind is not theory of emotion: A cautionary note on the Reading the Mind in the Eyes Test. Journal of Abnormal Psychology, 125(6), 818–823. https://doi.org/10.1037/abn0000182
Olderbak, S., Wilhelm, O., Olaru, G., Geiger, M., Brenneman, M. W., & Roberts, R. D. (2015). A psychometric analysis of the reading the mind in the eyes test: Toward a brief form for research and applied settings. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.01503
Orben, A. (2020). The sisyphean cycle of technology panics. Perspectives on Psychological Science, 15(5), 1143–1157. https://doi.org/10.1177/1745691620919372
Orben, A., & Blakemore, S.-J. (2023). How social media affects teen mental health: A missing link. Nature, 614(7948), 410–412. https://doi.org/10.1038/d41586-023-00402-9
Orben, A., Dienlin, T., & Przybylski, A. K. (2019). Social media’s enduring effect on adolescent life satisfaction. Proceedings of the National Academy of Sciences, 116(21), 10226–10228. https://doi.org/10.1073/pnas.1902058116
Orben, A., Meier, A., Dalgleish, T., & Blakemore, S. (2024). Mechanisms linking social media use to adolescent mental health vulnerability. Nature Reviews Psychology, 3, 407–423. https://doi.org/10.1038/s44159-024-00307-y
Orben, A., & Przybylski, A. K. (2019). The association between adolescent well-being and digital technology use. Nature Human Behaviour, 3, 173–182. https://doi.org/10.1038/s41562-018-0506-1
Orben, A., Przybylski, A. K., Blakemore, S.-J., & Kievit, R. A. (2022). Windows of developmental sensitivity to social media. Nature Communications, 13(1). https://doi.org/10.1038/s41467-022-29296-3
Parry, D. A., Davidson, B. I., Sewall, C. J. R., Fisher, J. T., Mieczkowski, H., & Quintana, D. S. (2021). A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour, 5(11), 1535–1547. https://doi.org/10.1038/s41562-021-01117-5
Petrides, K. V., & Furnham, A. (2003). Trait emotional intelligence: Behavioural validation in two studies of emotion recognition and reactivity to mood induction. European Journal of Personality, 17(1), 39–57. https://doi.org/10.1002/per.466
Petrides, K. V., & Furnham, A. (2006). The role of trait emotional intelligence in a gender-specific model of organizational variables. Journal of Applied Social Psychology, 36(2), 552–569. https://doi.org/10.1111/j.0021-9029.2006.00019.x
Pouwels, J. L., Valkenburg, P. M., Beyens, I., van Driel, I. I., & Keijsers, L. (2021). Some socially poor but also some socially rich adolescents feel closer to their friends after using social media. Scientific Reports, 11, 21176. https://doi.org/10.1038/s41598-021-99034-0
Przybylski, A. K., & Weinstein, N. (2013). Can you connect with me now? How the presence of mobile communication technology influences face-to-face conversation quality. Journal of Social and Personal Relationships, 30(3), 237–246. https://doi.org/10.1177/0265407512453827
Robertson, L., Twenge, J. M., Joiner, T. E., & Cummins, K. (2022). Associations between screen time and internalizing disorder diagnoses among 9- to 10-year-olds. Journal of Affective Disorders, 311, 530–537. https://doi.org/10.1016/j.jad.2022.05.071
Rosen, L. D., Whaling, K., Carrier, L. M., Cheever, N. A., & Rokkum, J. (2013). The Media and Technology Usage and Attitudes Scale: An empirical investigation. Computers in Human Behavior, 29(6), 2501–2511. https://doi.org/10.1016/j.chb.2013.06.006
Rozgonjuk, D., Sindermann, C., Elhai, J. D., & Montag, C. (2020). Fear of Missing Out (FoMO) and social media’s impact on daily-life and productivity at work: Do WhatsApp, Facebook, Instagram, and Snapchat Use Disorders mediate that association? Addictive Behaviors, 110, 106487. https://doi.org/10.1016/j.addbeh.2020.106487
Schlegel, K., Fontaine, J. R. J., & Scherer, K. R. (2019). The nomological network of emotion recognition ability. European Journal of Psychological Assessment, 35(3), 352–363. https://doi.org/10.1027/1015-5759/a000396
Schlegel, K., Grandjean, D., & Scherer, K. R. (2014). Introducing the Geneva Emotion Recognition Test: An example of Rasch-based test development. Psychological Assessment, 26(2), 666–672. https://doi.org/10.1037/a0035246
Schlegel, K., & Scherer, K. R. (2016). Introducing a short version of the Geneva Emotion Recognition Test (GERT-S): Psychometric properties and construct validation. Behavior Research Methods, 48(4), 1383–1392. https://doi.org/10.3758/s13428-015-0646-4
Sibai, O., Luedicke, M. K., & De Valck, K. (2024). Why online consumption communities brutalize. Journal of Consumer Research. https://doi.org/10.1093/jcr/ucae022
Skalická, V., Wold Hygen, B., Stenseng, F., Kårstad, S. B., & Wichstrøm, L. (2019). Screen time and the development of emotion understanding from age 4 to age 8: A community study. British Journal of Developmental Psychology, 37(3), 427–443. https://doi.org/10.1111/bjdp.12283
Statista. (2024). Number of internet and social media users worldwide as of April 2024. https://www.statista.com/statistics/617136/digital-population-worldwide/
Stieger, S., Götz, F. M., Wilson, C., Volsa, S., & Rentfrow, P. J. (2022). A tale of peaks and valleys: Sinusoid relationship patterns between mountainousness and basic human values. Social Psychological and Personality Science, 13(2), 390–402. https://doi.org/10.1177/19485506211034966
Stieger, S., & Reips, U.-D. (2016). A limitation of the Cognitive Reflection Test: Familiarity. PeerJ, 4, e2395. https://doi.org/10.7717/peerj.2395
Strobl, C., Malley, J., & Tutz, G. (2009). An introduction to recursive partitioning: Rationale, application, and characteristics of classification and regression trees, bagging, and random forests. Psychological Methods, 14(4), 323–348. https://doi.org/10.1037/a0016973
Thorisdottir, I. E., Sigurvinsdottir, R., Asgeirsdottir, B. B., Allegrante, J. P., & Sigfusdottir, I. D. (2019). Active and passive social media use and symptoms of anxiety and depressed mood among Icelandic adolescents. Cyberpsychology, Behavior, and Social Networking, 22(8), 535–542. https://doi.org/10.1089/cyber.2019.0079
Tracy, J. L., & Robins, R. W. (2008). The automaticity of emotion recognition. Emotion, 8(1), 81–95. https://doi.org/10.1037/1528-3542.8.1.81
Twenge, J. M. (2019). More time on technology, less happiness? Associations between digital-media use and psychological well-being. Current Directions in Psychological Science, 28(4), 372–379. https://doi.org/10.1177/0963721419838244
Twenge, J. M. (2020). Why increases in adolescent depression may be linked to the technological environment. Current Opinion in Psychology, 32, 89–94. https://doi.org/10.1016/j.copsyc.2019.06.036
Twenge, J. M., Haidt, J., Joiner, T. E., & Campbell, W. K. (2020). Underestimating digital media harm. Nature Human Behaviour, 4, 346–348. https://doi.org/10.1038/s41562-020-0839-4
Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2018). Increases in depressive symptoms, suicide-related outcomes, and suicide rates among U.S. adolescents after 2010 and links to increased new media screen time. Clinical Psychological Science, 6(1), 3–17. https://doi.org/10.1177/2167702617723376
Uhls, Y. T., Michikyan, M., Morris, J., Garcia, D., Small, G. W., Zgourou, E., & Greenfield, P. M. (2014). Five days at outdoor education camp without screens improves preteen skills with nonverbal emotion cues. Computers in Human Behavior, 39, 387–392. https://doi.org/10.1016/j.chb.2014.05.036
Valkenburg, P. M., Beyens, I., Meier, A., & Vanden Abeele, M. M. P. (2022). Advancing our understanding of the associations between social media use and well-being. Current Opinion in Psychology, 47, 101357. https://doi.org/10.1016/j.copsyc.2022.101357
Valkenburg, P. M., & Peter, J. (2013). The differential susceptibility to media effects model. Journal of Communication, 63(2), 221–243. https://doi.org/10.1111/jcom.12024
Vandenbosch, L., Fardouly, J., & Tiggemann, M. (2022). Social media and body image: Recent trends and future directions. Current Opinion in Psychology, 45, 101289. https://doi.org/10.1016/j.copsyc.2021.12.002
Vellante, M., Baron-Cohen, S., Melis, M., Marrone, M., Petretto, D. R., Masala, C., & Preti, A. (2013). The “Reading the Mind in the Eyes” test: Systematic review of psychometric properties and a validation study in Italy. Cognitive Neuropsychiatry, 18(4), 326–354. https://doi.org/10.1080/13546805.2012.721728
Verduyn, P., Gugushvili, N., & Kross, E. (2022). Do social networking sites influence well-being? The extended active-passive model. Current Directions in Psychological Science, 31(1), 62–68. https://doi.org/10.1177/09637214211053637
Voracek, M., & Dressler, S. G. (2006). Lack of correlation between digit ratio (2D:4D) and Baron-Cohen’s “Reading the Mind in the Eyes” test, empathy, systemising, and autism-spectrum quotients in a general population sample. Personality and Individual Differences, 41(8), 1481–1491. https://doi.org/10.1016/j.paid.2006.06.009
Vuorre, M., Orben, A., & Przybylski, A. K. (2021). There is no evidence that associations between adolescents’ digital technology engagement and mental health problems have increased. Clinical Psychological Science, 9(5), 823–835. https://doi.org/10.1177/2167702621994549
Wartberg, L., Durkee, T., Kriston, L., Parzer, P., Fischer-Waldschmidt, G., Resch, F., Sarchiapone, M., Wasserman, C., Hoven, C. W., Carli, V., Wasserman, D., Thomasius, R., Brunner, R., & Kaess, M. (2017). Psychometric properties of a German version of the Young Diagnostic Questionnaire (YDQ) in two independent samples of adolescents. International Journal of Mental Health and Addiction, 15(1), 182–190. https://doi.org/10.1007/s11469-016-9654-6
Waytz, A., & Gray, K. (2018). Does online technology make us more or less sociable? A preliminary review and call for research. Perspectives on Psychological Science, 13(4), 473–491. https://doi.org/10.1177/1745691617746509
Wei, W., Lu, J. G., Galinsky, A. D., Wu, H., Gosling, S. D., Rentfrow, P. J., Yuan, W., Zhang, Q., Guo, Y., Zhang, M., Gui, W., Guo, X.-Y., Potter, J., Wang, J., Li, B., Li, X., Han, Y.-M., Lv, M., Guo, X.-Q., & Wang, L. (2017). Regional ambient temperature is associated with human personality. Nature Human Behaviour, 1, 890–895. https://doi.org/10.1038/s41562-017-0240-0
Wolfers, L. N., & Utz, S. (2022). Social media use, stress, and coping. Current Opinion in Psychology, 45, 101305. https://doi.org/10.1016/j.copsyc.2022.101305
Young, K. S. (1998). Internet addiction: The emergence of a new clinical disorder. CyberPsychology & Behavior, 1(3), 237–244. https://doi.org/10.1089/cpb.1998.1.237
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material