We introduce the Social Media Sexist Content (SMSC) database, an open-access online stimulus set consisting of 382 social media content items and 313 comments related to the content. The content items include 90 sexist posts and 292 neutral posts; the comment items include 75 sexist comments and 238 neutral comments. The database covers a broad range of topics, including lifestyle, memes, and school posts. All posts were anonymized after being retrieved from publicly available sources. All content and comments were rated across two domains: degree of sexism and emotional reaction to the post. In terms of sexism, the posts were rated along three dimensions of gender bias: Hostile Sexism, Benevolent Sexism, and Objectification. Participants also reported their emotional reactions to the posts in terms of feeling Ashamed, Insecure, and/or Angry. Data were collected online in two separate studies: one rating the content and the other rating the comments. The sexism and emotion ratings were highly reliable and showed that the posts displayed either sexist or neutral content. The SMSC database is beneficial to researchers because it offers updated social media content for research use online and in the lab. The database affords researchers the ability to explore stimuli either by content or by ratings, and it is free to use for research purposes. The SMSC is available for download from hannahbuie.com.
Social media platforms were initially seen as the ultimate tool of connection, enabling people to expand their social networks across cultures and international boundaries. Today, people report spending twice as much time socializing online as in person (American Time Use Survey, 2019), and social media can be an important source of strengthening and supporting social relationships (Burke & Kraut, 2014). Unfortunately, instead of providing a utopian tool of inclusion and connection, social media platforms often exacerbate existing societal biases like sexism and objectification (Fosch-Villaronga et al., 2021). Jokes targeting women’s competency are common (Drakett et al., 2018; Fox et al., 2015), and women are sexualized at an alarming rate (Bell et al., 2018; Davis, 2018). For example, a woman is verbally abused on Twitter every 30 seconds. BIPOC women are about 3 times as likely as White women to be mentioned in problematic or abusive Tweets, and Black women in particular are 8 times as likely to be targeted by problematic or abusive Tweets (Amnesty International, 2017). Further, social media platforms themselves know that sexist and objectifying content has a negative impact on women and girls but take little to no action to mitigate this effect. For example, research conducted by Facebook found that 32% of teen girls, roughly 1 in 3, said that Instagram exacerbates negative body image, and that the platform can lead to increased anxiety and depression (Wells et al., 2021).
Informal and unspoken guidelines (i.e., social norms) inform how people are treated, including the differential treatment of men and women. Some of these social norms can serve as oppressive tools to maintain an unequal status quo (Jackman, 1994). In most modern societies, men have a higher status, controlling most economic streams (Alesina et al., 2013). This higher status informs how individuals in these societies think about men and women, often seeing men, the high-status group, as having greater societal value and import (Berger et al., 2018; Ridgeway, 1992; Schmader et al., 2001). Social norms like these are broadly established and legitimized through government policies, workplace cultures, and education, limiting the professional opportunities individuals seek, shaping how people think about themselves, and constraining interpersonal relationships (Block et al., 2019; Boesveld, 2020; Croft et al., 2015, 2019; Kong et al., 2020; Meara et al., 2020). These norms legitimize gender inequality, leading to disparities such as diminished opportunity for women and the devaluation of positions once they become more associated with women than with men.
Social media platforms not only reflect these unequal societal gender norms but can also inflate them. For example, biased interactions like racism and sexism now occur more frequently online than face-to-face (Tynes et al., 2015), likely due to the relative social distance and anonymity online interactions can have (Fox et al., 2015). Further, sexism and objectification online are typically ignored or met with ingratiation rather than confronted (Mallett & Monteith, 2019). The amplification of sexism and objectification combined with the lack of confrontation potentially creates permissive norms on social media platforms, perpetuating these types of biases.
While some people attempt to filter out this bias by blocking perpetrators, widespread sexism and objectification on these platforms can make it impossible to avoid, because the current algorithms and underlying structure of social media platforms uniquely foster vulnerability to the expression of prejudice, especially for adolescent populations. Experiencing prejudice in online interactions can undermine people’s social and intellectual development (English et al., 2020). Unfortunately, there is a dearth of non-proprietary research examining interactions within realistic platforms where people feel like they are interacting with other people. Much of the existing literature is correlational (e.g., Meier & Gray, 2013; Stanton et al., 2017) or analyzes existing social media content (e.g., Döring & Mohseni, 2019; Drakett et al., 2018; Paciello et al., 2021). These approaches have strengths and weaknesses, but most importantly they illustrate a gap in the literature regarding the types of questions researchers are able to address. For example, correlational research often requires participants to reflect on past experiences, leading to low experimental control and an inability to draw conclusions about causality. Analyzing existing social media content is valuable for its ecological validity; however, it lacks experimental control in comparisons. Greater experimental control along with high ecological validity is now possible due to resources like the Truman Platform (DiFranzo et al., 2018) and the Mock Social Media Website Tool (Jagayat et al., 2021), which allow researchers to use pre-existing code to build a realistic-looking mock social media website for research use.
These resources give researchers the capacity to use cover stories, such as telling participants that they are beta testing a new social media platform. This increases ecological validity and allows researchers to examine issues like prejudice within what feels like a real online social interaction, rather than the less realistic approach of simply showing participants static images from social media platforms.
While social media research is beginning to make strides within Psychology through the use of tools like these, there are still many challenges that researchers face when building out mock social media websites. One of the biggest is finding, selecting, and preparing content to use in research (Stieglitz et al., 2018). As with other paradigms investigating constructs like sexism or violence, for a mock social media website to be realistic it must include a substantial amount of neutral filler content on the newsfeed as well as posts that target the construct being examined. For example, our database provides sexist content and is ideal for researchers examining sexism and objectification encountered on social media platforms. Accordingly, open-access databases of social media content for research use would be hugely beneficial to researchers taking advantage of the tools available to realistically study social media. In the current research, we created a novel database of pre-tested social media content containing 382 social media posts and 313 comments related to the posts. The content and comments were rated on dimensions of Ambivalent Sexism (hostile and benevolent) and Objectification, as well as emotional reactions of Anger, feeling Ashamed, and feeling Insecure. This stimulus set is intended to facilitate future research by easing the burden of collecting and piloting stimuli, providing experimental control in the selection of sexist and neutral social media stimuli, and making the comparison of results across studies more meaningful.
In this paper, we define gender as a social construct, learned through exposure to norms, roles, and stereotypes. Sex, which refers to sex assigned at birth, does not play a central role in this discussion. Further, while there is increasing social and scientific recognition that gender is not a binary of man or woman, in the cognitive processes associated with social categorization on social media platforms, people continue to categorize others as men or women (Gelman et al., 1986; Gelman & Koenig, 2003; Klysing, 2020; Roberts & Gelman, 2017). Accordingly, we will use the language of man/woman with the understanding that this refers to the process of social categorization, which is one of the quickest categorizations to take place during social perception because people are taught from a young age to categorize others and themselves as either man or woman (Maccoby & Jacklin, 1974). However, our use of man/woman is not intended to describe gender as a binary; gender as part of one’s self-construct is better identified along a spectrum.
Ambivalent Sexism and Objectification
Ambivalent Sexism Theory provides a framework with which we can distinguish between two general factors of sexism: hostile and benevolent sexism (Glick & Fiske, 1997). In this framework, both hostile and benevolent sexism comprise three areas: power, gender roles, and sexuality. Hostile and benevolent sexism are independent constructs that typically work in concert to legitimize the current social hierarchy. As such, people can and often do endorse the beliefs described in both hostile sexism and benevolent sexism. The first area, power differences between genders, helps rationalize the different forms of paternalism that occur in both hostile and benevolent sexism (Fiske, 1993; Glick & Fiske, 1997; Goodwin et al., 2000). In hostile sexism, power is embodied as dominative paternalism, in which the person believes women should be controlled by men. Benevolent sexism reflects a more protective paternalism, in which the individual believes men should be protectors and providers for women due to men’s supposedly greater authority and physical strength.
The second area, belief in and support for traditional gender roles, reinforces and perpetuates gender inequality by assuming men are better fit for high-status positions while women are more suited for domestic and lower-status professional roles (Eagly, 1987). In hostile sexism, this manifests as competitive gender differentiation, which provides men with self-confidence through the belief that they are the more competent and highly skilled half of the population (Hogg & Abrams, 1990). Benevolent sexism promotes a more complementary gender differentiation in which men’s dependence on women is celebrated and women are seen as more nurturing and caring. Thus, men are seen as providers while women play a supportive role that is not rewarded monetarily and in which men hold the decision-making power. Yet, because women are positively evaluated, they are often described as men’s “better half” (Eagly, 1987; Glick & Fiske, 1997).
The third and final area, heterosexual men’s sexual desires, manifests in hostile sexism as a fear that women will use sexual attraction to manipulate men. In benevolent sexism, women are romanticized as sexual objects, something men should strive to obtain in order to feel complete (Glick & Fiske, 1997). It is important to note that a key factor absent in these beliefs about heterosexuality is any sense of heterosexual romantic relationships as a partnership between equals. Instead, there is an overt power imbalance and a tendency to see women as something to obtain rather than as autonomous, independent people entering a partnership. Finally, the focus of this sexism on heterosexuality demonstrates an important intersectionality in prejudice: the assumption of heterosexuality in these frameworks is both sexist and anti-LGBTQ+.
These types of sexism have real-world consequences. Hostile sexism enforces the “glass ceiling” effect, in which women are unable to reach leadership or managerial positions due to sexism in the workplace (Masser & Abrams, 2004). Benevolent sexism also impacts women in the workplace. For example, coworkers who are men often perceive women as less competent and less deserving of their jobs (Good & Rudman, 2010), in part because men assume women are better equipped for more nurturing positions. This, in turn, affects women’s self-perception of competency and how women describe themselves (Barreto et al., 2010). Together, hostile and benevolent sexism form Ambivalent Sexism, which can provide a rationale for failing to acknowledge ongoing discrimination, for hostility toward gender equality, and for antagonism toward gender-egalitarian policies (Glick & Fiske, 1997; Swim et al., 1995).
At the same time the theory of Ambivalent Sexism was being developed, a separate but parallel line of research examined “the tendency to reduce women to sexual objects”: Objectification Theory (Fredrickson & Roberts, 1997). Although they are different constructs, objectification shares some variance with the ‘heterosexual men’s sexual desires’ area of Ambivalent Sexism. However, objectification is important to isolate as a unique construct when studying social media due to its specificity and prevalence in the media and, more specifically, on social media platforms. Objectification can have detrimental consequences that are not always linked back to the media that caused them. Most commonly, objectification leads to internalized, habitual self-body monitoring (Fredrickson & Roberts, 1997). Further, it has an additive effect in which the accumulation of objectifying experiences can lead to an array of mental health risks that disproportionately affect women, including sexual dysfunction, eating disorders, and unipolar depression. Similarly, there is also an additive effect when men are consistently exposed to media objectifying women: men who see more objectifying content report stronger attitudes supporting violence against women than men who are exposed to less of it (Wright & Tokunaga, 2016).
Sexism and Objectification on Social Media Platforms
Ambivalent sexism and objectification serve to justify and maintain a patriarchal system that is harmful to both men and women (Glick & Fiske, 1997). Sexism limits women’s ambitions: women report more pessimistic views of their future after being exposed to sexism (Brown, 1998; Ford et al., 2019; Markus & Nurius, 1986). Across social media platforms, women are generally portrayed more negatively in both overt displays of objectification (Bell et al., 2018; Davis, 2018) and more subtle social cues of lower competency and respect, such as referring to candidates who are men by their last names and candidates who are women by their first names (Falk, 2010). On platforms such as Twitter, there are consistent occurrences of hostile sexism, with about 7% of the tweets women typically receive being problematic or abusive (Amnesty International, 2017). Beyond these overt instances of sexism, benevolent sexism is thought to be even more rampant than the already pervasive hostile sexism. Instances of benevolent sexism include comments like “No man succeeds without a good woman beside him” and “they’re probably surprised at how smart you are, for a girl” (Jha & Mamidi, 2017).
Emotional reactions to sexism and objectification on social media platforms are also important to understand as emotional reactions have been shown to mediate prejudice confrontation (e.g., Thomas et al., 2020), bystander intervention (e.g., Barhight et al., 2013; Yule & Grych, 2017), and wellbeing outcomes (Kaiser et al., 2004; McCoy & Major, 2003). These are all important areas of research, with long-term implications, in the context of social media platforms. Accordingly, the content in the database contains participant ratings along dimensions of ambivalent sexism and objectification, as well as feelings of shame, insecurity, and anger in response to encountering the sexist content and comments.
Present Research
In this project, we created the novel SMSC database of social media stimuli from participant ratings of content (Study 1) and comments (Study 2) in terms of sexism and objectification of women. The SMSC database consists of the pre-tested stimuli (content and comments) and could help social media research by easing the burden of collecting and piloting stimuli for new projects. This stimulus set could also provide experimental control in the selection of sexist and neutral social media stimuli, making the comparison of results across studies more meaningful and encouraging replication within and across research labs. While curated for social media research, the stimuli in the database could also be used to prime participants with sexist or objectifying stimuli, to populate an implicit measure examining the threshold for perceiving sexist images, or for studies of any sort simply seeking neutral images or comments as filler stimuli for a control condition. The authors do not have conflicts of interest to report. All the stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page on Open Science Framework https://osf.io/eqtr4/.
Study 1
Method
All content in the SMSC database is open-access and free for research use. The database includes ratings of Hostile Sexism, Benevolent Sexism, Objectification, and emotional reactions (i.e., feeling Insecure, Angry, or Ashamed) related to the stimuli.
Selection of Stimuli
The social media content stimuli (e.g., personal posts, memes, graphics, etc.) included in the study were obtained from a variety of online sources. This research was reviewed and approved by the ethics review board at our institution. Most of the content is adapted from real social media posts from users with public profiles that we deidentified and recreated using images from (a) free online sources like Pixabay (https://pixabay.com), (b) content that research assistants found on existing social media platforms, and (c) personal photos from the research team (used with permission from those photographed), giving the content a high degree of ecological validity. We include content in the database from different social groups, including Black and Latina/x women. Intentionally representing different groups of women in the stimulus set was important in order to allow for research examining intersectional identities. While we do not examine or report group differences based on intersectional identities here, future researchers will be able to use the stimulus ratings provided in the database.
A group of four research assistants compiled social media content using the Ambivalent Sexism Inventory (Glick & Fiske, 1997) as a guide for selecting content that fit the categories of Hostile Sexism, Benevolent Sexism, or neither (neutral content). Examples of content items are included in Figures 2 and 4. Each potential content item was saved in a shared document. Each week over a five-month period the group met to discuss the content added since the prior meeting. Only content that all four research assistants and the first author agreed was a good fit for its assigned category was officially added to that category. At this stage, we had participants rate the selected stimuli.
Participants
We collected data from 142 students who participated for class credit. Of these students, 38.8% were men, 58.3% were women, and 2.8% preferred to self-describe. Participants’ ages ranged from 18 to 50 (M = 20.76, SD = 3.99). Participants consisted of 79 Whites (51.6%), 14 Black or African Americans (10%), 13 East Asians (8.5%), 3 Middle Easterners (2%), 3 Native Americans or Pacific Islanders (2%), and 17 individuals self-identified as other/mixed (11.1%).
Procedure
A total of 382 content items were included in the survey: 45 Hostile Sexism items, 45 Benevolent Sexism items, and 292 Neutral content items. More neutral than sexist items were included because, when building out a social media platform that mimics a typical social media platform, more neutral than sexist content is needed for the mock newsfeed feature.
Because each content item had eight questions, the total number of questions related to content in the survey was 3,056. Answering this many questions would undoubtedly lead to a high level of attrition and response fatigue. Accordingly, we randomly assigned participants to rate 100 stimulus items each, 15% Hostile Sexism, 15% Benevolent Sexism, and 70% Neutral. The randomization was set to represent each content category so that each participant was exposed to and rated the same percentage of Hostile, Benevolent, and Neutral content.
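The stratified random assignment described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual survey logic: the pool sizes (45 Hostile, 45 Benevolent, 292 Neutral) and per-participant quotas (15/15/70) come from the text, while the item identifiers and function names are hypothetical.

```python
import random

# Hypothetical item identifiers standing in for the real stimuli.
POOLS = {
    "hostile": [f"hostile_{i}" for i in range(45)],
    "benevolent": [f"benevolent_{i}" for i in range(45)],
    "neutral": [f"neutral_{i}" for i in range(292)],
}

# Each participant rates 100 items: 15% Hostile, 15% Benevolent, 70% Neutral.
QUOTAS = {"hostile": 15, "benevolent": 15, "neutral": 70}

def assign_items(rng: random.Random) -> list:
    """Draw a stratified random subset of items for one participant."""
    items = []
    for category, quota in QUOTAS.items():
        # Sample without replacement within each category to hit the quota.
        items.extend(rng.sample(POOLS[category], quota))
    rng.shuffle(items)  # individually randomized presentation order
    return items

participant_items = assign_items(random.Random(1))
```

Sampling within each category guarantees every participant sees the same 15/15/70 split, while the final shuffle gives each participant an individually randomized order, matching the procedure described next.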
After reading the instructions, participants were presented with the 100 items one-by-one in an individually randomized order and were asked to rate them along dimensions of sexism and objectification. Content items were rated one at a time. After rating each item, participants clicked a button to move on to the next screen, which showed the next content item. Participants could only progress forward and were not able to go back and change already-rated content. After completing the rating task, participants completed a standard demographic questionnaire. Participants were then debriefed and given class credit.
Measures
Hostile Sexism. Participants answered a 2-item measure rating the content in terms of Hostile Sexism adapted from the Ambivalent Sexism Inventory (Glick & Fiske, 1997). Accordingly, participants responded to the following statements using a scale from 1 (Strongly Disagree) to 7 (Strongly Agree): This post disparages/belittles women; This post suggests women are inferior to men.
Benevolent Sexism. Benevolent Sexism was also rated using a 2-item measure (i.e., This post shows that women should be cherished and protected; This post promotes purity amongst women), adapted from the Ambivalent Sexism Inventory (Glick & Fiske, 1997). Participants responded to the following statements using a scale from 1 (Strongly Disagree) to 7 (Strongly Agree).
Objectification. Objectification was also rated using a 2-item measure (i.e., This post emphasizes physical appearance; This post objectifies women) adapted from (Fredrickson & Roberts, 1997) on a scale from 1 (Strongly Disagree) to 7 (Strongly Agree). Objectification was rated along with Ambivalent Sexism because, while it is a construct that might fall under the ‘heterosexual men’s sexual desires’ area of Ambivalent Sexism, it is important to isolate as a unique construct given the strong emphasis on physical/body image on social media platforms.
Emotional reactions. Participants also responded to three 1-item measures rating their emotional reactions to the content (i.e., To what degree did this post make you feel each of the following emotions? Ashamed; Angry; Insecure) on a scale from 1 (Not at all) to 7 (Very Intensely).
Demographics. Finally, participants responded to an array of demographic questions including age, gender, race, ethnicity, and political affiliation.
Results
Detailed information about the ratings for each content item is available in the SMSC database.
Reliability
Due to the necessary design of each participant rating a subset of the content, we calculated the interrater reliability for the Ambivalent Sexism dimensions, Objectification, and emotional reactions using one-way random intraclass correlation coefficients. For the Hostile Sexism dimension, the intraclass correlation coefficient was .95, 95%CI [.92, .98]. For the Benevolent Sexism dimension, the intraclass correlation coefficient was .96, 95%CI [.94, .97]. For Objectification, the intraclass correlation coefficient was .95, 95%CI [.93, .97]. Given the high reliability of the measures, we aggregated the 2-item measures of each stimulus into the three planned indices: Hostile Sexism, Benevolent Sexism, and Objectification.
For the single-item emotional reaction variables: The intraclass correlation coefficient for feeling Insecure was .74, 95%CI [.36, .95]. The intraclass correlation coefficient for feeling Angry was .90, 95%CI [.84, .94]. The intraclass correlation coefficient for feeling Ashamed was .84, 95%CI [.77, .90].
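As a sketch of how one-way random intraclass correlations of this kind are computed, the following implements the standard one-way ANOVA formulas: ICC(1) for the reliability of a single rating and ICC(1,k) for the average of k ratings. This is an illustrative implementation, not the authors' analysis script, and it assumes a complete items-by-raters matrix (the actual design had each rater score only a subset of items).

```python
def one_way_icc(ratings):
    """One-way random ICCs for an items x raters matrix.

    Returns (ICC(1), ICC(1,k)): single-rating and average-rating reliability.
    """
    n = len(ratings)      # number of rated items (targets)
    k = len(ratings[0])   # ratings per item
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-items and within-item mean squares from one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for row, m in zip(ratings, row_means)
                    for x in row) / (n * (k - 1))
    icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
    icc1k = (ms_between - ms_within) / ms_between
    return icc1, icc1k
```

With perfect agreement across raters both coefficients equal 1; as within-item disagreement grows relative to between-item variance, they shrink toward 0.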
Univariate Distributions Along the Target Dimensions
The number of ratings provided for each content item ranged from 31 to 50, with a mean of 37.65 (SD = 5.96) ratings per item. The standard deviation is high because the more numerous Neutral items each received fewer ratings (M = 34.39, SD = 1.10) than the Hostile and Benevolent Sexism items (M = 48.18, SD = 0.99). The mean ratings were calculated for each content category (i.e., content displaying Hostile Sexism, Benevolent Sexism, or Neutral content). The distribution of the content means along the gender bias domains is shown in Figure 1 and the distribution of the content means along the emotional reaction domains is shown in Figure 3. These distributions display the expected trends of Benevolent Sexism content being rated higher in Benevolent Sexism and Hostile Sexism content being rated higher in Hostile Sexism than the other content categories.
Across content categories, the bias ratings ranged from 1 to 7, showing good use of the entire range of the scale. The mean ratings for the Benevolent Content were: Benevolent Sexism (M = 4.27, SD = 1.16), Hostile Sexism (M = 2.70, SD = 1.06), and Objectification (M = 3.19, SD = 1.07). The mean ratings for the Hostile Content were: Benevolent Sexism (M = 2.47, SD = 1.16), Hostile Sexism (M = 4.76, SD = 1.30), and Objectification (M = 3.89, SD = 1.12). The mean ratings for the Neutral Content were: Benevolent Sexism (M = 1.94, SD = 1.03), Hostile Sexism (M = 1.60, SD = 0.96), and Objectification (M = 1.77, SD = 0.93). The three content items with the highest rating in the bias domains are shown in Figure 2.
Across the content categories, the emotional reaction ratings were positively skewed (Figure 3), indicating that participants reported low emotional reactions to much of the content. Hostile Sexism was the only content category that received responses ranging from 1 to 7, indicating a wider range of emotional reactions.
The mean ratings for the Benevolent Content were: Angry (M = 1.79, SD = 1.10), Ashamed (M = 1.94, SD = 1.14), and Insecure (M = 2.12, SD = 1.18). The mean ratings for the Hostile Content were: Angry (M = 2.52, SD = 1.65), Ashamed (M = 2.95, SD = 1.68), and Insecure (M = 3.61, SD = 1.79). The mean ratings for the Neutral Content were: Angry (M = 1.50, SD = 0.97), Ashamed (M = 1.50, SD = 0.98), and Insecure (M = 1.52, SD = 0.98). The three content items with the highest ratings in the emotion domains are shown in Figure 4. All top-rated content in terms of emotional reaction was in the Hostile Sexism category.
Gender Differences
The subject matter, selected to display Ambivalent Sexism and objectification, directly addresses gender inequality, so it was important to examine whether gender moderated the content ratings. Accordingly, we conducted a series of independent-samples t-tests to examine mean gender differences and found that the gender differences were isolated to the Hostile Sexism content. Women (M = 5.12, SD = 1.13) rated Hostile Sexism content higher in Hostile Sexism than men did (M = 4.26, SD = 1.40), t(132) = -3.91, p < .001, d = -.69. Likewise, women (M = 4.16, SD = 0.98) rated the Hostile Sexism content higher in Objectification than men did (M = 3.51, SD = 1.22), t(132) = -3.40, p < .001, d = -.60. Further, women (M = 2.92, SD = 1.74) reported stronger emotional reactions of Anger toward the Hostile Sexism content than men did (M = 1.86, SD = 1.29), t(132) = -3.81, p < .001, d = -.67. Similarly, women (M = 4.20, SD = 1.64) reported stronger emotional reactions of Insecurity about the Hostile Sexism content than men did (M = 2.67, SD = 1.60), t(132) = -5.31, p < .001, d = -.94. There were no other significant gender differences in the content ratings, ps > .065.
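The comparisons above can be reproduced with the textbook pooled-variance formulas for Student's independent-samples t and Cohen's d. The sketch below is illustrative, not the authors' analysis script, and the sample data are made up; the sign of t and d simply reflects the order in which the groups are entered.

```python
import math
from statistics import mean, variance

def t_and_d(group1, group2):
    """Student's independent-samples t statistic and Cohen's d (pooled SD)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    # Pooled variance weights each group's sample variance by its df.
    sp2 = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    d = (m1 - m2) / math.sqrt(sp2)
    return t, d
```

Entering men's ratings first and women's second yields negative t and d when women's mean is higher, matching the sign convention in the results reported above.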
Discussion
All tested content was rated as biased or neutral, consistent with our predictions. The strongest results were the ratings in the Hostile Sexism domain. The ratings of the Hostile Sexism content were on the high end of the Hostile Sexism scale and were consistently higher across the emotional reaction scales compared to Neutral or Benevolent Sexist content. Further, gender differences in content ratings were isolated to Hostile Sexism content. Women rated Hostile Sexism content as higher in Hostile Sexism and Objectification than men. Women also reported stronger emotional reactions to the Hostile Sexism content than men did.
The ratings of Benevolent Sexism content followed the same trend, in that ratings appeared to be higher in Benevolent Sexism than Hostile or Neutral. However, unlike the Hostile Sexism ratings, Benevolent ratings were not as consistently at the high end of the Benevolent rating scale and did not elicit as intense emotional reactions. Objectification ratings reflected that both the hostile and benevolent sexism categories contain some objectifying content, but also demonstrated that not all of the content is objectifying.
Finally, the Neutral content was rated low on all dimensions, as predicted. Overall, these findings show that the social media content stimuli retrieved for the SMSC database reflect their intended categories (i.e., the Hostile content demonstrates hostile sexism, etc.).
Study 2
Method
We followed the same methodological approach in Study 2 as we did in Study 1 for the selection of comment stimuli (i.e., the ostensible replies posted online by others in response to specific content; see Table 1 for sample comments). We also used the same rating scales. To avoid redundancy, we do not list the measures again here as they are identical to those used in Study 1.
Participants
We collected data from 194 students who participated for class credit. Of these students, 22.9% were men, 63.2% were women, and 0.9% preferred to self-describe. Participants’ ages ranged from 17 to 45 (M = 19.42, SD = 2.73). Participants consisted of 114 Whites (51.1%), 20 East Asians (9%), 19 Black or African Americans (9%), 14 Middle Easterners (6.3%), 2 Native American or Pacific Islanders (0.9%), and 18 individuals self-identified as other/mixed (8.1%).
Procedure
In Study 2, a total of 313 comments were tested: 62 Hostile Sexism comments, 13 Benevolent Sexism comments, and 238 Neutral comments. The comments were rated in the context of the content, so participants first saw a content item and were then asked to rate the highlighted comment. The pairings of content and comments were always presented together in a randomized order. In this study, we did not provide context about the commenters themselves (e.g., a profile picture of the commenter, the commenter’s gender, interests, etc.). However, future researchers using the comment stimuli would have the flexibility to provide or manipulate this information using the Truman Platform (DiFranzo et al., 2018) or the Mock Social Media Website Tool (Jagayat et al., 2021). Fewer Benevolent Sexism comments were tested because it was difficult to find real-world examples to adapt for the study, which perhaps reflects a disparity in the amount of Benevolent content versus comments occurring on actual social media platforms. Comments displaying Hostile Sexism, however, were plentiful. Thus, we decided to mirror this real-world difference by having participants view and rate more comments depicting Hostile Sexism than Benevolent Sexism. Again, as in Study 1, due to the high number of survey questions, participants were randomly assigned to rate only a subset of stimuli. In this study, participants rated a total of 98 comments: 21% Hostile Sexism, 8% Benevolent Sexism, and 71% Neutral. The study procedure was otherwise the same as in Study 1.
Results
Reliability
Again, we calculated the interrater reliability for ratings using one-way random intraclass correlation coefficients. For the Hostile Sexism dimension the intraclass correlation coefficient was .97, 95%CI [.83, 1.00]. For the Benevolent Sexism dimension, the intraclass correlation coefficient was .99, 95%CI [.99, 1.00]. For Objectification, the intraclass correlation coefficient was .97, 95%CI [.65, .99]. Due to the high reliability of the measures, we aggregated the 2-item measures of each stimulus into the three planned indices: Hostile Sexism, Benevolent Sexism, and Objectification.
For the single-item emotional reactivity variables: The intraclass correlation coefficient for feeling Insecure was .80, 95%CI [.38, .99]. The intraclass correlation coefficient for feeling Angry was .95, 95%CI [.83, .99]. The intraclass correlation coefficient for feeling Ashamed was .81, 95%CI [.41, .99].
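For readers implementing similar reliability checks, a one-way random-effects ICC(1,1) can be computed directly from the mean squares of a one-way ANOVA. The sketch below is illustrative only, using hypothetical ratings rather than the authors' analysis script (the actual scripts are on the project's OSF page):

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_raters) array."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)
    # Between-target and within-target mean squares from a one-way ANOVA
    ms_between = k * np.sum((target_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((ratings - target_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical ratings: 4 stimuli rated by 3 raters in near-perfect agreement
stimuli = [[6, 6, 5], [2, 1, 2], [5, 5, 5], [1, 2, 1]]
print(round(icc_oneway(stimuli), 2))  # → 0.95
```

High values (as in the .80–.99 range reported above) indicate that variance between stimuli dwarfs disagreement among raters, which justifies aggregating ratings per stimulus.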
Univariate Distributions Along the Target Dimensions
The number of comment ratings provided for each comment ranged from 73 to 104, with a mean of 90.20 (SD = 8.67) ratings per comment. In this study, the high standard deviation reflects that Neutral (M = 94.50, SD = 1.35) and Benevolent comments (M = 101.57, SD = 1.22) each received more ratings than Hostile comments (M = 75.86, SD = 1.10), due to fewer Neutral and Benevolent items being included in this study compared with Study 1. The mean ratings were calculated for each comment category, and the distribution of the comment means is shown in Figure 5.
Across comment categories, the ratings ranged from 1 to 7, again showing use of the entire range of the scale. The mean ratings for the Benevolent Comments were: Benevolent Sexism (M = 3.72, SD = 1.13), Hostile Sexism (M = 3.08, SD = 1.20), and Objectification (M = 3.48, SD = 1.16). The mean ratings for the Hostile Comments were: Benevolent Sexism (M = 2.47, SD = 1.18), Hostile Sexism (M = 4.92, SD = 1.19), and Objectification (M = 4.02, SD = 1.08). The mean ratings for the Neutral Comments were: Benevolent Sexism (M = 1.88, SD = 1.07), Hostile Sexism (M = 1.53, SD = 0.89), and Objectification (M = 1.67, SD = 0.93). The three comments with the highest rating in each bias domain are listed in Table 1.
| | Highest Mean | 2nd Highest Mean | 3rd Highest Mean |
| --- | --- | --- | --- |
| Hostile Sexism | only if she cleans like a maid 🤷 (M = 6.20, SD = 1.08) | We don't need any more women in power dawg (M = 5.99, SD = 1.32) | Everytime my girl trys to talk about sports (M = 5.98, SD = 1.41) |
| Benevolent Sexism | She looks so pure (M = 4.94, SD = 1.52) | You are a beautiful woman that SHOULD be showcased and taken care of (M = 4.84, SD = 1.60) | Make sure she gets enough rest! She's so fragile right now. (M = 4.81, SD = 1.69) |
| Objectification | Only reason to take a girl out is to see her without her clothes lmao (M = 6.35, SD = 1.29) | I'm only into women who look like this (M = 6.22, SD = 1.07) | This is why women should model everything. (M = 6.01, SD = 1.39) |
| Insecure | Only reason to take a girl out is to see her without her clothes lmao (M = 3.92, SD = 2.25) | I'm only into women who look like this (M = 3.83, SD = 2.20) | women wear things like this and don't expect me to talk to them… (M = 3.57, SD = 2.09) |
| Angry | only if she cleans like a maid 🤷 (M = 5.55, SD = 1.65) | We don't need any more women in power dawg (M = 5.43, SD = 1.57) | And girls are supposed to be the cooks lol (M = 5.19, SD = 1.77) |
| Ashamed | I'm only into women who look like this (M = 4.40, SD = 2.15) | We don't need any more women in power dawg (M = 4.33, SD = 2.19) | like she's an angel (M = 3.93, SD = 2.17) |
As in Study 1, the emotional reaction ratings were positively skewed (Figure 6), indicating that participants reported low emotional reactions to many of the comments. Again, Hostile Sexism was the only rating category that received responses ranging from 1 to 7, showing a wider range of emotional reactions.
The mean ratings for the Benevolent Comments were: Angry (M = 2.91, SD = 1.45), Ashamed (M = 2.59, SD = 1.40), and Insecure (M = 2.33, SD = 1.35). The mean ratings for the Hostile Comments were: Angry (M = 4.11, SD = 1.61), Ashamed (M = 3.38, SD = 1.69), and Insecure (M = 2.98, SD = 1.63). The mean ratings for the Neutral Comments were: Angry (M = 1.48, SD = 0.87), Ashamed (M = 1.44, SD = 0.85), and Insecure (M = 1.43, SD = 0.86). The comments with the highest ratings in the emotion domains are listed in Table 1.
Gender Differences
Next, we examined whether gender moderated the comment ratings by conducting a series of independent-samples t-tests. Following the pattern in Study 1, women (M = 5.05, SD = 1.10) rated the Hostile Sexism comments higher in Hostile Sexism than men did (M = 4.58, SD = 1.23), t(189) = -2.53, p = .012, d = -.42. Again, women (M = 4.30, SD = 1.56) reported stronger emotional reactions of Anger toward the Hostile Sexism comments than men (M = 3.64, SD = 1.61), t(189) = -2.53, p = .012, d = -.42. Likewise, women (M = 3.13, SD = 1.66) reported stronger emotional reactions of Insecurity about the Hostile Sexism comments than men (M = 2.60, SD = 1.49), t(189) = -1.98, p = .049, d = -.33.
Diverging from the pattern found in Study 1, women (M = 3.23, SD = 1.18) rated the Benevolent Sexism comments higher in Hostile Sexism than men did (M = 2.65, SD = 1.14), t(189) = -3.01, p = .003, d = -.49. Women (M = 3.59, SD = 1.14) also rated the Benevolent Sexism comments higher in Objectification than men did (M = 3.16, SD = 1.18), t(189) = -2.29, p = .023, d = -.37. Interestingly, men (M = 2.76, SD = 1.17) rated the Hostile Sexism comments higher in Benevolent Sexism than women did (M = 2.35, SD = 1.15), t(189) = 2.14, p = .034, d = .35. There were no other significant gender differences in the comment ratings, ps > .160.
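For readers adapting this analysis, the comparisons above can be reproduced with a pooled-variance independent-samples t-test and a Cohen's d computed from the pooled standard deviation. The sketch below uses small hypothetical placeholder arrays, not the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant mean ratings (illustrative values only)
women = np.array([5, 6, 5, 4, 6, 5], dtype=float)
men = np.array([4, 5, 4, 3, 4, 5], dtype=float)

# Pooled-variance (equal_var=True, the default) independent-samples t-test
t, p = stats.ttest_ind(women, men)

# Cohen's d from the pooled standard deviation
nw, nm = len(women), len(men)
pooled_sd = np.sqrt(((nw - 1) * women.var(ddof=1) + (nm - 1) * men.var(ddof=1))
                    / (nw + nm - 2))
d = (women.mean() - men.mean()) / pooled_sd
```

The sign of d follows the order of subtraction, which is why the sign conventions must be stated (or held constant) when reporting which group scored higher.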
Discussion
Our findings with comment ratings in Study 2 generally replicated our findings with content ratings in Study 1. Again, the strongest results were the ratings in the Hostile Sexism domain. The Hostile Sexism comments were on the high end of the Hostile Sexism scale and were consistently higher across the emotional reaction scales compared to Neutral or Benevolent Sexist comments. Further, women again reacted more strongly to the Hostile Sexism comments than men, finding the Hostile Sexism comments to be higher in Hostile Sexism than men did and reporting stronger emotional reactions of both anger and insecurity.
Also similar to Study 1, the Benevolent Sexism comments were rated higher in Benevolent Sexism than the Hostile or Neutral comments were. As with the Benevolent Sexism content, the Benevolent Sexism comments were not as consistently high in Benevolent ratings, and participants did not report as intense emotional reactions. However, women did react more strongly to the Benevolent Sexism comments, rating these comments higher in Hostile Sexism and Objectification than men did. Objectification ratings for Hostile and Benevolent Sexism comments revealed some patterns of perceived objectification, but again showed that not all of these kinds of comments are seen as objectifying. Finally, the Neutral comments were low on all rating dimensions, as predicted.
General Discussion
The results of Studies 1 and 2 directly align with the theoretical frameworks of Ambivalent Sexism and Objectification. Consistent with research in Ambivalent Sexism (Glick & Fiske, 1997, 2001), the strongest results were the ratings in the Hostile Sexism domain. In both Studies 1 and 2, the Hostile Sexism ratings were on the high end of the Hostile Sexism scale and were consistently higher across the emotional reaction scales compared to Neutral or Benevolent Sexism content and comments. Further, gender differences in Study 1 were isolated to Hostile Sexism content: women rated Hostile Sexism content as higher in Hostile Sexism and Objectification than men did, and women also reported stronger emotional reactions to the Hostile Sexism content than men did. In Study 2, women again reacted more strongly to the Hostile Sexism comments than men, rating these comments higher in Hostile Sexism than men did and reporting stronger emotional reactions of both anger and insecurity.
The Benevolent Sexism ratings in both studies followed the same trend: Benevolent Sexism content and comments were rated higher in Benevolent Sexism than Hostile or Neutral items were. However, unlike the Hostile Sexism ratings, the Benevolent ratings were not as consistently at the high end of the Benevolent rating scale and did not elicit as intense emotional reactions in either study. Importantly, this difference reflects the existing research in Ambivalent Sexism. Hostile sexism is easier for people to recognize and more clearly violates existing norms (Hall, 2021), whereas people tend to find benevolent sexism more ambiguous and harder to recognize (Dardenne et al., 2007). Additionally, more people personally endorse benevolent sexism relative to hostile sexism (Glick & Fiske, 2001), which might lead to greater variance in the rating of this content compared to a more clear-cut construct like hostile sexism. Interestingly, women did react more strongly in Study 2 to the Benevolent Sexism comments, rating these comments higher in Hostile Sexism and Objectification than men did. We suspect this might reflect a difference in how content posts versus comments are interpreted, with posts being more passive and comments being more active, thereby generating a stronger response. Future research is needed to explore this possibility.
Objectification ratings reflected that both sexism categories contain some objectifying content and comments, but also highlighted that not all of the content and comments are objectifying. This will be important for researchers who are looking to disentangle these dimensions and constructs in future research.
Finally, the neutral content was low on all rating dimensions, as predicted. Overall, the findings from the Content and Comment Studies suggest that the content and comments we retrieved for the SMSC database reflect their intended categories (i.e., the Hostile content/comments demonstrate hostile sexism, etc.).
The SMSC database is predominantly intended for social media research using resources like the Truman Platform (DiFranzo et al., 2018) and the Mock Social Media Website Tool (Jagayat et al., 2021). However, the stimuli could also serve a range of other purposes: lab-based research needing sexist content that targets a specific type of sexism (hostile sexism, benevolent sexism, or objectification), studies priming participants with sexist or objectifying stimuli, studies populating an implicit measure to examine the threshold for perceiving sexist images, or studies of any sort simply seeking neutral images as filler stimuli for a control condition.
There are several possible future directions for this work. One limitation of the current studies is that we did not measure whether participants thought the content was sarcastic or genuine; addressing this is an important next step. Further, we made preliminary attempts to allow for intersectional research by including content targeting White, BIPOC, and Latina/x women, giving researchers the tools to examine how encountering prejudice that targets one or more of the social groups that women belong to might impact the experience of encountering sexist and objectifying content online. Continuing to build out content from different social and cultural groups is an important future direction of this work.
The SMSC database is open-access and can be accessed at hannahbuie.com. The site also hosts a portal for submitting sexist content that readers and researchers encounter online. This content portal is not limited to English-language content; we hope to receive content from a diverse set of regions and backgrounds. Moving forward, we hope to expand the SMSC database by developing the intersectional component to include women from more racial and ethnic backgrounds. We also plan to code for content from different cultural backgrounds. For example, we hope to receive sexist and neutral content from Central America, South America, Africa, Asia, Europe, and beyond. Having more diverse content would strengthen the SMSC database and make it more useful to researchers from around the world.
In conclusion, this stimulus set is intended to help facilitate future research by easing the burden of collecting and piloting sexist, objectifying, and neutral stimuli. The SMSC database also provides experimental control in the selection of stimuli for use online or in other types of research paradigms, making the comparison of results across studies more feasible. Further, it could assist replication studies by providing a common source for stimuli.
Competing interests
The authors do not have competing interests to declare.
Contributions
Contributed to conception and design: HB, AC
Contributed to acquisition of data: HB
Contributed to analysis and interpretation of data: HB, AC
Drafted and/or revised the article: HB, AC
Approved the submitted version for publication: HB, AC
Data accessibility statement
All the stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page: https://osf.io/eqtr4/