There is considerable debate about whether survey respondents regularly engage in “expressive responding” – professing to believe something that they do not sincerely believe in order to show support for their in-group or hostility to an out-group. Nonetheless, there is widespread agreement that one study provides compelling evidence for a consequential level of expressive responding in a particular context. In the immediate aftermath of Donald Trump’s 2017 presidential inauguration, there was considerable controversy about whether his inauguration crowd was the largest ever. While this controversy was ongoing, a study found that Donald Trump voters were more likely than Hillary Clinton voters or non-voters to indicate that an unlabeled photo of Trump’s 2017 presidential inauguration rally showed more people than an unlabeled photo of Barack Obama’s 2009 presidential inauguration rally, despite the latter photo clearly showing more people. However, this study was not pre-registered, so a replication is needed to establish the robustness of this important result. In the present study, we conducted an extended replication more than two years after Trump’s presidential inauguration. We found that despite this delay the original result replicated, albeit with a smaller magnitude. In addition, we extended the earlier study by testing several hypotheses about the characteristics of Republicans who selected the incorrect photo.

Social scientists have found large partisan differences in belief reports about factual issues among Americans (Enders et al., 2022; Flynn, 2016; Jerit & Barabas, 2012; Pennycook & Rand, 2021). For example, consider two conspiracy theories from a 2016 YouGov poll (Frankovic, 2016). 46% of Trump voters reported that it was “definitely true” or “probably true” that “Leaked email from some of Hillary Clinton’s campaign staffers contained code words for pedophilia, human trafficking and satanic ritual abuse - what some people refer to as ‘Pizzagate’”, while only 17% of Clinton voters did. By contrast, in the same poll, 50% of Clinton voters reported that it was “definitely true” or “probably true” that “Russia tampered with vote tallies in order to get Donald Trump elected President”, while only 9% of Trump voters did.

When interpreting responses from surveys, social scientists and media commentators typically treat belief reports as sincere. However, in some cases, belief reports might be insincere. In particular, in the context of politically polarizing issues, partisan gaps in responses to questions about whether factual statements are true might, at least in part, be driven by “expressive responding”: professing to believe something that one does not sincerely believe to show support for one’s in-group or hostility to an out-group.1 For example, a Trump voter might insincerely claim to believe the Pizzagate conspiracy is true to express solidarity with Republicans and disapproval of Democrats, and a Clinton voter might insincerely claim to believe the Russian vote tampering conspiracy is true to express solidarity with Democrats and disapproval of Republicans. Indeed, if vast numbers of Republicans sincerely believe the Pizzagate conspiracy, a remarkable fact to be explained is why so few of them are willing to engage in costly behaviors to protect the children, such as attacking their ostensible places of captivity to free them (Levy, 2021; Mercier & Altay, 2022).

Expressive responding is notoriously difficult to identify with confidence in survey responses (Bullock & Lenz, 2019). After all, if a respondent claims to believe a statement is true that they actually believe is false, such as the Pizzagate conspiracy or the Russian vote tampering conspiracy, how could a researcher demonstrate that this belief report is insincere? Despite ongoing debates about whether various attempts to identify expressive responding have been successful, there is widespread agreement that a key study by Schaffner and Luks (2018) provides unusually compelling evidence for expressive responding in a particular context (Bullock & Lenz, 2019; Hannon, 2021; Levy & Ross, 2021; Malka & Adelman, 2022).

Schaffner and Luks (2018) took advantage of a controversy about the size of the crowd gathered at the National Mall for the presidential inauguration of Trump in 2017. On the day following Trump’s presidential inauguration the White House press secretary, Sean Spicer, stated that the crowd “was the largest audience ever to witness an inauguration, period, both in person and around the globe”. However, as many media sources have reported, multiple converging lines of evidence demonstrate that Obama’s 2009 and 2013 presidential inauguration crowds were considerably larger than Trump’s (Farley & Robertson, 2017). While this controversy was ongoing, Schaffner and Luks (2018) showed participants two unlabeled photos and asked a simple question: “Which photo has more people?”2 One photo showed the crowd at Obama’s 2009 presidential inauguration at the National Mall, while the other showed the crowd at Trump’s 2017 presidential inauguration at the same place. Importantly, the photo of Obama’s inauguration clearly showed a much larger crowd, meaning that, so the argument goes, virtually no one who understood the question and provided a sincere response could possibly respond incorrectly.

Schaffner and Luks (2018) found that 15% of Trump voters selected the incorrect photo, compared to only 2% of Clinton voters and 3% of non-voters (third party voters were excluded from this analysis).3 That is, it appears that a sizable minority of Trump voters must have known the correct response but chose to give the incorrect response to express support for Trump. In addition, Schaffner and Luks (2018) pointed out that it is likely that many Trump voters were unaware of the controversy and, thus, did not realize that this question afforded them an opportunity to engage in expressive responding. Consequently, they argued that these results should be taken as providing a lower bound for expressive responding – a position that was supported by their finding that 26% of college-educated Trump supporters selected the incorrect photo compared to 11% of those with less education.4

Interestingly, despite the success of this study, it has proven remarkably difficult to identify evidence of expressive responding among Republicans in another salient context: Trump’s “big lie” that the 2020 American presidential election was stolen from him due to widespread voter fraud. Several recent studies using a variety of approaches failed to find much evidence for expressive responding – a result that, as the authors of these studies note, is consistent with Republicans who endorse the big lie sincerely believing it (Fahey, 2022; Graham & Yair, 2022).

Given the lack of compelling evidence for expressive responding from other studies, the key result of Schaffner and Luks (2018) provides an important “existence proof” for expressive responding. However, this study was not pre-registered, meaning that the finding might not prove to be robust (Hardwicke & Wagenmakers, in press; Nosek et al., 2018). For these reasons, in the present paper, we report a pre-registered replication study, which was conducted in October 2019. The key hypothesis is Hypothesis 1: More Republicans than Democrats select the incorrect photo as showing more people.5

In addition to attempting to replicate this key finding, we extend the original study by testing additional hypotheses with the aim of identifying characteristics of Republicans who are more likely to claim that the photo with fewer people has more people. In total, we pre-registered five additional hypotheses that we justify below.

A growing literature indicates that “identity fusion” – a visceral sense of oneness with a group – can predict endorsement of and engagement in pro-group behaviors, including extreme behaviors (Swann & Buhrmester, 2015; Varmann et al., 2022; Whitehouse, 2018). Of particular relevance here, a recent study found that identity fusion with Trump predicted self-reported willingness to engage in extremism, namely, persecution of immigrants and Trump’s political opponents (Kunst et al., 2019). This result suggests that fusion with Trump might predict expressive responding in support of Trump. Hence, Hypothesis 2: Among Republicans, fusion with Trump predicts selecting the incorrect photo as showing more people.

According to dual process theories of thinking, there are two types of cognitive processes (Evans & Stanovich, 2013; Kahneman, 2011; Pennycook et al., 2015a): Type 1 “intuitive” processes that do not require working memory and are often fast and automatic; and Type 2 “analytic” processes that require working memory and are typically slow and deliberative. A disposition to inhibit intuitive processes and engage in analytic thinking has been found to predict many everyday beliefs and behaviors (Pennycook et al., 2015b). Answering a simple survey question truthfully (such as which of two photos has more people) is, presumably, a preponderant response. Consequently, inhibiting this preponderant response and engaging in analytic thinking could help a respondent realize that providing a false response affords them an opportunity for expressive responding. Hence, Hypothesis 3: Among Republicans, analytic thinking predicts selecting the incorrect photo as showing more people.

It stands to reason that people who engage in expressive responding in response to one question are more likely to engage in expressive responding in response to other questions. Thus, we anticipate that among Republicans selecting the incorrect photo as showing more people would be associated with reporting belief in other false statements that are congenial to Republicans. Hence, Hypothesis 4: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that anthropogenic climate change is happening. Hypothesis 5: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama is not the Antichrist. And Hypothesis 6: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama was born in the USA.

At the time that we pre-registered our study, it escaped our notice that we should pre-register a hypothesis based on Schaffner and Luks’ (2018) finding that college education predicted selecting the incorrect photo among Trump voters. After beginning data analysis, we realized that this hypothesis should be reported (as an exploratory analysis). Hence, Hypothesis 7: Among Republicans, having a college education predicts selecting the incorrect photo as showing more people.

Finally, during the review process a reviewer suggested that we conduct exploratory analyses that test the same hypotheses as Hypotheses 1 through 7 but divide participants into groups based on whether they voted for Clinton or Trump in the 2016 Presidential Election, as opposed to whether they leaned Democratic or Republican (i.e., a more precise replication of the original study). These additional hypotheses are reported as Hypotheses 8 through 14.

We preregistered our sample size, data exclusion criteria, and hypotheses. All the stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page on the Open Science Framework (OSF): https://osf.io/dhr36/

Participants

We aimed to collect a sample of 1000 participants. This sample size was selected by considering the resources we had available for this project, not a formal power analysis. Participants were recruited using Lucid, an online recruiting platform that aggregates survey respondents from several respondent providers (Coppock & McClellan, 2019). Lucid uses quota sampling to provide a sample that matches the national distribution in terms of age, gender, ethnicity, and geographic region. Lucid compensates participants in a variety of ways, including cash and various points programs. The study was run from October 23rd to October 29th, 2019. In total, 1114 participants completed some portion of the study, and Lucid slightly overshot in collecting participants who completed the entire study, so we had complete data for 1018 participants. Following our pre-registered analysis plan, participants were excluded from the dataset prior to analysis if they met any of the following six exclusion criteria: 1) completed the survey in less than 5 minutes; 2) completed the survey in more than 60 minutes; 3) reported their age as less than 18 years old; 4) reported their age as greater than 100 years old; 5) had the same IP address as a participant who completed the study before them; or 6) had a non-USA IP address. After exclusions, the final sample included 954 participants (mean age = 45.01; 443 male, 507 female, 4 other gender).
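To make the six exclusion criteria concrete, the filtering logic can be written as a short script. The sketch below is a minimal illustration: the column names (duration_min, age, ip_address, ip_country) and file name are hypothetical, not the variables used in our posted OSF scripts.

```python
import pandas as pd

# Load raw survey responses (file name is illustrative only).
df = pd.read_csv("raw_survey_data.csv")

# Criteria 1-2: exclude completion times under 5 or over 60 minutes.
df = df[(df["duration_min"] >= 5) & (df["duration_min"] <= 60)]

# Criteria 3-4: exclude reported ages under 18 or over 100.
df = df[(df["age"] >= 18) & (df["age"] <= 100)]

# Criterion 5: exclude participants whose IP address matches an earlier
# participant's (rows assumed sorted by completion time).
df = df[~df.duplicated(subset="ip_address", keep="first")]

# Criterion 6: exclude non-USA IP addresses.
df = df[df["ip_country"] == "US"]

print(f"Final sample: {len(df)} participants")
```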

Materials

To code political preferences, the original study by Schaffner and Luks (2018) examined who participants voted for in the 2016 presidential election. That was an appropriate choice given that the election had happened recently and participants were, thus, likely to remember who they voted for and to have the same candidate preferences as they did when they cast their vote. By contrast, the present study was run more than two years after the 2016 presidential election, and participants may have forgotten who they voted for or shifted candidate preferences in response to their assessment of Trump’s presidency, which would be consistent with findings on voter recall that find varying levels of misreporting (Atkeson, 1999; Dassonneville & Hooghe, 2017; Wells, 2019). Consequently, for our pre-registered analyses we coded partisanship by asking participants, “Which of the following best describes your political preference?” and offering six response options (“strongly Democratic”, n = 192; “Democratic”, n = 137; “lean Democratic”, n = 203; “lean Republican”, n = 211; “Republican”, n = 99; “strongly Republican”, n = 112) that we used to sort participants into two partisan groups: Democrat (n = 532) and Republican (n = 422). In addition, for exploratory analyses requested by a reviewer, we coded partisanship by asking participants, “Who did you vote for in the 2016 Presidential Election?” and offering seven response options (“Donald Trump”, n = 303; “Hillary Clinton”, n = 333; “Other candidate (such as Jill Stein or Gary Johnson)”, n = 60; “I did not vote for reasons outside of my control”, n = 93; “I did not vote, but I could have”, n = 102; “I did not vote out of protest”, n = 48; “I cannot remember”, n = 15). When testing these exploratory hypotheses we compared Trump and Clinton voters only.
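The two-group coding amounts to a simple mapping from the six response options onto partisan labels. A minimal sketch (the data here are illustrative, not our participants’ responses):

```python
import pandas as pd

# Collapse the six-point partisanship item into two groups.
party_map = {
    "strongly Democratic": "Democrat",
    "Democratic": "Democrat",
    "lean Democratic": "Democrat",
    "lean Republican": "Republican",
    "Republican": "Republican",
    "strongly Republican": "Republican",
}

responses = pd.Series(["lean Republican", "strongly Democratic", "Democratic"])
print(responses.map(party_map))  # Republican, Democrat, Democrat
```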

Procedure

Participants were first asked to provide informed consent. Next, participants were asked demographic questions (including gender, English fluency, age, ethnicity, and highest level of education).

Next, participants were asked questions about politics, including questions about political identity (response options: “Democrat”, “Republican”, “Independent”, “Other”, “Don’t know”), political partisanship (which we used to code participants as Democratic or Republican; see paragraph above), who they voted for in the 2016 Presidential Election (response options: “Donald Trump”, “Hillary Clinton”, “Other candidate”, “I did not vote for reasons outside of my control”, “I did not vote, but I could have”, “I did not vote out of protest”, and “I cannot remember”), who they would choose to be president if they absolutely had to choose between Trump and Clinton, political perspective on social and economic issues (response options: “Strongly Liberal”, “Somewhat Liberal”, “Moderate”, “Somewhat Conservative”, “Strongly Conservative”), and the degree to which they follow politics (response options: “Most of the time”, “Some of the time”, “Only now and then”, “Hardly at all”).

Next, participants completed a 7-item Likert measure of identity fusion (Gómez et al., 2011) that had been adapted to measure fusion with Trump (Kunst et al., 2019). An example item: “I have a deep emotional bond with Donald Trump”. This measure had excellent reliability, Cronbach’s α = .97 (95% CI [.96, .97]).
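For reference, Cronbach’s α for a k-item scale is (k / (k − 1)) × (1 − sum of the item variances / variance of the summed scale scores). A minimal sketch of this computation, run on simulated Likert responses standing in for our actual fusion data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulate 500 respondents on a 7-item, 7-point scale with a strong
# common factor, so the items hang together tightly.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))
items = np.clip(np.round(4 + 2 * trait + rng.normal(scale=0.6, size=(500, 7))), 1, 7)

print(round(cronbach_alpha(items), 2))  # a high alpha for this simulation
```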

Next, participants were shown two photos (the order of the photos was counterbalanced across participants): Photo A, which shows a crowd at Donald Trump’s 2017 presidential inauguration rally; and Photo B, which shows a crowd at Obama’s 2009 presidential inauguration rally. While these photos were displayed, participants were asked “Which photo has more people?” and had two response options: “Photo A” or “Photo B” (see Figure 1). Selecting Photo A was coded as incorrect and selecting Photo B was coded as correct.

Figure 1.
The photos and question about crowd sizes.

The photo on the left shows Trump’s 2017 presidential inauguration and the photo on the right shows Obama’s 2009 presidential inauguration. Adapted from Schaffner and Luks (2018).


Next, participants were asked to rate 22 factual statements (presented in random order) in terms of their truth value. Three of these items were used for testing pre-registered hypotheses. The first two, “Barack Obama is the Antichrist” and “Barack Obama was not born in the United States and his official Hawaiian certificate is really a fake”, had three response options: “True”, “False”, and “Don’t know” (we coded “False” as correct and “True” and “Don’t know” as incorrect). The third, “Which of these three statements about the Earth’s temperature comes closest to your view?”, had four response options: “The Earth is getting warmer mostly because of human activity such as burning fossil fuels”, “The Earth is getting warmer mostly because of natural patterns in the Earth’s environment”, “There is no solid evidence that the Earth is getting warmer”, and “Don’t know” (we coded “The Earth is getting warmer mostly because of human activity such as burning fossil fuels” as correct and the other response options as incorrect). These three questions were interspersed with a lightly edited version of the 15-item Belief in Conspiracy Theories Inventory (Swami et al., 2010, 2011) and four additional factual questions, each of which had the three response options “True”, “False”, and “Don’t know”.

Next, participants selected their religious affiliation from a set of options, completed the 6-item version of the Supernatural Belief Scale (Jong & Halberstadt, 2016), and indicated whether or not they describe themselves as a “born-again” or evangelical Christian.

Next, participants were asked which of a set of media sources they most trust to provide accurate information (response options: “Fox News”, “The New York Times”, “CNN”, “MSNBC”, “NPR”, “USA Today”, “Local television/local newspapers”, “The Wall Street Journal”, “The major broadcast networks (ABC, NBC, CBS)”, and “Don’t know”).

Next, we measured analytic thinking by summing the number of correct responses to a reworded version of the original three-item Cognitive Reflection Test (CRT; Frederick, 2005) from Shenhav et al. (2012) and a non-numerical four-item CRT from Thomson and Oppenheimer (2016). While it is possible that some participants had seen some of these questions previously, the CRT has been shown to retain predictive validity across time (Stagnaro et al., 2018) and after multiple exposures (Bialek & Pennycook, 2018). The full seven-item CRT had low reliability, Cronbach’s α = .68 (95% CI [.65, .71]). After completing the CRT, participants were asked, “Have you seen any of the last seven word problems before?” (response options: “yes”, “maybe”, “no”).

Next, participants were asked a series of additional questions about the photos of the two inauguration events.

Next, participants were asked to complete a study experience questionnaire and were asked if they had any comments about the study.

Finally, participants were asked some debriefing questions.

Results

Hypotheses 1 through 6 were pre-registered; Hypothesis 7 and Hypotheses 8 through 14 are exploratory.

Hypothesis 1: More Republicans than Democrats select the incorrect photo as showing more people. Republicans selected the incorrect photo 6.9% of the time, while Democrats selected it 3.6% of the time, and this difference was statistically significant, χ2(1) = 5.365, p = .021. Figure 2 shows the proportion of participants from each partisanship category selecting the incorrect photo.

Figure 2.
Bar plot showing the percentage of participants who chose the incorrect photo as a function of political partisanship when responding to the question “Which photo has more people?”
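The partisan gap reported above is a 2×2 test of independence on counts of incorrect versus correct responses. The sketch below reproduces the reported statistic using cell counts reconstructed from the percentages and group sizes reported in this paper; the reconstruction, and the use of scipy, are our illustrative choices rather than the posted analysis scripts.

```python
from scipy.stats import chi2_contingency

# Rows: Republicans (n = 422) and Democrats (n = 532).
# Columns: incorrect vs. correct photo choice. Counts are reconstructed
# from the reported 6.9% and 3.6% incorrect-response rates.
table = [[29, 393],   # Republicans
         [19, 513]]   # Democrats

chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.3f}")  # chi2(1) = 5.365, p = 0.021
```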

Hypothesis 2: Among Republicans, fusion with Trump predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, B = 0.015, SE = 0.016, p = .350.6
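One standard way to test an individual-difference predictor of a binary response like this is a logistic regression of photo choice on the predictor; we do not assert that this exactly matches the model in our posted scripts. A minimal sketch with hypothetical column names and made-up data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data for a handful of Republican participants:
# 'incorrect_photo' is 0/1; 'fusion' is the summed fusion-with-Trump score.
republicans = pd.DataFrame({
    "incorrect_photo": [0, 1, 1, 0, 1, 0, 0, 0],
    "fusion":          [12, 20, 35, 18, 41, 9, 27, 33],
})

model = smf.logit("incorrect_photo ~ fusion", data=republicans).fit(disp=0)
print(model.params["fusion"], model.bse["fusion"], model.pvalues["fusion"])
```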

Hypothesis 3: Among Republicans, analytic thinking predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, B = -0.176, SE = 0.142, p = .213.

Hypothesis 4: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that anthropogenic climate change is happening. We did not find evidence for this hypothesis, χ2(1) = 1.321, p = .250.

Hypothesis 5: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama is not the Antichrist. We found evidence for this hypothesis, χ2(1) = 6.266, p = .012.

Hypothesis 6: Among Republicans, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama was born in the USA. We did not find evidence for this hypothesis, χ2(1) = 0.861, p = .353.

Hypothesis 7: Among Republicans, having a college education predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, χ2(1) = 0.206, p = .650.

Hypothesis 8: More Trump voters than Clinton voters select the incorrect photo as showing more people. Trump voters selected the incorrect photo 6.6% of the time, while Clinton voters selected it 3.0% of the time, and this difference was statistically significant, χ2(1) = 4.569, p = .033.

Hypothesis 9: Among Trump voters, fusion with Trump predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, B = 0.016, SE = 0.021, p = .440.

Hypothesis 10: Among Trump voters, analytic thinking predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, B = -0.184, SE = 0.174, p = .290.

Hypothesis 11: Among Trump voters, selecting the incorrect photo as showing more people predicts a failure to correctly respond that anthropogenic climate change is happening. We did not find evidence for this hypothesis, χ2(1) = 0.040, p = .842.

Hypothesis 12: Among Trump voters, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama is not the Antichrist. We found evidence for this hypothesis, χ2(1) = 5.712, p = .017.

Hypothesis 13: Among Trump voters, selecting the incorrect photo as showing more people predicts a failure to correctly respond that Obama was born in the USA. We did not find evidence for this hypothesis, χ2(1) = 0.332, p = .565.

Hypothesis 14: Among Trump voters, having a college education predicts selecting the incorrect photo as showing more people. We did not find evidence for this hypothesis, χ2(1) = 1.347, p = .256.

Discussion

In the present study we replicated the central finding of Schaffner and Luks (2018) by showing that more Republicans than Democrats provide an incorrect response when questioned about which of two photos has more people. Given that the photo of Obama’s inauguration rally clearly has more people than the photo of Trump’s inauguration rally, this finding supports the hypothesis that some Republicans engaged in expressive responding by intentionally selecting the incorrect photo to show support for Trump. The robustness of this result is supported by an exploratory analysis that more closely followed the original study: when participants were divided according to who they voted for in the 2016 Presidential Election, more Trump voters than Clinton voters provided the incorrect response.

Interestingly, Figure 2 suggests that participants who did not strongly identify with either political party (i.e., “Democratic”, “Lean Democratic”, “Lean Republican” and “Republican”) chose the incorrect photo at a roughly uniform rate, while those who strongly identified with a political party (i.e., “Strongly Democratic” or “Strongly Republican”) responded differently. Specifically, “Strongly Democratic” participants appear to select the incorrect photo at a lower rate than more moderate participants, while “Strongly Republican” participants appear to select the incorrect photo at a higher rate than more moderate participants. A plausible interpretation of this pattern is that strong Democrats and strong Republicans follow political news more closely, are better able to recall the crowd-size controversy from more than two years earlier, and are more motivated to express support for their political in-group with their responses. In other words, strong Democrats likely responded more accurately than more moderate Democrats to express opposition to Trump, and strong Republicans likely responded less accurately than more moderate Republicans to express support for Trump. This pattern suggests an important direction for future research on expressive (and sincere) responding – a focus on individuals who strongly identify with their partisan groups. Nonetheless, despite the plausibility of this account, we did not find evidence that fusion with Trump predicts selecting the incorrect photo among Republicans, which suggests that alternative measures of the strength of partisan identity should be explored.

Our replication extended the original study by Schaffner and Luks (2018) in two key respects. First, the original study was not pre-registered and, thus, is best interpreted as exploratory, while the current study was pre-registered and, thus, is best interpreted as confirmatory and, therefore, considerably strengthens the evidence that the key finding from the original study is robust (Hardwicke & Wagenmakers, in press; Nosek et al., 2018). Second, the original study was conducted in the immediate aftermath of a widely discussed controversy about the size of Trump’s presidential inauguration crowd, meaning that it only provided evidence for expressive responding in the context of a highly salient issue. By contrast, the present study was conducted more than two years after the event and, thus, provides novel evidence for expressive responding about a controversy that was not highly salient at the time the survey was conducted.

A noteworthy difference between the findings of the original study and the present study is that the percentage of Trump voters selecting the incorrect photo in the present study (6.6%) is considerably lower than the percentage in the original study (15.0%).7 There are several plausible explanations for this, which could act together or alone. First, and we suspect most importantly, it is surely the case that fewer participants recalled the details of the controversy about Trump’s crowd size more than two years after the event than in its immediate aftermath, meaning that fewer Republicans and Trump voters were aware of the opportunity to engage in expressive responding. Second, support for Trump among Republicans and Trump voters might have weakened during Trump’s presidential term, resulting in fewer Republicans and Trump voters being motivated to engage in expressive responding to show support for him. Third, the composition of the YouGov and Lucid participant pools may have differed, and it is possible that the YouGov pool includes more people who (for whatever reason) are willing to engage in expressive responding about this issue.

We tested three hypotheses pertaining to associations with individual differences that plausibly promote expressive responding. First, we examined whether analytic thinking predicted selecting the incorrect photo among Republicans. Second, we examined whether identity fusion with Trump predicted selecting the incorrect photo among Republicans. Third, we examined whether college education predicted selecting the incorrect photo among Republicans. None of these hypotheses was supported.

We tested three hypotheses concerning whether selecting the incorrect photo predicts endorsement of other false claims that Republicans regularly endorse and that might be driven, to some degree, by expressive responding. We found support for an association between selecting the incorrect photo and claiming to believe that Obama is the Antichrist, but we did not find support for an association between selecting the incorrect photo and claiming to believe untrue statements about climate change or Obama’s birthplace. This mixed evidence is difficult to interpret. One possibility is that some Republicans who claimed to believe that Obama is the Antichrist were engaging in expressive responding, but Republicans who reported the other two beliefs were being sincere. A potential explanation for such a pattern is that expressive responding might be more common for particularly bizarre claims, such as claiming that Obama is the Antichrist or claiming that a photo of Trump’s rally that clearly has fewer people actually has more people. This is plausible because, according to some accounts, expressions of particularly bizarre beliefs can serve as signals of solidarity and belonging (Ganapini, 2021; Mercier, 2020; Williams, 2022). In any case, this suggestion is speculative, and it is entirely possible that our study was simply underpowered to consistently identify relatively small associations. Alternatively, the association between the photo question and the Antichrist question may have been a false positive – after all, when we controlled for multiple comparisons this result no longer met the conventional threshold for statistical significance (see footnote 6).

Our study has two important limitations. First, this study could only be used to test for expressive responding among Republicans, not Democrats (or individuals outside the USA). A sincere response to the crowd size question supports Democrats, so Democrats had no opportunity to express support for their group with an insincere belief report. Similarly, new research on expressive responding focuses on Trump’s false claim that the 2020 American presidential election was stolen from him due to widespread voter fraud (Fahey, 2022; Graham & Yair, 2022) and, thus, does not provide an opportunity for Democrats to engage in expressive responding. More generally, to the best of our knowledge, the expressive responding literature lacks compelling demonstrations of expressive responding among Democrats (or non-Americans). Consequently, we suggest that a key direction for future research is the development of studies that have the capacity to provide strong evidence of expressive responding among members of groups other than Republicans, if it exists. This is important because Trump utters egregious lies with remarkable frequency (Kessler et al., 2020), which some Republicans may identify as expressive responding from Trump and, thus, as license to engage in expressive responding themselves. If this (speculative) account is correct, then expressive responding might be considerably rarer among individuals who belong to groups whose leaders have not normalized expressive responding.

A second limitation of our study is that while we collected a relatively large sample, only 422 of the 954 participants included in our analyses were Republicans. Thus, this study would be underpowered to identify true associations between individual differences and expressive responding if they are small. Indeed, the original study by Schaffner and Luks (2018) found a positive association between expressive responding and college education, but we did not. Thus, it could be worthwhile for future research to explore whether individual differences that did not predict expressive responding in the present study might do so with larger samples and in other contexts where effect sizes could plausibly be larger.

Contributed to conception and design: RMR, NL

Contributed to acquisition of data: RMR

Contributed to analysis and interpretation of data: RMR, NL

Drafted and/or revised the article: RMR, NL

Approved the submitted version for publication: RMR, NL

This research was supported by an Australian Research Council Discovery Project awarded to NL (grant number: DP180102384), a Macquarie University Postdoctoral Fellowship awarded to RMR, and a Templeton Foundation grant awarded to NL and RMR (grant ID: 62631).

The authors have no competing interests to declare.

All the stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page on the Open Science Framework (OSF): https://osf.io/dhr36/

1.

This phenomenon has also been termed “partisan cheerleading” (Bullock & Lenz, 2019) and “partisan badmouthing” (Hannon, 2021).

2.

To be precise, participants were randomised to either this condition or another condition that asked which photo showed Trump’s rally and which showed Obama’s. The condition that asked which photo had more people provides the strongest evidence for expressive responding, so we only discuss that condition here.

3.

Because identifying the correct photo is consistent with expressing support for Obama, this study cannot be used to test for expressive responding among Democrats.

4.

However, this difference between groups did not quite meet the conventional threshold for statistical significance (p = .054).

5.

In the methods section below we explain why our pre-registered hypotheses divide participants into groups based on their preference for the Democratic Party versus the Republican Party rather than who they voted for in the 2016 presidential election.

6.

A reviewer suggested that it might be more appropriate to correct for multiple comparisons when testing Hypotheses 2 through 7 because these hypotheses examine different correlates of a single measure of expressive responding. However, we note that there is considerable disagreement about whether controlling for multiple comparisons is appropriate in cases such as this (Lakens, 2022). Consequently, our preference is to focus on our pre-registered analysis plan, which does not control for multiple comparisons. Nonetheless, we appreciate that there is scope for legitimate disagreement, so we have also calculated corrected p-values and find that if a Holm-Bonferroni correction is applied to Hypotheses 2 through 7, Hypothesis 5 is no longer statistically significant (p = .072).
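For readers who wish to check this, the Holm-Bonferroni adjustment can be reproduced directly from the six p-values reported for Hypotheses 2 through 7; a minimal sketch using statsmodels:

```python
from statsmodels.stats.multitest import multipletests

# p-values for Hypotheses 2-7, in order, as reported in the Results.
pvals = [0.350, 0.213, 0.250, 0.012, 0.353, 0.650]

reject, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for hyp, p_adj in zip(range(2, 8), p_adjusted):
    print(f"Hypothesis {hyp}: Holm-adjusted p = {p_adj:.3f}")
# The smallest p-value (Hypothesis 5) is adjusted to .012 * 6 = .072,
# which no longer meets the .05 threshold.
```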

7.

By contrast, the percentage of Democrats selecting the incorrect photo in the present study is very similar to the percentages of Clinton voters and non-voters selecting it in the original study (3.6%, 2%, and 3%, respectively).

References

Atkeson, L. R. (1999). “Sure, I voted for the winner!” Overreport of the primary vote for the party nominee in the national election studies. Political Behavior, 21(3), 197–215. https://doi.org/10.1023/a:1022031432535
Bialek, M., & Pennycook, G. (2018). The cognitive reflection test is robust to multiple exposures. Behavior Research Methods, 50(5), 1953–1959. https://doi.org/10.3758/s13428-017-0963-x
Bullock, J. G., & Lenz, G. (2019). Partisan bias in surveys. Annual Review of Political Science, 22(1), 325–342. https://doi.org/10.1146/annurev-polisci-051117-050904
Coppock, A., & McClellan, O. A. (2019). Validating the demographic, political, psychological, and experimental results obtained from a new source of online survey respondents. Research & Politics, 6(1), 205316801882217. https://doi.org/10.1177/2053168018822174
Dassonneville, R., & Hooghe, M. (2017). The noise of the vote recall question: The validity of the vote recall question in panel studies in Belgium, Germany, and the Netherlands. International Journal of Public Opinion Research, 29(2), 316–338. https://doi.org/10.1093/ijpor/edv051
Enders, A., Farhart, C., Miller, J., Uscinski, J., Saunders, K., & Drochon, H. (2022). Are Republicans and Conservatives more likely to believe conspiracy theories? Political Behavior. https://doi.org/10.1007/s11109-022-09812-3
Evans, J. S. B. T., & Stanovich, K. E. (2013). Dual-process theories of higher cognition: Advancing the debate. Perspectives on Psychological Science, 8(3), 223–241. https://doi.org/10.1177/1745691612460685
Fahey, J. (2022). The big lie: Expressive responding and misperceptions in the United States. Journal of Experimental Political Science. https://doi.org/10.1017/XPS.2022.33
Farley, R., & Robertson, L. (2017). The facts on crowd size. https://www.factcheck.org/2017/01/the-facts-on-crowd-size/
Flynn, D. J. (2016). The scope and correlates of political misperceptions in the mass public. http://djflynn.org/wp-content/uploads/2016/08/Flynn_APSA2016.pdf
Frankovic, K. (2016). Belief in conspiracies largely depends on political identity. https://today.yougov.com/topics/politics/articles-reports/2016/12/27/belief-conspiracieslargely-depends-political-iden
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. https://doi.org/10.1257/089533005775196732
Ganapini, M. B. (2021). The signaling function of sharing fake stories. Mind & Language. https://doi.org/10.1111/mila.12373
Gómez, Á., Brooks, M. L., Buhrmester, M. D., Vázquez, A., Jetten, J., & Swann, W. B. (2011). On the nature of identity fusion: Insights into the construct and a new measure. Journal of Personality and Social Psychology, 100(5), 918–933. https://doi.org/10.1037/a0022642
Graham, M. H., & Yair, O. (2022). Expressive responding and Trump’s big lie. https://m-graham.com/papers/GrahamYair_BigLie.pdf
Hannon, M. (2021). Disagreement or partisan badmouthing? The role of expressive discourse in politics. In E. Edenberg & M. Hannon (Eds.), Political Epistemology (pp. 297–318). https://doi.org/10.1093/oso/9780192893338.003.0017
Hardwicke, T. E., & Wagenmakers, E.-J. (in press). Reducing bias, increasing transparency, and calibrating confidence with preregistration. Nature Human Behaviour. https://doi.org/10.31222/osf.io/d7bcu
Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. The Journal of Politics, 74(3), 672–684. https://doi.org/10.1017/s0022381612000187
Jong, J., & Halberstadt, J. (2016). Death anxiety and religious belief: An existential psychology of religion. Bloomsbury Publishing.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Kessler, G., Rizzo, S., & Kelly, M. (2020). Donald Trump and his assault on truth. Scribner.
Kunst, J. R., Dovidio, J. F., & Thomsen, L. (2019). Fusion with political leaders predicts willingness to persecute immigrants and political opponents. Nature Human Behaviour, 3(11), 1180–1189. https://doi.org/10.1038/s41562-019-0708-1
Lakens, D. (2022). Improving your statistical inferences. https://lakens.github.io/statistical_inferences/
Levy, N. (2021). Bad beliefs: Why they happen to good people. Oxford University Press. https://doi.org/10.1093/oso/9780192895325.001.0001
Levy, N., & Ross, R. M. (2021). The cognitive science of fake news. In M. Hannon & J. de Ridder (Eds.), Routledge Handbook of Political Epistemology (pp. 181–191). Routledge. https://doi.org/10.4324/9780429326769-23
Malka, A., & Adelman, M. (2022). Expressive survey responding: A closer look at the evidence and its implications for American democracy. Perspectives on Politics, 1–12. https://doi.org/10.1017/s1537592721004096
Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
Mercier, H., & Altay, S. (2022). Do cultural misbeliefs cause costly behavior? In J. Musolino, P. Hemmer, & J. Sommer (Eds.), The Cognitive Science of Belief (pp. 193–208). Cambridge University Press. https://doi.org/10.1017/9781009001021.014
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114
Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015a). What makes us think? A three-stage dual-process model of analytic engagement. Cognitive Psychology, 80, 34–72. https://doi.org/10.1016/j.cogpsych.2015.05.001
Pennycook, G., Fugelsang, J. A., & Koehler, D. J. (2015b). Everyday consequences of analytic thinking. Current Directions in Psychological Science, 24(6), 425–432. https://doi.org/10.1177/0963721415604610
Pennycook, G., & Rand, D. G. (2021). Research note: Examining false beliefs about voter fraud in the wake of the 2020 Presidential Election. Harvard Kennedy School Misinformation Review, 2(1). https://doi.org/10.37016/mr-2020-51
Schaffner, B. F., & Luks, S. (2018). Misinformation or expressive responding? What an inauguration crowd can tell us about the source of political misinformation in surveys. Public Opinion Quarterly, 82(1), 135–147. https://doi.org/10.1093/poq/nfx042
Shenhav, A., Rand, D. G., & Greene, J. D. (2012). Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology: General, 141(3), 423–428. https://doi.org/10.1037/a0025391
Stagnaro, M. N., Pennycook, G., & Rand, D. G. (2018). Performance on the Cognitive Reflection Test is stable across time. Judgment and Decision Making, 13(3), 260–267. https://doi.org/10.1017/s1930297500007695
Swami, V., Chamorro-Premuzic, T., & Furnham, A. (2010). Unanswered questions: A preliminary investigation of personality and individual difference predictors of 9/11 conspiracist beliefs. Applied Cognitive Psychology, 24(6), 749–761. https://doi.org/10.1002/acp.1583
Swami, V., Coles, R., Stieger, S., Pietschnig, J., Furnham, A., Rehim, S., & Voracek, M. (2011). Conspiracist ideation in Britain and Austria: Evidence of a monological belief system and associations between individual psychological differences and real-world and fictitious conspiracy theories. British Journal of Psychology, 102(3), 443–463. https://doi.org/10.1111/j.2044-8295.2010.02004.x
Swann, W. B., Jr., & Buhrmester, M. D. (2015). Identity fusion. Current Directions in Psychological Science, 24(1), 52–57. https://doi.org/10.1177/0963721414551363
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113. https://doi.org/10.1017/s1930297500007622
Varmann, A. H., Kruse, L., Bierwiaczonek, K., Gómez, Á., Vázquez, A., & Kunst, J. R. (2022). How identity fusion predicts extreme pro-group orientations: A meta-analysis. https://doi.org/10.31234/osf.io/prasc
Wells, A. (2019). False recall, and how it affects polling. https://yougov.co.uk/topics/politics/articles-reports/2019/07/17/false-recall-and-how-it-affects-polling
Whitehouse, H. (2018). Dying for the group: Towards a general theory of extreme self-sacrifice. Behavioral and Brain Sciences, 41(e192), 1–64. https://doi.org/10.1017/s0140525x18000249
Williams, D. (2022). Signalling, commitment, and strategic absurdities. Mind & Language, 37(2), 1011–1029. https://doi.org/10.1111/mila.12392