How do people estimate the prevalence of beliefs and knowledge among others? Here, we examine the hypothesis that mere repetition of information increases such perceptions of consensus — an “illusory consensus effect.” Although existing evidence suggests that repeated exposure to information may increase its perceived consensus, the impact of repetition has not been tested in isolation from other source and contextual cues. We conducted two experiments to fill this gap. Prolific participants located in the U.S. read a series of trivia claims — half true and half false in Experiment 1 and all true in Experiment 2. These claims were not attributed to any source. After a short delay, participants made consensus judgments about previously seen (repeated) and new trivia claims. Repetition significantly increased perceived consensus in both experiments; in Experiment 1, participants judged that more Americans would believe repeated (vs. new) information, and in Experiment 2, participants judged that more Americans knew repeated (vs. new) information. These findings provide strong evidence for an illusory consensus effect, such that mere exposure to information increases perceptions of consensus on two different measures: how many others would believe it and estimates of current public knowledge. These findings are relevant to our understanding of how our information environments may contribute to (mis)perceptions of consensus.

People’s beliefs and behaviors are driven, in part, by what they think others believe and know (e.g., Cialdini, 2009; Festinger, 1954; Tankard & Paluck, 2016). Prior research has demonstrated the important role of perceived consensus on a vast array of outcomes, from health-related decisions like vaccination uptake (Moehring et al., 2023) to attitudes toward polarized topics like climate change (Lewandowsky et al., 2019) to the likelihood of engaging in undesirable behaviors like drinking and driving (Perkins et al., 2010) to the success of teaching practices (Sadler et al., 2013). These estimates of what others think are often constructed through an individual’s own experience rather than obtaining actual information about the rates of behaviors and opinions (Nelson et al., 1998). When constructing estimates of what others know or believe, individuals appear to be particularly influenced by the frequency with which they have previously encountered the information they are judging, relying on this cue more than other relevant cues such as the variety of sources sharing that information or opinion (Pillai & Fazio, 2024; Weaver et al., 2007).

However, this past research has generally examined the effects of exposure to claims attributed to distinct sources in a specific context, for example, from named individuals in a focus group (Weaver et al., 2007) or from social media users distinctly identified by photograph (Pillai & Fazio, 2024). The methodological choice of pairing the prior exposure with specific source cues makes it difficult to disentangle the distinct role of repetition, as the context of the repeated exposure also provides relevant information that bears on judgments of consensus. For instance, exposure to a statement from a social media user provides suggestive evidence that the statement is believed by social media users. Thus, whether repetition per se can influence perceptions of social consensus remains an open question. Here, we examine for the first time whether mere repetition affects the perceived consensus of claims that are presented without any source information or contextual cues. We present two experimental studies, each focused on the effects of repetition on a different measure of social consensus: judgments of how many others would believe true and false claims (Experiment 1) and estimates of current public knowledge about true claims (Experiment 2). We hypothesized that we would observe an “illusory consensus effect” in both, such that participants would judge that more others would believe or already know repeated (vs. new) information.

Existing research provides insights into how repetition may increase consensus estimates. First, exposure to a stimulus increases how easy it feels to process that stimulus when it is encountered again. This increased processing fluency may then be used as a heuristic cue to infer that the stimulus is widely known (for reviews, see Schwarz et al., 2021; Schwarz & Jalbert, 2020). Because people are more likely to encounter widely shared beliefs, these beliefs are also likely to feel familiar and easy to process. Thus, processing ease can serve as a valid cue for consensus if it is a result of having encountered information many times from a variety of sources. However, processing fluency may also result from incidental influences that do not directly reflect consensus. For example, information may feel easy to process if it has been encountered repeatedly from the same source or a source that is not relevant to the target of the consensus estimate. Because people are more sensitive to their experiences of processing fluency than they are to the sources of those experiences (Schwarz, 2012), sources of processing ease that are unrelated to consensus may be misattributed to it. For example, seeing a name repeated during an experiment increased participants’ perception that the person was famous (Jacoby et al., 1989), and hearing one person repeat an opinion increased its perceived popularity (Weaver et al., 2007). There is also initial evidence that fluency resulting from variables unrelated to prior exposure may influence consensus estimates: Valsesia and Schwarz (2016) found that products with easier-to-pronounce names (that were thus easier to process) were judged to be more popular than products with more difficult-to-pronounce names.
Specific to estimates of what others believe, researchers have found that repeatedly hearing a trivia question increases participants’ perceptions that their peers know the answer (Birch et al., 2017), and reading news headlines shared by the same social media user multiple times increases perceptions that others believe it to be true (Pillai & Fazio, 2024).

A second way that repetition may increase estimates of consensus is through changes in one’s own beliefs and perceived knowledge. As with consensus estimates, people rely on experiences of processing fluency to judge what they believe and know themselves. Information that is repeated is judged to be more true than new information, a robust finding called the “illusory truth effect” (Hasher et al., 1977; for recent reviews, see Pillai & Fazio, 2021; Udry & Barber, 2024). More recently, researchers have also found that repetition increases an individual’s perception that they already knew the information — an illusion of knowledge effect (Speckmann & Unkelbach, 2024). When making judgments about what others believe and know, individuals often draw on their own beliefs and knowledge as a source of information (Nelson et al., 1998). The tendency to be biased by one’s own beliefs and knowledge when making estimates about what others believe and know represents a form of egocentric mentalizing (Todd & Tamir, 2024). In the case of knowledge estimates, this has been called the “curse of knowledge” effect (Camerer et al., 1989). For example, people judge that more of their peers would know the answer to a trivia question when they themselves know the answer (Birch et al., 2017; Nickerson et al., 1987; Tullis, 2018). Thus, repetition may facilitate perceptions of consensus by first increasing one’s own belief in or feelings of knowledge about the claim. In turn, people may draw on these personal states of knowledge and belief as a source of information to estimate that others are more likely to believe or know that information as well.

Although prior literature provides compelling evidence for a repetition-consensus link, information in these studies has typically been presented with accompanying source or contextual cues (e.g., from a specific person in a specific context) and has thus not provided an isolated test of the effects of repetition on consensus. To fill this gap, we conducted two studies with Prolific users located in the U.S. testing whether repeating information — in the absence of any accompanying source or contextual cues about where that information came from — increases perceptions of its consensus. In Experiment 1, we investigated whether participants judged that more Americans would believe true and false claims that were repeated (vs. new). In Experiment 2, we investigated the effect of mere repetition on participants’ judgments of how many others know true information. Given that individuals tend to rely more on their own beliefs and knowledge when making estimates about more similar others, for whom one’s own beliefs and knowledge are more likely to be a relevant cue (Nelson et al., 1998; Todd & Tamir, 2024), we additionally explored whether the effects of repetition were moderated by claim veracity in Experiment 1 (which used both true and false claims).

Finally, we also explored whether these effects varied by individual differences in thinking styles (Need for Cognition and Cognitive Reflection) in both experiments. Need for Cognition (NFC) measures the extent to which people enjoy and engage in thinking (Cacioppo & Petty, 1982), and the Cognitive Reflection Test (CRT) measures people’s ability to override intuitive responses (Frederick, 2005). In principle, people higher on these measures could be better able to leverage cognitive resources to discount the effects of fluency when making judgments of consensus. On the other hand, such individual differences in epistemic processing have largely been found to be irrelevant to the effects of repetition on belief (De keersmaecker et al., 2020, but see Newman et al., 2020). If so, repetition might increase perceptions of consensus uniformly across participants. To investigate these possibilities, we asked participants to complete a 12-item NFC measure and a 7-item CRT.

Data, scripts for analyses, and materials for both experiments in this paper can be accessed at osf.io/wbvcg/. Methods and analyses were preregistered at aspredicted.org/VZV_LQ7 for Experiment 1 and at aspredicted.org/R7W_N4J for Experiment 2. All analyses reported are preregistered unless labeled exploratory.

Our first experiment investigated whether mere repetition increases perceptions that more others would believe that information. All procedures for both experiments in this paper were conducted in compliance with the University of Washington’s Institutional Review Board (IRB).

Method

Experimental Design

We manipulated claim repetition (repeated or new) within-subjects. Participants made consensus judgments for 72 claims, half of which they had seen in the earlier exposure phase and half of which were new. In addition, half of both the repeated and new claims were factually true and half were factually false, allowing us to explore the influence of claim veracity.

Participants

To determine the number of participants to recruit, we performed a power analysis using an estimated effect size of dz = 0.48, an effect size observed in a recent study from our research group testing the effects of repetition on perceived expert consensus (Arya, 2024, Chapter 3, Experiment 1). Although this study used different materials (health claims) and a different consensus measure (expert consensus), it was the only study we were aware of that tested the impact of mere repetition on some type of consensus measure. It also followed the same basic within-subjects design and was conducted with a similar sample of participants (Prolific users located in the U.S.). Using this estimated effect size, we found that a total sample of 59 participants would be required to detect the effect in a repeated measures design with α = .05, power (1-β) = .95, and a two-tailed test according to G*Power (Faul et al., 2007). Given that we did not yet have an estimate for the effect of repetition for each of our consensus measures, we decided to overrecruit relative to this estimate and aim for 100 total participants in each experiment.
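The sample-size calculation above can be sketched in a few lines. The following is an illustrative approximation, not the analysis actually used: it applies the standard normal-approximation formula for a two-tailed paired t-test on difference scores, which lands slightly below the exact noncentral-t value of 59 that G*Power reports.

```python
from math import ceil
from statistics import NormalDist

def approx_paired_n(dz: float, alpha: float = 0.05, power: float = 0.95) -> int:
    """Normal-approximation sample size for a two-tailed paired t-test:
    n ~= ((z_{1-alpha/2} + z_{power}) / dz)^2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_power = z.inv_cdf(power)           # 1.645 for power = .95
    return ceil(((z_alpha + z_power) / dz) ** 2)

n = approx_paired_n(0.48)  # 57; G*Power's exact noncentral-t calculation gives 59
```

The exact computation accounts for the heavier tails of the t distribution, which is why it requires a couple more participants than the normal approximation suggests.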

We recruited 100 Prolific workers located in the U.S. with a 95%+ approval rating to participate in a survey on “Item Perception”. We estimated our study would take approximately 15 minutes and participants were given $3.00 for their participation, consistent with Prolific’s current recommended rate of $12/hour. As a requirement of our funding, people who were affiliated with the University of Washington and involved in this research were ineligible for this study. In total, 101 participants completed the survey in Experiment 1 (Mage = 40.66, SDage = 15.75; 40.6% male, 57.4% female, 2.0% non-binary, 0.0% chose to self-describe). Note that this number of participants is one more than preregistered because Prolific procedures allowed an additional participant to complete the study before it closed.

Materials and Measures

Stimuli. We used a set of 72 true and false trivia claims about a variety of topics (sports, geography, food, animals, and science) from Jalbert, Newman, and Schwarz (2020). These claims were selected from a broader set of claims previously normed in a sample of online workers (Jalbert et al., 2019) to be ambiguous with regard to truth: each selected claim had been judged to be true between 35% and 65% of the time in the norming. Claims were also selected such that truth ratings were similar for true and false claims, with M = 0.52 (SD = 0.08) for both. Examples of true claims include “Halvah is a confection made of sesame seeds” and “Fly-fishing is the oldest method of recreational fishing”, while examples of false claims include “Biking is the first event in a triathlon” and “Mayonnaise is usually made with raw egg whites.”

During the initial exposure phase, participants were presented with 36 of these trivia claims (half true and half false), and during the judgment phase, participants saw the same 36 claims along with 36 new claims (also half true and half false). To control for item effects, we also counterbalanced which half of the claims were repeated between participants such that half of the participants saw one set of 36 claims repeated while the other half saw the other set of 36 claims repeated.
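This counterbalancing and randomization scheme can be illustrated as follows (a hypothetical sketch with placeholder claim IDs, not the actual stimulus code):

```python
import random

# Hypothetical claim IDs standing in for the two counterbalanced halves of
# the 72 trivia claims (placeholders, not the real stimuli).
set_a = [f"claim_{i:02d}" for i in range(36)]
set_b = [f"claim_{i:02d}" for i in range(36, 72)]

def assign_condition(participant_index: int):
    """Alternate which half of the claims is repeated across participants;
    randomize the exposure order and the judgment-phase order."""
    repeated, new = (set_a, set_b) if participant_index % 2 == 0 else (set_b, set_a)
    exposure = random.sample(repeated, k=36)        # randomized exposure order
    judgment = random.sample(repeated + new, k=72)  # fully randomized test order
    return exposure, judgment

exposure_claims, judgment_claims = assign_condition(0)
```

Because the repeated half alternates between participants, any item-level differences between the two claim sets average out across the sample.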

Consensus Measure. For each trivia claim, participants were asked “How many Americans do you think would believe this claim?”, and made their responses on an unnumbered ten-point scale with the endpoints “hardly anyone” (coded as 1) on the left and “almost everyone” (coded as 10) on the right.

Additional Measures. We included four additional measures assessing individual characteristics for exploratory purposes. We were interested in whether the effects of repetition on consensus estimates might vary across differences in any of these measures. Two of these measures assessed aspects of individual thinking styles: a 12-item Need for Cognition Scale (NFC; Cacioppo & Petty, 1982) to capture individual differences in the tendency to engage in elaborative thinking, and a 7-item Cognitive Reflection Test (CRT; a 3-item reworded version of Frederick, 2005, from Shenhav et al., 2012, and the 4-item CRT by Thomson & Oppenheimer, 2016) to capture individual differences in reliance on analytical (vs. intuitive) thinking. An additional two items were included to assess participants’ identification with Americans, the target of judgment for our consensus measure. The first item was an adapted Inclusion of Others in the Self Scale (Aron et al., 1992), where participants select one of seven pairs of circles overlapping to various extents that best describes their relationship with Americans. These responses were coded from one to seven, with higher values corresponding to higher levels of overlap. An image of how this scale item appeared to participants can be seen in Figure 1. We also asked participants to answer the question, “How similar do you think you are to the typical American?” with responses made on a seven-point unnumbered scale with the endpoints “not at all similar” (coded as 1) on the left and “extremely similar” (coded as 7) on the right.

Figure 1.
An adapted version of the Inclusion of Others in the Self Scale (Aron et al., 1992) presented to participants to assess their perceived relationship to Americans in each experiment.

Procedure

When participants signed up for the experiment, they read an information sheet and indicated their agreement to participate. Participants were required to complete the survey on a computer (not a phone or tablet).

Exposure Phase. In the exposure phase, participants were told that, for approximately the next three minutes, they would see a series of statements, and that these statements would be presented automatically. They were asked to read the statements carefully as they were presented but to not do anything else. They were additionally informed that they would not be able to pause the study so they should make sure they had no distractions before they started.

Participants then saw the series of 36 trivia claims that would be their repeated claims. Each trivia claim appeared on the screen for five seconds before auto-advancing to the next claim. The order in which these claims were presented was randomized for each participant.

Delay. Next, to serve as a short delay, participants completed the 12-item NFC Scale. For this task, participants were told they would be shown a series of statements, and for each statement to indicate to what extent the statement was characteristic of them. Participants made their responses on an unnumbered five-point scale from “extremely uncharacteristic” (coded as -2) on the left to “extremely characteristic” (coded as 2) on the right.

Judgment Phase. In the judgment phase, participants were presented with 72 trivia claims — the 36 they had viewed during the exposure phase along with 36 new claims — and answered the consensus measure for each claim. Claims were presented one at a time and the order was fully randomized for each participant.

In the instructions for this task, participants were informed that they would see another series of claims appear on the screen. They were additionally told that they may have seen some of these claims earlier in the study and that some of the claims were true and some were false. Participants were also asked to read each claim carefully as it appeared on the screen, and were then shown the consensus measure and an example of the rating scale they would use to make their response. Finally, participants were asked not to search for answers online and, if they were unsure of an answer, to just make their best guess.

Additional Measures and Demographics. After the judgment phase, participants answered the questions relating to their American identity (the adapted Inclusion of Others in the Self Scale and the perceived similarity question) followed by the 7-item CRT. Finally, participants answered a few demographic questions including age and gender.

Results

To answer our primary research question of whether repeating information increases perceptions that more others would believe that information, we conducted a paired samples t-test comparing the mean consensus ratings of repeated and new claims. For this and all subsequent analyses in this paper, we report Cohen’s dz for the difference in mean ratings between new and repeated claims. Our calculations for this effect size take into account the correlation between repeated measures and use Hedges’ correction for small sample sizes.
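As a concrete sketch of this effect-size calculation (using made-up ratings, not the experimental data): because dz is computed from difference scores, the correlation between the repeated and new conditions is folded into the denominator automatically, and Hedges' correction J = 1 - 3/(4·df - 1) shrinks the estimate slightly for small samples.

```python
from statistics import mean, stdev

def hedges_dz(repeated, new):
    """Cohen's dz for paired ratings, with Hedges' small-sample correction.
    Working with difference scores builds the repeated-measures correlation
    into the denominator automatically."""
    diffs = [r - n for r, n in zip(repeated, new)]
    dz = mean(diffs) / stdev(diffs)   # mean difference / SD of differences
    df = len(diffs) - 1
    j = 1 - 3 / (4 * df - 1)          # Hedges' correction factor
    return dz * j

# Hypothetical per-participant mean consensus ratings (illustrative only)
repeated_means = [6.2, 7.1, 6.8, 5.9, 7.4, 6.5, 6.0, 7.0]
new_means = [5.8, 6.5, 6.9, 5.5, 6.8, 6.2, 5.7, 6.4]
effect = hedges_dz(repeated_means, new_means)
```

The same difference scores also yield the paired t statistic (mean difference divided by its standard error), which is why the t-test and dz move together.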

Consistent with our hypothesis, we found a significant effect of repetition on perceived consensus, with participants judging that more Americans would believe claims that had been repeated (M = 6.65, SD = 1.07) compared to claims that were new (M = 6.17, SD = 0.96), mean difference = 0.48, 95% CI [0.36, 0.60], t(100) = 8.06, p < .001, dz = 0.80, 95% CI [0.57, 1.02]. A plot showing the range of mean consensus ratings for new and repeated claims in this experiment can be seen in Figure 2, and an additional visualization of these data can be found in the supplementary materials.

Figure 2.
Panel A shows mean consensus ratings for new and repeated claims in Experiment 1. Each dot represents average ratings for an individual participant, shifted to represent the density distribution of ratings. Diamonds reflect group-level means and error bars reflect 95% confidence intervals. Ratings were responses to the question, “How many Americans do you think would believe this claim?”, made on an unnumbered ten-point scale with the endpoints “hardly anyone” (1) on the left and “almost everyone” (10) on the right. Panel B shows a boxplot of difference scores: each participant’s average response in the repeated condition minus their average response in the new condition.

Exploratory Analyses

We next turned to our exploratory analyses. We first investigated whether the effect of repetition on consensus judgments varied across our individual difference measures (NFC, CRT, the adapted Inclusion of Others in the Self Scale, and ratings of perceived similarity to the typical American). For each individual difference measure, we conducted an ANCOVA with the relevant scale score or item response included as a continuous, mean-centered between-subjects variable and claim repetition (repeated or new) as a within-subjects factor. We then examined whether the interaction between repetition and each individual difference measure was significant.
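To make the logic of this test concrete (a hypothetical sketch, not the authors' analysis script): with a two-level within-subjects factor, the repetition × covariate interaction in such an ANCOVA is equivalent to regressing each participant's difference score (repeated minus new) on the mean-centered covariate and testing whether the slope differs from zero.

```python
from statistics import mean

def interaction_slope(diff_scores, covariate):
    """OLS slope of difference scores (repeated - new) on a mean-centered
    covariate. With a two-level within-subjects factor, testing this slope
    against zero corresponds to the repetition x covariate interaction."""
    c_bar = mean(covariate)
    centered = [c - c_bar for c in covariate]
    num = sum(x * d for x, d in zip(centered, diff_scores))
    den = sum(x * x for x in centered)
    return num / den

# Hypothetical data: per-participant difference scores and centered NFC scores
diff_scores = [0.5, 0.4, 0.6, 0.5, 0.4, 0.6]
nfc_scores = [-1.0, 0.2, 0.8, -0.5, 1.1, -0.6]
slope = interaction_slope(diff_scores, nfc_scores)  # near zero: no moderation
```

A slope near zero, as in this toy example, corresponds to the null interactions reported below: the repetition effect does not grow or shrink with the covariate.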

We also investigated whether the impact of repetition varied by claim truth using a repeated-measures ANOVA with repetition and claim truth as within-subjects variables. Given that our stimuli were chosen to be ambiguous with regard to truth (for both true and false claims) based on norming data, we did not anticipate that the effects of repetition on perceptions of consensus would vary by claim veracity in this case. However, we thought this analysis might be of interest to others.

Overall, we found that none of these variables had a significant interaction with repetition. The impact of repetition on judgments of consensus did not significantly vary across individual differences in NFC, F(1, 99) = 2.69, p = .104, partial eta2 = 0.026, or CRT, F(1, 99) = 2.65, p = .107, partial eta2 = 0.026. Nor did it differ by ratings on our adapted Inclusion of Others in the Self Scale, F(1, 99) = 0.12, p = .728, partial eta2 = 0.001, or by how similar participants perceived themselves to be to the typical American, F(1, 99) = 0.19, p = .663, partial eta2 = 0.002. In addition, the impact of repetition did not vary by claim veracity, F(1, 100) = 0.38, p = .538, partial eta2 = 0.004, indicating that repetition had a similar effect on perceived consensus across true and false claims. A detailed report of our exploratory analyses (including coding information, descriptives, and main effects) can be found in our supplementary materials.

In Experiment 1, participants judged how many Americans they thought would believe new and repeated claims. While this is one measure of perceived consensus, a limitation is that participants are only judging potential beliefs and not making estimates about a current state of belief or knowledge. Thus, in Experiment 2, we focused on a different measure of consensus: estimates of current public knowledge. Participants made judgments about how many Americans knew each claim.

Unlike in typical illusory truth studies, in this study we wanted participants to (correctly) believe all of the claims were true at exposure and at test, because it does not make sense to ask participants how many Americans “know” a claim if they think it is false. Therefore, in Experiment 2, we only used true claims and informed participants at exposure and at judgment that the claims were true. We also included additional questions at the end of the study checking whether participants reported correctly perceiving all claims as true at exposure and test. This also allowed us to explore whether any effects we observed held up among the subset of participants who indicated thinking all of the claims were true.

Method

Experimental Design

We once again manipulated claim repetition (repeated or new) within-subjects. Participants made consensus judgments for 72 claims, half repeated and half new. This time, all claims were factually true.

Participants

We again recruited 100 Prolific participants using the same methods as in Experiment 1. In Experiment 1, the median time for participants to complete the survey was a few minutes longer than estimated, so we increased our payment to $3.60 for this study. Overall, 100 participants completed the study (Mage = 37.21, SDage = 14.24; 43.0% male, 56.0% female, 1.0% non-binary, 0.0% chose to self-describe).

Materials and Measures

Stimuli. In Experiment 2, we only wanted to use true claims as stimuli. In the broader set of normed claims that served as the source of the true and false claims in Experiment 1 (Jalbert et al., 2019), each false claim had been created by altering one word of a corresponding true claim (e.g., the false claim “Snakes have movable eyelids” corresponded to the true claim “Snakes lack movable eyelids”). To create a set of true claims for Experiment 2, we simply took the set of 72 claims from Experiment 1 and replaced the 36 false claims with their corresponding true versions from the normed data set, keeping the counterbalancing the same as in Experiment 1. Based on the mean proportion of the time claims were judged to be true in the original norming data, this new set was still relatively ambiguous, with M = 0.64 (SD = 0.14).

Consensus Measure. Participants were asked “How many Americans know this information?”, and made their responses on an unnumbered ten-point scale with the endpoints “hardly anyone” (coded as 1) on the left and “almost everyone” (coded as 10) on the right.

Additional Measures. We included the same four exploratory measures assessing individual differences that were included in Experiment 1. In addition, in this study, we wanted to check whether participants (correctly) believed the claims to be true during the exposure and test phases, as we had informed them. We thus added questions about this. Participants were asked to think back to either the first task where they were reading the claims (the exposure phase) or the task later in the study where they were rating claims (the judgment phase) and indicate whether they thought all of the claims were true while completing that task (yes/no). If participants marked no (indicating that they didn’t think all of the claims were true), they received a follow-up question asking approximately how many of the claims they thought were true, answered on an unnumbered Likert-type scale with the endpoints “Almost none” (coded as 1) and “Almost all” (coded as 7), with “about half” in the middle (coded as 4). We randomized whether participants were asked about the claims at exposure or test first.

Procedure

Our procedure was identical to that used in Experiment 1, except that we used our new set of all true claims and asked our new consensus measure. Participants also answered our new questions assessing their perceptions of claim truth after our individual difference measures. The instructions also changed from Experiment 1: we referred to the claims (correctly) as “true claims” throughout, and, instead of being told that “some of the claims were true and some of the claims were false” prior to the judgment phase, participants were told that “all of these claims are true.”

Results

To answer our primary research question of whether repeating information increases perceptions that more others already know that information, we (as in Experiment 1) conducted a paired samples t-test comparing the mean consensus ratings of repeated and new claims. As in Experiment 1, we found a significant effect of repetition on perceived consensus: participants judged that more Americans knew information that had been repeated (M = 4.39, SD = 1.28) compared to new information (M = 4.09, SD = 1.21), mean difference = 0.31, 95% CI [0.22, 0.39], t(99) = 6.65, p < .001, dz = 0.66, 95% CI [0.44, 0.87]. A plot showing the range of mean consensus ratings for new and repeated claims in Experiment 2 can be seen in Figure 3, and an additional visualization can be found in the supplementary materials.

Figure 3.
Panel A shows mean consensus ratings for new and repeated claims in Experiment 2. Each dot represents average ratings for an individual participant, shifted to represent the density distribution of ratings. Diamonds reflect group-level means and error bars reflect 95% confidence intervals. Ratings were responses to the question, “How many Americans know this information?”, made on an unnumbered ten-point scale with the endpoints “hardly anyone” (1) on the left and “almost everyone” (10) on the right. Panel B shows a boxplot of difference scores: each participant’s average response in the repeated condition minus their average response in the new condition.

Exploratory Analyses

We again turned to our exploratory analyses, first using the same approach as Experiment 1 to investigate whether the effects of repetition on consensus judgments varied across our individual difference measures (NFC, CRT, the adapted Inclusion of Others in the Self Scale, and ratings of perceived similarity to the typical American).

We then explored participants’ answers to our questions about whether they thought all claims were true at exposure and judgment, and whether the impact of repetition on consensus varied depending on these responses. To do this, we conducted a 2 (claim repetition: repeated or new) by 2 (truth perception: thought all claims were true at exposure and test, or did not) mixed-model ANOVA. Finally, we conducted our main analysis (a paired samples t-test comparing the mean consensus ratings of repeated and new claims) only for participants who indicated thinking all claims were true at exposure and test.

Replicating Experiment 1, we found that none of our individual difference variables significantly interacted with repetition. The impact of repetition on judgments of consensus did not significantly vary across individual differences in NFC, F (1, 98) = 0.97, p = .327, partial eta2 = 0.010, or CRT, F (1, 98) = 0.21, p = .646, partial eta2 = 0.002. Nor did it differ by ratings on our adapted Inclusion of Others in the Self Scale, F (1, 98) = 2.43, p = .122, partial eta2 = 0.024, or by how similar participants perceived themselves to be to the typical American, F (1, 98) = 1.40, p = .240, partial eta2 = 0.014.
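As a consistency check on these statistics: for effects with one numerator degree of freedom, partial eta squared can be recovered directly from the F statistic via the standard identity partial eta2 = (F x df1) / (F x df1 + df2). A quick sketch verifying the reported values (illustrative code, not part of the original analysis scripts):

```python
def partial_eta_squared(f_stat, df1, df2):
    """Recover partial eta squared from an F statistic and its degrees of freedom."""
    return (f_stat * df1) / (f_stat * df1 + df2)

# F values reported for the repetition x individual-difference interactions
for label, f_stat in [("NFC", 0.97), ("CRT", 0.21), ("IOS", 2.43), ("similarity", 1.40)]:
    print(f"{label}: partial eta^2 = {partial_eta_squared(f_stat, 1, 98):.3f}")
```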

We then turned to the exploratory questions at the end of the study, where we asked participants, “Did you think that all of these claims were true?”. The majority of participants reported thinking all of the claims were true at exposure (57%) and at judgment (72%), with 54% indicating they thought all of the claims were true at both times. We also checked whether the impact of repetition on consensus varied depending on whether or not participants said they (correctly) thought that all of the claims were true at exposure and judgment. The impact of repetition did not significantly vary by these responses, F (1, 98) < 0.01, p = .982, partial eta2 < .001. Finally, we examined the effect of repetition only among participants who said they perceived all claims to be true at exposure and test. The effect of repetition on consensus judgments remained significant for these individuals, t (53) = 5.30, p < .001, dz = 0.71, 95% CI [0.41, 1.01].

Although exploratory and reliant on a retrospective judgment, these findings suggest that the illusory consensus effect held under our intended conditions (participants correctly thinking all of the claims were true throughout). As with Experiment 1, a more detailed report of our exploratory analyses (including coding information, descriptives, and main effects) can be found in our supplementary materials.

In two studies, we demonstrate that the mere repetition of information increases perceptions of its consensus — an “illusory consensus effect”. In Experiment 1, U.S. participants judged that more other Americans would believe repeated (vs. new) true and false claims. In Experiment 2, U.S. participants judged that more others already knew repeated (vs. new) true claims. Our research demonstrates for the first time (to our knowledge) that the mere repetition of information — in the absence of any accompanying source or contextual cues — is sufficient for these effects.

In addition, our analyses revealed that effects did not significantly vary across individual differences in participants’ tendency to engage in elaborative processing (assessed using the Need for Cognition scale) or to utilize analytical (vs. intuitive) thinking (assessed using the Cognitive Reflection Test). These results are consistent with findings from the illusory truth literature that the effects of repetition are relatively robust across individual differences in cognitive style (see Pillai & Fazio, 2021 for a review). However, it is important to note that these analyses were exploratory and that our studies were powered to detect a main effect of repetition, not a more subtle interaction.

The finding that mere repetition increases perceptions of consensus is theoretically consistent with existing work in a variety of domains linking repeated exposure to perceptions of broader support and knowledge (e.g., Birch et al., 2017; Jacoby et al., 1989; Kwan et al., 2015; Pillai & Fazio, 2021; Weaver et al., 2007), and builds on existing theories of how people draw on different sources of information to make judgments of others’ cognition (Nelson et al., 1998; Thomas & Jacoby, 2013; Tullis, 2018). In addition, this work bears on a related observation that people often have a difficult time differentiating between true consensus (conclusions drawn from independent sources) and false consensus (conclusions drawn from one primary source) when making judgments from information (Ransom et al., 2021; Yousif et al., 2019), and that people are often similarly influenced by hearing information multiple times from one source or from many sources (Pillai & Fazio, 2024; Weaver et al., 2007). In short, we find that people’s perceptions of others’ beliefs and knowledge are reliably influenced by mere repetition — even in the absence of any diagnostic social cues associated with these exposures.

Potential Mechanisms

We expected that repetition would facilitate perceptions of consensus through two mechanisms. First, repetition increases processing fluency, which is then used as a heuristic cue to estimate consensus (e.g., Schwarz et al., 2021; Schwarz & Jalbert, 2020). Second, repetition increases an individual’s own belief in the information (i.e., an illusory truth effect; Hasher et al., 1977), as well as their own perceived knowledge about it (i.e., an illusion of knowledge effect; Speckmann & Unkelbach, 2024), and people draw on their own beliefs and knowledge to make estimates about what others believe and know (Nelson et al., 1998; Todd & Tamir, 2024). Individuals’ judgments are likely to be informed to some degree through both of these paths, and we view them as highly interconnected. As discussed by Birch et al. (2017), the effects of knowledge and processing fluency are likely to be confounded in most real-world situations (e.g., exposure to information increases both knowledge and processing fluency), making the specific impact of each difficult to disentangle. In addition, we expect that the degree to which participants rely on each of these pathways is likely to depend on the specifics of the context (see Nelson et al., 1998; Thomas & Jacoby, 2013; Tullis, 2018 for relevant discussions).

Potential Implications

Our findings broadly speak to how an individual’s unique information environment may shape their perceptions of the state of knowledge and beliefs of those around them. On the one hand, repeated exposure to credible information may increase judgments of its consensus — a reassuring finding given that repeated exposure is often a valid cue for consensus (see Reber & Unkelbach, 2010 for a similar argument regarding repetition and truth). Thus, in situations where it is important for individuals to know or follow credible information — as in public health campaigns and science communication — we speculate that organizations could benefit from prioritizing the repetition of this information to communicate its broader acceptance. On the other hand, our findings are potentially concerning when considering contexts where individuals are likely to be repeatedly exposed to false, problematic, or controversial information that is not representative of true consensus views. On social media, for instance, a small number of vocal users with extreme views may drive attention and engagement, despite being unrepresentative of the majority view (Robertson et al., 2024). Relatedly, news coverage may skew perceptions of the consensus around topics such as climate change by providing equal coverage to non-consensus and consensus views (Imundo & Rapp, 2022). Future research should further investigate the specific dynamics of illusory consensus in these contexts, and consider testing the effectiveness of potential interventions such as adjusting the flow of information itself (i.e., through the removal or deprioritization of repeated false content) or providing additional information to communicate the true state of consensus (for an example of one such possible intervention, see Imundo & Rapp, 2022 on weight-of-evidence statements).

Limitations and Future Directions

As these were initial investigations of the illusory consensus effect, there are some constraints on generalizability to consider. First, we only tested the impact of a single repetition and a single type of stimulus (trivia claims). We do not yet know how multiple exposures to statements may affect perceptions of consensus, though one possibility is that, similar to the effects of repetition on belief, they do so in a logarithmic manner, with later exposures having smaller effects than earlier ones (Fazio et al., 2022; Hassan & Barber, 2021). We also do not yet know whether repetition increases perceptions of consensus around other types of claims besides trivia (e.g., health claims, political news), but it seems plausible that these effects would replicate in other contexts for similar reasons (Pillai & Fazio, 2021).

The effect of repetition may also depend on the target group of the consensus estimate. For example, we initially speculated that repetition may be more likely to affect consensus estimates for similar others, for whom one’s own knowledge is more diagnostic (e.g., Nelson et al., 1998; Todd & Tamir, 2024). In our studies, we did not find that the effects of repetition varied across our measures of identification with or perceived similarity to Americans, the target group of our judgments. However, as with our other individual difference measures, these analyses were exploratory and our studies were not powered to detect this interaction.

It would also be valuable to further explore how the effects of repetition on consensus play out in polarized news media environments where individuals are selectively exposed to information that is consistent with their own political affiliations. Shedding light on these processes, recent research by Beattie and Beattie (2023) found that people tend to overestimate how familiar people on the other side of the political spectrum are with stories from their own preferred news sources, potentially contributing to affective political polarization because the other side “should know better”. An additional consideration is whether the coverage of extreme views held by a few counter-partisans could lead to perceptions that those views are more commonly held by all counter-partisans, contributing further to this perceived divide.

Conclusion

The results of our two studies provide compelling evidence that the mere repetition of information is sufficient to increase judgments of how many others would believe it or currently know it — an “illusory consensus effect”. Simply hearing a statement, even when it is not attributed to a particular source or context, increases perceptions of its consensus.

Contributed to conception and design: MJ, RP

Contributed to the acquisition of data: MJ

Contributed to analysis and interpretation of data: MJ

Drafted and/or revised the article: MJ, RP

Approved the submitted version for publication: MJ, RP

The preparation of this article was supported by the University of Washington’s Center for an Informed Public and the John S. and James L. Knight Foundation through funding to the first author.

The authors have no competing interests.

Data, analysis scripts, and materials can be found at osf.io/wbvcg/.

1.

Our preregistrations state that we varied “which 36 of the 72 claims appear with photos to control for item effects” when they should have said that we varied “which 36 of the 72 claims were repeated to control for item effects” (consistent with our methods section below). There were no photos presented with the claims in these studies.

Aron, A., Aron, E. N., & Smollan, D. (1992). Inclusion of Other in the Self Scale and the structure of interpersonal closeness. Journal of Personality and Social Psychology, 63(4), 596–612. https://doi.org/10.1037/0022-3514.63.4.596
Arya, P. (2024). Perceived social consensus: A metacognitive perspective [Doctoral dissertation, University of Southern California]. https://digitallibrary.usc.edu/asset-management/2A3BF1MGH7OTL?=SearchResults
Beattie, P., & Beattie, M. (2023). Political polarization: A curse of knowledge? Frontiers in Psychology, 14, 1200627. https://doi.org/10.3389/fpsyg.2023.1200627
Birch, S. A. J., Brosseau-Liard, P. E., Haddock, T., & Ghrear, S. E. (2017). A ‘curse of knowledge’ in the absence of knowledge? People misattribute fluency when judging how common knowledge is among their peers. Cognition, 166, 447–458. https://doi.org/10.1016/j.cognition.2017.04.015
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131. https://doi.org/10.1037/0022-3514.42.1.116
Camerer, C., Loewenstein, G., & Weber, M. (1989). The curse of knowledge in economic settings: An experimental analysis. The Journal of Political Economy, 97(5), 1232–1254. https://doi.org/10.1086/261651
Cialdini, R. B. (2009). Influence: Science and practice. Pearson Education.
De keersmaecker, J., Dunning, D., Pennycook, G., Rand, D. G., Sanchez, C., Unkelbach, C., & Roets, A. (2020). Investigating the robustness of the illusory truth effect across individual differences in cognitive ability, need for cognitive closure, and cognitive style. Personality and Social Psychology Bulletin, 46(2), 204–215. https://doi.org/10.1177/0146167219853844
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/BF03193146
Fazio, L. K., Pillai, R. M., & Patel, D. (2022). The effects of repetition on belief in naturalistic settings. Journal of Experimental Psychology: General, 151(10), 2604–2613. https://doi.org/10.1037/xge0001211
Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7(2), 117–140. https://doi.org/10.1177/001872675400700202
Frederick, S. (2005). Cognitive reflection and decision making. Journal of Economic Perspectives, 19(4), 25–42. https://doi.org/10.1257/089533005775196732
Hasher, L., Goldstein, D., & Toppino, T. (1977). Frequency and the conference of referential validity. Journal of Verbal Learning & Verbal Behavior, 16, 107–112. https://doi.org/10.1016/S0022-5371(77)80012-1
Hassan, A., & Barber, S. J. (2021). The effects of repetition frequency on the illusory truth effect. Cognitive Research: Principles and Implications, 6(1), 38. https://doi.org/10.1186/s41235-021-00301-5
Imundo, M. N., & Rapp, D. N. (2022). When fairness is flawed: Effects of false balance reporting and weight-of-evidence statements on beliefs and perceptions of climate change. Journal of Applied Research in Memory and Cognition, 11(2), 258. https://doi.org/10.1016/j.jarmac.2021.10.002
Jacoby, L. L., Kelley, C., Brown, J., & Jasechko, J. (1989). Becoming famous overnight: Limits on the ability to avoid unconscious influences of the past. Journal of Personality and Social Psychology, 56(3), 326–337. https://doi.org/10.1037/0022-3514.56.3.326
Jalbert, M., Newman, E., & Schwarz, N. (2019). Trivia claim norming methods paper. figshare. https://doi.org/10.6084/M9.FIGSHARE.9975602
Jalbert, M., Newman, E., & Schwarz, N. (2020). Only half of what I’ll tell you is true: Expecting to encounter falsehoods reduces illusory truth. Journal of Applied Research in Memory and Cognition, 9(4), 602–613. https://doi.org/10.1016/j.jarmac.2020.08.010
Kwan, L. Y.-Y., Yap, S., & Chiu, C. (2015). Mere exposure affects perceived descriptive norms: Implications for personal preferences and trust. Organizational Behavior and Human Decision Processes, 129, 48–58. https://doi.org/10.1016/j.obhdp.2014.12.002
Lewandowsky, S., Cook, J., Fay, N., & Gignac, G. E. (2019). Science by social media: Attitudes towards climate change are mediated by perceived social consensus. Memory & Cognition, 47(8), 1445–1456. https://doi.org/10.3758/s13421-019-00948-y
Moehring, A., Collis, A., Garimella, K., Rahimian, M. A., Aral, S., & Eckles, D. (2023). Providing normative information increases intentions to accept a COVID-19 vaccine. Nature Communications, 14(1), 126. https://doi.org/10.1038/s41467-022-35052-4
Nelson, T. O., Kruglanski, A. W., & Jost, J. T. (1998). Knowing thyself and others: Progress in metacognitive social psychology. In Metacognition: Cognitive and social dimensions (pp. 69–89). Sage Publications, Inc. https://doi.org/10.4135/9781446279212.n5
Newman, E. J., Jalbert, M. C., Schwarz, N., & Ly, D. P. (2020). Truthiness, the illusory truth effect, and the role of need for cognition. Consciousness and Cognition, 78, 102866. https://doi.org/10.1016/j.concog.2019.102866
Nickerson, R. S., Baddeley, A., & Freeman, B. (1987). Are people’s estimates of what other people know influenced by what they themselves know? Acta Psychologica, 64(3), 245–259. https://doi.org/10.1016/0001-6918(87)90010-2
Perkins, H. W., Linkenbach, J. W., Lewis, M. A., & Neighbors, C. (2010). Effectiveness of social norms media marketing in reducing drinking and driving: A statewide campaign. Addictive Behaviors, 35(10), 866–874. https://doi.org/10.1016/j.addbeh.2010.05.004
Pillai, R. M., & Fazio, L. K. (2021). The effects of repeating false and misleading information on belief. WIREs Cognitive Science, 12(6), e1573. https://doi.org/10.1002/wcs.1573
Pillai, R. M., & Fazio, L. K. (2024). Repeated by many versus repeated by one: Examining the role of social consensus in the relationship between repetition and belief. Journal of Applied Research in Memory and Cognition. https://doi.org/10.1037/mac0000166
Ransom, K., Perfors, A., & Stephens, R. G. (2021). Social meta-inference and the evidentiary value of consensus [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/49sb5
Reber, R., & Unkelbach, C. (2010). The epistemic status of processing fluency as a source for judgments of truth. Review of Philosophy and Psychology, 1(4), 563–581. https://doi.org/10.1007/s13164-010-0039-7
Robertson, C., Del Rosario, K., & Van Bavel, J. J. (2024). Inside the funhouse mirror factory: How social media distorts perceptions of norms [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/kgcrq
Sadler, P. M., Sonnert, G., Coyle, H. P., Cook-Smith, N., & Miller, J. L. (2013). The influence of teachers’ knowledge on student learning in middle school physical science classrooms. American Educational Research Journal, 50(5), 1020–1049. https://doi.org/10.3102/0002831213477680
Schwarz, N. (2012). Feelings-as-Information Theory. In P. Van Lange, A. Kruglanski, & E. Higgins (Eds.), Handbook of Theories of Social Psychology: Volume 1 (pp. 289–308). SAGE Publications Ltd. https://doi.org/10.4135/9781446249215.n15
Schwarz, N., & Jalbert, M. (2020). When (fake) news feels true: Intuitions of truth and the acceptance and correction of misinformation. In C. Mc Mahon (Ed.), Psychological Insights for Understanding COVID-19 and Media and Technology (1st ed., pp. 9–25). Routledge. https://doi.org/10.4324/9781003121756-2
Schwarz, N., Jalbert, M., Noah, T., & Zhang, L. (2021). Metacognitive experiences as information: Processing fluency in consumer judgment and decision making. Consumer Psychology Review, 4(1), 4–25. https://doi.org/10.1002/arcp.1067
Shenhav, A., Rand, D. G., & Greene, J. D. (2012). Divine intuition: Cognitive style influences belief in God. Journal of Experimental Psychology: General, 141, 423–428. https://doi.org/10.1037/a0025391
Speckmann, F., & Unkelbach, C. (2024). Illusions of knowledge due to mere repetition. Cognition, 247, 105791. https://doi.org/10.1016/j.cognition.2024.105791
Tankard, M. E., & Paluck, E. L. (2016). Norm perception as a vehicle for social change. Social Issues and Policy Review, 10(1), 181–211. https://doi.org/10.1111/sipr.12022
Thomas, R. C., & Jacoby, L. L. (2013). Diminishing adult egocentrism when estimating what others know. Journal of Experimental Psychology: Learning, Memory, and Cognition, 39(2), 473–486. https://doi.org/10.1037/a0028883
Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113. https://doi.org/10.1017/S1930297500007622
Todd, A. R., & Tamir, D. I. (2024). Factors that amplify and attenuate egocentric mentalizing. Nature Reviews Psychology, 3(3), 164–180. https://doi.org/10.1038/s44159-024-00277-1
Tullis, J. G. (2018). Predicting others’ knowledge: Knowledge estimation as cue utilization. Memory & Cognition, 46(8), 1360–1375. https://doi.org/10.3758/s13421-018-0842-4
Udry, J., & Barber, S. J. (2024). The illusory truth effect: A review of how repetition increases belief in misinformation. Current Opinion in Psychology, 56, 101736. https://doi.org/10.1016/j.copsyc.2023.101736
Valsesia, F., & Schwarz, N. (2016). Easy to pronounce? Everybody has it!: Brand name fluency and consumer differentiation motives [Poster]. Society for Personality and Social Psychology.
Weaver, K., Garcia, S. M., Schwarz, N., & Miller, D. T. (2007). Inferring the popularity of an opinion from its familiarity: A repetitive voice can sound like a chorus. Journal of Personality and Social Psychology, 92(5), 821–833. https://doi.org/10.1037/0022-3514.92.5.821
Yousif, S. R., Aboody, R., & Keil, F. C. (2019). The illusion of consensus: A failure to distinguish between true and false consensus. Psychological Science, 30(8), 1195–1204. https://doi.org/10.1177/0956797619856844
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material