Public opinion about research can affect how society gathers evidence through public support for research funding. Studies consistently show that people selectively search for and evaluate evidence in ways that are partial to their pre-existing views. The present research tested how these processes influence public support for new research on politicized topics, examining individuals’ preferences for conducting studies that were otherwise identical except for the direction of the hypothesis. In two preregistered experiments, participants chose between two hypothetical studies with opposing hypotheses on a polarized topic, first in the absence of any evidence and then in the presence of conflicting evidence, after each research group had collected evidence supporting its own hypothesis. We predicted that participants would report stronger belief-consistent preferences in the absence of evidence than in the presence of conflicting evidence. However, participants preferred to conduct the belief-consistent study in both the absence and presence of conflicting evidence. Importantly, individual differences emerged in participants’ preferences and reasoning: those who reported no preference scored higher in scientific reasoning and actively open-minded thinking. These findings suggest that, on average, laypeople prioritize research with belief-consistent hypotheses, but that people with stronger scientific reasoning and actively open-minded thinking are more likely to recognize that the studies are scientifically equivalent and to report a neutral preference.

Public opinion about research can affect how society gathers evidence to address pressing and often politically polarized societal challenges, from gun violence to climate change. Policymakers, many of whom are non-scientists (Petersen, 2012), can set funding priorities and regulations for the conduct of research. Private foundations can also set funding priorities and award grants (Robinson, 1984). Presidential administrations can influence the budgets and actions of federal agencies, such as the Environmental Protection Agency, that fund, conduct, and regulate research (Fredrickson et al., 2018). We therefore conducted a series of experiments to better understand the psychological processes influencing public preferences for performing scientific research on politicized topics. Specifically, we draw on prior research in social and cognitive psychology to investigate the role of beliefs and individual differences in people’s preferences for conducting studies on a polarized topic. We present the results of two pre-registered1 experiments2 testing whether people’s preferences for new research on politicized topics are influenced by the researchers’ hypotheses, asking whether participants prefer research testing a belief-consistent hypothesis when presented with two identical studies that differ only in the hypothesis. We also ask whether participants change their preferences in the presence of conflicting evidence, where they could continue to try to gather belief-consistent evidence or scrutinize the belief-inconsistent evidence by subjecting it to replication. Below, we summarize findings from prior research on motivated reasoning and confirmatory and disconfirmatory strategies before describing how these findings informed our experimental design and hypotheses.

Research on motivated reasoning, confirmation and myside bias, and hypothesis testing strategies suggests that goals, expectations, beliefs, and heuristics influence how people gather and evaluate evidence (Baron, 1995; Klayman & Ha, 1987; Koehler, 1993; Kunda, 1990; Stanovich et al., 2013; Vedejová & Čavojová, 2022).

Motivated Reasoning. Motivated reasoning occurs when goals influence how people process information: although people can be motivated by accuracy goals to reach objective and valid conclusions, they can also be motivated by directional goals to reach particular conclusions (Kunda, 1990). Though empirically it can be challenging to separate these two categories of goals, directional goals have been shown to influence how people seek out, interpret, and evaluate evidence. People tend to selectively search for, weigh, and evaluate evidence in ways that support their prior views, favoring evidence consistent with their views over evidence inconsistent with their views (Drummond & Fischhoff, 2019; Kunda, 1990; Lord et al., 1979; Munro & Ditto, 1997; Taber & Lodge, 2006; Vedejová & Čavojová, 2022). Closely related to motivated reasoning are confirmatory and disconfirmatory strategies, discussed below.

Confirmatory Strategies. Confirmatory and disconfirmatory strategies are two common approaches people use to evaluate evidence and hypotheses. Research examining confirmatory strategies for judgment and decision making focuses on processes aimed at confirming prior beliefs and expectations, often in a biased way. The terms confirmation bias and myside bias describe biases that result from a variety of reasoning processes, including searching for, interpreting, evaluating, weighting, and recalling information in ways that are partial to pre-existing, expected, or desired3 views (Klayman, 1995; Mercier, 2017; Nickerson, 1998; Stanovich et al., 2013; Vedejová & Čavojová, 2022). The term confirmation bias has been used as a catch-all term for a host of reasoning processes (Fischhoff & Beyth-Marom, 1983; Klayman, 1995; Nickerson, 1998; Vedejová & Čavojová, 2022), including cases in which people gather and evaluate evidence in ways that favor their prior beliefs as well as cases in which expectations guide information processing even though individuals hold no personal stake in the outcome. Similarly, myside bias has been defined variously as the tendency to: (a) seek out information in ways that favor pre-existing views (Mercier, 2017; Stanovich et al., 2013), (b) generate more arguments in favor of vs. opposed to one’s views (Toplak & Stanovich, 2003), and (c) ignore or avoid evidence inconsistent with one’s views (Wolfe & Britt, 2008).

Hot vs. Cold Cognition. Related empirical research finds that while people tend to search for evidence that confirms their views (Nickerson, 1998; Wason, 1968), they may do so for different reasons. Specifically, confirmation bias in information search can occur through “hot” motivated or “cold” cognitive mechanisms (Fischhoff & Beyth-Marom, 1983; Koehler, 1993; Kunda, 1990; MacCoun, 1998; Taber & Lodge, 2006). This literature suggests that motivated biases are driven by the desire to obtain a particular outcome, whereas cognitive biases are a function of cognitive defaults that can occur in the absence of strong motivation and emotion.

Positive Test Strategy. For example, a common heuristic people rely on to evaluate scientific claims is a positive test strategy (Klayman & Ha, 1987; Koehler, 1993), in which people search for evidence that will confirm (rather than disconfirm) a hypothesis. Studies have shown that people use this cognitive shortcut even when they do not have a motivation to obtain a particular outcome (Fischhoff & Beyth-Marom, 1983; Nisbett & Ross, 1980), and this strategy may be useful in some situations (Klayman & Ha, 1987).

The classic Wason (1968) selection task demonstrates a “cold” cognitive bias arising from a positive test strategy. This task involves four cards, each with a letter on one side and a number on the other, and a rule about the connection between the letter and number (e.g., if the letter is a vowel, the number is even). Viewing one side of each card, participants must decide which cards to turn over to test the rule. The majority of individuals exhibit a confirmatory hypothesis testing strategy, turning over cards that would confirm the rule but not those that would disconfirm it. People tend to adopt a confirmatory strategy in initially searching for evidence to test a hypothesis, looking for information that supports expected or desired outcomes instead of information that most directly tests the validity of the hypothesis (Klayman & Ha, 1987; Kunda, 1990; Nickerson, 1998; Stanovich et al., 2013; Wason, 1968).

Directionally Motivated Confirmation Bias in Hypothesis Testing. These processes have not only been observed in unmotivated contexts (like the Wason task) but also in information seeking on polarized topics, such that people are more likely to seek out evidence consistent vs. inconsistent with beliefs (Taber & Lodge, 2006; Vedejová & Čavojová, 2022). Motivated confirmation bias occurs in hypothesis testing situations when people have strong motivation to reach a particular conclusion and selectively search for and evaluate evidence in ways that favor a preferred hypothesis or conclusion. For example, motivated confirmation biases often emerge when evaluating scientific claims on polarized topics, particularly those bearing on strongly held beliefs or values (Kunda, 1990; Vedejová & Čavojová, 2022) that evoke an emotional response (Munro & Ditto, 1997). Researchers have noted, however, that it is difficult to determine whether a motivational or cognitive mechanism accounts for an observed bias or preference (Tetlock & Levi, 1982), and at least sometimes, both types of processes may be at play (MacCoun, 1998).

Prior research finds that these various reasoning processes influence how people evaluate scientific evidence, including whether they trust or doubt the findings. People rate studies with belief-consistent results as higher in quality (Drummond & Fischhoff, 2019; Lord et al., 1979; Munro & Ditto, 1997), less biased (MacCoun & Paletz, 2009), and more scientific (Munro, 2010) than studies with belief-inconsistent results, and perceive research on belief-consistent vs. inconsistent topics as more important to conduct: when presented with a variety of studies on politicized topics, people more strongly support the conduct of those consistent vs. inconsistent with their beliefs (Anglin & Jussim, 2017).

Disconfirmatory Strategies. In contrast to the confirmatory strategies reviewed above, people may also employ disconfirmatory strategies, seeking to find weaknesses in evidence or claims or ignoring, downplaying, or rejecting the evidence (Ditto & Lopez, 1992; Edwards & Smith, 1996; Kahan et al., 2017; Klaczynski, 2000; Klaczynski & Gordon, 1996). The term disconfirmation bias has been used to refer to experimental findings in which people more strongly critique and scrutinize belief-inconsistent than belief-consistent evidence (Edwards & Smith, 1996). According to quantity of processing theory, people require more evidence to accept an undesired conclusion (Ditto & Lopez, 1992). People take longer to read arguments incompatible with their prior beliefs on controversial issues, challenge them more, and rate them as weaker than arguments compatible with their beliefs (Edwards & Smith, 1996; Taber & Lodge, 2006). Furthermore, people are more likely to identify flaws in research opposing vs. supporting their beliefs, suggesting that people may process belief-inconsistent evidence more effortfully and deeply than belief-consistent evidence (Klaczynski & Gordon, 1996).

We designed a research paradigm to examine confirmatory and disconfirmatory strategies in people’s preferences for new research. We report two pre-registered experiments in which participants read research proposals describing two hypothetical studies on a polarized topic. The hypothetical study descriptions were otherwise identical except for the direction of the hypothesis being tested: one tested a belief-consistent hypothesis and the other a belief-inconsistent hypothesis. We asked participants to choose which of the two studies they preferred to see conducted, first in the absence of prior evidence, and later in the presence of conflicting evidence. We also collected open-ended responses in which participants described the rationale for their preference (or lack thereof) to better understand the reasoning underlying their choices. Our use of qualitative data is a unique contribution of this paper: we classify participants’ rationales in terms of the psychological processes highlighted above (e.g., confirmatory vs. disconfirmatory, and motivated by directional vs. non-directional goals).

Our research design thus builds on prior research in judgment and decision-making, which finds a robust framing effect whereby participants display systematic preferences between two identical options when those options are framed differently (e.g., choosing different policies when an identical decision is framed in terms of lives saved vs. lives lost; Tversky & Kahneman, 1981); we apply this logic to preferences for new research based on the framing of the hypotheses. Further, our research design also extends prior work on motivated scientific reasoning, which has typically examined perceptions of results consistent or inconsistent with beliefs (e.g., Drummond & Fischhoff, 2019; Lord et al., 1979; Munro & Ditto, 1997; Taber & Lodge, 2006), to examine preferences for hypotheses consistent or inconsistent with beliefs. Below, we describe how our three key hypotheses were derived from the above literatures, and how we tested these hypotheses in our studies.

Hypothesis 1: In the absence of prior evidence, people will more strongly support research when the researchers’ hypotheses align with their views than when they do not.

Prior research has shown that people rate studies on belief-consistent vs. belief-inconsistent topics as more important to conduct (Anglin & Jussim, 2017); however, prior studies have not tested whether people more strongly support research on the same research topic when the researchers’ hypotheses align with their views than when they do not. In the absence of prior evidence, we hypothesized that people would prefer to conduct a study in which the researchers’ hypothesis supports vs. opposes their views. This effect may occur through the “hot” or “cold” mechanisms described above. Specifically, prior research on the use of a positive test strategy predicts that people will prioritize research testing a belief-consistent (vs. inconsistent) hypothesis as a result of this cognitive bias, which can operate independently of a motivation to obtain a particular outcome (e.g., Klayman & Ha, 1987; Koehler, 1993). Research on motivated confirmation bias would also predict that people prefer to conduct the study testing a belief-consistent hypothesis because they seek to accumulate evidence supporting their preferred outcome (e.g., Taber & Lodge, 2006; Vedejová & Čavojová, 2022), and may expect researchers’ findings to align with their hypotheses because published studies often present positive results (Fanelli, 2010). Research on motivated reasoning would further predict that people may draw inferences about the study methods and researchers from their hypotheses, even when these details are not directly stated, including inferences about the quality and trustworthiness of the research (Drummond & Fischhoff, 2019; Lord et al., 1979; MacCoun & Paletz, 2009).

Hypothesis 2: Preferences for conducting research will vary in the absence of prior evidence vs. the presence of conflicting evidence: in the presence of conflicting evidence, people will more strongly support conducting research with belief-inconsistent hypotheses.

Previous studies have focused on how people evaluate belief-consistent and belief-inconsistent hypotheses separately, and in the absence of prior evidence. Prior literature has not tested whether people, when faced with conflicting evidence, will seek to gather additional evidence to support a preferred conclusion or to scrutinize a disfavored conclusion by subjecting it to replication. Scientific research can be performed when there is very little evidence (e.g., for emergent polarized topics such as how to mitigate COVID-19) or when there is substantial conflicting evidence (e.g., nutrition topics). Previous research on disconfirmatory strategies focuses on scrutinizing evidence, which can only be done in contexts where prior evidence supporting a belief-inconsistent hypothesis exists. We predicted above, in Hypothesis 1, that without prior evidence, participants would rely on confirmatory strategies, as there is no evidence to scrutinize. In the presence of conflicting evidence, however, participants have two options: they can either continue to apply a confirmatory strategy, or they can pivot toward a disconfirmatory strategy by scrutinizing the belief-inconsistent evidence. We predicted that preferences for conducting the belief-inconsistent study would increase in the presence of conflicting evidence. Thus, Hypothesis 2 predicts that people will adopt a disconfirmatory strategy when presented with conflicting results, seeking to disconfirm the study with the belief-inconsistent findings by subjecting it to further scrutiny and/or failing to replicate the findings.

This prediction is supported by previous research suggesting that people hold belief-inconsistent evidence to a higher burden of proof than belief-consistent evidence because it challenges their expectations based on prior beliefs, knowledge, and experience (Fischhoff & Beyth-Marom, 1983; Jern et al., 2014; Koehler, 1993; MacCoun, 1998; Tversky & Kahneman, 1974). Likewise, quantity of processing theory suggests that people require more evidence to accept an undesired conclusion (Ditto & Lopez, 1992), and people tend to more deeply scrutinize and challenge evidence opposing vs. supporting their expected or desired views (Edwards & Smith, 1996; Klaczynski & Gordon, 1996; Lord et al., 1979; Taber & Lodge, 2006). Our predicted effect may be driven by “hot” motivated reasoning (e.g., participants might have the goal of failing to replicate and discrediting the evidence by subjecting it to further scrutiny) or a “cold” cognitive bias (e.g., this is their natural reaction for resolving conflict in evidence) that may even be considered rational (Jern et al., 2014).

However, prior literature suggests alternative hypotheses for preferences for polarized research in the presence of conflicting evidence. Given that prior research finds evidence for confirmation bias and the use of a positive test strategy in hypothesis testing (Klayman & Ha, 1987; Koehler, 1993; Taber & Lodge, 2006; Vedejová & Čavojová, 2022), one alternative hypothesis is that people will exhibit a preference for the study with the belief-consistent hypothesis in both the absence of evidence and the presence of conflicting evidence.

A second alternative hypothesis, for both Hypotheses 1 and 2, is that people will report no preference in either circumstance. Although directional goals sometimes motivate people to reason selectively to reach particular preferred conclusions, accuracy goals can also motivate people to reason independently of their prior beliefs (Kunda, 1990; though Klaczynski and Gordon (1996) found that telling participants to be accurate was not sufficient to reduce bias in their reasoning). Indeed, although research on motivated reasoning has focused on the biased processes people use to maintain their views, recent research suggests that people may be receptive to information challenging their beliefs (Anglin, 2019), and that confirmation biases may be more likely to emerge in some reasoning processes than in others (Anglin, 2019; Vedejová & Čavojová, 2022). These findings predict that people will report a neutral preference, either in an effort to remain objective and reach an accurate conclusion (e.g., examining questions in a non-directional fashion or searching for evidence on both sides) or because they recognize the studies are scientifically equivalent and thus have no preference.

Hypothesis 3: There exist individual differences in preferences for conducting research when the researchers’ hypotheses do or do not align with prior beliefs: participants with more critical reasoning skills and cognitive styles supporting critical reasoning will be more likely to report a neutral preference.

We hypothesized that individuals would differ both in their preference for which of two identically constructed studies with opposing hypotheses to conduct and in the reasoning (e.g., confirmatory vs. disconfirmatory and motivated by directional or accuracy goals) underlying their choices.4 Prior studies have found that hypothesis testing strategies (Hendrickson et al., 2016) vary across individuals, and that those with greater scientific reasoning ability (Drummond & Fischhoff, 2017) and endorsement of actively open-minded thinking (e.g., Haran et al., 2013) are more likely to hold scientifically consistent beliefs on politicized scientific issues above and beyond their personal political views (Drummond & Fischhoff, 2017; Stenhouse et al., 2018). We hypothesized that those with greater scientific reasoning ability would be more likely to recognize that, all else being equal, the study hypotheses should be independent of the outcome, and thus would be less likely to report a preference between two identically constructed studies with opposing hypotheses. Further, we hypothesized that individuals who more strongly endorse actively open-minded thinking would be more open to either outcome, and be less likely to report a preference. In addition, need for cognition and faith in intuition are two information processing styles theorized to separate rational, analytical reasoning driven by systematic processing from intuitive experiential reasoning driven by heuristic processing (Cacioppo et al., 1996; Epstein et al., 1996). We predicted that those high in need for cognition would recognize that the study outcomes should be independent of the hypotheses and thus be less likely to report a preference. We predicted that those high in faith in intuition would be more likely to favor a belief-consistent hypothesis, relying more on their emotions and intuitions to guide their reasoning and decision making.

Experiment 1 investigated research preferences for hypothetical studies on a politically polarized (gun control) or non-polarized topic (skin cream), using modified versions of the stimuli from Kahan et al. (2017). We included questionnaires measuring individual differences in reasoning and thinking styles (scientific reasoning ability, actively open-minded thinking, need for cognition, and faith in intuition) to explore individual differences in hypothesis testing strategies. We also included questions assessing participants’ beliefs about the ability of science to provide answers to the research question and general support for science as additional individual differences relevant to reasoning about science, though we did not make specific predictions regarding these variables.

Pilot Experiment

Prior to Experiment 1, we conducted an initial pilot experiment to develop the materials to test our hypotheses (available in the Supplementary Materials and preregistered on OSF5). Following this pilot experiment, we further refined our materials by turning textual descriptions of the researchers’ hypothesized and actual results into contingency tables (described below). We then conducted a second pilot to pretest these tables for comprehension prior to launching Experiment 1 (also available in the Supplementary Materials and preregistered on OSF).6

Disclosure Statement

This paper reports all manipulations and exclusions. All measures collected are reported in the text or Supplementary Materials. No additional data were collected for any of the experiments once data analysis began. Our preregistered predictions, materials, and data analysis plans, along with data and code for all experiments, are available on OSF at https://osf.io/854kq/.7

Participants

A total of 453 Mechanical Turk users (255 men, 196 women, 2 other; mean age = 34.36, SD = 10.07) completed this study in exchange for $1.75. An additional 37 began the experiment but dropped out before completing it. Participants were included in all analyses for which they provided complete data. Of those who responded to the scenarios, 281 identified as liberal (86 as very liberal, 121 as liberal, and 74 as somewhat liberal), 82 as moderate, and 99 as conservative (25 as very conservative, 33 as conservative, and 41 as somewhat conservative).

Experimental Design

Experiment 1 followed a 2 (polarization condition: experimental vs. control) x 2 (prior evidence: none vs. conflicting) mixed-model design, with polarization condition as a between-subjects factor and prior evidence as a within-subjects factor. Ideology was also included as a continuous between-subjects factor. Study order was an additional factor (each time participants learned about opposing hypotheses/study findings, we randomized which hypothesis/study finding they saw first); because participants’ responses did not vary by order, we collapsed responses across order.

Materials and Procedure

At the beginning of the experiment, participants answered several questions about themselves, including their political orientation (1=very liberal, 7=very conservative), belief about gun control (1=strongly decreases crime, 7=strongly increases crime) and position on gun control (1=strongly oppose, 7=strongly support).8

Polarization condition. Next, participants were randomly assigned to polarization condition: gun control served as the experimental (polarized) stimulus study topic and skin cream as the control (non-polarized) topic.

Prior evidence condition. The within-subjects factor manipulated the absence or presence of conflicting evidence using two sequential scenarios. Each scenario presented participants with two studies with opposing predictions or results. In the first scenario, the research groups had not yet collected any data, but made conflicting predictions about the outcome of their studies. In the second scenario, the research groups had conducted their studies and observed results consistent with their predictions (again, conflicting with one another). The information about the two groups’ studies was presented side-by-side, and the hypotheses/results of each hypothetical study were presented in a table and described in words (see the supplementary materials to view the stimuli).

For clarity, throughout the paper we use scenario to refer to the absence of evidence and presence of conflicting evidence conditions, and study to refer to the studies with opposing predictions within each scenario.

Absence of evidence scenario. In the absence of evidence scenario, two research groups each plan to compare changes in [crime rates between 150 cities that enact handgun bans and 150 cities that do not enact handgun bans / skin rashes between 150 patients who use the new skin cream and 150 patients who do not use the new skin cream]. The research groups (we randomly assigned one to be called Research Group A and the other Research Group B) made opposing hypotheses regarding whether gun control will increase or decrease crime rates (or the skin cream will increase or decrease skin rashes).

We were interested in how people prioritize new research in a context in which resources are scarce, reflecting the conditions under which real research gets funded. Thus, participants were told that, “Both research groups submit a proposal to a research institute to conduct their study. The research institute conducts studies for researchers, but has a limited capacity in the number of studies it can conduct.” For the primary dependent variable, participants were asked which study they preferred to have conducted (1=Strongly prefer Study A [proposed by Research Group A], 2=Prefer Study A; 3=No preference, 4=Prefer Study B, 5=Strongly prefer Study B [proposed by Research Group B]). They were also asked to explain their preference. Because we randomized the order of presenting the opposing predictions, we recoded participants’ responses to reflect which hypothesis (gun control will increase/decrease crime rates; skin cream will increase/decrease skin rashes) they preferred, such that higher scores indicated a stronger preference for the gun control/skin cream decreases crime/skin rashes study.

Presence of conflicting evidence scenario. Participants were then asked to imagine a new scenario that was separate but similar to Scenario 1. In the presence of conflicting evidence scenario, the two research groups have now each conducted a study comparing changes in [crime rates between 150 cities that enacted handgun bans and 150 cities that did not enact handgun bans / skin rashes between 150 patients who used the new skin cream and 150 patients who did not use the new skin cream].9 The research groups found opposing results, consistent with their hypotheses, regarding whether gun control increased or decreased crime rates, or the skin cream increased or decreased skin rashes (see supplementary materials for the stimuli).

Participants were told that, “Both research groups submit a proposal to a research institute to conduct their study on another sample. The research institute conducts studies for researchers, but has a limited capacity in the number of studies it can conduct.” They were then again asked the main dependent variable question, which study they preferred to have conducted, and to explain why. Again, we recoded responses to reflect which study they preferred, such that higher scores indicated a stronger preference for the gun control/skin cream decreases crime/skin rashes study.
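
To illustrate the recoding step described above, the sketch below shows one way it could be implemented in Python; the data frame and column names are hypothetical and this is not the authors’ actual analysis code.

```python
import pandas as pd

# Hypothetical raw responses: 'response' is the 1-5 preference item (1 = strongly
# prefer the first-listed study, 5 = strongly prefer the second-listed study), and
# 'decreases_listed_first' indicates whether the "decreases crime/rashes" study
# was randomly presented first (as Study A).
df = pd.DataFrame({
    "response": [1, 2, 3, 4, 5, 2],
    "decreases_listed_first": [True, False, True, False, True, False],
})

# Recode so that higher scores always indicate a stronger preference for the
# "decreases crime/rashes" study, regardless of randomized presentation order:
# keep the response when that study was listed second; reverse-score (6 - x)
# when it was listed first.
df["pref_decreases_study"] = df["response"].where(
    ~df["decreases_listed_first"], 6 - df["response"]
)
print(df)
```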

Scientific impotence (Munro, 2010). After both scenarios, participants were asked to rate the extent to which they believe the research question is one that cannot be answered using scientific methods (1=Strongly disagree, 8=Strongly agree).

In the second part of the experiment, participants completed the following individual difference measures, presented in randomized order:

Scientific reasoning scale (SRS; Drummond & Fischhoff, 2017). The SRS is an 11-item measure of scientific reasoning ability, assessing the analytical reasoning skills needed to evaluate scientific evidence in terms of the factors that determine its quality. For each question, participants were presented with a short scientific scenario followed by a statement that they judged to be true or false (e.g., “A researcher finds that American states with larger parks have fewer endangered species. True or False? These data show that increasing the size of American state parks will reduce the number of endangered species.”). Each question was scored as correct or incorrect, and correct responses were summed; higher scores indicate greater scientific reasoning ability (α = 0.74).

Actively open-minded thinking scale (AOT; Haran et al., 2013). The AOT is a 7-item measure of attitudes toward actively open-minded thinking. Participants rated their agreement or disagreement with each statement on a 7-point scale (1=Completely disagree, 7=Completely agree; e.g., “People should revise their beliefs in response to new information or evidence.”; α = 0.80).

Need for Cognition and Faith in Intuition (Epstein et al., 1996). The 10-item rational experiential inventory was used to measure need for cognition (NFC) and faith in intuition (FI). Each subscale contains 5 items (NFC: α = 0.84; FI: α = 0.92). Participants rated whether or not each statement is characteristic of them and what they believe on a 5-point scale (1=Extremely uncharacteristic of me, 5=Extremely characteristic of me).

Support for science. Participants rated the extent to which they support scientific research and how often they trust the results of scientific studies on 7-point scales ranging from 1 (not at all) to 7 (completely). These items were combined into an overall measure of general support for science (α = 0.72).
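
The reliabilities reported for these measures are Cronbach’s alpha coefficients. As a brief illustration of how such a coefficient is computed (on simulated item responses, not the study data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated responses to a 7-item, 7-point scale (e.g., AOT-style ratings).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                              # shared trait
items = np.clip(np.round(4 + latent + rng.normal(size=(200, 7))), 1, 7)
print(f"alpha = {cronbach_alpha(items):.2f}")
```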

At the end of the study, participants answered demographic questions, were reminded that the research groups and studies were hypothetical and do not reflect real research, and were provided with a text box to provide comments.

Hypotheses

Based on Hypotheses 1 and 2, we expected to find a three-way condition x prior evidence x ideology interaction. In the experimental condition, we predicted that liberals would report a stronger preference for the gun control decreases crime study in the absence of evidence vs. presence of conflicting evidence, whereas conservatives would report a stronger preference for the gun control increases crime study in the absence of evidence vs. presence of conflicting evidence. We did not expect to see this reversal among participants in the control condition.

Power Analyses

The pilot experiment produced small-medium size effects. Although we modified the stimuli and design in Experiment 1, we used a small effect size in the a priori power analysis to ensure sufficient power to detect effects. This analysis indicated that, for a 2x2x2 mixed-model design, a sample size of 436 is necessary to detect small effects (f = 0.10) at 95% power and α = .05. Therefore, we set our target sample size to N=450 for this study. A sensitivity power analysis with these parameters and the obtained sample size of N=453 indicated that the study was powered to detect small effects (f = 0.10).
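
For reference, the sketch below shows the general form of such a calculation in Python using Cohen’s f. It treats the design as a simple between-subjects ANOVA, so it ignores the within-subjects factor (and the correlation among repeated measures) and will not reproduce the N = 436 figure reported for the mixed design; it is illustrative only.

```python
from statsmodels.stats.power import FTestAnovaPower

# Total N required for a one-way between-subjects ANOVA with two groups at
# Cohen's f = 0.10, alpha = .05, and power = .95. A mixed design with a
# within-subjects factor typically requires fewer participants, which is why
# the a priori figure reported above (N = 436) is smaller than this estimate.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.10, alpha=0.05, power=0.95, k_groups=2
)
print(round(n_total))
```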

Descriptive Statistics

Table 1 summarizes frequencies of belief-consistent, belief-inconsistent, and no preferences among participants in the gun control condition (the only condition in which preferences could be classified as belief-consistent or belief-inconsistent on the basis of participants’ political orientation). The data indicate considerable variation in participants’ preferences. In the skin cream condition, participants tended to prefer the skin cream effective study in both the absence of evidence (M = 3.71, SD = 1.24) and the presence of conflicting evidence (M = 3.71, SD = 1.26).

Table 1.
Descriptive statistics for preferences of which study to conduct

                              Experiment 1 (Gun control condition only)    Experiment 2
Preference                    Absence of      Presence of                  Absence of      Presence of
                              evidence        conflicting evidence         evidence        conflicting evidence
Strong belief-consistent           47               40                          90              94
Belief-consistent                  47               56                         113             108
No preference                      55               42                         152             118
Belief-inconsistent                25               28                          72              82
Strong belief-inconsistent         16               21                          35              49

Note. Preferences for the study with the belief-consistent or belief-inconsistent hypothesis were determined based on participants’ self-reported political orientation; moderates (Experiment 1: n=82; Experiment 2: n=97) were excluded.

Preferred study to conduct (absence of evidence vs. presence of conflicting evidence)

Responses to each preferred study to conduct question (in the absence of evidence and presence of conflicting evidence) were coded so that higher scores indicated a stronger preference for the gun control decreases crime study in the experimental condition and the skin cream makes rashes better study in the control condition. A general linear model analysis was then performed, with polarization condition (experimental vs. control) and ideology as between-subjects factors and prior evidence (absence vs. conflicting) as a within-subjects factor. The model included all two and three-way interactions among condition, ideology, and prior evidence. Based on Hypotheses 1 and 2, we predicted a three-way condition x prior evidence x ideology interaction, such that, in the experimental condition, liberals would report a stronger preference for the gun control decreases crime study in the absence of evidence vs. presence of conflicting evidence, whereas conservatives would report a stronger preference for the gun control increases crime study in the absence of evidence vs. presence of conflicting evidence. We did not expect to see this reversal among participants in the control condition.
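
An analogous analysis can be specified as a random-intercept mixed model on long-format data (one row per participant per scenario). The sketch below uses simulated data and illustrative variable names; it is not the authors’ analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate long-format data: 200 participants x 2 scenarios, with a condition x
# ideology effect on preference in the experimental condition only.
rng = np.random.default_rng(1)
n = 200
condition = rng.choice(["experimental", "control"], size=n)
ideology = rng.integers(1, 8, size=n).astype(float)   # 1 = very liberal, 7 = very conservative
long_df = pd.DataFrame({
    "subject": np.repeat(np.arange(n), 2),
    "condition": np.repeat(condition, 2),
    "ideology": np.repeat(ideology, 2),
    "evidence": np.tile(["absence", "conflicting"], n),
})
long_df["preference"] = np.clip(np.round(
    3 - 0.2 * (long_df["ideology"] - 4) * (long_df["condition"] == "experimental")
    + rng.normal(0, 1, size=len(long_df))
), 1, 5)

# Random-intercept model with all condition x ideology x evidence interactions.
model = smf.mixedlm(
    "preference ~ C(condition) * ideology * C(evidence)",
    data=long_df,
    groups=long_df["subject"],
)
print(model.fit().summary())
```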

The main effects for polarization condition, F(1, 453) = 0.11, p = 0.74, ηp2 < 0.001, and prior evidence, F(1, 453) = 2.11, p = 0.13, ηp2 = 0.005, were non-significant, as were the two-way interactions between polarization condition and prior evidence, F(1, 453) = 0.10, p = 0.92, ηp2 < 0.001, and prior evidence and ideology, F(1, 453) = 3.38, p = 0.07, ηp2 = 0.007. There was a main effect for ideology, F(1, 453) = 17.90, p < 0.001, ηp2 = 0.04, but this was qualified by a significant polarization condition x ideology interaction, F(1, 453) = 10.61, p = 0.001, ηp2 = 0.02.

Supporting Hypothesis 1 but not Hypothesis 2, simple effects analyses revealed that participants in the experimental condition were more likely to favor the study with the belief-consistent hypothesis across both scenarios: those with a stronger liberal ideology preferred to have the gun control decreases crime study conducted (and those with a stronger conservative political ideology preferred to have the gun control increases crime study conducted), β = -0.33, B = -0.22, SE = 0.04, t = -5.27, p < 0.001. Ideology was unrelated to study preference in the control condition, β = -0.04, B = -0.03, SE = 0.04, t = -0.59, p = 0.56. The predicted three-way condition x prior evidence x ideology interaction was nonsignificant, F(1, 453) = 0.14, p = 0.71, ηp2 < 0.001.

To test the robustness of the findings, we ran these analyses three additional times, first replacing ideology with gun control belief as preregistered, and then again for gun control position and a composite prior belief measure (averaging across ideology, gun control belief, and gun control position10; α = .72). For gun control belief, the polarization condition x gun control belief interaction remained significant, F(1, 453) = 27.69, p < 0.001, ηp2 = 0.06; in the experimental condition, a stronger belief in the effectiveness of gun control was associated with a preference for the gun control decreases crime study, β = -0.43, B = -0.32, SE = 0.05, t = -7.15, p < 0.001, whereas in the control condition, gun control belief was unrelated to study preference, β = 0.03, B = 0.05, SE = 0.04, t = 0.62, p = 0.54. The predicted three-way interaction remained nonsignificant, F(1, 453) = 0.23, p = 0.63, ηp2 = 0.001.

Likewise, for gun control position, the polarization condition x gun control position interaction remained significant, F(1, 450) = 12.65, p < 0.001, ηp2 = 0.03, such that stronger support for gun control was associated with a preference for the gun control decreases crime study in the experimental condition, β = 0.45, B = 0.25, SE = 0.03, t = 7.60, p < 0.001, but position was not significantly related to study preference in the control condition, β = 0.13, B = 0.07, SE = 0.04, t = 1.90, p = 0.06. Again, the three-way interaction was nonsignificant, F(1, 450) = 0.04, p = 0.85, ηp2 < 0.001.

For the composite prior belief measure, the interaction between polarization condition and the composite prior belief remained significant, F(1, 453) = 22.76, p < 0.001, ηp2 = 0.05, such that a stronger liberal political ideology and attitude toward gun control was associated with a preference for the gun control decreases crime study in the experimental condition, β = -0.49, B = 0.39, SE = 0.05, t = -8.55, p < 0.001, but not related to study preference in the control condition, β = -0.06, B = -0.05, SE = 0.05, t = -0.92, p = 0.36. The three-way interaction was again nonsignificant, F(1, 453) = 0.16, p = 0.69, ηp2 < 0.001.

These effects also persisted in another series of robustness checks (for ideology, gun control belief, gun control position, and the composite prior belief measure) with demographic variables included in the model as covariates.

To further examine whether participants displayed a preference to conduct the study testing the belief-consistent hypothesis (Hypotheses 1 and 2), we conducted a post hoc analysis in which we divided participants into 3 groups based on whether they reported a liberal (somewhat to very; gun control: n = 141, skin cream: n = 140), moderate (gun control: n = 40, skin cream: n = 42), or conservative (somewhat to very; gun control: n = 49, skin cream: n = 50) political orientation. We performed one sample t-tests to test whether each political group’s preferences differed from the midpoint of no preference, for those in the gun control (polarized) condition only. Liberals’ (M = 3.40, SD = 1.04) and conservatives’ (M = 2.62, SD = 1.31) preferences significantly differed from no preference, liberals: t(140) = 4.63, p < 0.001, d = 0.38; conservatives: t(48) = -2.40, p = 0.02, d = -0.29, whereas moderates’ (M = 3.06, SD = 1.18) did not, t(39) = 0.34, p = 0.74, d = 0.05. Figure 1 illustrates these preferences.
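
Each one-sample test compares a group’s mean preference to the scale midpoint of 3 (no preference). A minimal sketch with hypothetical scores (not the study data):

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 preference scores for one ideological group in the gun
# control condition; 3 is the "no preference" midpoint.
scores = np.array([4, 3, 5, 4, 2, 4, 3, 5, 4, 3])

result = stats.ttest_1samp(scores, popmean=3.0)
d = (scores.mean() - 3.0) / scores.std(ddof=1)   # Cohen's d relative to the midpoint
print(f"t({len(scores) - 1}) = {result.statistic:.2f}, p = {result.pvalue:.3f}, d = {d:.2f}")
```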

Figure 1.
Experiment 1 preferences for which study to conduct (1=strong preference for the gun control/skin cream ineffective study, 3=no preference, 5=strong preference for the gun control/skin cream effective study) as a function of ideology (liberal, moderate, conservative), polarization condition (experimental vs. control), and prior evidence (absence of evidence, presence of conflicting evidence).

Error bars indicate one standard error above and below the mean. Dotted line indicates neutral preference.


Individual Differences

To test for individual differences in participants’ preferences and reasoning (Hypothesis 3), we analyzed participants’ open-ended explanations for their preferences and compared participants’ preferences to their scores on the individual difference measures collected.

Open-ended explanations. To analyze participants’ open-ended explanations for their preferences, we read through their responses and inductively generated categories to capture the range of rationales provided. We grouped responses into categories of rationales capturing confirmatory and disconfirmatory processes described in the introduction, along with a third group of rationales for selecting no preference. Table 2 presents categories of confirmatory, disconfirmatory, and no preference rationales and an example response from each category.

Table 2.
Categories of rationales for preferences

Confirmatory rationale

Preference for one study or outcome
  Experiment 1: “Because I prefer the results found by Study B”
  Experiment 2: “I’d rather see a decrease in crime.”

Support for cause
  Experiment 1: “Again I think it shows concrete evidence to support legislation that will ultimately ban handguns”
  Experiment 2: “I would like to see confirmation of study B in that policy makers can observe that gun bans are ineffective in reducing crime.”

Supports belief
  Experiment 1: “I prefer Study B because it reinforces my beliefs regarding gun regulations”
  Experiment 2: “Their prediction is what I believe will happen. I’d rather see that study.”

One study less (or more) biased
  Experiment 1: “As I mentioned in my previous answer, I feel that Research Group B is more likely to be dishonest with its findings. I just have a feeling that they cherry-picked data to support their initial hypothesis. I would rather have group A conduct the study because I don’t trust Group B”
  Experiment 2: “Study B is clearly correct while study A seems seriously flawed and potentially tampered with.”

One study better or more important
  Experiment 1: “I believe the study that aims to lessen the risk is more important than studying the beneficial effects of a similar product”
  Experiment 2: “Still less complicated. Easier to implement.”

Supports existing evidence
  Experiment 1: “Effective gun control has worked in several foreign countries as well as within the United States in the past, including the assault weapons ban.”
  Experiment 2: “Study A reflects the views [of] many other studies on the topic I have read.”

More logical/accurate/likely; more worth investment
  Experiment 1: “Study B seems to have more realistic findings in my opinion”; “I think it’s more likely to find a correlation than study B would, and therefore be worth the time and funding.”
  Experiment 2: “Study A makes logical sense and it can be tested through the experiment. Study B does not make logical sense because less handguns should also mean less crime. Seems like a waste to check study B and its assessment because it is so unlikely.”

Disconfirmatory rationale

Scrutinize belief-inconsistent study
  Experiment 1: “I doubt the findings from Group B and think the research study should be repeated to verify the findings.”
  Experiment 2: “I don’t believe Study A’s results. It needs to be re-done under closer scrutiny.”

More interesting/unexpected
  Experiment 1: “I think it’s interesting to go in the opposite direction from normal thought and test for that.”
  Experiment 2: “I chose Group B because that is a really surprising outcome. Banning guns increased crime a lot.”

Scrutinize own beliefs or preferences
  Experiment 1: “Since my preconceived idea is that crime will decrease with gun control, it would make more sense to read about the opposing views and the facts behind it.”
  Experiment 2: “If the researchers were completely honest, I would rather have Study A be conducted. I expect the results will be the opposite of what they predict, so I feel they would be more skeptical. A result from that study might carry more weight than the other.”

No preference

Studies the same/shouldn’t matter
  Experiment 1: “Both testing methods are identical.”
  Experiment 2: “They are both looking at the same amount of cities that enact and do not enact hand gun laws. Just because their hypothesis is different, doesn't dictate what the outcomes will be. Their hypothesis may be wrong.”

Fine with either
  Experiment 1: “I think either one, if done properly, can show valid data.”
  Experiment 2: “More data for either would be useful.”

Want a high quality study
  Experiment 1: —
  Experiment 2: “I would mainly be looking for quality research and not one that just goers along with what I already believe.”

Want both conducted
  Experiment 1: “Both studies should be replicated to find out which is correct.”
  Experiment 2: “I’d prefer both be conducted since their findings are so different.”

Both biased; want unbiased study or neither
  Experiment 1: “I think they are both messed up. I think they both found what they wanted to.”
  Experiment 2: “I feel like both of the research groups might be cherry picking cities in order to come to a conclusion that will fit their narrative.”

Need more information
  Experiment 1: “Research studies like this are hard to compare because you don’t know which cities they are each looking at. I want to know if the cities are the same.”
  Experiment 2: “I would have to see how they collected data and many other factors that led to the numbers. The numbers themselves are a very small part of the story.”

Confirmatory explanations were the most common. Many rationales clearly reflected “hot” directionally motivated reasoning (e.g., a preference for a particular study or outcome; consistency with prior beliefs; support for a cause; that one study was better or more important; and that one study was less biased), whereas others could reflect a “cold” cognitive bias such as a positive test strategy, or a mix of the two (e.g., that one study was consistent with prior evidence; that one study was more logical, accurate, and worth the investment). Disconfirmatory explanations were less common but varied as well. Some reflected the “hot” disconfirmatory strategy, consistent with studies showing that people are more skeptical of belief-inconsistent evidence (e.g., Edwards & Smith, 1996; Taber & Lodge, 2006) and require more evidence to accept a belief-inconsistent conclusion (Ditto & Lopez, 1992), applying further scrutiny to the findings that opposed their beliefs or expectations. However, other participants wanted to subject their own hypothesis or beliefs to further scrutiny or thought it would be more interesting to test an unexpected hypothesis, applying a “cold” disconfirmatory strategy. Participants who reported no preference also provided a range of rationales, but displayed stronger accuracy-motivated reasoning: noting that the studies were the same and it should not matter which is conducted; not mentioning that the studies were the same but indicating that either would be acceptable; wanting to see both conducted; viewing both studies as biased and preferring to have neither conducted or an unbiased study conducted; and wanting more information before making a decision.

Preference vs. no preference. We tested Hypothesis 3 by examining whether scores on the individual difference measures were related to reporting a preference. We predicted that those who scored higher in scientific reasoning, actively open-minded thinking, and need for cognition, and lower in faith in intuition, would be less likely to report a preference in each scenario. Independent groups t-tests were performed to examine whether participants who reported a preference differed from those who did not on each of the individual difference measures. All participants were included in these analyses. Consistent with Hypothesis 3, in both the absence and presence of conflicting evidence, participants who reported no preference scored higher in scientific reasoning and actively open-minded thinking than did those who reported a preference, and these differences were moderate in strength (see Table 4). Similarly, participants who reported no preference scored lower in faith in intuition than did those who reported a preference in the absence of evidence, though this difference was small and not statistically significant in the presence of conflicting evidence. Need for cognition did not differ between those who reported preferences and those who did not, nor did scientific impotence or support for science, two additional individual difference variables included for exploratory purposes.
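
A minimal sketch of one such comparison (independent-groups t-test with Cohen’s d), using hypothetical SRS scores rather than the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical SRS scores (0-11) split by whether a participant reported a preference.
pref = np.array([5, 6, 4, 7, 6, 5, 8, 4, 6, 5], dtype=float)
no_pref = np.array([8, 7, 9, 6, 8, 7, 10, 7], dtype=float)

result = stats.ttest_ind(pref, no_pref)      # Student's t, equal variances assumed
pooled_sd = np.sqrt(
    ((len(pref) - 1) * pref.var(ddof=1) + (len(no_pref) - 1) * no_pref.var(ddof=1))
    / (len(pref) + len(no_pref) - 2)
)
d = (no_pref.mean() - pref.mean()) / pooled_sd   # positive: no-preference group scores higher
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}, d = {d:.2f}")
```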

Belief-consistent vs. belief-inconsistent preference. We also conducted exploratory correlational analyses to test whether belief-consistent (vs. inconsistent) preferences related to scores on the individual difference measures. Only liberals and conservatives in the gun control condition were included in these analyses, as their ideologies align with beliefs on this topic. Responses were coded so that higher scores indicated a stronger belief-consistent preference for all participants (i.e., a stronger preference for the gun control decreases crime study for participants who identified as somewhat to strongly liberal and a stronger preference for the gun control increases crime study for participants who identified as somewhat to strongly conservative). In both the absence and presence of conflicting evidence, belief-consistent vs. inconsistent preferences were unrelated to scientific reasoning, actively open-minded thinking, faith in intuition, need for cognition, scientific impotence, and support for science (see Table 3).

Table 3.
Correlations among individual difference measures

Measure                                                     1       2       3       4       5       6       7       8
1. Belief-consistent preference, absence of evidence       ---     .57*    -.01     .04     .03    -.09    -.01     .06
2. Belief-consistent preference, conflicting evidence      .57*    ---      .02     .02     .03    -.08    -.08     .11
3. SRS                                                      .07     .04     ---     .52*   -.31*    .24*   -.42*    .23*
4. AOT                                                      .06     .06     .48*    ---    -.32*    .34*   -.48*    .43*
5. Faith in intuition                                      -.05    -.05    -.27*   -.30*    ---    -.07     .24*   -.07
6. Need for cognition                                       .04     .08     .20*    .34*   -.04     ---    -.25*    .23*
7. Scientific impotence                                    -.10*   -.03    -.29*   -.41*    .18*   -.23*    ---    -.32*
8. Support for science                                      .12*    .12*    .12*    .33*   -.02     .15*   -.25*    ---
9. Social desirability                                     -.07    -.07    -.19*   -.09     .01     .18*    .06    -.06

Note. Higher values for the preference questions indicate a stronger preference for the study with the belief-consistent (vs. inconsistent) hypothesis. Experiment 1 results are displayed above the diagonal and Experiment 2 results are displayed below. For Experiment 1, only liberals and conservatives in the gun control condition were included. For Experiment 2, all participants were included as all received the gun control stimulus. *p < .05.

Table 4.
Scores on individual difference measures for participants who reported a preference vs. no preference
Experiment 1

Scenario 1 (Absence of Evidence)
Measure                  Preference M (SD)   No preference M (SD)   t (df)         d
SRS                      6.02 (2.69)         7.34 (2.76)            -4.66* (452)   .48
AOT                      5.11 (1.12)         5.53 (0.94)            -3.77* (453)   .41
Faith in intuition       3.55 (1.01)         3.29 (1.00)            2.50* (453)    .26
Need for cognition       3.44 (1.00)         3.53 (0.94)            -0.84 (453)    .09
Scientific impotence     3.48 (2.19)         3.24 (2.05)            1.10 (455)     .11
Support for science      5.49 (1.08)         5.56 (0.93)            -0.56 (451)    .07

Scenario 2 (Presence of Conflicting Evidence)
Measure                  Preference M (SD)   No preference M (SD)   t (df)         d
SRS                      6.06 (2.67)         7.69 (2.81)            -5.21* (452)   .60
AOT                      5.15 (1.09)         5.54 (1.04)            -3.13* (453)   .37
Faith in intuition       3.53 (1.02)         3.31 (0.98)            1.85 (453)     .22
Need for cognition       3.45 (0.99)         3.51 (0.98)            -0.55 (453)    .06
Scientific impotence     3.50 (2.20)         3.08 (1.93)            1.68 (455)     .20
Support for science      5.52 (1.05)         5.48 (1.02)            0.45 (451)     .04

Experiment 2

Scenario 1 (Absence of Evidence)
Measure                  Preference M (SD)   No preference M (SD)   t (df)         d
SRS                      5.73 (2.58)         7.64 (2.44)            -7.47* (446)   .76
AOT                      5.14 (1.09)         5.56 (0.96)            -4.05* (449)   .41
Faith in intuition       3.63 (0.85)         3.36 (0.90)            3.08* (448)    .31
Need for cognition       3.60 (0.85)         3.74 (0.86)            -1.61 (448)    .16
Scientific impotence     3.86 (1.97)         3.57 (1.95)            1.49 (449)     .15
Support for science      5.44 (1.02)         5.47 (0.86)            -0.28 (449)    .03
Social desirability      8.21 (1.38)         7.94 (1.39)            1.97* (448)    .20

Scenario 2 (Presence of Conflicting Evidence)
Measure                  Preference M (SD)   No preference M (SD)   t (df)         d
SRS                      5.85 (2.64)         7.77 (2.31)            -7.00* (445)   .77
AOT                      5.19 (1.08)         5.52 (0.97)            -2.94* (448)   .32
Faith in intuition       3.62 (0.86)         3.34 (0.88)            3.02* (447)    .32
Need for cognition       3.62 (0.87)         3.73 (0.83)            -1.17 (447)    .13
Scientific impotence     3.83 (1.99)         3.58 (1.91)            1.18 (448)     .13
Support for science      5.46 (0.99)         5.42 (0.91)            0.43 (448)     .04
Social desirability      8.19 (1.42)         7.92 (1.28)            1.84 (447)     .20

Note. *p < .05.

Experiment 1 tested the predictions that participants would prefer to conduct a hypothetical study with a belief-consistent hypothesis in the absence of evidence (Hypothesis 1) but that this preference would reverse in the presence of conflicting evidence (Hypothesis 2). When presented with hypothetical studies on a politicized topic, we found that participants tended to prefer to conduct the study with the belief-consistent hypothesis in both the absence of evidence and presence of conflicting evidence. In the control condition, participants preferred to conduct the study in which researchers hypothesized and found the skin cream to be effective in both the absence of evidence and presence of conflicting evidence.

These findings suggest that people favor research with belief-consistent hypotheses regardless of whether prior evidence is absent or conflicting, consistent with Hypothesis 1 but inconsistent with Hypothesis 2. However, the failure to observe the predicted reversal of preferences from the absence of evidence to the presence of conflicting evidence may stem from the design of our hypothetical scenarios: specifically, participants were told an independent research institute would be conducting the hypothetical studies for the researchers. Classic work on social identity theory has shown that people prefer to maximize in-group over out-group monetary gains, even at a loss to their own personal gain (Billig & Tajfel, 1973; Tajfel & Turner, 1986). It is possible that participants’ preference to allocate resources to their in-group (i.e., research with a belief-consistent hypothesis) overrode any preference to retest belief-inconsistent findings in the presence of conflicting evidence. Experiment 2 tested this possible explanation for the findings.

Our quantitative findings are consistent with either “cold” (positive test strategy) or “hot” (directionally motivated reasoning) processes influencing participants’ preference for studies with belief-consistent hypotheses, as either a directionally motivated or cognitive bias would produce a belief-consistent preference. That is, participants might have favored the study with the belief-consistent hypothesis because they were motivated to accumulate evidence supporting their preferred outcome or because it supported their expectations based on prior evidence and experience. Our qualitative results suggest that both contributed to participants’ decisions: some stated that they preferred research supporting their beliefs for clear directionally motivated reasons, some indicated that the belief-consistent hypothesis was more likely (positive test strategy), and some gave both types of explanations. However, some participants also indicated no preference, demonstrating a stronger accuracy motivation, a desire to remain neutral, and avoidance of a positive test strategy. Indeed, participants’ reasoning ranged from scientific to practical to directionally motivated and ideological; confirmation biases were present in many rationales, but others reflected more objective reasoning that was less directed, or not directed at all, toward obtaining a particular outcome. Many participants expressed strong opinions, regardless of which approach they took.

To better understand the role these different processes played in our findings, we compared participants’ preferences to their scores on various individual difference measures, and found that, consistent with Hypothesis 3, participants were more likely to indicate no preference if they scored higher in scientific reasoning ability and actively open-minded thinking. Those with greater scientific reasoning ability might have been more likely to recognize that, to test the research hypothesis, it should not matter which study they selected, as many indicated in their open-ended responses. Similarly, those higher in actively open-minded thinking might have been less likely to let their preferences influence their decision. Participants higher in scientific reasoning and actively open-minded thinking appeared to display less directional and more accuracy motivated reasoning. However, inconsistent with Hypothesis 3, strong differences did not emerge in need for cognition and faith in intuition between those who reported a preference and those who did not. In Experiment 2, we sought to further explore and replicate these individual differences.

We designed Experiment 2 to replicate the main findings of Experiment 1 while also investigating whether the independent research institute described in Experiment 1 affected participant preferences. We were concerned that participants might view having an independent research institute allocate limited resources to conduct the study as rewarding one of the research groups, which could strengthen their preference for the research group with the belief-consistent hypothesis. Without a statement about having a research institute run the study, participants would simply be stating which study they would prefer, with no implications for allocating resources. Therefore, in Experiment 2 we manipulated whether participants were told that a research institute, with limited capacity in the number of studies it can conduct, would be conducting the study or not.

We predicted that, when it was not stated that an independent research institute would be allocating limited resources to conduct the study, participants would show a stronger preference to conduct the study with the belief-inconsistent hypothesis in the presence of conflicting evidence than in the absence of evidence (consistent with Hypothesis 1 and Hypothesis 2). However, we predicted that the preference to repeat the study with the belief-inconsistent hypothesis in the presence of conflicting evidence would not be observed (or would significantly weaken) when an independent research institute would be allocating resources to conduct the study, as this decision involves directly supporting belief-inconsistent (over belief-consistent) research. When an independent research institute would implement the research, we predicted that participants would show a belief-consistent preference in both scenarios (consistent with Hypothesis 1 but not Hypothesis 2).

Because the control condition showed no difference in preferences for the skin cream study based on ideology in Experiment 1 (and the comparison was not central to the predictions of Experiment 2), we removed it in Experiment 2. Experiment 2 tested the replicability of the individual differences in study preferences observed in Experiment 1. We also added a measure of social desirability to assess whether participants who reported no preference were more susceptible to socially desirable responding than those who reported preferences.

Participants

A total of 451 Mechanical Turk users (241 men, 209 women, 1 other; Mage = 36.40, SD = 11.40) completed this study in exchange for $1.75. An additional 25 participants began the study but dropped out before completing it. Participants were included in all analyses for which they had complete data. Of those who responded to the scenarios, 242 identified as liberal (60 as very liberal, 114 as liberal, and 68 as somewhat liberal), 97 as moderate, and 123 as conservative (15 as very conservative, 57 as conservative, and 51 as somewhat conservative).

Experimental Design

Experiment 2 followed a 2 (implementer: independent research institute vs. not explicitly stated) x 2 (prior evidence: absence, conflicting) x 2 (study order: gun control effective listed first vs. gun control ineffective listed first) mixed-model design, with implementer and study order as between-subjects factors and prior evidence as a within-subjects factor. Ideology was also included as a continuous between-subjects factor. Study order was not expected to influence participants’ responses.

Materials and Procedure

The materials and procedure were identical to Experiment 1, with the following exceptions:

1. All participants were presented with the gun control stimuli from Experiment 1.

2. Participants were randomly assigned to one of two conditions, varying whether it was explicitly stated who would implement the study: an independent research institute (as in Experiment 1) or not explicitly stated. The only difference between conditions was that for those in the independent research institute condition, before being asked which study they would prefer to have conducted, they were told: “Both research groups submit a proposal to a research institute to conduct their study. The research institute conducts studies for researchers, but has a limited capacity in the number of studies it can conduct” (absence of evidence scenario) and “Both research groups submit a proposal to a research institute to conduct their study again on another sample. The research institute conducts studies for researchers, but has a limited capacity in the number of studies it can conduct” (presence of conflicting evidence scenario).

3. After the scenarios, participants answered a few additional questions assessing their perceptions of their own and others’ bias, included for another project. These questions are provided in the supplementary materials for interested readers.

4. In addition to the Scientific Reasoning Scale (SRS; Drummond & Fischhoff, 2017; α = 0.71), Actively Open-Minded Thinking Scale (AOT; Haran et al., 2013; α = 0.83), and short Rational-Experiential Inventory measuring Need for Cognition and Faith in Intuition (REI; Epstein et al., 1996; NFC: α = 0.81; FI: α = 0.90), participants completed an 11-item Social Desirability Scale (SDS; Crowne & Marlowe, 1960; α = 0.74). The SDS contained 11 True/False questions (e.g., “I’m always willing to admit it when I make a mistake”); responses were coded so that higher scores indicate greater social desirability bias. Because of their similar format, the REI, AOT, and SDS were presented first (in random order), and the SRS was presented last.
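As a rough illustration of how scales like these are scored, the sketch below (simulated responses, hypothetical item names; not the authors' materials) sums True/False SDS items keyed so that 1 indicates the socially desirable response, and computes Cronbach's alpha for the item set.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: one column per scale item, one row per participant (already reverse-keyed)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated 11-item True/False SDS responses coded 1 = socially desirable answer, 0 = not
# (items where "False" is the desirable answer would be reverse-keyed before this step).
rng = np.random.default_rng(0)
sds_items = pd.DataFrame(rng.integers(0, 2, size=(100, 11)),
                         columns=[f"sds_{i + 1}" for i in range(11)])
sds_total = sds_items.sum(axis=1)  # higher = greater social desirability bias
print(round(cronbach_alpha(sds_items), 2))
```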

Hypotheses

Based on Hypotheses 1 and 2, we expected to find a three-way implementer x prior evidence x ideology interaction. When there was no explicit statement about who would conduct the study or about limited resources supporting only one research proposal, we predicted that participants would show a stronger preference for the study with a belief-consistent hypothesis in the absence of evidence vs. presence of conflicting evidence: liberals would show a stronger preference to have the gun control decreases crime study conducted in the absence of evidence vs. presence of conflicting evidence, whereas conservatives would show a stronger preference to have the gun control increases crime study conducted in the absence of evidence vs. presence of conflicting evidence (consistent with Hypotheses 1 and 2). When an independent research institute would allocate limited resources to implement the study, however, we did not expect to see this reversal; instead, we expected participants to prefer the study with the belief-consistent hypothesis in both scenarios (consistent with Hypothesis 1 but not Hypothesis 2).

Power Analysis

Because Experiment 2 contained the same number of factors as Experiment 1 (replacing the polarization condition with whether an independent research institute would implement the study), we used the same target sample size in Experiment 2 as in Experiment 1 (i.e., N = 450). A sensitivity power analysis for a 2x2x2 mixed-model ANOVA at 95% power and α = .05 indicated that the obtained sample size of N=451 was sufficiently powered to detect small effects (f = 0.10).
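The sensitivity analysis above applies to the full mixed design; the snippet below is only a simplified Python approximation that solves for the smallest between-subjects effect detectable at the same N, alpha, and power while ignoring the correlation between the two repeated measurements, so it is more conservative than, and will not reproduce, the reported f = 0.10.

```python
# Simplified sensitivity check: minimum detectable Cohen's f for a purely
# between-subjects two-group comparison with N = 451, alpha = .05, power = .95.
# A mixed-design calculation that uses the within-subjects correlation is more sensitive.
from statsmodels.stats.power import FTestAnovaPower

min_f = FTestAnovaPower().solve_power(effect_size=None, nobs=451,
                                      alpha=0.05, power=0.95, k_groups=2)
print(f"Minimum detectable Cohen's f (between-subjects approximation): {min_f:.2f}")
```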

Descriptive Statistics

Table 1 reports frequencies of belief-consistent, belief-inconsistent, and no preferences for which study to conduct in the absence of evidence and presence of conflicting evidence, revealing considerable variation in participant preferences.

Preferred study to conduct (absence of evidence vs. presence of conflicting evidence)

As in Experiment 1, responses to each question about the preferred study to conduct (in the absence of evidence and in the presence of conflicting evidence) were coded so that higher scores indicated a stronger preference for the gun control decreases crime study. A general linear model analysis was performed, with implementer (research institute vs. not stated) and ideology as between-subjects factors and prior evidence (absence, conflicting) as a within-subjects factor. The model included all two- and three-way interactions among implementer, ideology, and prior evidence. Although we did not expect study order (gun control increases crime vs. gun control decreases crime study listed first) to influence responses, in the presence of conflicting evidence scenario, participants were more likely to select the gun control decreases crime study when the gun control increases crime study was listed first (M = 3.44, SD = 1.24) vs. second (M = 3.16, SD = 1.28), t(449) = -2.34, p = .02. As a result, we ran the analysis a second time with order included as a factor. Because the primary findings did not differ whether or not order was included, for simplicity, we present the analysis collapsed across order below.
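For concreteness, the sketch below (toy data, hypothetical column names; not the authors' script) approximates this general linear model with a random-intercept linear mixed model fit to long-format data, with the full implementer x ideology x prior evidence interaction entered as fixed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy wide-format data: one row per participant, one preference rating per scenario.
rng = np.random.default_rng(1)
n = 200
wide = pd.DataFrame({
    "pid": np.arange(n),
    "implementer": rng.choice(["institute", "not_stated"], n),
    "ideology": rng.integers(1, 8, n),        # 1 = very liberal ... 7 = very conservative
    "pref_absence": rng.integers(1, 6, n),    # 1-5 preference, absence of evidence
    "pref_conflict": rng.integers(1, 6, n),   # 1-5 preference, conflicting evidence
})

# Reshape to long format: one row per participant per scenario.
long_df = wide.melt(id_vars=["pid", "implementer", "ideology"],
                    value_vars=["pref_absence", "pref_conflict"],
                    var_name="evidence", value_name="preference")

# Random intercept per participant; implementer x ideology x evidence fixed effects.
model = smf.mixedlm("preference ~ implementer * ideology * evidence",
                    data=long_df, groups=long_df["pid"])
print(model.fit().summary())
```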

We predicted a three-way implementer x prior evidence x ideology interaction. When it was not explicitly stated who would implement the study or that limited resources were involved in the decision, we expected participants to show a stronger preference for the study with a belief-consistent hypothesis in the absence of evidence vs. presence of conflicting evidence: liberals would show a stronger preference to have the gun control decreases crime study conducted in the absence of evidence vs. presence of conflicting evidence, whereas conservatives would show a stronger preference to have the gun control increases crime study conducted in the absence of evidence vs. presence of conflicting evidence. When an independent research institute would allocate limited resources to implement the study, we expected participants to prefer the study with the belief-consistent hypothesis in both scenarios.

The main effects for implementer, F(1, 447) = 2.00, p = 0.16, ηp2 = 0.004, and prior evidence, F(1, 447) = 0.39, p = 0.54, ηp2 = 0.001, were nonsignificant, as were the two-way interactions between implementer and prior evidence, F(1, 447) = 0.35, p = 0.56, ηp2 = 0.001, prior evidence and ideology, F(1, 447) = 0.50, p = 0.48, ηp2 = 0.001, and implementer and ideology, F(1, 447) = 1.20, p = 0.27, ηp2 = 0.003. Moreover, the predicted three-way implementer, prior evidence, and ideology interaction was nonsignificant, F(1, 447) = 1.24, p = 0.27, ηp2 = 0.003.

The only significant effect was a main effect for ideology, F(1, 447) = 33.05, p < 0.001, ηp2 = 0.07. Across both the absence of evidence and presence of conflicting evidence scenarios, ideology was correlated with preferences, such that a stronger liberal ideology was associated with a stronger preference for the gun control decreases crime study, β = -0.27, B = -0.17, SE = 0.03, t = -5.99, p < 0.001. In other words, again supporting Hypothesis 1 but not Hypothesis 2, participants favored the study with the belief-consistent hypothesis, regardless of whether evidence was absent or conflicting evidence was present, and regardless of who would implement the study.
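As a quick arithmetic check, the reported partial eta squared values follow from the F statistics and their degrees of freedom via ηp2 = (F × df1) / (F × df1 + df2); for example:

```python
# Recover partial eta squared from the reported F(1, 447) = 33.05 for ideology.
F, df1, df2 = 33.05, 1, 447
eta_p2 = (F * df1) / (F * df1 + df2)
print(round(eta_p2, 2))  # 0.07, consistent with the value reported above
```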

As in Experiment 1, to test the robustness of the findings, we ran these analyses three additional times, first replacing ideology with gun control belief as preregistered, and then again for gun control position and a composite prior belief measure (averaging across ideology, gun control belief, and gun control position11; α = .77). As with ideology, the main effect for gun control belief was the only significant effect, F(1, 447) = 51.37, p < 0.001, ηp2 = 0.10; across scenarios, a stronger belief in the effectiveness of gun control was associated with a preference for the gun control decreases crime study, β = -0.33, B = -0.17, SE = 0.02, t = -7.57, p < 0.001. The three-way interaction remained nonsignificant, F(1, 447) = 1.50, p = 0.22, ηp2 = 0.003. Likewise, in the analysis with gun control position, the main effect for gun control position was the only significant effect, F(1, 447) = 73.91, p < 0.001, ηp2 = 0.14; across scenarios, stronger support for gun control was associated with a preference for the gun control decreases crime study, β = 0.38, B = 0.21, SE = 0.02, t = 8.81, p < 0.001, and the three-way interaction was nonsignificant, F(1, 447) = 0.03, p = .87, ηp2 < .001. For the composite prior belief measure, the main effect for the composite measure was also the only significant effect, F(1, 447) = 77.87, p < 0.001, ηp2 = 0.15, such that a stronger liberal political ideology and attitude toward gun control was associated with a preference for the gun control decreases crime study, β = -0.39, B = -0.27, SE = 0.03, t = -9.27, p < 0.001. The three-way interaction was again nonsignificant, F(1, 447) = 0.93, p = 0.34, ηp2 = 0.002. These results also persisted in another series of robustness checks (for ideology, gun control belief, gun control position, and the composite prior belief measure) in which demographic variables were included in the model as covariates.

As in Experiment 1, to further test Hypotheses 1 and 2, we divided participants into three groups based on whether they reported a liberal (somewhat to very; n = 242), moderate (n = 97), or conservative (somewhat to very; n = 123) political orientation (see Figure 2). To test whether participants preferred studies with belief-consistent hypotheses, one-sample t-tests were performed to examine whether each group’s mean preference differed from the midpoint of no preference. Across scenarios, liberals’ preference (M = 3.56, SD = 0.99) significantly differed from the midpoint of no preference, t(241) = 8.80, p < 0.001, d = 0.57. The difference was weaker for moderates (M = 3.20, SD = 0.98), t(96) = 2.01, p = 0.047, d = 0.20, and nonsignificant for conservatives (M = 2.84, SD = 1.17), t(122) = -1.54, p = 0.13, d = -0.14. These findings suggest that participants held a slight preference to conduct the study with the belief-consistent hypothesis regardless of whether they were choosing which initial study to conduct in the absence of evidence or which to repeat in the presence of conflicting evidence. In addition, whether an independent research institute would allocate limited resources to implement the research did not influence participants’ decisions about which study to conduct. Liberals showed a stronger belief-consistent preference than moderates or conservatives, though the statistical significance of these comparisons may be partially explained by the larger number of liberals in the sample (and thus higher power for the analysis).
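These one-sample comparisons against the neutral midpoint can be sketched as follows (simulated ratings standing in for the averaged 1-5 preferences; not the authors' script), with Cohen's d computed as the mean difference from the midpoint divided by the sample standard deviation.

```python
import numpy as np
from scipy import stats

midpoint = 3.0  # "no preference" on the 1-5 scale
rng = np.random.default_rng(2)
liberal_prefs = rng.normal(loc=3.56, scale=0.99, size=242)  # simulated stand-in ratings

t, p = stats.ttest_1samp(liberal_prefs, popmean=midpoint)
d = (liberal_prefs.mean() - midpoint) / liberal_prefs.std(ddof=1)
print(f"t({len(liberal_prefs) - 1}) = {t:.2f}, p = {p:.3g}, d = {d:.2f}")
```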

Figure 2.
Experiment 2 preferences for which study to conduct (1=strong preference for the gun control ineffective study, 3=no preference, 5=strong preference for the gun control effective study) as a function of ideology (liberal, moderate, conservative), implementer (independent research institute vs. not explicitly stated), and prior evidence (absence of evidence, presence of conflicting evidence).

Error bars represent one standard error above and below the mean. Dotted line indicates neutral preference.


Individual Differences

To test for individual differences in participants’ preferences and reasoning (Hypothesis 3), we analyzed participants’ open-ended explanations for their preferences and compared participants’ preferences to their scores on the individual difference measures.

Open-ended explanations. As in Experiment 1, we inductively generated categories of the confirmatory, disconfirmatory, and no preference rationales participants provided for their preferences (see Table 2 for the categories and examples of each). The same categories of responses emerged in Experiment 2 as in Experiment 1, except that an additional category was added to capture a few no preference explanations describing a desire to see a high quality study conducted, regardless of its predictions and findings.

Preference vs. no preference. To test Hypothesis 3, we examined whether reporting any preference for which study to conduct was related to scores on the individual difference measures. Independent groups t-tests were performed to examine whether participants who reported a preference in each scenario differed from those who did not on each of the individual difference measures. All participants were included in these analyses.

Consistent with Hypothesis 3 and the results from Experiment 1, participants who reported no preference in each scenario scored higher in scientific reasoning and actively open-minded thinking and lower in faith in intuition than did those who reported a preference; these differences were large for scientific reasoning and moderate for actively open-minded thinking and faith in intuition (see Table 4). As in Experiment 1, need for cognition, scientific impotence, and support for science did not differ between those who reported preferences and those who did not in either scenario. Participants who reported a preference tended to score higher in social desirability than did those who did not in the absence of evidence scenario, but this difference was small and nonsignificant in the presence of conflicting evidence scenario.

Belief-consistent vs. belief-inconsistent preference. We conducted exploratory correlational analyses to test whether preferences to conduct the study with the belief-consistent or belief-inconsistent hypothesis related to scores on the individual difference measures. Only liberals and conservatives were included in these analyses, as their ideologies align with beliefs on the stimulus topic. Responses were coded so that higher scores indicated favoring the belief-consistent study for all participants (i.e., a stronger preference for the gun control decreases crime study for participants who identified as somewhat to strongly liberal and a stronger preference for the gun control increases crime study for participants who identified as somewhat to strongly conservative). In both the absence of evidence and presence of conflicting evidence, preferences were unrelated to scientific reasoning, actively open-minded thinking, faith in intuition, need for cognition, scientific impotence, and social desirability (see Table 3). A belief-consistent preference was unrelated to support for science in Experiment 1 but weakly, though significantly, correlated with stronger support for science in both scenarios in Experiment 2.

Experiment 2 found that, as in Experiment 1, participants tended to prefer to conduct a hypothetical study with a belief-consistent hypothesis in both the absence of evidence and presence of conflicting evidence, supporting Hypothesis 1 but not Hypothesis 2. In Experiment 2, participants displayed this preference even when an independent research institute would be allocating limited resources to conduct it, addressing the potential alternative explanation from Experiment 1 that participants might prefer to conduct a hypothetical study with a belief-inconsistent hypothesis in the presence of conflicting evidence, but might not want to give resources to researchers with that hypothesis. It is possible that participants viewed selecting one study over another as allocating resources to a particular side even when an allocation of resources was not explicitly stated. Even so, these results suggest that people have a tendency to favor research with belief-consistent hypotheses and do not significantly alter their decision-making strategy from before the outcomes are known to after competing outcomes are obtained.

Our quantitative findings cannot separate whether participants are using a “cold” positive test strategy or displaying “hot” motivated reasoning in favor of prior beliefs, but our qualitative findings revealed considerable variation in preferences and rationales across participants, including both “hot” and “cold” reasoning, suggesting the presence of strong individual differences in reasoning. Participants favored confirmatory, disconfirmatory, and non-directional test strategies for a range of reasons. Those who adopted a confirmatory strategy ranged from strongly motivated to obtain a particular outcome (e.g., preferring a particular outcome or seeking to advance a political agenda), reflecting a “hot” directionally motivated myside bias, to less clearly motivated, potentially independent of the motivation to obtain a desired outcome or reflecting a mix between hot and cold mechanisms (e.g., basing their decision on prior evidence or finding one study to be more logical, important, accurate, or unbiased). Disconfirmatory strategies were adopted by those seeking to scrutinize the belief-inconsistent study, reflecting “hot” directionally motivated reasoning, but also by those interested in subjecting their own preferences or expectations to further scrutiny. Participants who chose no preference appeared to display stronger accuracy motivations, with rationales ranging from viewing both studies as critically important to neither as trustworthy to both as equally valid or wanting further information before making a decision. Regardless of rationale, participants who chose no preference again scored higher in scientific reasoning and actively open-minded thinking and lower in faith in intuition, also supporting Hypothesis 3 and the results of Experiment 1. These findings suggest that participants who remained neutral tended to be those who think more critically and scientifically.

Previous research has shown that people more strongly support research on belief-consistent than belief-inconsistent topics (e.g., Anglin & Jussim, 2017) and seek to confirm a hypothesis when initially testing it (Klayman, 1995; Wason, 1968) but challenge belief-inconsistent evidence (Edwards & Smith, 1996; Klaczynski & Gordon, 1996; Taber & Lodge, 2006; Vedejová & Čavojová, 2022) and require more evidence to accept conclusions inconsistent with their views (Ditto & Lopez, 1992). We build on this prior research to investigate whether people favor a study testing a belief-consistent hypothesis over a study testing a belief-inconsistent hypothesis when those studies differ only in their hypotheses. We asked whether people would prioritize a belief-consistent vs. inconsistent study in the absence of evidence, and whether that preference would reverse in the presence of conflicting evidence on the topic such that they would prioritize a belief-inconsistent vs. consistent study.

In two experiments, we found support for our Hypothesis 1: in the absence of evidence on a polarized topic, participants displayed a general preference to conduct a study with a belief-consistent (vs. inconsistent) hypothesis. We further found that when presented with conflicting evidence, participants continued to prefer the study with the belief-consistent hypothesis. These findings were inconsistent with our Hypothesis 2, which stated that people would prefer to repeat the study with the belief-inconsistent hypothesis, requiring a higher burden of proof for belief-inconsistent evidence. We expected participants to show this preference based on quantity of processing theory, which proposes that people require more evidence to accept a preference-inconsistent hypothesis (Ditto & Lopez, 1992), and studies suggesting that people more deeply process and scrutinize belief-inconsistent than consistent evidence (Edwards & Smith, 1996; Klaczynski & Gordon, 1996; Taber & Lodge, 2006). However, across Experiments 1 and 2, participants reported a belief-consistent preference in both the absence of evidence and presence of conflicting evidence.

There are multiple possible explanations for why participants adopted a confirmatory strategy in both situations, and our quantitative findings are unable to separate these competing explanations. Participants’ preferences may reflect a positive test strategy in which they seek evidence confirming their expectations (Klayman & Ha, 1987). That is, some may not be motivated to obtain a particular outcome but may favor the study with the belief-consistent evidence because it is consistent with their prior knowledge and experience (Fischhoff & Beyth-Marom, 1983; Jern et al., 2014; Koehler, 1993; MacCoun, 1998; Tversky & Kahneman, 1974). However, participants’ preferences may also reflect directionally motivated reasoning: participants might choose to repeat the study supporting a belief-consistent conclusion because they expect the findings to align with the previous study and seek to accumulate evidence consistent with their views (Fanelli, 2010; Taber & Lodge, 2006; Vedejová & Čavojová, 2022). Rather than allocate limited resources to more closely scrutinize belief-inconsistent evidence, people may prefer to allocate resources to research favoring their in-group (i.e., those whose views align with their own; Tajfel & Turner, 1986) and defer to heuristic reasoning processes to disregard the belief-inconsistent findings. Indeed, although people can use analytic processing to undermine belief-inconsistent evidence, they can also employ heuristic processing to reject the evidence (Kahan et al., 2017; Klaczynski, 2000), perceiving the belief-inconsistent evidence as less important, lower quality, and less trustworthy. They may also disregard the belief-inconsistent result as a fluke, perceiving it as a false positive and the belief-consistent evidence as a true positive based on their prior knowledge and experience (Jern et al., 2014).

A unique contribution of our study design is our usage of both quantitative and qualitative data: we asked participants to provide open-ended explanations for their preferences, and we classified participants’ rationales in terms of the psychological processes (e.g., confirmatory vs. disconfirmatory and motivated by directional vs. non-directional goals) highlighted above. Their coded rationales reflect a range of possible strategies that may have driven their preferences. Some communicated a strong motivation to obtain a belief-consistent outcome (e.g., because they preferred one conclusion or wished to advance a political agenda), suggesting that directionally motivated reasoning drove their decision (MacCoun, 1998; Munro & Ditto, 1997; Taber & Lodge, 2006; Vedejová & Čavojová, 2022). Others provided explanations for their choice that were less clearly motivated (e.g., because one study supported their expectations or prior evidence) that may reflect a pure cognitive strategy independent of motivation (Jern et al., 2014; Klayman & Ha, 1987; Koehler, 1993), or a mix between a motivated and cognitive bias. Thus, “hot” motivated biases dominated some but not all participants’ reasoning, supporting the literature on different mechanisms underlying confirmation bias (MacCoun, 1998), the ambiguity and overlap between motivated and cognitive mechanisms (Tetlock & Levi, 1982), and individual differences in reasoning strategies (Hendrickson et al., 2016).

Although participants tended to prefer a politicized study with a belief-consistent vs. belief-inconsistent hypothesis, it is important to note that participants did not exhibit strong preferences across the two experiments. These findings support evidence suggesting that accuracy goals play an important role in information processing (Anglin, 2019; Hart et al., 2009; Kunda, 1990), despite the strong focus in the literature on directionally motivated processing. In addition, although participants were more likely to prefer conducting the study with the belief-consistent hypothesis or report no preference, a subset of participants did report a preference to conduct the study with the belief-inconsistent hypothesis in an effort to scrutinize or undermine it.

With regard to Hypothesis 3, our findings reflect the presence of individual differences in scientific reasoning and decision-making, supporting the results of a recent study that also found individual differences in the usage of positive, negative, or mixed test strategies, but a general preference for a positive test strategy (Hendrickson et al., 2016). In the present research, individual differences emerged in participants’ choice between conducting one of two identically constructed studies that differed only in the direction of their hypothesis. In particular, those who scored higher in scientific reasoning and actively open-minded thinking were more likely to report no preference for which study to conduct. These differences were particularly strong for scientific reasoning ability. Those with stronger scientific reasoning skills may be more inclined to keep their views and preferences separate from research decisions, or better able to recognize that, if studies are designed equally, both are well-suited to test a research question, regardless of the researchers’ hypotheses. Many participants who indicated no preference for conducting either the belief-consistent or inconsistent study mentioned in open-ended responses that the studies were methodologically identical, or that differences in methodology could not be discerned from the information provided. By contrast, a common justification for a belief-consistent preference was that a study testing a belief-consistent hypothesis was more logical and better than one testing a belief-inconsistent hypothesis. Individuals with stronger scientific reasoning skills may also recognize that science is not a completely objective enterprise (Jussim et al., 2016; Munafò et al., 2017; Redding, 2001), and thus report no preference because they believe that neither study, or both studies, should be conducted if the research groups are biased toward a particular outcome. Indeed, another common justification for reporting no preference was that neither or both studies should be conducted so as not to favor one side over the other, whereas a frequent justification for the belief-consistent preference was the desire to obtain the preferred outcome. Scoring higher in scientific reasoning and actively open-minded thinking may be associated with stronger accuracy motivated reasoning in judgment and decision-making about politicized research, and scoring lower may be associated with stronger directionally motivated reasoning, though further research is needed to draw this conclusion.

Future research should also examine other individual differences related to reasoning independently of prior beliefs, such as intellectual humility (Leary et al., 2017), objectivism (Leary et al., 1986), and openness to opposing viewpoints (Minson et al., 2020). These constructs have been found to be associated with openness to new evidence, including evidence challenging one’s views (Bowes et al., 2022; Porter & Schumann, 2018), and/or impartial reasoning, including evaluating claims based on evidence rather than intuition (Leary et al., 1986) and arguments based on strength rather than preconceptions (Leary et al., 2017; Minson et al., 2020).

There were some important limitations to this research. The stimulus studies were designed to be identical, except for the direction of the findings, and thus were high in internal validity but low in external validity. In addition, although we tried to present the scenarios as separate in Experiments 1 and 2, the fact that each research group’s findings were consistent with its predictions led some participants to view the hypothetical studies as suspect. Furthermore, we were interested in how participants would respond in a context in which resources are scarce, and thus participants were required to choose only one study to conduct; however, participants might respond differently if asked how much they would like to see both studies conducted. Indeed, there is no clear normative response to the choice presented to participants in this research. Future research could examine whether participants respond similarly when rating each study separately. In addition, it is possible that participants would have responded differently to the conflicting evidence scenario if it had not been directly preceded by the absence of evidence scenario. As such, the findings may not reflect people’s preferences in real-world situations in which studies are not so similar to one another, results do not always neatly support the researchers’ hypotheses, and decisions about which studies to initially conduct are not immediately followed by decisions about which studies to repeat. Because the stimuli pitted studies favoring liberal and conservative preferences against each other, some participants might have been aware of what we were aiming to test and thus sought to avoid appearing biased. Even so, in prior research in which participants rated the importance of research on a range of belief-consistent and belief-inconsistent topics (i.e., rather than choosing between identically constructed studies to conduct), participants also reported preferences consistent with their beliefs (Anglin & Jussim, 2017). These findings may indicate that people generally aim to be neutral, but their preferences can still influence their decisions.

Although this study sought to test a general theory about preferences for conducting research on politicized topics in different situations, researchers and experts, rather than the non-scientists studied in this research, tend to be the ones who make research decisions in real-world contexts. At least among our non-scientist sample, those with greater scientific reasoning skills were more neutral in their decisions. However, even scientists are not immune to confirmation and other cognitive biases (Inbar & Lammers, 2012; Jussim et al., 2016; Lilienfeld, 2010). Future research is needed to examine whether these processes occur among scientists, and if so, how to reduce bias in peer review funding decisions through mechanisms such as research proposal funding lotteries (e.g., Smaldino et al., 2019). Nevertheless, the present study provides preliminary evidence that the public more strongly supports research with hypotheses consistent with their beliefs, which holds important implications for garnering public support and setting funding priorities for research on politicized topics.

Contributed to conception and design: SA, CDO, SB

Contributed to acquisition of data: SA

Contributed to analysis of data: SA

Contributed to interpretation of data: SA, CDO, SB

Drafted and/or revised the article: SA, CDO, SB

Approved the submitted version for publication: SA, CDO, SB

We thank Jessica Chang and Crystal Song for their help coding qualitative data from Pilot Experiment 1.

This research was supported in part by Carnegie Mellon University’s Department of Social and Decision Sciences. The funding source had no involvement in any stage of this research. Data for the first experiment were collected while the first author was affiliated with Carnegie Mellon University.

The authors declare that they have no known competing interests or personal relationships that could have appeared to influence the work reported in this paper.

All stimuli, presentation materials, participant data, and analysis scripts can be found on this paper’s project page on the Open Science Framework at https://osf.io/854kq/?view_only=c2abbc117fd249a288ddec9529462dac.

1.

We uploaded time stamped pre-registration documents for our experiments in the files folder on OSF prior to data collection, but forgot to publish this document as an official registration for Experiment 1.

2.

To reduce confusion in our paper, we use the word “experiment” to describe the research performed in this paper, and the word “study” to describe the research presented as stimuli to participants.

3.

More recently, researchers have labeled reasoning processes aimed at confirming desired views as desirability bias (Tappin et al., 2017).

4.

Although the preregistration specified that we would analyze the individual difference variables in an exploratory fashion, because prior research and theory supported these predictions, we included them in the introduction as recommended by a reviewer.

7.

Direct links to preregistrations are as follows: https://osf.io/tywuf/?view_only=c2abbc117fd249a288ddec9529462dac (Experiment 1), https://osf.io/qr2da (Experiment 2).

8.

We note that although previous research suggests that gun control functions as a polarized stimulus topic and skin cream effectiveness as a non-polarized stimulus topic (e.g., Kahan et al., 2017), the present experiment did not measure beliefs about skin cream effectiveness to compare to the strength of participants’ beliefs about gun control. Therefore, the degree to which participants found the gun control stimulus to be more polarizing than the skin cream stimulus cannot be asserted from the present data (though participants’ open-ended explanations for their preferences suggest more polarizing responses to the gun control vs. skin cream stimuli).

9.

There was a minor typo in the gun control condition for this scenario. Study B was labeled appropriately, but the scenario began by stating that Research Group A (instead of B) compared changes in crime rates between 150 cities that enacted handgun bans and 150 cities that did not. However, the rest of the scenario clearly indicated that Research Group B obtained the results provided and clearly summarized Research Group B’s findings. We do not believe this typo confused participants as none mentioned it in their open-ended explanation following the scenario.

10.

This question was reverse-coded for the composite measure so all three prior belief questions were coded in the same direction. Higher scores indicated a stronger liberal political ideology and attitude toward gun control.

Anglin, S. M. (2019). Do beliefs yield to evidence? Examining belief perseverance vs. change in response to congruent empirical findings. Journal of Experimental Social Psychology, 82, 176–199. https://doi.org/10.1016/j.jesp.2019.02.004
Anglin, S. M., & Jussim, L. (2017). Science and politics: Do people support the conduct and dissemination of politicized research? Journal of Social and Political Psychology, 5(1), 142–172. https://doi.org/10.5964/jspp.v5i1.427
Baron, J. (1995). Myside bias in thinking about abortion. Thinking & Reasoning, 1(3), 221–235. https://doi.org/10.1080/13546789508256909
Billig, M., Tajfel, H. (1973). Social categorization and similarity in intergroup behaviour. European Journal of Social Psychology, 3(1), 27–52. https://doi.org/10.1002/ejsp.2420030103
Bowes, S. M., Costello, T. H., Lee, C., McElroy-Heltzel, S., Davis, D. E., Lilienfeld, S. O. (2022). Stepping outside the echo chamber: Is intellectual humility associated with less political myside bias? Personality and Social Psychology Bulletin, 48(1), 150–164. https://doi.org/10.1177/0146167221997619
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., Jarvis, W. B. G. (1996). Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119(2), 197–253. https://doi.org/10.1037/0033-2909.119.2.197
Crowne, D. P., Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24(4), 349–354. https://doi.org/10.1037/h0047358
Ditto, P. H., Lopez, D. F. (1992). Motivated skepticism: Use of differential decision criteria for preferred and nonpreferred conclusions. Journal of Personality and Social Psychology, 63(4), 568–584. https://doi.org/10.1037/0022-3514.63.4.568
Drummond, C., Fischhoff, B. (2017). Development and validation of the scientific reasoning scale. Journal of Behavioral Decision Making, 30(1), 26–38. https://doi.org/10.1002/bdm.1906
Drummond, C., Fischhoff, B. (2019). Does “putting on your thinking cap” reduce myside bias in evaluation of scientific evidence? Thinking & Reasoning, 25(4), 477–505. https://doi.org/10.1080/13546783.2018.1548379
Edwards, K., Smith, E. E. (1996). A disconfirmation bias in the evaluation of arguments. Journal of Personality and Social Psychology, 71(1), 5–24. https://doi.org/10.1037/0022-3514.71.1.5
Epstein, S., Pacini, R., Denes-Raj, V., Heier, H. (1996). Individual differences in intuitive–experiential and analytical–rational thinking styles. Journal of Personality and Social Psychology, 71(2), 390–405. https://doi.org/10.1037/0022-3514.71.2.390
Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PloS One, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068
Fischhoff, B., Beyth-Marom, R. (1983). Hypothesis evaluation from a Bayesian perspective. Psychological Review, 90(3), 239–260. https://doi.org/10.1037/0033-295x.90.3.239
Fredrickson, L., Sellers, C., Dillon, L., Ohayon, J. L., Shapiro, N., Sullivan, M., Bocking, S., Brown, P., de la Rosa, V., Harrison, J., Johns, S., Kulik, K., Lave, R., Murphy, M., Piper, L., Richter, L., Wylie, S. (2018). History of US presidential assaults on modern environmental health protection. American Journal of Public Health, 108(S2), S95–S103. https://doi.org/10.2105/ajph.2018.304396
Haran, U., Ritov, I., Mellers, B. A. (2013). The role of actively open-minded thinking in information acquisition, accuracy, and calibration. Judgment and Decision Making, 8(3), 188–201. https://doi.org/10.1017/s1930297500005921
Hart, W., Albarracín, D., Eagly, A. H., Brechan, I., Lindberg, M. J., Merrill, L. (2009). Feeling validated versus being correct: A meta-analysis of selective exposure to information. Psychological Bulletin, 135(4), 555–588. https://doi.org/10.1037/a0015701
Hendrickson, A. T., Navarro, D. J., Perfors, A. (2016). Sensitivity to hypothesis size during information search. Decision, 3(1), 62–80. https://doi.org/10.1037/dec0000039
Inbar, Y., Lammers, J. (2012). Political diversity in social and personality psychology. Perspectives on Psychological Science, 7(5), 496–503. https://doi.org/10.1177/1745691612448792
Jern, A., Chang, K. K., Kemp, C. (2014). Belief polarization is not always irrational. Psychological Review, 121(2), 206–224. https://doi.org/10.1037/a0035941
Jussim, L., Crawford, J. T., Anglin, S. M., Stevens, S. T., Duarte, J. L. (2016). Interpretations and methods: Towards a more effectively self-correcting social psychology. Journal of Experimental Social Psychology, 66, 116–133. https://doi.org/10.1016/j.jesp.2015.10.003
Kahan, D. M., Peters, E., Dawson, E. C., Slovic, P. (2017). Motivated numeracy and enlightened self-government. Behavioural Public Policy, 1(1), 54–86. https://doi.org/10.1017/bpp.2016.2
Klaczynski, P. A. (2000). Motivated scientific reasoning biases, epistemological beliefs, and theory polarization: A two‐process approach to adolescent cognition. Child Development, 71(5), 1347–1366. https://doi.org/10.1111/1467-8624.00232
Klaczynski, P. A., Gordon, D. H. (1996). Self-serving influences on adolescents’ evaluations of belief-relevant evidence. Journal of Experimental Child Psychology, 62(3), 317–339. https://doi.org/10.1006/jecp.1996.0033
Klayman, J. (1995). Varieties of confirmation bias. Psychology of Learning and Motivation, 32, 385–418. https://doi.org/10.1016/s0079-7421(08)60315-1
Klayman, J., Ha, Y.-W. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228. https://doi.org/10.1037/0033-295x.94.2.211
Koehler, J. J. (1993). The influence of prior beliefs on scientific judgments of evidence quality. Organizational Behavior and Human Decision Processes, 56(1), 28–55. https://doi.org/10.1006/obhd.1993.1044
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498. https://doi.org/10.1037/0033-2909.108.3.480
Leary, M. R., Diebels, K. J., Davisson, E. K., Jongman-Sereno, K. P., Isherwood, J. C., Raimi, K. T., Deffler, S. A., Hoyle, R. H. (2017). Cognitive and interpersonal features of intellectual humility. Personality and Social Psychology Bulletin, 43(6), 793–813. https://doi.org/10.1177/0146167217697695
Leary, M. R., Shepperd, J. A., McNeil, M. S., Jenkins, T. B., Barnes, B. D. (1986). Objectivism in information utilization: Theory and measurement. Journal of Personality Assessment, 50(1), 32–43. https://doi.org/10.1207/s15327752jpa5001_5
Lilienfeld, S. O. (2010). Can psychology become a science? Personality and Individual Differences, 49(4), 281–288. https://doi.org/10.1016/j.paid.2010.01.024
Lord, C. G., Ross, L., Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109. https://doi.org/10.1037/0022-3514.37.11.2098
MacCoun, R. J. (1998). Biases in the interpretation and use of research results. Annual Review of Psychology, 49(1), 259–287. https://doi.org/10.1146/annurev.psych.49.1.259
MacCoun, R. J., Paletz, S. (2009). Citizens’ perceptions of ideological bias in research on public policy controversies. Political Psychology, 30(1), 43–65. https://doi.org/10.1111/j.1467-9221.2008.00680.x
Mercier, H. (2017). Confirmation bias - Myside bias. In R. F. Pohl (Ed.), Cognitive illusions: Intriguing phenomena in thinking, judgment and memory (2nd ed., pp. 99–114). Psychology Press.
Minson, J. A., Chen, F. S., Tinsley, C. H. (2020). Why won’t you listen to me? Measuring receptiveness to opposing views. Management Science, 66(7), 3069–3094. https://doi.org/10.1287/mnsc.2019.3362
Munafò, M. R., Nosek, B. A., Bishop, D. V. M., Button, K. S., Chambers, C. D., Percie du Sert, N., Simonsohn, U., Wagenmakers, E.-J., Ware, J. J., Ioannidis, J. P. A. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1(1), 1–9. https://doi.org/10.1038/s41562-016-0021
Munro, G. D. (2010). The scientific impotence excuse: Discounting belief-threatening scientific abstracts. Journal of Applied Social Psychology, 40(3), 579–600. https://doi.org/10.1111/j.1559-1816.2010.00588.x
Munro, G. D., Ditto, P. H. (1997). Biased assimilation, attitude polarization, and affect in reactions to stereotype-relevant scientific information. Personality and Social Psychology Bulletin, 23(6), 636–653. https://doi.org/10.1177/0146167297236007
Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175
Nisbett, R. E., Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Prentice-Hall.
Petersen, R. E. (2012). Representatives and Senators: Trends in Member Characteristics Since 1945 (Congressional Research Service Report R42365).
Porter, T., Schumann, K. (2018). Intellectual humility and openness to the opposing view. Self and Identity, 17(2), 139–162. https://doi.org/10.1080/15298868.2017.1361861
Redding, R. E. (2001). Sociopolitical diversity in psychology: The case for pluralism. American Psychologist, 56(3), 205–215. https://doi.org/10.1037/0003-066x.56.3.205
Robinson, M. (1984). Private foundations and social science research. Social Science and Public Policy, 21(4), 76–80. https://doi.org/10.1007/bf02695106
Smaldino, P. E., Turner, M. A., Contreras Kallens, P. A. (2019). Open science and modified funding lotteries can impede the natural selection of bad science. Royal Society Open Science, 6(7), 190194. https://doi.org/10.1098/rsos.190194
Stanovich, K. E., West, R. F., Toplak, M. E. (2013). Myside bias, rational thinking, and intelligence. Current Directions in Psychological Science, 22(4), 259–264. https://doi.org/10.1177/0963721413480174
Stenhouse, N., Myers, T. A., Vraga, E. K., Kotcher, J. E., Beall, L., Maibach, E. W. (2018). The potential role of actively open-minded thinking in preventing motivated reasoning about controversial science. Journal of Environmental Psychology, 57, 17–24. https://doi.org/10.1016/j.jenvp.2018.06.001
Taber, C. S., Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769. https://doi.org/10.1111/j.1540-5907.2006.00214.x
Tajfel, H., Turner, J. C. (1986). The social identity theory of intergroup behavior. In S. Worchel & W. G. Austin (Eds.), Psychology of intergroup relations (pp. 7–24). Nelson Hall.
Tappin, B. M., Van Der Leer, L., McKay, R. T. (2017). The heart trumps the head: Desirability bias in political belief revision. Journal of Experimental Psychology: General, 146, 1143–1149.
Tetlock, P. E., Levi, A. (1982). Attribution bias: On the inconclusiveness of the cognition-motivation debate. Journal of Experimental Social Psychology, 18(1), 68–88. https://doi.org/10.1016/0022-1031(82)90082-8
Toplak, M. E., Stanovich, K. E. (2003). Associations between myside bias on an informal reasoning task and amount of post-secondary education. Applied Cognitive Psychology, 17(7), 851–860. https://doi.org/10.1002/acp.915
Tversky, A., Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Tversky, A., Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458. https://doi.org/10.1126/science.7455683
Vedejová, D., Čavojová, V. (2022). Confirmation bias in information search, interpretation, and memory recall: Evidence from reasoning about four controversial topics. Thinking & Reasoning, 28(1), 1–28. https://doi.org/10.1080/13546783.2021.1891967
Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281. https://doi.org/10.1080/14640746808400161
Wolfe, C. R., Britt, M. A. (2008). The locus of the myside bias in written argumentation. Thinking & Reasoning, 14(1), 1–27. https://doi.org/10.1080/13546780701527674
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material