Evidence-based vaccination communication aims to support people in making informed decisions regarding vaccination. It is therefore important to learn how vaccination information is processed and how it might be biased. One potentially relevant bias that is overlooked in the vaccination literature is the feature positive effect (FPE), the phenomenon that people experience greater difficulty processing nonoccurring events than occurring events, which impacts judgment and decision making. The present study adopts an experimental design with sequential testing rules to examine a potential FPE for vaccination information processing. The results convincingly demonstrate that vaccination-related events described as nonoccurring (e.g., no side effects after vaccination) versus occurring (e.g., side effects after vaccination) indeed result in lower recall and are perceived as less important in evaluating the vaccine. The results regarding processing time remain inconclusive. These findings might help explain the appeal of vaccination-critical information and suggest that emphasizing what does happen as a result of vaccination, rather than what does not, can help debias the processing of evidence-based vaccination information.

Vaccines are widely acknowledged as a safe and effective means to reduce morbidity and mortality from infectious diseases. However, public confidence in vaccines has been decreasing (Dubé et al., 2015; MacDonald, 2015) and vaccine hesitancy has been identified as a major threat to global health (World Health Organization, 2019). Vaccine-preventable diseases such as measles, which were nearly eradicated in developed countries a decade ago, have reemerged in both the US and Europe due to insufficient immunization coverage (Patel et al., 2020). Recently, the emergence of COVID-19 has re-emphasized the importance of broad vaccine acceptance (Randolph & Barreiro, 2020) and the necessity to improve our understanding of vaccine hesitancy (He et al., 2021; Machingaidze & Wiysonge, 2021; Sallam, 2021).

Vaccine hesitancy can arguably best be defined as indecisiveness regarding the acceptance or rejection of a vaccination (Bussink-Voorend et al., 2022). The main idea is that vaccine hesitancy is a predictor of vaccine rejection. To reduce vaccine hesitancy, strategic communication (e.g., from health professionals, governments, and scientists) provides evidence-based information on the scientific consensus regarding vaccinations, to help people make well-informed, rational vaccination decisions. However, a large majority of decisions regarding both the acceptance and the rejection of vaccinations do not qualify as informed (i.e., deliberate, value-consistent, and based on knowledge; Lehmann et al., 2017). This suggests that vaccination decision making is not merely a deliberate and analytical process but is also susceptible to other influences.

Risks related to vaccine-preventable diseases and vaccine adverse events can be difficult to comprehend (Visschers et al., 2009) and possible outcomes of any given decision to (not) vaccinate remain uncertain (Serpell & Green, 2006). For such decisions under uncertainty, people often rely on heuristics, i.e., cognitive shortcuts in which part of the complex information is ignored in order to reach a decision (Gigerenzer, 2008; Tversky & Kahneman, 1974). Such heuristics are often useful (Gigerenzer, 2008; van der Linden et al., 2015). However, heuristics sometimes lead to systematic and severe errors in judgment, referred to as cognitive biases (Tversky & Kahneman, 1974). Vaccination decisions are arguably susceptible to such biases (Ball et al., 1998; Jacobson et al., 2015; MacDonald et al., 2012; Niccolai & Pettigrew, 2016).

Over the years, multiple heuristics and biases have been studied in the context of vaccination judgment and decision making, like the compression heuristic (Zimmerman et al., 2005), availability heuristic (Vandeberg et al., 2022), confirmation bias (Meppelink et al., 2019), and omission bias (Brown et al., 2010). However, one highly relevant but relatively unknown bias has been overlooked: the feature positive effect (FPE, coined by Sainsbury & Jenkins, 1967). FPE refers to the relative difficulty humans (and other animals) have processing information about events that do not occur compared to events that do occur. This discrepancy in processing difficulty results in underweighing the informational value of nonoccurring events (Eerland et al., 2012; Eerland & Rassin, 2010; Newman et al., 1980; Wells & Lindsay, 1980). We argue that the (non)occurrence of events is an inherent part of informing about and understanding the risks involved in vaccination decisions. More specifically, the (non)occurrence of vaccine acceptance (i.e., whether or not a vaccination is administered) affects whether one might expect the (non)occurrence of a vaccine-preventable disease or the (non)occurrence of vaccine adverse events. Empirical work on FPE shows that people have more difficulty processing nonoccurrences than occurrences, and this asymmetry in processing difficulty impacts judgment, with potentially important implications. Therefore, investigation of FPE is essential to better understand how information processing impacts the vaccination decision making process.

Vaccine information

While health professionals are generally viewed as an important source of information about vaccination (Ames et al., 2017), the internet is frequently used to search for additional or “independent” information (Downs et al., 2008; Jones et al., 2012). This may be due to beliefs that health professionals disregard possible harms and mainly provide information about the benefits of vaccination (Jones et al., 2012; Paulussen et al., 2006). Although online vaccination content regularly takes a positive or neutral vaccination stance (Ache & Wallace, 2008; Habel et al., 2009; Keelan et al., 2007), vaccination-critical content that is not based on scientific evidence is abundant (Davies et al., 2002; Guidry et al., 2015; Jolley & Douglas, 2014) and much more effective in reaching and activating (vaccine-hesitant) populations (Johnson et al., 2020; Lutkenhaus et al., 2019b). Even brief exposures to such vaccination-critical information can decrease the perceived risk of non-vaccination, increase perceived vaccination risks, and negatively affect vaccination attitudes and intentions (Betsch et al., 2010; Jolley & Douglas, 2014; Nan & Madden, 2012). Conversely, the impact of evidence-based vaccination-supporting information is not so clear-cut. Although some findings show that scientific (consensus) information can help people correct misperceptions regarding vaccination (van der Linden et al., 2015), other findings suggest that scientific information does not have much impact on people’s vaccination perceptions and intentions (Kerr et al., 2021; Nan & Madden, 2012).

On the internet, vaccination-critical information differs from vaccination-supporting information in several ways. First, vaccination-critical information employs a wide variety of arguments ranging from disputing science to safety concerns, conspiracy theories, and alternative medicine (Johnson et al., 2020; Kata, 2012). Conversely, vaccination-supporting argumentation is more homogeneous (Johnson et al., 2020; Meppelink et al., 2021) and based on the repetition of facts, figures, and scientific studies (Lutkenhaus et al., 2019b). Second, vaccination-critical information often appears in an emotional, narrative format describing people’s lived experiences with vaccinations (Bean, 2011; Guidry et al., 2015; Haase et al., 2020; Sanders et al., 2019), whereas vaccination-supporting information generally adopts a more impersonal, expository format highlighting scientific research (Guidry et al., 2015; Lutkenhaus et al., 2019a, 2019b; Sanders et al., 2019). Third, and most important for the present study, both types of vaccination information appear to differ in their presentation of risk. Vaccination-critical sources are likely to link vaccination to the occurrence of adverse outcomes, including illness, idiopathic diseases such as autism, disability, and death (Bean, 2011; Leask et al., 2010; Zimmerman et al., 2005). Conversely, when vaccination-supporting sources discuss the topic of vaccination risk, they emphasize the lowered risk for illness when a vaccination is administered (Hobson-West, 2003), thereby highlighting the nonoccurrence of an outcome. As psychological research has shown that information about nonoccurrences is often underweighed, further examination of the (non)occurrence of events in communication about vaccination is warranted.

Feature positive effect

The feature positive effect (FPE) refers to the tendency to experience more difficulty in processing (i.e., recognizing, interpreting, storing, and retrieving) nonoccurrences than occurrences (Allison & Messick, 1988). As a result, occurring events are considered more important in judgment and decision making than nonoccurring events (Fazio et al., 1982; Rassin et al., 2008), even when these nonoccurrences may have important implications in, for instance, clinical (Rassin et al., 2008), judicial (Eerland et al., 2012; Eerland & Rassin, 2010), educational (Newman et al., 1980), or marketing (Kardes et al., 1990) domains. This bias occurs not only in adults but also in children (Bitgood et al., 1976) and animals (Pace et al., 1980; Sainsbury & Jenkins, 1967). FPE is relatively understudied and the few existing studies vary widely with regard to their approach to the subject. That is, the underweighing of described nonoccurrences has been studied in a variety of paradigms and is shown to manifest in reading times (Eerland et al., 2012), recall (Eerland et al., 2012), learning (Hovland & Weiss, 1953; Newman et al., 1980), (confidence in) performance (Rassin, 2014), hypothesis testing (Cherubini et al., 2013; also see Klayman & Ha, 1987), self-perception (Fazio et al., 1982), and the evaluation of forensic evidence (Eerland et al., 2012; Eerland & Rassin, 2010).

For instance, Cherubini et al. (2013) investigated FPE in relation to abstract problem solving by asking participants to estimate from which deck of lettered cards a certain card was most likely to originate, based on the letters on the card and the known content of the decks. Findings showed that, in this task, participants overestimated the importance of information conveyed by the occurring letters on the card compared to information conveyed by the nonoccurring letters. Additionally, participants were more likely to solve problems correctly and felt more confident when the presence rather than the absence of cues directed participants to the solution. Furthermore, Eerland and colleagues (2012; Eerland & Rassin, 2010) studied FPE in a more applied setting relating to concrete, real-life situations. Participants were asked to judge the guilt of a crime suspect based on described (non)occurring diagnostic forensic evidence. Results indicated that participants had more difficulty processing nonoccurring events (e.g., “fingerprints of the suspect were absent on the victim”) than occurring events (e.g., “fingerprints of the suspect were present on the victim”), as indicated by slower reading times (Eerland et al., 2012). Information that is processed more easily (referred to as “processing fluency”) is assumed to be remembered better (Atkinson & Shiffrin, 1968; Hirshman & Mulligan, 1991). Indeed, the findings showed that nonoccurring events were also less likely to be recalled (Eerland et al., 2012) and taken into account when participants decided on the guilt of a crime suspect (Eerland et al., 2012; Eerland & Rassin, 2010) than occurring events.

Based on the illustrated FPE literature, we expect that the nonoccurrence of vaccination-related events is more difficult to process, more difficult to remember, and considered less important in evaluating the described vaccine than the occurrence of vaccination-related events. This results in the following hypotheses:

Information about nonoccurring (versus occurring) vaccination consequences results in (a) slower reading times, (b) lower recall, and (c) lower perceived importance.

The FPE perspective is different from, and can be orthogonal to, for instance, a (gain-loss) framing perspective on vaccination communication (see e.g., O’Keefe & Nan, 2012; Penţa & Băban, 2018). That is, occurrences and nonoccurrences can conceptually both reflect a gain-framed outcome (i.e., “wellbeing” and “no illness”), but could also both reflect a loss-framed outcome (e.g., “illness” and “no wellbeing”). Potentially confounding conceptual issues like these are further discussed under “stimulus materials and design”.

Sample

Participants were recruited through the scientific crowdsourcing community Prolific Academic (https://www.prolific.co/) and considered eligible for participation when they (1) were at least 18 years old, (2) had an approval rate of ≥ 95% for previous work done through Prolific, (3) were fluent, native, primary speakers of the English language, and (4) did not have a language related disorder. Exclusion criteria are addressed in the preprocessing stage of the statistical analysis. Participants were paid the equivalent of $10 per hour for their participation. The experiment took approximately 15 minutes per participant.

To avoid wasting resources, we adopted a sequential testing paradigm. This allows for a well-powered study without recruiting more participants than necessary. The rationale behind sequential testing is that published effect sizes – and by extension, power analyses – are often inaccurate (Lakens, 2014). Rather than using the commonly adopted fixed-sample stopping rule at a sample size that is predetermined by a power analysis, it is less problematic and more practical and efficient to adopt a sequential stopping rule (Frick, 1998). A sequential stopping rule indicates that the statistical analyses will be performed during data collection at intermediate sample sizes. With this approach, data collection can be terminated whenever the results convincingly show that the hypothesized effect is either present or extremely unlikely (Lakens, 2014). The sequential stopping rule that we adopted is COAST (i.e., composite open adaptive sequential test; Frick, 1998). The COAST method dictates that, after reaching a predefined minimum sample size for the first statistical test (Nmin), the researcher can perform statistical analyses for subsequent sample sizes during data collection (e.g., Nmin+50; Nmin+100) while adhering to the following rules: If the outcome of the statistical test is 1) p < .01, data collection is terminated and the null hypothesis is rejected; 2) p > .36, data collection is terminated and the null hypothesis is not rejected; 3) .01 < p < .36, more participants are tested. Monte Carlo computer simulations show that the overall alpha level of .05 (and therefore the Type I error) is preserved, provided that these rules are followed (Frick, 1998).

The minimum sample size for the current within-subjects experiment was set at Nmin = 150. This is the minimum sample size we were willing to accept if a p-value above .36 (or below .01) emerged, which would force us to stop testing. However, for a p between .01 and .36, data collection would resume. COAST assumes that data collection and intermediate testing continues until a decision is reached (i.e., until p < .01 or p > .36). However, considering experimental costs (money and time) and loosely informed by a power analysis (Faul et al., 2007) indicating that a sample size of 324 should be sufficiently powered (using G*Power software for the statistical F-test “ANOVA – repeated measures, within factors”, hypothesizing a very small true effect with parameters set at partial η² = 0.01, α = 0.05, power = 0.95, for one group and two measurements), the point at which we would stop testing was set at Nmax = 350. The spacing between sequential analyses was set at 100 (Nmin+100; Nmin+200).
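To make the stopping procedure concrete, the sketch below expresses the COAST decision logic in Python. The decision boundaries (.01 and .36), the batch sizes, and Nmax = 350 follow the text above; the function name and example p-values are illustrative only.

```python
# A minimal sketch of the COAST stopping rule (Frick, 1998) as applied in this study.
# Thresholds and the N = 150/250/350 schedule follow the text above; the function
# name and the example p-values are illustrative.

def coast_decision(p_value: float, n: int, n_max: int = 350) -> str:
    """Return the sequential-testing decision for one dependent variable."""
    if p_value < .01:
        return "stop: reject H0"
    if p_value > .36:
        return "stop: do not reject H0"
    if n >= n_max:
        return "stop: maximum sample size reached without a decision"
    return "continue: collect the next batch of participants"

# Illustrative sequence of intermediate tests at the planned sample sizes.
for n, p in [(150, .045), (250, .14), (350, .002)]:
    print(n, coast_decision(p, n))
```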

Stimulus materials and design

To investigate the impact of (non)occurrences on the processing, recall, and perceived importance of vaccine information and to minimize the impact of existing knowledge and beliefs, we presented participants with a brief, fabricated news article on a new (fictional) virus, the “blue virus” (see Appendix A). After reading this news article, participants were presented with information on (non)occurring consequences of injections with the vaccine against the blue virus. This information was presented in the form of 16 news headlines (cf. Meppelink et al., 2019), half of which described occurring vaccination consequences and half described nonoccurring consequences. To improve the ecological validity of the findings, the content was designed to resemble as closely as possible the issues presented in naturally occurring vaccination information on the internet. However, some issues arose in the construction of the materials.

To start, natural vaccination-critical and vaccination-supporting texts often describe different negative health outcomes, which might confound our design. That is, the occurrence of idiopathic diseases like autism in vaccination-critical texts likely results in completely different associations, beliefs, and opinions across individuals than the occurrence of vaccine-preventable diseases like measles in vaccination-supporting texts. For this reason, described health outcomes were kept agnostic with respect to the underlying disease and reflected identical consequences across conditions (e.g., patients either did or did not get a fever after vaccination).

However, using only negatively valenced descriptors (e.g., fever) results in occurrences describing a negatively valenced outcome (i.e., fever is present) and nonoccurrences describing a positively valenced outcome (i.e., fever is absent). This distinction can be taken to reflect the difference between loss versus gain framed outcomes, respectively (Tversky & Kahneman, 1985; see also Mandel, 2001), which confounds any FPE effects. Therefore, stimulus counterparts were designed to include both negatively and positively valenced descriptors, to balance gain and loss framed outcomes, and – additionally – to balance the type of described consequences (physical versus social/emotional) across event conditions.

Furthermore, we did not use any negation operators (e.g., the word “not”) to describe nonoccurrences, thereby following the procedure by Eerland et al. (2012). The reason for this is twofold. First, research on language comprehension has shown that people find it more difficult to remember information that is under the scope of a negation operator (see Kaup & Zwaan, 2003). Second, adding a negation operator to nonoccurrences would make these headlines longer than the headlines describing occurrences. Because we were interested in processing times, we aimed to rule out this possible confound. See Table 1 for all stimulus materials and Appendix A for the cover story and counterbalancing lists.

Table 1.
Stimulus materials.
Item | Occurrence | Nonoccurrence
1 | New study confirms link between vaccine against blue virus and high fever (neg_loss_physical) | New study disproves link between vaccine against blue virus and high fever (neg_gain_physical)
2 | Does the vaccine against blue virus cause insomnia? Experts say yes (neg_loss_physical) | Does the vaccine against blue virus cause insomnia? Experts say no (neg_gain_physical)
3 | New evidence confirms that taking the vaccine against blue virus leads to cognitive malfunction (neg_loss_physical) | New evidence rejects that taking the vaccine against blue virus leads to cognitive malfunction (neg_gain_physical)
4 | First recipients of vaccine against blue virus report presence of anxiety-like side effects (neg_loss_soc/emo) | First recipients of vaccine against blue virus report absence of anxiety-like side effects (neg_gain_soc/emo)
5 | FDA: ‘presence of clotting problems as a result of vaccination against blue virus’ (neg_loss_physical) | FDA: ‘absence of clotting problems as a result of vaccination against blue virus’ (neg_gain_physical)
6 | Mathematical modelers show that increases in hospitalization are present after first vaccination wave (neg_loss_physical) | Mathematical modelers show that increases in hospitalization are absent after first vaccination wave (neg_gain_physical)
7 | Suggestion that vaccine against blue virus increases mortality accepted (neg_loss_physical) | Suggestion that vaccine against blue virus increases mortality rejected (neg_gain_physical)
8 | Social stigma after taking the vaccine? “Yes, friends condone my decision to vaccinate.” (neg_loss_soc/emo) | Social stigma after taking the vaccine? “No, friends support my decision to vaccinate.” (neg_gain_soc/emo)
9 | Vaccine against blue virus promotes a feeling of safety in recipients (pos_gain_soc/emo) | Vaccine against blue virus obstructs a feeling of safety in recipients (pos_loss_soc/emo)
10 | Does taking the vaccine against blue virus give you back your social life? Recipients say: Yes! (pos_gain_soc/emo) | Does taking the vaccine against blue virus give you back your social life? Recipients say: No! (pos_loss_soc/emo)
11 | Suggestion that vaccine against blue virus increases mobility is confirmed in new study (pos_gain_soc/emo) | Suggestion that vaccine against blue virus increases mobility is rejected in new study (pos_loss_soc/emo)
12 | Evidence is found that vaccine against blue virus is effective for young children (pos_gain_physical) | Evidence is lacking that vaccine against blue virus is effective for young children (pos_loss_physical)
13 | World leaders decide that traveling is now permitted after vaccination against blue virus (pos_gain_soc/emo) | World leaders decide that traveling is still prohibited after vaccination against blue virus (pos_loss_soc/emo)
14 | Voters say “Taking the jab against blue virus reinstates sense of freedom” (pos_gain_soc/emo) | Voters say “Taking the jab against blue virus diminishes sense of freedom” (pos_loss_soc/emo)
15 | Access to some public facilities is now granted after taking jab against blue virus (pos_gain_soc/emo) | Access to some public facilities is still denied after taking jab against blue virus (pos_loss_soc/emo)
16 | Does the vaccine against blue virus protect from illness? CDC says yes (pos_gain_physical) | Does the vaccine against blue virus protect from illness? CDC says no (pos_loss_physical)

Note. Within each item, the contrasting words (e.g., confirms/disproves, presence/absence, yes/no) reflect the (non)occurrence of an event. As denoted in parentheses, items 1-8 have negative event descriptors, with occurrences presenting loss outcomes and nonoccurrences presenting gain outcomes. Items 9-16 have positive event descriptors, with occurrences presenting gain outcomes and nonoccurrences presenting loss outcomes. For each event condition, half of the items describe physical consequences and half describe social/emotional consequences.

This resulted in a one-factorial (event) within-subjects design with eight headlines per condition (occurring vs nonoccurring), in which descriptor and outcome frames were counterbalanced within participants and items were counterbalanced between participants. All headlines were constructed in the grammatical style of a newspaper headline, had approximately the same length, and were presented in the same font.
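As an illustration of this counterbalancing logic, the sketch below constructs two hypothetical counterbalancing lists in Python. The two-list scheme and the odd/even assignment are assumptions made for illustration only; the actual lists are provided in Appendix A.

```python
# A minimal sketch of one way the counterbalancing could be set up, assuming two
# lists and the item ordering of Table 1 (items 1-8 negative descriptors,
# items 9-16 positive descriptors). The actual lists are given in Appendix A.

import random

items = list(range(1, 17))  # 16 headline pairs (occurrence/nonoccurrence versions)

# List A: odd-numbered items shown as occurrences, even-numbered as nonoccurrences;
# List B is the mirror image. Each list then contains 8 occurring and 8 nonoccurring
# headlines, with descriptor valence balanced across the two event conditions.
list_a = {i: ("occurrence" if i % 2 else "nonoccurrence") for i in items}
list_b = {i: ("nonoccurrence" if i % 2 else "occurrence") for i in items}

def trial_order(counterbalancing_list: dict) -> list:
    """Randomize the presentation order of the 16 headlines for one participant."""
    trials = list(counterbalancing_list.items())
    random.shuffle(trials)
    return trials

print(trial_order(list_a)[:3])  # e.g., [(7, 'occurrence'), (12, 'nonoccurrence'), ...]
```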

Measures

Reading times. For each headline, time-on-screen in milliseconds was measured as a proxy of reading time (RT). We preregistered to exclude reading times shorter than 100 ms from data analysis, based on the assumption that people are unable to read a headline that fast for comprehension. Given that the minimum reading time was 357 ms, no data were excluded based on this criterion. Similarly, we preregistered that reading times above 8 sec would be removed, as these likely do not reflect reading the headline as quickly as possible, as instructed (for a meta-analysis on average reading times, see Brysbaert, 2019). Next, outliers above or below 2.5 standard deviations (SD) of the participant mean would be discarded as missing. However, the data of the first 150 participants showed many reading times above 8 seconds: 83 participants (55.33%) had at least one RT > 8 sec, with 20 participants (13.33%) showing RTs > 8 sec in at least half of the trials. Adhering to our preregistered plan would result in the exclusion of 16.63% of the trials and an even greater overall exclusion percentage, as 20 participants would have too few valid trials to be included. We deemed our preregistered exclusion strategy undesirable, as removing such a large part of the data would no longer reflect the exclusion of genuine outliers.

We therefore examined what would be a reasonable alternative given the data. We decided to replace the two suggested steps for outlier removal (8 sec and 2.5 SD) with one step: excluding RTs above or below 2 SD of the participant median. This method no longer includes a hard cut-off, which is more appropriate given the large variability in reading times between participants. Also, the median is less sensitive to any extreme outliers that may occur within participants across trials and is therefore appropriate given the absence of a hard cut-off. Inspection of the data using this alternative method shows that this resulted in 6.17% trial exclusion, which we deemed acceptable for outlier removal.1
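For illustration, the sketch below shows how the adopted outlier rule could be applied to trial-level data with pandas. The column names and example values are placeholders, and because the text does not specify how the SD was computed, it is assumed here to be the SD of each participant’s own reading times.

```python
# A minimal sketch of the adopted outlier rule: exclude reading times more than
# 2 SD away from the participant's median. 'participant' and 'rt' (ms) are
# illustrative column names; the SD is assumed to be computed over each
# participant's own trials.

import pandas as pd

def outlier_mask(df: pd.DataFrame) -> pd.Series:
    median = df.groupby("participant")["rt"].transform("median")
    sd = df.groupby("participant")["rt"].transform("std")
    return (df["rt"] - median).abs() > 2 * sd

trials = pd.DataFrame({
    "participant": [1] * 8 + [2] * 8,
    "rt": [900, 1100, 1050, 980, 1020, 1150, 990, 9500,
           1200, 1300, 1250, 1180, 1220, 1350, 1280, 400],
})
clean = trials[~outlier_mask(trials)]
print(f"excluded {outlier_mask(trials).mean():.1%} of trials; {len(clean)} retained")
```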

Reading times were right-skewed (i.e., with a skewness value above 1) as expected. Therefore, log transformations were performed as preregistered. Next, means and SDs were calculated per participant per event condition (i.e., based on 8 occurring events versus 8 nonoccurring events) for data analysis. Longer reading times will be taken to indicate greater processing difficulty (in line with, e.g., Rayner et al., 2006).
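The transformation and aggregation steps can be sketched as follows; the data and column names are again illustrative.

```python
# A minimal sketch of the preregistered transformation and aggregation: log-transform
# the (outlier-cleaned) reading times and average them per participant per event
# condition. These condition means feed the repeated measures ANOVA described under
# "Statistical analyses".

import numpy as np
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "event": ["occurring", "nonoccurring"] * 4,
    "rt": [950, 1200, 1020, 1310, 880, 1150, 940, 1400],  # milliseconds
})
trials["log_rt"] = np.log(trials["rt"])

# One mean per participant per condition (wide format: one column per condition).
cell_means = trials.groupby(["participant", "event"])["log_rt"].mean().unstack()
print(cell_means)
```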

Free recall. Participants were asked to recall as many presented news headlines as possible (i.e., “Please think back to the news headlines that were presented to you earlier. Try to recall as many of these headlines as possible. Please do the best job you can. List the headlines (or whatever gist/words you can remember from them) below, with one headline per text box. Your responses must be constructed using words. The use of arrows or other symbols to annotate relationships is not allowed”). As participants did not report verbatim descriptions of the headlines, recalled headlines were coded by two coders (GM and LV). A headline was considered correctly recalled when both the descriptor (negative vs. positive) and event (occurring vs. nonoccurring) were correctly reported. They did not have to be correct verbatim (e.g., “New study confirms link between vaccine against blue virus and high fever”) as long as the gist of both descriptor and event were correct (e.g., “Getting the vaccine is correlated with getting a fever”). A correctly recalled descriptor-event combination resulted in a recall score of 1; any incorrectly recalled descriptor-event combination resulted in a recall score of 0. A codebook was constructed to instruct coders on the assessment of the responses (see Appendix B). Coders first coded the same 10% of the data, which resulted in almost perfect intercoder reliability (Cohen’s kappa = .97). Discrepancies were discussed until consensus was reached and the codebook was adjusted accordingly, after which the two coders coded the remainder of the recall data. Means and standard deviations were calculated per participant per event condition, resulting in means between 0 and 1 that reflect the proportion of correctly recalled information. Lower proportions indicate greater recall difficulty.
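As an illustration of the reliability check, the sketch below computes Cohen’s kappa for two hypothetical coders on binary recall codes; the codes are made up and scikit-learn is only one possible implementation.

```python
# A minimal sketch of the intercoder reliability check on the 10% overlap sample,
# assuming binary codes (1 = descriptor-event combination correctly recalled,
# 0 = not). The codes below are illustrative, not the actual coding data.

from sklearn.metrics import cohen_kappa_score

coder_gm = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
coder_lv = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]

print(f"Cohen's kappa = {cohen_kappa_score(coder_gm, coder_lv):.2f}")
```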

Perceived importance. Participants were presented with all news headlines again, in random order. For each headline, they were asked “How important do you consider this information when evaluating the vaccine against the blue virus?” on a scale from 0 (not at all important) to 7 (extremely important). Next, we calculated the means and standard deviations per participant per event condition. Lower means indicate lower perceived importance in judgment.

Background variables. Demographic (i.e., age, gender, education level) and psychographic information were assessed. Psychographic information consisted of two constructs. Vaccine hesitancy was assessed using three items, asking “Please think back to the first time you were eligible for getting the COVID-19 vaccine. In making the decision whether to take the vaccine, to what extent have you felt 1) hesitancy; 2) doubt; 3) indecisiveness about getting vaccinated?” (cf. Bussink-Voorend et al., 2022), to be answered on a slider from 0 (not at all hesitant) to 100 (extremely hesitant). The vaccine attitude item was formulated as follows: “How positive or negative would you consider yourself to be about the COVID-19 vaccine?”, to be answered on a slider from –100 (extremely negative) to +100 (extremely positive). These two constructs assessed a priori vaccine beliefs. No existing scales were used, as existing scales are often heterogeneous and confounded with various related constructs, which hinders clean measurement and comparability of study results (Bussink-Voorend et al., 2022). COVID-19 was taken as a case, as vaccination beliefs are highly context-specific (MacDonald, 2015) and as the COVID-19 pandemic best resembles the situation described in the blue virus vignette.

Attention checks. Two attention checks were performed to ensure participants’ serious participation. First, an instructional manipulation check (cf. Hauser & Schwarz, 2015) assessed whether people carefully read instructions (i.e., “Which sports do you like to perform?” with a comment in the instruction that they should not select the multiple-choice sports options provided, but they should select the “other” option and type “I have read the instructions”). Second, a comprehension question checked whether people had attended to the cover story (i.e., “Which symptoms were mentioned in the story about the blue virus? Name at least one”), with at least one correctly mentioned symptom resulting in a satisfactory outcome of this check. We preregistered to exclude participants who failed both attention checks from further data analysis.

Procedure

The Ethics Committee of the Faculty of Social Sciences (ECSS) of Radboud University reviewed a research line that included the proposed study and concluded that there were no formal objections2 (case number ECSW-2021-072). On the Prolific Academic website, recruited participants were redirected to the online experiment platform to perform the experiment (Qualtrics™, with a redirection to Gorilla™ for reliable measurement of the response times). After giving informed consent, participants were instructed to first provide information on their demographic and psychographic characteristics, including the instructional manipulation check. Next, they were presented with the cover story that provided the context for the experimental materials (i.e., the news headlines). They were instructed to read the text attentively and imagine as vividly as possible that the described scenario was true. The scenario described a future in which a fictional infectious disease with remarkable symptoms (e.g., blueish hue on face and torso, loss of sight) had emerged, for which a new vaccine had recently been developed.3 Although the described situation of course bears resemblance to recent COVID-19 outbreaks, the disease characteristics were described in a way that reduced resemblance to real-world events or infectious diseases that may be familiar to participants. Time-on-screen was recorded for descriptive purposes.

After reading the fictional scenario, participants first received a comprehension question about the scenario as part of the attention checks. Next, they were instructed that they would be presented with 16 sequential news headlines about the new vaccine, that they should read these sentences as accurately and quickly as possible because they would receive questions about them later, and that they should press the spacebar on their keyboard to move to the next headline. Participants started with 3 practice trials to familiarize themselves with the procedure, after which the 16 experimental trials were presented in random order (8 describing occurring vaccine consequences and 8 describing nonoccurring vaccine consequences). Each trial started with a fixation cross that was presented for 500 ms, after which the news headline was presented. Once the participant pressed the spacebar, the trial was terminated, after which an empty screen was presented for 1000 ms until the next trial started. Afterwards, participants were asked to recall the news headlines, to evaluate the vaccine (-100 = extremely negative, +100 = extremely positive) as an introduction to the perceived importance question,4 and to rate for each headline the perceived importance in evaluating the described vaccine. Next, open-ended questions asked participants 1) what they thought the purpose of the experiment was and 2) to note any comments or observations they might have regarding the experiment. Finally, they were debriefed and thanked for their participation.

Statistical analyses

Because we chose a sequential stopping method, the analyses were likely to be performed multiple times. The analysis sequence would start at Nmin = 150 and be repeated after every subsequent 100 participants, until the null hypothesis could either be rejected or retained according to the COAST method or until Nmax = 350 was reached. The analyses performed at each sequence were identical.

The preprocessing phase consisted of three parts. First, outliers in reading times were removed and recall was coded as described under “measures”. Second, participants were excluded from further analyses if a) > 50% of reading times was missing in at least one event condition (n = 0), b) they scored 0 on recall and/or reported two or more nonsense memories (n = 2), c) they incorrectly responded to both attention checks (n = 4), or d) the open-ended questions showed that they either guessed the experimental purpose or reported not understanding a part of the experiment that is directly relevant to our FPE outcome measures (n = 0). Participants were still paid for their participation when excluded, unless they incorrectly responded to both attention checks. We preregistered that excluded participants would only be replaced with newly recruited participants if, after the last batch of data (Nrecruited = 350), no decision had been reached and the maximum sample size or budget had not yet been met, thus when Nincluded < 350. However, we decided to immediately replace excluded participants for each batch, to adhere to the exact preregistered sample sizes. Inclusion or exclusion of these additional participants did not alter result patterns. Third, descriptive statistics regarding the sample characteristics were collected to be reported. Here, descriptive statistics for the dependent variables were also checked to allow for the discovery of any potential floor or ceiling effects, even though these were not expected. For the first batch of 150 participants, descriptives showed no indication of floor or ceiling effects.
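The participant-level exclusion criteria (a)-(d) amount to a simple filter; the sketch below applies them to a hypothetical per-participant summary table, with column names that are placeholders rather than variables from the actual dataset.

```python
# A minimal sketch of exclusion criteria (a)-(d) described above, applied to an
# illustrative per-participant summary DataFrame (column names are placeholders).

import pandas as pd

participants = pd.DataFrame({
    "prop_rt_missing_worst_condition": [0.0, 0.1, 0.0, 0.6],
    "recall_score":                    [5, 0, 3, 4],
    "n_nonsense_memories":             [0, 2, 0, 0],
    "failed_both_attention_checks":    [False, False, True, False],
    "guessed_purpose_or_confused":     [False, False, False, False],
})

exclude = (
    (participants["prop_rt_missing_worst_condition"] > 0.5)   # criterion (a)
    | (participants["recall_score"] == 0)                     # criterion (b)
    | (participants["n_nonsense_memories"] >= 2)              # criterion (b)
    | participants["failed_both_attention_checks"]            # criterion (c)
    | participants["guessed_purpose_or_confused"]             # criterion (d)
)
included = participants[~exclude]
print(f"excluded {int(exclude.sum())} of {len(participants)} participants")
```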

After preprocessing, the analysis commenced as preregistered, with any deviations mentioned in footnotes.5 First, statistical assumptions for repeated measures ANOVAs were checked. Because we used a random sampling method, a violation of independence of observations is unlikely. As expected, the normality assumption was violated for the reading times (indicated by skewness and kurtosis scores greater than 1) but not for the other dependent variables. Reading times were log transformed to approach normality. The assumption of sphericity was met for all dependent variables. Second, we checked whether counterbalancing lists resulted in different reading times, recall scores, and perceived importance scores using a one-way ANOVA for each dependent variable with counterbalancing list as a between-subjects factor. This was not the case (all p-values ≥ .20), which is why counterbalancing list was not included as a between-subjects variable in the analyses. Third, three repeated-measures ANOVAs were performed as the main confirmatory analyses, with event condition as the independent variable and reading time, recall, and perceived importance as the respective dependent variables. The hypotheses that were tested are that headlines describing nonoccurring vaccination consequences result in (a) longer reading times, (b) lower recall, and (c) lower self-reported perceived importance in evaluating the vaccine than texts describing occurring vaccination consequences. This would be reflected by a significant main effect of event condition (p < .01, see COAST) in the hypothesized direction on the respective dependent variables.
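For concreteness, the sketch below runs such a repeated measures ANOVA on per-participant condition means using statsmodels’ AnovaRM. The text does not state which software was used for the actual analyses, and the data here are illustrative.

```python
# A minimal sketch of the main confirmatory test: a repeated measures ANOVA with
# event condition (occurring vs. nonoccurring) as the within-subjects factor, run
# on per-participant condition means. statsmodels is one possible implementation;
# the data below are illustrative. (Effect sizes such as partial eta squared are
# not part of AnovaRM's output and would be computed separately.)

import pandas as pd
from statsmodels.stats.anova import AnovaRM

cell_means = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "event": ["occurring", "nonoccurring"] * 4,
    "recall": [0.500, 0.375, 0.375, 0.250, 0.625, 0.500, 0.250, 0.250],
})

res = AnovaRM(data=cell_means, depvar="recall",
              subject="participant", within=["event"]).fit()
print(res.anova_table)  # F value, degrees of freedom, and p for the effect of event
```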

The preregistration described that, if the sequential stopping rule dictated a stop for any given dependent variable, conclusions would be drawn for this dependent variable. Data collection would then be pursued and sequential analyses would continue for the remaining dependent variables only. If the sequential stopping rule dictated a stop for the last dependent variable, or for all dependent variables simultaneously, or when Nmax was reached, data collection would cease. At this point, once all analysis steps described above had been performed, exploratory analyses might follow.

All study materials, the laboratory log, data, syntax, and the stage 1 registered report are publicly available on the Open Science Framework (https://osf.io/x8n4a/).

Participant characteristics

The descriptive statistics of the participant characteristics showed that each sequential sample included people of various genders, ages, education levels, and countries of residence (see Table 2). Though reported vaccine attitudes ranged from -100 (extremely negative) to +100 (extremely positive), participants can be described as generally positive about vaccines (M = 44.15; Median = 73.50; Mode = 100). Similarly, reported vaccine hesitancy scores ranged from 0 (not at all hesitant) to 100 (extremely hesitant), but the sample can generally be characterized as not very hesitant (M = 31.92; Median = 13.33; Mode = 0).

Table 2.
Participant characteristics per data collection batch.
 Batch 1 Batch 2 Batch 3 
Total N 150 250 350 
Age, M (SD) 37.79 (14.40) 37.54 (13.94) 38.52 (14.36) 
Vaccine attitude, M (SD) 45.75 (63.73) 42.05 (63.63) 44.15 (62.23) 
Vaccine hesitancy, M (SD) 29.10 (33.93) 32.41 (35.28) 31.92 (34.97) 
Gender, N (%)    
Female 68 (45.30%) 127 (50.80%) 191 (54.60%) 
Male 81 (54.00%) 120 (48.00%) 154 (44.00%) 
Non-binary 1 (0.70%) 3 (1.20%) 5 (1.40%) 
Education, N (%)    
Middle school 4 (2.70%) 4 (1.60%) 4 (1.10%) 
High school 29 (19.30%) 44 (17.60%) 64 (18.30%) 
College, no degree 26 (17.30%) 48 (19.20%) 69 (19.70%) 
Associate's degree 9 (6.00%) 14 (5.60%) 21 (6.00%) 
Bachelor's degree 63 (42.00%) 109 (43.60%) 144 (41.10%) 
Graduate degree 19 (12.70%) 31 (12.40%) 48 (13.70%) 
Country, N (%)    
Unknown 5 (3.30%) 5 (2.00%) 5 (1.40%) 
Australia 5 (3.30%) 6 (2.40%) 6 (1.70%) 
Austria 1 (0.70%) 1 (0.40%) 1 (0.30%) 
Canada 1 (0.70%) 9 (3.60%) 14 (4.00%) 
Denmark 1 (0.70%) 1 (0.40%) 1 (0.30%) 
France 1 (0.70%) 1 (0.40%) 2 (0.60%) 
Greece   1 (0.30%) 
Ireland 4 (2.70%) 9 (3.60%) 14 (4.00%) 
Israel 2 (1.30%) 2 (0.80%) 3 (0.90%) 
Italy 1 (0.70%) 1 (0.40%) 1 (0.30%) 
Japan   1 (0.30%) 
Netherlands   1 (0.30%) 
New Zealand 3 (2.00%) 3 (1.20%) 5 (1.40%) 
Poland  1 (0.40%) 1 (0.30%) 
Portugal   1 (0.30%) 
South Africa 24 (16.00%) 47 (18.80%) 66 (18.90%) 
South Korea   1 (0.30%) 
Spain  1 (0.40%) 2 (0.60%) 
Sweden   1 (0.30%) 
Switzerland   1 (0.30%) 
United Kingdom 102 (68.00%) 161 (64.40%) 217 (62.00%) 
United States of America  2 (0.80%) 5 (1.40%) 

Hypothesis testing

To test our hypotheses, we performed a repeated measures ANOVA for each dependent variable as preregistered. However, during the coding of the recall data and prior to any data analysis, both coders independently noticed that one of the 16 headlines was ambiguous and therefore unsuitable for a clean test of the hypotheses. Namely, the item on social stigma (‘Social stigma after taking the vaccine? “No, friends support my decision to vaccinate.”’) was designed to describe the absence of a negative descriptor (no stigma) but also turned out to contain the presence of a positive descriptor (friends’ support), which confounded this item. For this reason, we decided prior to performing any analyses to exclude the social stigma item from the analyses. However, for transparency reasons, we also performed and report the analyses on the data including this item. The outcomes of both analyses led to the same conclusion for all dependent variables and data collection batches except one.

Batch 1. Repeated measures ANOVAs were performed on the data of the first 150 participants. The results were inconclusive regarding the effect of event (occurring vs. nonoccurring) on reading times, both without (F_ItemExcluded(1,149) = 4.09, p = .045, partial η² = .03) and with the ambiguous item (F_ItemIncluded(1,149) = 1.83, p = .18, partial η² = .01), as the COAST method dictates that data are inconclusive regarding H0 when .01 ≤ p ≤ .36. We were therefore unable to reject H0 regarding reading times, requiring a second batch of data collection.
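(For reference, the reported partial η² values follow from the F statistics and their degrees of freedom via the standard relation partial η² = (F × df_effect) / (F × df_effect + df_error); for example, (4.09 × 1) / (4.09 × 1 + 149) ≈ .027, which rounds to the reported .03.)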

The repeated measures ANOVA of event on recall showed a medium-to-large significant effect (F_ItemExcluded(1,149) = 19.47, p < .001, partial η² = .12; F_ItemIncluded(1,149) = 18.73, p < .001, partial η² = .11), with headlines about nonoccurring vaccination-related events resulting in lower recall (M_ItemExcluded = 0.32, SD_ItemExcluded = 0.20; M_ItemIncluded = 0.31, SD_ItemIncluded = 0.19) than headlines about occurring vaccination-related events (M_ItemExcluded = 0.39, SD_ItemExcluded = 0.20; M_ItemIncluded = 0.38, SD_ItemIncluded = 0.20). As the p-value is below .01 and the effect is in the expected direction, H0 can be rejected and the hypothesis on recall is confirmed.

Finally, the repeated measures ANOVA of event on perceived importance for vaccine evaluation showed a significant and large effect (F_ItemExcluded(1,149) = 33.08, p < .001, partial η² = .18; F_ItemIncluded(1,149) = 32.72, p < .001, partial η² = .18), with headlines about nonoccurring vaccination-related events being perceived as less important in evaluating the vaccine (M_ItemExcluded = 4.10, SD_ItemExcluded = 1.27; M_ItemIncluded = 3.97, SD_ItemIncluded = 1.26) than headlines about occurring vaccination-related events (M_ItemExcluded = 4.53, SD_ItemExcluded = 1.20; M_ItemIncluded = 4.39, SD_ItemIncluded = 1.18). As the p-value is below .01 and the effect is in the expected direction, H0 can be rejected and the hypothesis on perceived importance is confirmed.

Batch 2. After running a second batch of 100 participants, a repeated measures ANOVA was performed on the merged reading time data of the 250 participants. The results resemble those of batch 1, as reading times without the ambiguous item (F_ItemExcluded(1,249) = 5.22, p = .023, partial η² = .02) and with the ambiguous item (F_ItemIncluded(1,249) = 2.24, p = .14, partial η² = .01) again show .01 ≤ p ≤ .36, which is inconclusive under the COAST method. Therefore, we were again unable to reject H0 regarding reading times, requiring a third batch of data collection.

Batch 3. After running the final batch of 100 participants, a repeated measures ANOVA was performed on the merged reading time data of the 350 participants. The analysis showed a significant but small effect under COAST (i.e., p < .01) when excluding the ambiguous item (F_ItemExcluded(1,349) = 10.15, p = .002, partial η² = .03), with headlines about nonoccurring vaccination-related events being processed more slowly (M = 5295, SD = 3945) than headlines about occurring vaccination-related events (M = 5168, SD = 3918).6 However, when including the ambiguous item, the results are inconclusive as .01 ≤ p ≤ .36 (F_ItemIncluded(1,349) = 5.86, p = .016, partial η² = .02; M_nonocc = 5249, SD_nonocc = 3859; M_occ = 5179, SD_occ = 3943). As the results allow us to reject H0 only for a large sample size, with a small effect size, and when one item is excluded (which was not preregistered), we believe more evidence is needed to convincingly reject H0. We therefore cannot confirm our hypothesis on reading times.

For people to make informed decisions regarding vaccination, it is essential that they can adequately process evidence-based information. In this work, we examined whether people might be subject to bias when processing vaccination information. If so, their capacity to make well-informed (i.e., knowledge-based, deliberate, and value-consistent) decisions on vaccination might be undermined. Specifically, we tested the potential impact of the relatively unknown feature positive effect, which is the phenomenon that people experience greater difficulty processing descriptions of nonoccurring compared to occurring events.

This research was motivated by our observations in the media and literature that vaccination-critical and vaccination-supporting information seem to differ in their presentation of risk. Generally, vaccination-critical information appears to describe people’s experiences with occurring events (e.g., vaccinated people who experience adverse events; Bean, 2011; Leask et al., 2010; Zimmerman et al., 2005), whereas vaccination-supporting information mainly appears to focus on nonoccurring events (e.g., vaccinated people who do not fall ill with the vaccine-preventable disease; Hobson-West, 2003). Therefore, FPE might explain the relatively large appeal and impact of vaccination-critical (versus vaccination-supporting) information on people’s online behaviors (e.g., Johnson et al., 2020) and perceptions, attitudes, and intentions (e.g., Nan & Madden, 2012).

Our findings show that descriptions of nonoccurring vaccination-related outcomes are indeed more difficult to remember and have a lower perceived impact on the vaccine’s evaluation than descriptions of occurring outcomes. Whether descriptions of nonoccurring outcomes are also more difficult to process remains inconclusive. The memory and perceived importance findings are in line with earlier demonstrations of FPE in fundamental research, and extend these fundamental insights to the relevant, timely, and ecologically valid context of vaccination communication (for other examples in an applied, forensic context, see Eerland et al., 2012; Eerland & Rassin, 2010). Because outcome and descriptor frames were counterbalanced in the design of this study, the findings do not reflect a mere gain versus loss framing effect or a potential positivity or negativity bias, but rather indicate that the (non)occurrence of described events indeed biases how vaccination-related information is remembered, perceived, and potentially processed.

One could argue that reading times might reflect effort rather than processing difficulty. However, this is contrary to the status quo of the memory and language comprehension literature, in which reading times are a widely accepted measure of the ease or difficulty with which linguistic information is processed (Rayner, 1998). This literature demonstrates a slowing down of reading times when information is unpredictable (Smith & Levy, 2013), requires temporal updating (Radvansky & Copeland, 2010), or is inconsistent with prior information (Rayner et al., 2006). In these studies, reading times are taken to reflect processing difficulty on a basic informational level, while being agnostic about the higher motivational level of effortfully processing a text (which is arguably more affected by people’s motivation, opportunity, and ability than by mere text characteristics).

Accounts that assume reading times to reflect processing difficulty, as in this study, make fundamentally different predictions about the relation between reading times and recall than motivational accounts. Processing accounts predict that information that is processed more easily (referred to as “processing fluency”) should result in improved memory (Atkinson & Shiffrin, 1968; Hirshman & Mulligan, 1991). Such a correspondence between processing ease and recall is indeed demonstrated in the current study, as well as in earlier work on FPE (Eerland et al., 2012).

This study demonstrates that people have greater difficulty recalling nonoccurring than occurring vaccination-related outcomes and perceive these as less important in reaching a judgment. Explanations of this effect are scarce. One intuitive explanation, pointed out by Rassin and colleagues (2008), is that nonoccurring events imply more uncertainty than occurring events, as they more easily allow for the generation of alternative explanations. For instance, when someone experiences fever after receiving a vaccine, the raised temperature is easily causally connected to the vaccine, after which one would conclude that taking the vaccine results in fever. Alternative explanations are less likely to be conceived since the vaccine provides a logical explanation for the fever. However, not experiencing a fever after receiving a vaccine makes a causal connection more difficult. After all, there can be multiple reasons for not experiencing a fever after receiving a vaccine; one could have been a lucky exception, might not have noticed their raised temperature, or might not have made the link between a raised temperature and the vaccination. The vaccine not causing a fever is only one of several possible explanations. We therefore argue that the uncertainty evoked by nonoccurrences can provide a plausible explanation of our findings.

We adopted a within-subjects design to eliminate between-subjects variability and provide optimal circumstances for the feature positive effect to manifest. The upside of this design is that the presented headlines can be taken to resemble media reporting in the early days of the COVID-19 crisis in terms of the diversity of vaccination outcomes. During the rollout of the COVID-19 vaccination strategy, a lack of clarity about the potential consequences of the vaccine prevailed and information in the media was very heterogeneous (Küçükali et al., 2022; Motta & Stecula, 2023; Scannell et al., 2021; Shaaban et al., 2022; Yousaf et al., 2022). Although our study materials are not representative or reflective of that media coverage, our design resembled this heterogeneity in the sense that each participant was presented with news headlines describing gain and loss framed, positive and negative consequences about occurring and nonoccurring vaccination-related outcomes. This approach supports the ecological validity of our finding that FPE substantially impacts information recall and perceived importance, particularly in early pandemic situations in which there is still much uncertainty about a novel vaccine and communication reflects various perspectives.

A limitation of the adopted within-subjects design is that we were unable to assess the actual impact of (non)occurring event descriptions on vaccine evaluations. That is, by asking people to self-report how important they considered the presented descriptions of both occurring and nonoccurring events, we were unable to obtain an objective measure of a statement’s weight in their evaluation or to assess whether nonoccurring events indeed have a smaller impact on actual vaccine evaluations than occurring events. A more objective measure might not ask participants to rate importance, but might manipulate the (non)occurrence of events between participants and then ask them about their vaccine evaluations, to distinguish whether and how described (non)occurrences influence and bias judgment. Although our current set-up did not allow for such a test, it does give insight into which information people find important in reaching a judgment in highly uncertain situations where much new information about occurrences and nonoccurrences is provided. Future research should reveal whether the described (non)occurrence of events not only predicts how information is processed and subjectively weighed, but also whether and how this impacts people’s real-life vaccine evaluations and attitudes.

Conclusion

In the current societal, political, and healthcare landscape, decisions regarding vaccination revolve around values such as personal autonomy, freedom of choice, and informed decision making. In a context where people are encouraged to make vaccination decisions themselves, it is essential that they can adequately process, recall, and weigh evidence-based vaccination information. Our study shows that this is not necessarily the case: the processing of vaccination communication, in terms of memory and perceived importance in judgment, appears fundamentally biased when opposing arguments in the debate reflect differences in (non)occurring vaccination-related outcomes. At the same time, these findings provide concrete and practical pointers on how to improve vaccination communication. The current support for the FPE suggests that evidence-based vaccination information is most effectively communicated in terms of occurring events or outcomes (e.g., wellbeing) rather than nonoccurring events or outcomes (e.g., no illness).

Contributed to conception and design: LV, GM, AE

Contributed to acquisition of data: LV

Contributed to analysis and interpretation of data: LV, GM, AE

Drafted and/or revised the article: LV, GM, AE

Approved the submitted version for publication: LV, GM, AE

We heartily thank Associate Editor Ullrich Ecker and the two anonymous reviewers for their supportive and insightful suggestions, which have improved this work.

We thank the Registered Report Funding Partnership (RRFP) of the Society for the Improvement of Psychological Science (SIPS) and Collabra: Psychology, as well as the Behavioral Science Institute, for funding this work.

The authors report no conflict of interest.

All study materials, the laboratory log, data, syntax, and the stage 1 registered report are publicly available on the Open Science Framework (https://osf.io/x8n4a/).

Conceptualization: Lisa Vandeberg (Lead). Methodology: Gijsje Maas (Lead). Project administration: Anita Eerland. Software: Anita Eerland.

1. Analyzing the reading time data from batch 1 (i.e., 150 participants) based on (1) the raw data without outlier removal, (2) the data to which the preregistered outlier criteria were applied (i.e., removing RTs > 8 seconds and RTs more than 2.5 SDs above or below the mean), and (3) the data to which the new outlier criteria were applied (i.e., removing RTs more than 2 SDs above or below the median) showed a similar data pattern.
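For concreteness, the two outlier rules compared in this footnote can be sketched as follows. This is a minimal illustration rather than the original analysis script: the data frame and column names (df, rt_ms, in milliseconds) are hypothetical, and the exact order in which the 8-second cutoff and the SD-based trimming were applied is an assumption.

import pandas as pd

def preregistered_trim(df: pd.DataFrame) -> pd.DataFrame:
    # Preregistered criteria: drop RTs > 8 seconds, then drop RTs more than
    # 2.5 SDs above or below the mean (the order of these steps is assumed).
    kept = df[df["rt_ms"] <= 8000]
    m, sd = kept["rt_ms"].mean(), kept["rt_ms"].std()
    return kept[kept["rt_ms"].between(m - 2.5 * sd, m + 2.5 * sd)]

def revised_trim(df: pd.DataFrame) -> pd.DataFrame:
    # Revised criteria: drop RTs more than 2 SDs above or below the median.
    med, sd = df["rt_ms"].median(), df["rt_ms"].std()
    return df[df["rt_ms"].between(med - 2 * sd, med + 2 * sd)]

Running both functions on the batch 1 reading times and comparing the resulting patterns with the untrimmed data corresponds to the robustness check described in this footnote.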

2. The ECSS explicitly states that it does not ‘approve’ studies; rather, it evaluates whether there are any formal objections.

3. In the scenario about the blue vaccine, the text “in the United States” was changed to “in North America and Europe” after the preregistration, because the original wording might have made the scenario more relevant for potential US participants than for potential UK or Canadian participants (the study was open to all native English speakers, and Australia and New Zealand were already mentioned in the introductory sentence).

4. The vaccine evaluation question was not preregistered or analyzed, but was included to meaningfully introduce the perceived importance measure. Answering the perceived importance item (which asked participants to rate the perceived importance of each headline in evaluating the described vaccine) requires two steps: (1) evaluating the vaccine, and (2) judging to what extent each piece of information was used in making that evaluation. To improve the clarity of the question and reduce the cognitive load required to answer it, we made the first step explicit by adding the vaccine evaluation question.

5. We had preregistered to start with a randomization check across conditions, not realizing that this procedure is only meaningful if conditions are presented in a between-subjects design. Because a randomization check would be meaningless in our within-subjects design, no such check was performed.

6. Although the analyses were performed on log-transformed reading times, descriptives are presented in milliseconds for ease of interpretation.
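As a brief illustration of the relation between the two scales (using hypothetical values, not study data): the analyses operate on log reading times, while descriptives can be reported in milliseconds, either as the arithmetic mean of the raw times or as the back-transformed (geometric) mean.

import numpy as np

rt_ms = np.array([450.0, 620.0, 710.0, 980.0, 5300.0])  # hypothetical reading times in ms
log_rt = np.log(rt_ms)            # scale used for the analyses
print(rt_ms.mean())               # arithmetic mean, reported in milliseconds
print(np.exp(log_rt.mean()))      # back-transformed (geometric) mean, also in ms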

Ache, K. A., & Wallace, L. S. (2008). Human Papillomavirus Vaccination Coverage on YouTube. American Journal of Preventive Medicine, 35(4), 389–392. https://doi.org/10.1016/j.amepre.2008.06.029
Allison, S. T., & Messick, D. M. (1988). The Feature-Positive Effect, Attitude Strength, and Degree of Perceived Consensus. Personality and Social Psychology Bulletin, 14(2), 231–241. https://doi.org/10.1177/0146167288142002
Ames, H. M., Glenton, C., & Lewin, S. (2017). Parents’ and informal caregivers’ views and experiences of communication about routine childhood vaccination: A synthesis of qualitative evidence. Cochrane Database of Systematic Reviews, 2. https://doi.org/10.1002/14651858.cd011787.pub2
Atkinson, R. C., & Shiffrin, R. M. (1968). Human Memory: A Proposed System and its Control Processes. In K. W. Spence & J. T. Spence (Eds.), Psychology of Learning and Motivation (Vol. 2, pp. 89–195). Academic Press. https://doi.org/10.1016/s0079-7421(08)60422-3
Ball, L. K., Evans, G., & Bostrom, A. (1998). Risky Business: Challenges in Vaccine Risk Communication. Pediatrics, 101(3), 453–458. https://doi.org/10.1542/peds.101.3.453
Bean, S. J. (2011). Emerging and continuing trends in vaccine opposition website content. Vaccine, 29(10), 1874–1880. https://doi.org/10.1016/j.vaccine.2011.01.003
Betsch, C., Renkewitz, F., Betsch, T., & Ulshöfer, C. (2010). The Influence of Vaccine-critical Websites on Perceiving Vaccination Risks. Journal of Health Psychology, 15(3), 446–455. https://doi.org/10.1177/1359105309353647
Bitgood, S. C., Segrave, K., & Jenkins, H. M. (1976). Verbal feedback and the feature-positive effect in children. Journal of Experimental Child Psychology, 21(2), 249–255. https://doi.org/10.1016/0022-0965(76)90038-2
Brown, K. F., Kroll, J. S., Hudson, M. J., Ramsay, M., Green, J., Vincent, C. A., Fraser, G., & Sevdalis, N. (2010). Omission bias and vaccine rejection by parents of healthy children: Implications for the influenza A/H1N1 vaccination programme. Vaccine, 28(25), 4181–4185. https://doi.org/10.1016/j.vaccine.2010.04.012
Brysbaert, M. (2019). How many words do we read per minute? A review and meta-analysis of reading rate. Journal of Memory and Language, 109, 104047. https://doi.org/10.1016/j.jml.2019.104047
Bussink-Voorend, D. M., Hautvast, J. L. A., Vandeberg, L., Visser, O., & Hulscher, M. E. J. L. (2022). A systematic literature review to clarify the concept of vaccine hesitancy. Nature Human Behaviour, 6(12), 1634–1648. https://doi.org/10.1038/s41562-022-01431-6
Cherubini, P., Rusconi, P., Russo, S., & Crippa, F. (2013). Missing the dog that failed to bark in the nighttime: On the overestimation of occurrences over non-occurrences in hypothesis testing. Psychological Research, 77(3), 348–370. https://doi.org/10.1007/s00426-012-0430-3
Davies, P., Chapman, S., & Leask, J. (2002). Antivaccination activists on the world wide web. Archives of Disease in Childhood, 87(1), 22–25. https://doi.org/10.1136/adc.87.1.22
Downs, J. S., de Bruin, W. B., & Fischhoff, B. (2008). Parents’ vaccination comprehension and decisions. Vaccine, 26(12), 1595–1607. https://doi.org/10.1016/j.vaccine.2008.01.011
Dubé, E., Vivion, M., & MacDonald, N. E. (2015). Vaccine hesitancy, vaccine refusal and the anti-vaccine movement: Influence, impact and implications. Expert Review of Vaccines, 14(1), 99–117. https://doi.org/10.1586/14760584.2015.964212
Eerland, A., Post, L. S., Rassin, E., Bouwmeester, S., & Zwaan, R. A. (2012). Out of sight, out of mind: The presence of forensic evidence counts more than its absence. Acta Psychologica, 140(1), 96–100. https://doi.org/10.1016/j.actpsy.2012.02.006
Eerland, A., & Rassin, E. (2010). Biased evaluation of incriminating and exonerating (non)evidence. Psychology, Crime & Law, 18(4), 351–358. https://doi.org/10.1080/1068316x.2010.493889
Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191. https://doi.org/10.3758/bf03193146
Fazio, R. H., Sherman, S. J., & Herr, P. M. (1982). The feature-positive effect in the self-perception process: Does not doing matter as much as doing? Journal of Personality and Social Psychology, 42(3), 404–411. https://doi.org/10.1037/0022-3514.42.3.404
Frick, R. W. (1998). A better stopping rule for conventional statistical tests. Behavior Research Methods, Instruments, & Computers, 30(4), 690–697. https://doi.org/10.3758/bf03209488
Gigerenzer, G. (2008). Why Heuristics Work. Perspectives on Psychological Science, 3(1), 20–29. https://doi.org/10.1111/j.1745-6916.2008.00058.x
Guidry, J. P. D., Carlyle, K., Messner, M., & Jin, Y. (2015). On pins and needles: How vaccines are portrayed on Pinterest. Vaccine, 33(39), 5051–5056. https://doi.org/10.1016/j.vaccine.2015.08.064
Haase, N., Schmid, P., & Betsch, C. (2020). Impact of disease risk on the narrative bias in vaccination risk perceptions. Psychology & Health, 35(3), 346–365. https://doi.org/10.1080/08870446.2019.1630561
Habel, M. A., Liddon, N., & Stryker, J. E. (2009). The HPV Vaccine: A Content Analysis of Online News Stories. Journal of Women’s Health, 18(3), 401–407. https://doi.org/10.1089/jwh.2008.0920
Hauser, D. J., & Schwarz, N. (2015). It’s a Trap! Instructional Manipulation Checks Prompt Systematic Thinking on “Tricky” Tasks. SAGE Open, 5(2), 2158244015584617. https://doi.org/10.1177/2158244015584617
He, K., Mack, W. J., Neely, M., Lewis, L., & Anand, V. (2021). Parental Perspectives on Immunizations: Impact of the COVID-19 Pandemic on Childhood Vaccine Hesitancy. Journal of Community Health, 47(1), 39–52. https://doi.org/10.1007/s10900-021-01017-9
Hirshman, E., & Mulligan, N. (1991). Perceptual interference improves explicit memory but does not enhance data-driven processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17(3), 507–513. https://doi.org/10.1037/0278-7393.17.3.507
Hobson-West, P. (2003). Understanding vaccination resistance: Moving beyond risk. Health, Risk & Society, 5(3), 273–283. https://doi.org/10.1080/13698570310001606978
Hovland, C. I., & Weiss, W. (1953). Transmission of information concerning concepts through positive and negative instances. Journal of Experimental Psychology, 45(3), 175–182. https://doi.org/10.1037/h0062351
Jacobson, R. M., St. Sauver, J. L., & Finney Rutten, L. J. (2015). Vaccine Hesitancy. Mayo Clinic Proceedings, 90(11), 1562–1568. https://doi.org/10.1016/j.mayocp.2015.09.006
Johnson, N. F., Velásquez, N., Restrepo, N. J., Leahy, R., Gabriel, N., El Oud, S., Zheng, M., Manrique, P., Wuchty, S., & Lupu, Y. (2020). The online competition between pro- and anti-vaccination views. Nature, 582(7811), 230–233. https://doi.org/10.1038/s41586-020-2281-1
Jolley, D., & Douglas, K. M. (2014). The Effects of Anti-Vaccine Conspiracy Theories on Vaccination Intentions. PLOS ONE, 9(2), e89177. https://doi.org/10.1371/journal.pone.0089177
Jones, A. M., Omer, S. B., Bednarczyk, R. A., Halsey, N. A., Moulton, L. H., & Salmon, D. A. (2012). Parents’ Source of Vaccine Information and Impact on Vaccine Attitudes, Beliefs, and Nonmedical Exemptions. Advances in Preventive Medicine, 2012, e932741. https://doi.org/10.1155/2012/932741
Kardes, F. R., Sanbonmatsu, D. M., & Herr, P. M. (1990). Consumer Expertise and the Feature-Positive Effect: Implications For Judgment and Inference. ACR North American Advances, NA-17. https://www.acrwebsite.org/volumes/7039/volumes/v17/NA-17/full
Kata, A. (2012). Anti-vaccine activists, Web 2.0, and the postmodern paradigm – An overview of tactics and tropes used online by the anti-vaccination movement. Vaccine, 30(25), 3778–3789. https://doi.org/10.1016/j.vaccine.2011.11.112
Kaup, B., & Zwaan, R. A. (2003). Effects of negation and situational presence on the accessibility of text information. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(3), 439–446. https://doi.org/10.1037/0278-7393.29.3.439
Keelan, J., Pavri-Garcia, V., Tomlinson, G., & Wilson, K. (2007). YouTube as a Source of Information on Immunization: A Content Analysis. JAMA, 298(21), 2482–2484. https://doi.org/10.1001/jama.298.21.2482
Kerr, J. R., Freeman, A. L. J., Marteau, T. M., & van der Linden, S. (2021). Effect of Information about COVID-19 Vaccine Effectiveness and Side Effects on Behavioural Intentions: Two Online Experiments. Vaccines, 9(4), 379. https://doi.org/10.3390/vaccines9040379
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94(2), 211–228. https://doi.org/10.1037/0033-295x.94.2.211
Küçükali, H., Ataç, Ö., Palteki, A. S., Tokaç, A. Z., & Hayran, O. (2022). Vaccine Hesitancy and Anti-Vaccination Attitudes during the Start of COVID-19 Vaccination Program: A Content Analysis on Twitter Data. Vaccines, 10(2), 161. https://doi.org/10.3390/vaccines10020161
Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701–710. https://doi.org/10.1002/ejsp.2023
Leask, J., Chapman, S., & Cooper Robbins, S. C. (2010). “All manner of ills”: The features of serious diseases attributed to vaccination. Vaccine, 28(17), 3066–3070. https://doi.org/10.1016/j.vaccine.2009.10.042
Lehmann, B. A., de Melker, H. E., Timmermans, D. R. M., & Mollema, L. (2017). Informed decision making in the context of childhood immunization. Patient Education and Counseling, 100(12), 2339–2345. https://doi.org/10.1016/j.pec.2017.06.015
Lutkenhaus, R. O., Jansz, J., & Bouman, M. P. A. (2019a). Tailoring in the digital era: Stimulating dialogues on health topics in collaboration with social media influencers. DIGITAL HEALTH, 5, 2055207618821521. https://doi.org/10.1177/2055207618821521
Lutkenhaus, R. O., Jansz, J., & Bouman, M. P. A. (2019b). Mapping the Dutch vaccination debate on Twitter: Identifying communities, narratives, and interactions. Vaccine: X, 1, 100019. https://doi.org/10.1016/j.jvacx.2019.100019
MacDonald, N. E. (2015). Vaccine hesitancy: Definition, scope and determinants. Vaccine, 33(34), 4161–4164. https://doi.org/10.1016/j.vaccine.2015.04.036
MacDonald, N. E., Smith, J., & Appleton, M. (2012). Risk perception, risk management and safety assessment: What can governments do to increase public confidence in their vaccine system? Biologicals, 40(5), 384–388. https://doi.org/10.1016/j.biologicals.2011.08.001
Machingaidze, S., & Wiysonge, C. S. (2021). Understanding COVID-19 vaccine hesitancy. Nature Medicine, 27(8), 1338–1339. https://doi.org/10.1038/s41591-021-01459-7
Mandel, D. R. (2001). Gain-Loss Framing and Choice: Separating Outcome Formulations from Descriptor Formulations. Organizational Behavior and Human Decision Processes, 85(1), 56–76. https://doi.org/10.1006/obhd.2000.2932
Meppelink, C. S., Hendriks, H., Trilling, D., van Weert, J. C. M., Shao, A., & Smit, E. S. (2021). Reliable or not? An automated classification of webpages about early childhood vaccination using supervised machine learning. Patient Education and Counseling, 104(6), 1460–1466. https://doi.org/10.1016/j.pec.2020.11.013
Meppelink, C. S., Smit, E. G., Fransen, M. L., & Diviani, N. (2019). “I was Right about Vaccination”: Confirmation Bias and Health Literacy in Online Health Information Seeking. Journal of Health Communication, 24(2), 129–140. https://doi.org/10.1080/10810730.2019.1583701
Motta, M., & Stecula, D. (2023). The Effects of Partisan Media in the Face of Global Pandemic: How News Shaped COVID-19 Vaccine Hesitancy. Political Communication, 40(5), 505–526. https://doi.org/10.1080/10584609.2023.2187496
Nan, X., & Madden, K. (2012). HPV Vaccine Information in the Blogosphere: How Positive and Negative Blogs Influence Vaccine-Related Risk Perceptions, Attitudes, and Behavioral Intentions. Health Communication, 27(8), 829–836. https://doi.org/10.1080/10410236.2012.661348
Newman, J. P., Wolff, W. T., & Hearst, E. (1980). The feature-positive effect in adult human subjects. Journal of Experimental Psychology: Human Learning and Memory, 6(5), 630–650. https://doi.org/10.1037/0278-7393.6.5.630
Niccolai, L. M., & Pettigrew, M. M. (2016). The Role of Cognitive Bias in Suboptimal HPV Vaccine Uptake. Pediatrics, 138(4). https://doi.org/10.1542/peds.2016-1537
O’Keefe, D. J., & Nan, X. (2012). The Relative Persuasiveness of Gain- and Loss-Framed Messages for Promoting Vaccination: A Meta-Analytic Review. Health Communication, 27(8), 776–783. https://doi.org/10.1080/10410236.2011.640974
Pace, G. M., McCoy, D. F., & Nallan, G. B. (1980). Feature-Positive and Feature-Negative Learning in the Rhesus Monkey and Pigeon. The American Journal of Psychology, 93(3), 409–427. https://doi.org/10.2307/1422721
Patel, M. K., Goodson, J. L., Alexander, J. P., Kretsinger, K., Sodha, S. V., Steulet, C., Gacic-Dobo, M., Rota, P. A., McFarland, J., Menning, L., Mulders, M. N., & Crowcroft, N. S. (2020). Progress Toward Regional Measles Elimination — Worldwide, 2000–2019. Morbidity and Mortality Weekly Report, 69(45), 1700–1705. https://doi.org/10.15585/mmwr.mm6945a6
Paulussen, T. G. W., Hoekstra, F., Lanting, C. I., Buijs, G. B., & Hirasing, R. A. (2006). Determinants of Dutch parents’ decisions to vaccinate their child. Vaccine, 24(5), 644–651. https://doi.org/10.1016/j.vaccine.2005.08.053
Penţa, M. A., & Băban, A. (2018). Message Framing in Vaccine Communication: A Systematic Review of Published Literature. Health Communication, 33(3), 299–314. https://doi.org/10.1080/10410236.2016.1266574
Radvansky, G. A., & Copeland, D. E. (2010). Reading times and the detection of event shift processing. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(1), 210–216. https://doi.org/10.1037/a0017258
Randolph, H. E., & Barreiro, L. B. (2020). Herd Immunity: Understanding COVID-19. Immunity, 52(5), 737–741. https://doi.org/10.1016/j.immuni.2020.04.012
Rassin, E. (2014). Reducing the feature positive effect by alerting people to its existence. Learning & Behavior, 42(4), 313–317. https://doi.org/10.3758/s13420-014-0148-8
Rassin, E., Muris, P., Franken, I., & van Straten, M. (2008). The feature-positive effect and hypochondriacal concerns. Behaviour Research and Therapy, 46(2), 263–269. https://doi.org/10.1016/j.brat.2007.11.003
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124(3), 372–422. https://doi.org/10.1037/0033-2909.124.3.372
Rayner, K., Chace, K. H., Slattery, T. J., & Ashby, J. (2006). Eye Movements as Reflections of Comprehension Processes in Reading. Scientific Studies of Reading, 10(3), 241–255. https://doi.org/10.1207/s1532799xssr1003_3
Sainsbury, R. S., & Jenkins, H. M. (1967). Feature-positive effect in discrimination learning. Proceedings of the Annual Convention of the American Psychological Association, 2, 17–18.
Sallam, M. (2021). COVID-19 Vaccine Hesitancy Worldwide: A Concise Systematic Review of Vaccine Acceptance Rates. Vaccines, 9(2), 160. https://doi.org/10.3390/vaccines9020160
Sanders, J., Krieken, K. van, & Vandeberg, L. (2019). Ouders als helden: De moeilijkheden en mogelijkheden van vaccinatieverhalen in gezondheidscommunicatie. [Parents as heroes. The difficulties and possibilities of vaccination narratives in health education]. Tijdschrift Voor Taalbeheersing, 41(3), 485–513. https://doi.org/10.5117/tvt2019.3.004.sand
Scannell, D., Desens, L., Guadagno, M., Tra, Y., Acker, E., Sheridan, K., Rosner, M., Mathieu, J., & Fulk, M. (2021). COVID-19 Vaccine Discourse on Twitter: A Content Analysis of Persuasion Techniques, Sentiment and Mis/Disinformation. Journal of Health Communication, 26(7), 443–459. https://doi.org/10.1080/10810730.2021.1955050
Serpell, L., & Green, J. (2006). Parental decision-making in childhood vaccination. Vaccine, 24(19), 4041–4046. https://doi.org/10.1016/j.vaccine.2006.02.037
Shaaban, R., Ghazy, R. M., Elsherif, F., Ali, N., Yakoub, Y., Aly, M. O., ElMakhzangy, R., Abdou, M. S., McKinna, B., Elzorkany, A. M., Abdullah, F., Alnagar, A., ElTaweel, N., Alharthi, M., Mohsin, A., Ordóñez-Cruickshank, A., Toniolo, B., Grafolin, T., Aye, T. T., … Kamal, A. (2022). COVID-19 Vaccine Acceptance among Social Media Users: A Content Analysis, Multi-Continent Study. International Journal of Environmental Research and Public Health, 19(9), 5737. https://doi.org/10.3390/ijerph19095737
Smith, N. J., & Levy, R. (2013). The effect of word predictability on reading time is logarithmic. Cognition, 128(3), 302–319. https://doi.org/10.1016/j.cognition.2013.02.013
Tversky, A., & Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. Science, 185(4157), 1124–1131. https://doi.org/10.1126/science.185.4157.1124
Tversky, A., & Kahneman, D. (1985). The Framing of Decisions and the Psychology of Choice. In G. Wright (Ed.), Behavioral Decision Making (pp. 25–41). Springer US. https://doi.org/10.1007/978-1-4613-2391-4_2
van der Linden, S. L., Clarke, C. E., & Maibach, E. W. (2015). Highlighting consensus among medical scientists increases public support for vaccines: Evidence from a randomized experiment. BMC Public Health, 15(1), 1207. https://doi.org/10.1186/s12889-015-2541-4
Vandeberg, L., Meppelink, C. S., Sanders, J., & Fransen, M. L. (2022). Facts Tell, Stories Sell? Assessing the Availability Heuristic and Resistance as Cognitive Mechanisms Underlying the Persuasive Effects of Vaccination Narratives. Frontiers in Psychology, 13. https://doi.org/10.3389/fpsyg.2022.837346
Visschers, V. H. M., Meertens, R. M., Passchier, W. W. F., & de Vries, N. N. K. (2009). Probability information in risk communication: A review of the research literature. Risk Analysis: An Official Publication of the Society for Risk Analysis, 29(2), 267–287. https://doi.org/10.1111/j.1539-6924.2008.01137.x
Wells, G. L., & Lindsay, R. C. (1980). On estimating the diagnosticity of eyewitness nonidentifications. Psychological Bulletin, 88(3), 776–784. https://doi.org/10.1037/0033-2909.88.3.776
World Health Organization. (2019). Ten health issues WHO will tackle this year. https://www.who.int/news-room/spotlight/ten-threats-to-global-health-in-2019
Yousaf, M., Hassan Raza, S., Mahmood, N., Core, R., Zaman, U., & Malik, A. (2022). Immunity debt or vaccination crisis? A multi-method evidence on vaccine acceptance and media framing for emerging COVID-19 variants. Vaccine, 40(12), 1855–1863. https://doi.org/10.1016/j.vaccine.2022.01.055
Zimmerman, R. K., Wolfe, R. M., Fox, D. E., Fox, J. R., Nowalk, M. P., Troy, J. A., & Sharp, L. K. (2005). Vaccine Criticism on the World Wide Web. Journal of Medical Internet Research, 7(2), e369. https://doi.org/10.2196/jmir.7.2.e17
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material