Evidence-based vaccination communication aims to support people in making informed decisions regarding vaccination. It is therefore important to learn how vaccination information is processed and how it might be biased. One potentially relevant bias that is overlooked in the vaccination literature is the feature positive effect (FPE), the phenomenon that people experience greater difficulty processing nonoccurring events than occurring events, which impacts judgment and decision making. The present study adopts an experimental design with sequential testing rules to examine a potential FPE for vaccination information processing. The results convincingly demonstrate that vaccination-related events described as nonoccurring (e.g., no side effects after vaccination) versus occurring (e.g., side effects after vaccination) indeed result in lower recall and are perceived as less important in evaluating the vaccine. The results regarding processing time remain inconclusive. These findings might help explain the appeal of vaccination-critical information and suggest that emphasizing what does happen as a result of vaccination, rather than what does not, can help debias the processing of evidence-based vaccination information.
Introduction
Vaccines are widely acknowledged as a safe and effective means to reduce morbidity and mortality from infectious diseases. However, public confidence in vaccines has been decreasing (Dubé et al., 2015; MacDonald, 2015), and vaccine hesitancy has been identified as a major threat to global health (World Health Organization, 2019). Vaccine-preventable diseases like measles, which were nearly eradicated in developed countries a decade ago, have reemerged in both the US and Europe due to insufficient immunization coverage (Patel et al., 2020). More recently, the emergence of COVID-19 has re-emphasized the importance of broad vaccine acceptance (Randolph & Barreiro, 2020) and the necessity of improving our understanding of vaccine hesitancy (He et al., 2021; Machingaidze & Wiysonge, 2021; Sallam, 2021).
Vaccine hesitancy can arguably best be defined as indecisiveness regarding the acceptance or rejection of a vaccination (Bussink-Voorend et al., 2022). The main idea is that vaccine hesitancy is a predictor of vaccine rejection. To reduce vaccine hesitancy, strategic communication (e.g., from health professionals, governments, and scientists) provides evidence-based information on the scientific consensus regarding vaccinations, to help people make well-informed, rational vaccination decisions. However, a large majority of decisions regarding both the acceptance and the rejection of vaccinations do not qualify as informed (i.e., deliberate, value-consistent, and based on knowledge; Lehmann et al., 2017). This suggests that vaccination decision making is not merely a deliberate and analytical process but is also susceptible to other influences.
Risks related to vaccine-preventable diseases and vaccine adverse events can be difficult to comprehend (Visschers et al., 2009) and possible outcomes of any given decision to (not) vaccinate remain uncertain (Serpell & Green, 2006). For such decisions under uncertainty, people often rely on heuristics, i.e., cognitive shortcuts in which (part of) the complex information is ignored in order to reach a decision (Gigerenzer, 2008; Tversky & Kahneman, 1974). Such heuristics are often useful (Gigerenzer, 2008; van der Linden et al., 2015). However, heuristics sometimes lead to systematic and severe errors in judgment, referred to as cognitive biases (Tversky & Kahneman, 1974). Vaccination decisions are arguably susceptible to such biases (Ball et al., 1998; Jacobson et al., 2015; MacDonald et al., 2012; Niccolai & Pettigrew, 2016).
Over the years, multiple heuristics and biases have been studied in the context of vaccination judgment and decision making, such as the compression heuristic (Zimmerman et al., 2005), availability heuristic (Vandeberg et al., 2022), confirmation bias (Meppelink et al., 2019), and omission bias (Brown et al., 2010). However, one highly relevant but relatively unknown bias has been overlooked: the feature positive effect (FPE; a term coined by Sainsbury & Jenkins, 1967). The FPE refers to the relative difficulty humans (and other animals) have processing information about events that do not occur compared to events that do occur. This discrepancy in processing difficulty results in underweighing the informational value of nonoccurring events (Eerland et al., 2012; Eerland & Rassin, 2010; Newman et al., 1980; Wells & Lindsay, 1980). We argue that the (non)occurrence of events is an inherent part of informing about and understanding the risks involved in vaccination decisions. More specifically, the (non)occurrence of vaccine acceptance (i.e., whether or not a vaccination is administered) affects whether one might expect the (non)occurrence of a vaccine-preventable disease or the (non)occurrence of vaccine adverse events. Empirical work on FPE shows that people have more difficulty processing nonoccurrences than occurrences, and this asymmetry in processing difficulty impacts their judgment, which can have important implications. Therefore, investigating the FPE is essential to better understand how information processing shapes vaccination decision making.
Vaccine information
While health professionals are generally viewed as an important source of information about vaccination (Ames et al., 2017), the internet is frequently used to search for additional or “independent” information (Downs et al., 2008; Jones et al., 2012). This may be due to beliefs that health professionals disregard possible harms and mainly provide information about the benefits of vaccination (Jones et al., 2012; Paulussen et al., 2006). Although online vaccination content regularly takes a positive or neutral vaccination stance (Ache & Wallace, 2008; Habel et al., 2009; Keelan et al., 2007), vaccination-critical content that is not based on scientific evidence is abundant (Davies et al., 2002; Guidry et al., 2015; Jolley & Douglas, 2014) and much more effective in reaching and activating (vaccine-hesitant) populations (Johnson et al., 2020; Lutkenhaus et al., 2019b). Even brief exposures to such vaccination-critical information can decrease the perceived risk of non-vaccination, increase perceived vaccination risks, and negatively affect vaccination attitudes and intentions (Betsch et al., 2010; Jolley & Douglas, 2014; Nan & Madden, 2012). Conversely, the impact of evidence-based vaccination-supporting information is not so clear-cut. Although some findings show that scientific (consensus) information can help people correct misperceptions regarding vaccination (van der Linden et al., 2015), other findings suggest that scientific information does not have much impact on people’s vaccination perceptions and intentions (Kerr et al., 2021; Nan & Madden, 2012).
On the internet, vaccination-critical information differs from vaccination-supporting information in several ways. First, vaccination-critical information employs a wide variety of arguments ranging from disputing science to safety concerns, conspiracy theories, and alternative medicine (Johnson et al., 2020; Kata, 2012). Conversely, vaccination-supporting argumentation is more homogeneous (Johnson et al., 2020; Meppelink et al., 2021) and based on the repetition of facts, figures, and scientific studies (Lutkenhaus et al., 2019b). Second, vaccination-critical information often appears in an emotional, narrative format describing people’s lived experiences with vaccinations (Bean, 2011; Guidry et al., 2015; Haase et al., 2020; Sanders et al., 2019), whereas vaccination-supporting information generally adopts a more impersonal, expository format highlighting scientific research (Guidry et al., 2015; Lutkenhaus et al., 2019a, 2019b; Sanders et al., 2019). Third, and most important for the present study, the two types of vaccination information appear to differ in their presentation of risk. Vaccination-critical sources are likely to link vaccination to the occurrence of adverse outcomes, including illness, idiopathic diseases such as autism, disability, and death (Bean, 2011; Leask et al., 2010; Zimmerman et al., 2005). Conversely, when vaccination-supporting sources discuss the topic of vaccination risk, they emphasize the lowered risk for illness when a vaccination is administered (Hobson-West, 2003), thereby highlighting the nonoccurrence of an outcome. As psychological research has shown that information about nonoccurrences is often underweighed, further examination of the (non)occurrence of events in communication about vaccination is warranted.
Feature positive effect
The feature positive effect (FPE) refers to the tendency to experience more difficulty in processing (i.e., recognizing, interpreting, storing, and retrieving) nonoccurrences than occurrences (Allison & Messick, 1988). As a result, occurring events are considered more important in judgment and decision making than nonoccurring events (Fazio et al., 1982; Rassin et al., 2008), even when these nonoccurrences may have important implications in, for instance, clinical (Rassin et al., 2008), judicial (Eerland et al., 2012; Eerland & Rassin, 2010), educational (Newman et al., 1980), or marketing (Kardes et al., 1990) domains. This bias occurs not only in adults, but also in children (Bitgood et al., 1976) and animals (Pace et al., 1980; Sainsbury & Jenkins, 1967). The FPE is relatively understudied, and the few existing studies vary widely with regard to their approach to the subject. That is, the underweighing of described nonoccurrences has been studied in a variety of paradigms and is shown to manifest in reading times (Eerland et al., 2012), recall (Eerland et al., 2012), learning (Hovland & Weiss, 1953; Newman et al., 1980), (confidence in) performance (Rassin, 2014), hypothesis testing (Cherubini et al., 2013; also see Klayman & Ha, 1987), self-perception (Fazio et al., 1982), and the evaluation of forensic evidence (Eerland et al., 2012; Eerland & Rassin, 2010).
For instance, Cherubini et al. (2013) investigated FPE in relation to abstract problem solving by asking participants to estimate from which deck of lettered cards a certain card was most likely to originate, based on the letters on the card and the known content of the decks. Findings showed that, in this task, participants overestimated the importance of information conveyed by the occurring letters on the card compared to information conveyed by the nonoccurring letters. Additionally, participants were more likely to solve problems correctly and felt more confident when the presence rather than the absence of cues directed participants to the solution. Furthermore, Eerland and colleagues (2012; Eerland & Rassin, 2010) studied FPE in a more applied setting relating to concrete, real-life situations. Participants were asked to judge the guilt of a crime suspect based on described (non)occurring diagnostic forensic evidence. Results indicated that participants had more difficulty processing nonoccurring events (e.g., “fingerprints of the suspect were absent on the victim”) than occurring events (e.g., “fingerprints of the suspect were present on the victim”), as indicated by slower reading times (Eerland et al., 2012). Information that is processed more easily (referred to as “processing fluency”) is assumed to be remembered better (Atkinson & Shiffrin, 1968; Hirshman & Mulligan, 1991). Indeed, the findings showed that nonoccurring events were also less likely to be recalled (Eerland et al., 2012) and taken into account when participants decided on the guilt of a crime suspect (Eerland et al., 2012; Eerland & Rassin, 2010) than occurring events.
Based on the illustrated FPE literature, we expect that the nonoccurrence of vaccination-related events is more difficult to process, more difficult to remember, and considered less important in evaluating the described vaccine than the occurrence of vaccination-related events. This results in the following hypotheses:
Information about nonoccurring (versus occurring) vaccination consequences results in (a) slower reading times, (b) lower recall, (c) lower perceived importance.
The FPE perspective is different from, and can be orthogonal to, for instance, a (gain-loss) framing perspective on vaccination communication (see e.g., O’Keefe & Nan, 2012; Penţa & Băban, 2018). That is, occurrences and nonoccurrences can conceptually both reflect a gain-framed outcome (i.e., “wellbeing” and “no illness”), but could also both reflect a loss-framed outcome (e.g., “illness” and “no wellbeing”). Potentially confounding conceptual issues like these are further discussed under “stimulus materials and design”.
Methods
Sample
Participants were recruited through the scientific crowdsourcing community Prolific Academic (https://www.prolific.co/) and considered eligible for participation when they (1) were at least 18 years old, (2) had an approval rate of ≥ 95% for previous work done through Prolific, (3) were fluent, native, primary speakers of the English language, and (4) did not have a language-related disorder. Exclusion criteria are addressed in the preprocessing stage of the statistical analysis. Participants were paid the equivalent of $10 per hour for their participation. The experiment took approximately 15 minutes per participant.
To not waste any resources, we adopted a sequential testing paradigm. This allows for a well-powered study without recruiting more participants than necessary. The rationale behind sequential testing is that published effect sizes – and by extension, power analyses – are often inaccurate (Lakens, 2014). Rather than using the commonly adopted fixed-sample stopping rule at a sample size that is predetermined by a power analysis, it is less problematic and more practical and efficient to adopt a sequential stopping rule (Frick, 1998). A sequential stopping rule indicates that the statistical analyses will be performed during data collection at intermediate sample sizes. With this approach, data collection can be terminated whenever the results convincingly show that the hypothesized effect is either present or extremely unlikely (Lakens, 2014). The sequential stopping rule that we adopted is COAST (i.e., composite open adaptive sequential test; Frick, 1998). The COAST method dictates that, after reaching a predefined minimum sample size for the first statistical test (Nmin), the researcher can perform statistical analyses for subsequent sample sizes during data collection (e.g., Nmin+50; Nmin+100) while adhering to the following rules: If the outcome of the statistical test is 1) p < .01, data collection is terminated and the null hypothesis is rejected; 2) p > .36, data collection is terminated and the null hypothesis is not rejected; 3) .01 < p < .36, more participants are tested. Monte Carlo computer simulations show that the overall alpha level of .05 (and therefore the Type I error) is preserved, provided that these rules are followed (Frick, 1998).
The minimum sample size for the current within-subjects experiment was set at Nmin = 150. This is the minimum sample size we were willing to accept if a p-value above .36 (or below .01) emerged, which would force us to stop testing. However, for a p between .01 and .36, data collection would resume. COAST assumes that data collection and intermediate testing continues until a decision is reached (i.e., until p < .01 or p > .36). However, considering experimental costs (money and time) and loosely informed by a power analysis (Faul et al., 2007) indicating that a sample size of 324 should be sufficiently powered (using G*Power software for the statistical F-test “ANOVA – repeated measures, within factors”, hypothesizing a very small true effect with parameters set at partial η² = 0.01, α = 0.05, power = 0.95, for one group and two measurements), the point at which we would stop testing was set at Nmax = 350. The spacing between sequential analyses was set at 100 (Nmin+100; Nmin+200).
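For concreteness, the stopping logic described above can be summarized in a short sketch. This is not the authors' code; it assumes a hypothetical helper `run_test(n)` that returns the p-value of the confirmatory test computed on the first n participants.

```python
# Sketch of the COAST stopping rule (Frick, 1998) as applied in this study;
# `run_test` is a hypothetical helper returning the p-value at sample size n.
N_MIN, N_MAX, STEP = 150, 350, 100
P_REJECT, P_RETAIN = 0.01, 0.36  # COAST decision thresholds

def coast_decision(run_test):
    for n in range(N_MIN, N_MAX + 1, STEP):  # interim looks at n = 150, 250, 350
        p = run_test(n)
        if p < P_REJECT:
            return n, "stop: reject H0"
        if p > P_RETAIN:
            return n, "stop: do not reject H0"
    # p stayed between .01 and .36 at every look, and the budgeted Nmax is reached
    return N_MAX, "stop at Nmax: inconclusive"
```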
Stimulus materials and design
To investigate the impact of (non)occurrences on the processing, recall, and perceived importance of vaccine information and minimize the impact of existing knowledge and beliefs, we presented participants with a brief, fabricated news article on a new (fictional) virus, the “blue virus” (see Appendix A). After reading this news article, participants were presented with information on (non)occurring consequences of injections with the vaccine against the blue virus. This information was presented in the form of 16 news headlines (cf. Meppelink et al., 2019), half of which described occurring vaccination consequences and half described nonoccurring consequences. To improve ecological validity of the findings, the content maximally resembled the issues presented in natural vaccination information on the internet. However, some issues arose in the construction of the materials.
To start, natural vaccination-critical and vaccination-supporting texts often describe different negative health outcomes, which might confound our design. That is, the occurrence of idiopathic diseases like autism in vaccination-critical texts likely results in completely different associations, beliefs, and opinions across individuals than the occurrence of vaccine-preventable diseases like measles in vaccination-supporting texts. For this reason, described health outcomes were kept agnostic with respect to the underlying disease and reflected identical consequences across conditions (e.g., patients either did or did not get a fever after vaccination).
However, using only negatively valenced descriptors (e.g., fever) results in occurrences describing a negatively valenced outcome (i.e., fever is present) and nonoccurrences describing a positively valenced outcome (i.e., fever is absent). This distinction can be taken to reflect the difference between loss versus gain framed outcomes, respectively (Tversky & Kahneman, 1985; see also Mandel, 2001), which confounds any FPE effects. Therefore, stimulus counterparts were designed to include both negatively and positively valenced descriptors, to balance gain and loss framed outcomes, and – additionally – to balance the type of described consequences (physical versus social/emotional) across event conditions.
Furthermore, we did not use any negation operators (e.g., the word “not”) to describe nonoccurrences, thereby following the procedure by Eerland et al. (2012). The reason for this is twofold. First, research on language comprehension has shown that people find it more difficult to remember information that is under the scope of a negation operator (see Kaup & Zwaan, 2003). Second, adding a negation operator to nonoccurrences would make these headlines longer than the headlines describing occurrences. Because we were interested in processing times we aimed to rule out this possible confound. See Table 1 for all stimulus materials and Appendix A for the cover story and counterbalancing lists.
| Item | Occurrence | Nonoccurrence |
|---|---|---|
1 | New study confirms link between vaccine against blue virus and high fever (neg_loss_physical) | New study disproves link between vaccine against blue virus and high fever (neg_gain_physical) |
2 | Does the vaccine against blue virus cause insomnia? Experts say yes (neg_loss_physical) | Does the vaccine against blue virus cause insomnia? Experts say no (neg_gain_physical) |
3 | New evidence confirms that taking the vaccine against blue virus leads to cognitive malfunction (neg_loss_physical) | New evidence rejects that taking the vaccine against blue virus leads to cognitive malfunction (neg_gain_physical) |
4 | First recipients of vaccine against blue virus report presence of anxiety-like side effects (neg_loss_soc/emo) | First recipients of vaccine against blue virus report absence of anxiety-like side effects (neg_gain_soc/emo) |
5 | FDA: ‘presence of clotting problems as a result of vaccination against blue virus’ (neg_loss_physical) | FDA: ‘absence of clotting problems as a result of vaccination against blue virus’ (neg_gain_physical) |
6 | Mathematical modelers show that increases in hospitalization are present after first vaccination wave (neg_loss_physical) | Mathematical modelers show that increases in hospitalization are absent after first vaccination wave (neg_gain_physical) |
7 | Suggestion that vaccine against blue virus increases mortality accepted (neg_loss_physical) | Suggestion that vaccine against blue virus increases mortality rejected (neg_gain_physical) |
8 | Social stigma after taking the vaccine? “Yes, friends condone my decision to vaccinate.” (neg_loss_soc/emo) | Social stigma after taking the vaccine? “No, friends support my decision to vaccinate.” (neg_gain_soc/emo) |
9 | Vaccine against blue virus promotes a feeling of safety in recipients (pos_gain_soc/emo) | Vaccine against blue virus obstructs a feeling of safety in recipients (pos_loss_soc/emo) |
10 | Does taking the vaccine against blue virus give you back your social life? Recipients say: Yes! (pos_gain_soc/emo) | Does taking the vaccine against blue virus give you back your social life? Recipients say: No! (pos_loss_soc/emo) |
11 | Suggestion that vaccine against blue virus increases mobility is confirmed in new study (pos_gain_soc/emo) | Suggestion that vaccine against blue virus increases mobility is rejected in new study (pos_loss_soc/emo) |
12 | Evidence is found that vaccine against blue virus is effective for young children (pos_gain_physical) | Evidence is lacking that vaccine against blue virus is effective for young children (pos_loss_physical) |
13 | World leaders decide that traveling is now permitted after vaccination against blue virus (pos_gain_soc/emo) | World leaders decide that traveling is still prohibited after vaccination against blue virus (pos_loss_soc/emo) |
14 | Voters say “Taking the jab against blue virus reinstates sense of freedom” (pos_gain_soc/emo) | Voters say “Taking the jab against blue virus diminishes sense of freedom” (pos_loss_soc/emo) |
15 | Access to some public facilities is now granted after taking jab against blue virus (pos_gain_soc/emo) | Access to some public facilities is still denied after taking jab against blue virus (pos_loss_soc/emo) |
16 | Does the vaccine against blue virus protect from illness? CDC says yes (pos_gain_physical) | Does the vaccine against blue virus protect from illness? CDC says no (pos_loss_physical) |
Note. Words in bold reflect the (non)occurrence of an event. As denoted in parentheses, items 1-8 have negative event descriptors (in italics), with occurrences presenting loss outcomes and nonoccurrences presenting gain outcomes. Items 9-16 have positive event descriptors, with occurrences presenting gain outcomes and nonoccurrences presenting loss outcomes. For each event condition, half of the items describe physical consequences and half describe social/emotional consequences.
This resulted in a one-factorial (event) within-subjects design with eight headlines per condition (occurring vs nonoccurring), in which descriptor and outcome frames were counterbalanced within participants and items were counterbalanced between participants. All headlines were constructed in the grammatical style of a newspaper headline, had approximately the same length, and were presented in the same font.
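As an illustration of this counterbalancing logic, the sketch below builds two hypothetical lists in which each item from Table 1 appears in its occurring version on one list and in its nonoccurring version on the other, so that every participant reads eight headlines per event condition. The odd/even assignment is ours for illustration only; the actual lists are given in Appendix A.

```python
# Illustrative construction of two counterbalancing lists; the assignment of
# items to versions is hypothetical (see Appendix A for the actual lists).
items = range(1, 17)  # item numbers from Table 1

list_a = [(i, "occurrence" if i % 2 == 1 else "nonoccurrence") for i in items]
list_b = [(i, "nonoccurrence" if i % 2 == 1 else "occurrence") for i in items]

# Each list contains 8 occurring and 8 nonoccurring headlines
assert sum(version == "occurrence" for _, version in list_a) == 8
assert sum(version == "occurrence" for _, version in list_b) == 8
```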
Measures
Reading times. For each headline, time-on-screen in milliseconds was measured as a proxy of reading time (RT). We preregistered to exclude reading times shorter than 100 ms from data analysis, based on the assumption that people are unable to read a headline that fast for comprehension. Given that the minimum reading time was 357 ms, no data were excluded based on this criterion. Similarly, we preregistered that reading times above 8 sec would be removed, as these likely do not reflect situations in which the sentence was read as quickly as possible, as instructed (for a meta-analysis on average reading times, see Brysbaert, 2019). Next, outliers above or below 2.5 standard deviations (SD) of the participant mean would be discarded as missing. However, the data of the first 150 participants showed many reading times above 8 seconds: 83 participants (55.33%) had at least one RT > 8 sec, with 20 participants (13.33%) showing RTs > 8 sec in at least half of the trials. Adhering to our preregistered plan would have resulted in the exclusion of 16.63% of the trials, and an even greater overall exclusion percentage, as 20 participants would have too few valid trials to be included. We deemed our preregistered exclusion strategy undesirable, because excluding such a large part of the data would no longer reflect the removal of genuine outliers.
We therefore examined what would be a reasonable alternative given the data. We decided to replace the two suggested steps for outlier removal (8 sec and 2.5 SD) with one step: excluding RTs above or below 2 SD of the participant median. This method no longer includes a hard cut-off, which is more appropriate given the large variability in reading times between participants. Also, the median is less sensitive to any extreme outliers that may occur within participants across trials and is therefore appropriate given the absence of a hard cut-off. Inspection of the data using this alternative method shows that this resulted in 6.17% trial exclusion, which we deemed acceptable for outlier removal.1
Reading times were right-skewed (i.e., with a skewness value above 1) as expected. Therefore, log transformations were performed as preregistered. Next, means and SDs were calculated per participant per event condition (i.e., based on 8 occurring events versus 8 nonoccurring events) for data analysis. Longer reading times will be taken to indicate greater processing difficulty (in line with, e.g., Rayner et al., 2006).
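A minimal sketch of this reading-time preprocessing is given below. It assumes a long-format pandas DataFrame (here called `rt_data`) with columns `participant`, `event` (occurring vs. nonoccurring), and `rt` in milliseconds; it is not the authors' preprocessing script, and the column names are our own.

```python
# Sketch of the adopted outlier removal (± 2 SD around the participant median),
# log transformation, and per-participant condition means; column names are
# assumptions, not taken from the authors' materials.
import numpy as np
import pandas as pd

def preprocess_reading_times(rt_data: pd.DataFrame) -> pd.DataFrame:
    def trim(participant_trials: pd.DataFrame) -> pd.DataFrame:
        med = participant_trials["rt"].median()
        sd = participant_trials["rt"].std()
        keep = participant_trials["rt"].between(med - 2 * sd, med + 2 * sd)
        return participant_trials[keep]

    trimmed = rt_data.groupby("participant", group_keys=False).apply(trim)
    trimmed = trimmed.assign(log_rt=np.log(trimmed["rt"]))  # correct right skew
    # Mean log RT per participant per event condition (8 trials each)
    return (trimmed.groupby(["participant", "event"])["log_rt"]
                   .mean().reset_index(name="mean_log_rt"))
```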
Free recall. Participants were asked to recall as many presented news headlines as possible (i.e., “Please think back to the news headlines that were presented to you earlier. Try to recall as many of these headlines as possible. Please do the best job you can. List the headlines (or whatever gist/words you can remember from them) below, with one headline per text box. Your responses must be constructed using words. The use of arrows or other symbols to annotate relationships is not allowed”). As participants did not report verbatim descriptions of the headlines, recalled headlines were coded by two coders (GM and LV). A headline was considered correctly recalled when both the descriptor (negative vs. positive) and event (occurring vs. nonoccurring) were correctly reported. Responses did not have to be verbatim (e.g., “New study confirms link between vaccine against blue virus and high fever”) as long as the gist of both descriptor and event were correct (e.g., “Getting the vaccine is correlated with getting a fever”). A correctly recalled descriptor-event combination resulted in a recall score of 1; any incorrectly recalled descriptor-event combination resulted in a recall score of 0. A codebook was constructed to instruct coders on the assessment of the responses (see Appendix B). Coders first coded the same 10% of the data, which resulted in almost perfect intercoder reliability (Cohen’s kappa = .97). Discrepancies were discussed among coders until consensus was reached and the codebook was adjusted accordingly, after which the two coders coded the remainder of the recall data. Means and standard deviations were calculated per participant per event condition, resulting in means between 0 and 1 that reflect the proportion of correctly recalled information. Lower proportions indicate greater recall difficulty.
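The intercoder-reliability check and the per-condition recall proportions can be sketched as follows. The data frame and column names are hypothetical; this is not the authors' coding script.

```python
# Sketch of the recall scoring summary; `double_coded` holds the 10% subset
# coded by both coders (one binary column per coder), and `recall` holds the
# final scores (1 = descriptor and event correctly recalled, 0 = otherwise).
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def intercoder_kappa(double_coded: pd.DataFrame) -> float:
    # Agreement between the two coders on the double-coded subset
    return cohen_kappa_score(double_coded["coder_gm"], double_coded["coder_lv"])

def recall_proportions(recall: pd.DataFrame) -> pd.DataFrame:
    # Proportion of correctly recalled headlines per participant per condition
    return (recall.groupby(["participant", "event"])["correct"]
                  .mean().reset_index(name="prop_recalled"))
```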
Perceived importance. Participants were presented randomly with all news headlines. For each headline, they were asked “How important do you consider this information when evaluating the vaccine against the blue virus?” on a scale from 0 (not at all important) to 7 (extremely important). Next, we calculated the means and standard deviations per participant per event condition. Lower means indicate lower perceived importance in judgment.
Background variables. Demographic (i.e., age, gender, education level) and psychographic information were assessed. Psychographic information consisted of two constructs. Vaccine hesitancy was assessed using three items, asking “Please think back to the first time you were eligible for getting the COVID-19 vaccine. In making the decision whether to take the vaccine, to what extent have you felt 1) hesitancy; 2) doubt; 3) indecisiveness about getting vaccinated?” (cf. Bussink-Voorend et al., 2022), to be answered on a slider from 0 (not at all hesitant) to 100 (extremely hesitant). The vaccine attitude item was formulated as follows: “How positive or negative would you consider yourself to be about the COVID-19 vaccine?”, to be answered on a slider from –100 (extremely negative) to +100 (extremely positive). These two constructs assessed a priori vaccine beliefs. No existing scales were used, as existing scales are often heterogeneous and confounded with various related constructs, which hinders clean measurement and comparability of study results (Bussink-Voorend et al., 2022). COVID-19 was taken as a case, as vaccination beliefs are highly context-specific (MacDonald, 2015) and as the COVID pandemic best resembles the situation described in the blue virus vignette.
Attention checks. Two attention checks were performed to ensure participants’ serious participation. First, an instructional manipulation check (cf. Hauser & Schwarz, 2015) assessed whether people carefully read instructions (i.e., “Which sports do you like to perform?” with a comment in the instruction that they should not select the multiple-choice sports options provided, but they should select the “other” option and type “I have read the instructions”). Second, a comprehension question checked whether people had attended to the cover story (i.e., “Which symptoms were mentioned in the story about the blue virus? Name at least one”), with at least one correctly mentioned symptom resulting in a satisfactory outcome of this check. We preregistered to exclude participants who failed both attention checks from further data analysis.
Procedure
The Ethics Committee of the Faculty of Social Sciences (ECSS) of Radboud University reviewed a research line that included the proposed study and concluded that there were no formal objections2 (case number ECSW-2021-072). On the Prolific Academic website, recruited participants were redirected to the online experiment platform to perform the experiment (Qualtrics™, with a redirection to Gorilla™ for reliable measurement of the response times). After giving informed consent, participants were instructed to first provide information on their demographic and psychographic characteristics, including the instructional manipulation check. Next, they were presented with the cover story that provided the context for the experimental materials (i.e., the news headlines). They were instructed to read the text attentively and imagine as vividly as possible that the described scenario was true. The scenario described a future in which a fictional infectious disease with remarkable symptoms (e.g., blueish hue on face and torso, loss of sight) had emerged, for which a new vaccine had been discovered recently.3 Although the described situation of course bears resemblance to recent COVID-19 outbreaks, the disease characteristics were described in a way that reduced resemblance to real-world events or infectious diseases that may be familiar to participants. Time-on-screen was recorded for descriptive purposes.
After reading the fictional scenario, participants first received a comprehension question about the scenario as part of the attention checks. Next, they were instructed that they would be presented with 16 sequential news headlines about the new vaccine, that they should read these sentences as accurately and quickly as possible because they would receive questions about them later, and that they should press the spacebar on their keyboard to move to the next headline. Participants started with 3 practice trials to familiarize themselves with the procedure, after which the 16 experimental trials were presented in random order (8 describing occurring vaccine consequences and 8 describing nonoccurring vaccine consequences). Each trial started with a fixation cross that was presented for 500 ms, after which the news headline was presented. Once the participants pressed the spacebar the trial was terminated, after which they were presented with an empty screen for 1000 ms until the next trial started. Afterwards, participants were asked to recall the news headlines, to evaluate the vaccine (-100 = extremely negative, + 100 = extremely positive) as an introduction to the perceived importance question,4 and to rate for each headline the perceived importance in evaluating the described vaccine. Next, open-ended questions asked 1) about their perceived purpose of the experiment and 2) to note any comments or observations they might have regarding the experiment. Finally, they were debriefed and thanked for their participation.
Statistical analyses
By choosing a sequential stopping method, it was likely that the analyses would be performed multiple times. The analyzing sequence would start at Nmin = 150 and repeat when every subsequent 100 participants were reached, until the null hypothesis could be rejected or not according to the COAST method or until Nmax = 350. The analyses performed at each sequence were identical.
The preprocessing phase consisted of three parts. First, outliers in reading times were removed and recall was coded as described under “measures”. Second, participants were excluded from further analyses if a) > 50% of reading times was missing in at least one event condition (n = 0), b) they scored 0 on recall and/or reported two or more nonsense memories (n = 2), c) they incorrectly responded to both attention checks (n = 4), or d) the open-ended questions showed that they either guessed the experimental purpose or reported not understanding a part of the experiment that is directly relevant to our FPE outcome measures (n = 0). Participants were still paid for their participation when excluded, unless they incorrectly responded to both attention checks. We preregistered that excluded participants would only be replaced with newly recruited participants if, after the last batch of data (Nrecruited = 350), no decision had been reached and the maximum sample size or budget had not yet been met, thus when Nincluded < 350. However, we decided to immediately replace excluded participants for each batch, to adhere to the exact preregistered sample sizes. Inclusion or exclusion of these additional participants did not alter result patterns. Third, descriptive statistics regarding the sample characteristics were collected to be reported. Here, descriptive statistics for the dependent variables were also checked to allow for the discovery of any potential floor or ceiling effects, even though these were not expected. For the first batch of 150 participants, descriptives showed no indication of floor or ceiling effects.
After preprocessing, the analysis commenced as preregistered, with any deviations mentioned in footnotes.5 First, statistical assumptions for repeated measures ANOVAs were checked. Because we used a random sampling method, a violation of independence of observations is unlikely. As expected, the normality assumption was violated for the reading times (indicated by skewness and kurtosis scores greater than 1) but not the other dependent variables. Reading times were log transformed to approach normality. The assumption of sphericity was met for all dependent variables. Second, we checked whether counterbalancing lists resulted in different reading times, recall scores, and perceived importance scores using a one-way ANOVA for each dependent variable with counterbalancing list as a between-subjects factor. This was not the case (all p-values ≥ .20), which is why counterbalancing list was not included as a between-subjects variable in the analyses. Third, three repeated-measures ANOVAs were performed as the main confirmatory analyses, with event condition as independent variable, reading time, recall, and perceived importance as respective dependent variables. The hypotheses that were tested are that headlines describing nonoccurring vaccination consequences result in (a) longer reading times, (b) lower recall, (c) lower self-reported perceived importance in evaluating the vaccine than texts describing occurring vaccination consequences. This would be reflected by a significant main effect of event condition (p < .01, see COAST) in the hypothesized direction on the respective dependent variables.
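As an illustration of one confirmatory test, the sketch below runs a one-way repeated-measures ANOVA on per-participant condition means of the kind produced in the preprocessing sketch above. The toy data frame stands in for the real aggregated data; this is not the authors' analysis script, whose actual syntax is available on the OSF page.

```python
# Sketch of one confirmatory repeated-measures ANOVA (within-subjects factor:
# event, occurring vs. nonoccurring). The toy data are placeholders for the
# per-participant, per-condition means from the preprocessing sketch above.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

toy = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "event": ["occ", "nonocc"] * 4,
    "mean_log_rt": [8.4, 8.6, 8.2, 8.3, 8.5, 8.5, 8.1, 8.4],
})

result = AnovaRM(data=toy, depvar="mean_log_rt", subject="participant",
                 within=["event"]).fit()
print(result)  # F, df, and p for the main effect of event condition
# The same call is repeated with recall and perceived importance as the
# dependent variable, and the p-value feeds into the COAST rule sketched above.
```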
The preregistration described that, if the sequential stopping rule dictated to stop for any given dependent variable, conclusions would be drawn for this dependent variable. Data collection would then continue, and sequential analyses would be performed for the remaining dependent variables only. If the sequential stopping rule dictated to stop for the last dependent variable, or for all dependent variables simultaneously, or when Nmax was reached, data collection would cease. At this point, once all analysis steps described above had been performed, exploratory analyses might follow.
Results
All study materials, the laboratory log, data, syntax, and the stage 1 registered report are publicly available on the Open Science Framework (https://osf.io/x8n4a/).
Participant characteristics
The descriptive statistics of the participant characteristics showed that each sequential sample included people of various genders, ages, education levels, and countries of residence (see Table 2). Though reported vaccine attitudes ranged from -100 (extremely negative) to +100 (extremely positive), participants can be described as generally positive about vaccines (M = 44.15; Median = 73.50; Mode = 100). Similarly, reported vaccine hesitancy scores ranged from 0 (not at all hesitant) to 100 (extremely hesitant), but the sample can generally be characterized as not very hesitant (M = 31.92; Median = 13.33; Mode = 0).
|   | Batch 1 | Batch 2 | Batch 3 |
|---|---|---|---|
| Total N | 150 | 250 | 350 |
| Age, M (SD) | 37.79 (14.40) | 37.54 (13.94) | 38.52 (14.36) |
| Vaccine attitude, M (SD) | 45.75 (63.73) | 42.05 (63.63) | 44.15 (62.23) |
| Vaccine hesitancy, M (SD) | 29.10 (33.93) | 32.41 (35.28) | 31.92 (34.97) |
| Gender, N (%) | | | |
| Female | 68 (45.30%) | 127 (50.80%) | 191 (54.60%) |
| Male | 81 (54.00%) | 120 (48.00%) | 154 (44.00%) |
| Non-binary | 1 (0.70%) | 3 (1.20%) | 5 (1.40%) |
| Education, N (%) | | | |
| Middle school | 4 (2.70%) | 4 (1.60%) | 4 (1.10%) |
| High school | 29 (19.30%) | 44 (17.60%) | 64 (18.30%) |
| College, no degree | 26 (17.30%) | 48 (19.20%) | 69 (19.70%) |
| Associate's degree | 9 (6.00%) | 14 (5.60%) | 21 (6.00%) |
| Bachelor's degree | 63 (42.00%) | 109 (43.60%) | 144 (41.10%) |
| Graduate degree | 19 (12.70%) | 31 (12.40%) | 48 (13.70%) |
| Country, N (%) | | | |
| Unknown | 5 (3.30%) | 5 (2.00%) | 5 (1.40%) |
| Australia | 5 (3.30%) | 6 (2.40%) | 6 (1.70%) |
| Austria | 1 (0.70%) | 1 (0.40%) | 1 (0.30%) |
| Canada | 1 (0.70%) | 9 (3.60%) | 14 (4.00%) |
| Denmark | 1 (0.70%) | 1 (0.40%) | 1 (0.30%) |
| France | 1 (0.70%) | 1 (0.40%) | 2 (0.60%) |
| Greece | | | 1 (0.30%) |
| Ireland | 4 (2.70%) | 9 (3.60%) | 14 (4.00%) |
| Israel | 2 (1.30%) | 2 (0.80%) | 3 (0.90%) |
| Italy | 1 (0.70%) | 1 (0.40%) | 1 (0.30%) |
| Japan | | | 1 (0.30%) |
| Netherlands | | | 1 (0.30%) |
| New Zealand | 3 (2.00%) | 3 (1.20%) | 5 (1.40%) |
| Poland | | 1 (0.40%) | 1 (0.30%) |
| Portugal | | | 1 (0.30%) |
| South Africa | 24 (16.00%) | 47 (18.80%) | 66 (18.90%) |
| South Korea | | | 1 (0.30%) |
| Spain | | 1 (0.40%) | 2 (0.60%) |
| Sweden | | | 1 (0.30%) |
| Switzerland | | | 1 (0.30%) |
| United Kingdom | 102 (68.00%) | 161 (64.40%) | 217 (62.00%) |
| United States of America | | 2 (0.80%) | 5 (1.40%) |
Hypothesis testing
To test our hypotheses, we performed a repeated measures ANOVA for each dependent variable as preregistered. However, during the coding of the recall data and prior to any data analysis, both coders independently noticed that one of the 16 headlines was ambiguous, and therefore unsuitable for a clean test of the hypotheses. Namely, the item on social stigma (‘Social stigma after taking the vaccine? “No, friends support my decision to vaccinate.”’) was designed to describe the absence of a negative descriptor (no stigma) but turned out to also describe the presence of a positive descriptor (friends’ support), which confounded this item. For this reason, we decided prior to performing any analyses that the social stigma item was best excluded from the analyses. However, for transparency reasons, we also performed and report the analyses on the data including this item. The outcomes of both analyses led to the same conclusion for all dependent variables and data collection batches except for one.
Batch 1. Repeated measures ANOVAs were performed on the data of the first 150 participants. The results were inconclusive regarding the effect of event (occurring vs. nonoccurring) on reading times, both without (FItemExcluded (1,149) = 4.09, p = .045, partial η² = .03) and with the ambiguous item (FItemIncluded (1,149) = 1.83, p = .18, partial η² = .01), as the COAST method dictates that data are inconclusive regarding H0 when p ≥ .01 and ≤ .36. We were therefore unable to reject H0 regarding reading times, requiring a second batch of data collection.
The repeated measures ANOVA of event on recall showed a medium-to-large significant effect (FItemExcluded (1,149) = 19.47, p < .001, partial η² = .12; FItemIncluded (1,149) = 18.73, p < .001, partial η² = .11), with headlines about nonoccurring vaccination-related events resulting in lower recall (MItemExcluded = 0.32, SDItemExcluded = 0.20; MItemIncluded = 0.31, SDItemIncluded = 0.19) than headlines about occurring vaccination-related events (MItemExcluded = 0.39, SDItemExcluded = 0.20; MItemIncluded = 0.38, SDItemIncluded = 0.20). As the p-value is below .01 and the effect is in the expected direction, H0 can be rejected and the hypothesis on recall is confirmed.
Finally, the repeated measures ANOVA of event on perceived importance for vaccine evaluation showed a significant and large effect (FItemExcluded (1,149) = 33.08, p < .001, partial η² = .18; FItemIncluded (1,149) = 32.72, p < .001, partial η² = .18), with headlines about nonoccurring vaccination-related events being perceived as less important in evaluating the vaccine (MItemExcluded = 4.10, SDItemExcluded = 1.27; MItemIncluded = 3.97, SDItemIncluded = 1.26) than headlines about occurring vaccination-related events (MItemExcluded = 4.53, SDItemExcluded = 1.20; MItemIncluded = 4.39, SDItemIncluded = 1.18). As the p-value is below .01 and the effect is in the expected direction, H0 can be rejected and the hypothesis on perceived importance is confirmed.
Batch 2. After running a second batch of 100 participants, a repeated measures ANOVA was performed on the merged reading time data of the 250 participants. The results resemble those of batch 1, as reading times without ambiguous item (FItemExcluded (1,249) = 5.22, p = .023, partial η² = .02) and with ambiguous item (FItemIncluded (1,249) = 2.24, p = .14, partial η² = .01) again show p ≥ .01 and ≤ .36, which is inconclusive under the COAST method. Therefore, we were again unable to reject H0 regarding reading times, requiring a third batch of data collection.
Batch 3. After running the final batch of 100 participants, a repeated measures ANOVA was performed on the merged reading time data of the 350 participants. The analysis showed a significant but small effect under COAST (i.e., p < .01) when excluding the ambiguous item (FItemExcluded (1,349) = 10.15, p = .002, partial η² = .03), with headlines about nonoccurring vaccination-related events being processed more slowly (M = 5295, SD = 3945) than headlines about occurring vaccination-related events (M = 5168, SD = 3918).6 However, when including the ambiguous item, the results are inconclusive as p ≥ .01 and ≤ .36 (FItemIncluded (1,349) = 5.86, p = .016, partial η² = .02; Mnonocc = 5249, SDnonocc = 3859; Mocc = 5179, SDocc = 3943). As the results allow us to reject H0 only for a large sample size, with a small effect size, and when one item is excluded (which was not preregistered), we believe more evidence is needed to convincingly reject H0. We therefore cannot confirm our hypothesis on reading times.
Discussion
For people to make informed decisions regarding vaccination, it is essential that they can adequately process evidence-based information. In this work, we examined whether people might be subject to bias when processing vaccination information. If so, their capacity to make well-informed (i.e., knowledge-based, deliberate, and value-consistent) decisions on vaccination might be undermined. Specifically, we tested the potential impact of the relatively unknown feature positive effect, which is the phenomenon that people experience greater difficulty processing descriptions of nonoccurring compared to occurring events.
This research was motivated by our observations in the media and literature that vaccination-critical and vaccination-supporting information seem to differ in their presentation of risk. Generally, vaccination-critical information appears to describe people’s experiences with occurring events (e.g., vaccinated people who experience adverse events; Bean, 2011; Leask et al., 2010; Zimmerman et al., 2005), whereas vaccination-supporting information mainly appears to focus on nonoccurring events (e.g., vaccinated people who do not fall ill with the vaccine-preventable disease; Hobson-West, 2003). Therefore, FPE might explain the relatively large appeal and impact of vaccination-critical (versus vaccination-supporting) information on people’s online behaviors (e.g., Johnson et al., 2020) and perceptions, attitudes, and intentions (e.g., Nan & Madden, 2012).
Our findings show that descriptions of nonoccurring vaccination-related outcomes are indeed more difficult to remember and have a lower perceived impact on the vaccine’s evaluation than descriptions of occurring outcomes. Whether descriptions of nonoccurring outcomes are also more difficult to process remains inconclusive. The memory and perceived importance findings are in line with earlier demonstrations of FPE in fundamental research, and extend these fundamental insights to the relevant, timely, and ecologically valid context of vaccination communication (for other examples in an applied, forensic context, see Eerland et al., 2012; Eerland & Rassin, 2010). By having counterbalanced outcome and descriptor frames in the design of this study, the findings do not reflect a mere gain versus loss framing effect or a potential positivity or negativity bias, but rather indicate that indeed the (non)occurrence of described events biases how vaccination-related information is remembered, perceived, and potentially processed.
One could argue that reading times might reflect effort rather than processing difficulty. However, this is contrary to the status quo of the memory and language comprehension literature in which reading times are a widely accepted measure for the ease or difficulty with which linguistic information is processed (Rayner, 1998). This literature demonstrates a slowing down of reading times when information is unpredictable (Smith & Levy, 2013), requires temporal updating (Radvansky & Copeland, 2010), or is inconsistent with prior information (Rayner et al., 2006). In these studies, reading times are taken to reflect processing difficulty on a basic informational level, while being agnostic about the higher motivational level to effortfully process a text (which is arguably more affected by peoples’ motivation, opportunity, and ability, than mere text characteristics).
Accounts that assume reading times to reflect processing difficulty, like in this study, make fundamentally different predictions about the relation between reading times and recall than motivational accounts. Processing accounts predict that information that is processed more easily (referred to as “processing fluency”) should result in improved memory (Atkinson & Shiffrin, 1968; Hirshman & Mulligan, 1991). Such a positive relation between reading times and recall is indeed demonstrated in the current study, as well as in earlier work on FPE (Eerland et al., 2012).
This study demonstrates that people have greater difficulty recalling nonoccurring than occurring vaccination-related outcomes and perceive these as less important in reaching a judgment. Explanations of this effect are scarce. One intuitive explanation, pointed out by Rassin and colleagues (2008), is that nonoccurring events imply more uncertainty than occurring events, as they more easily allow for the generation of alternative explanations. For instance, when someone experiences fever after receiving a vaccine, the raised temperature is easily causally connected to the vaccine, after which one would conclude that taking the vaccine results in fever. Alternative explanations are less likely to be conceived since the vaccine provides a logical explanation for the fever. However, not experiencing a fever after receiving a vaccine makes a causal connection more difficult. After all, there can be multiple reasons for not experiencing a fever after receiving a vaccine; one could have been a lucky exception, might not have noticed their raised temperature, or might not have made the link between a raised temperature and the vaccination. The vaccine not causing a fever is only one of several possible explanations. We therefore argue that the uncertainty evoked by nonoccurrences can provide a plausible explanation of our findings.
We adopted a within-subjects design to eliminate between-subjects variability and provide optimal circumstances for the feature positive effect to manifest. The upside of this design is that the presented headlines can be taken to resemble media reporting in the early days of the COVID-19 crisis in terms of the diversity of vaccination outcomes. During the roll out of the COVID-19 vaccination strategy, a lack of clarity about the potential consequences of the vaccine prevailed and information in the media was very heterogeneous (Küçükali et al., 2022; Motta & Stecula, 2023; Scannell et al., 2021; Shaaban et al., 2022; Yousaf et al., 2022). Although our study materials are not representative or reflective of that media coverage, our design resembled this heterogeneity in the sense that each participant was presented with news headlines describing gain and loss framed, positive and negative consequences about occurring and nonoccurring vaccination-related outcomes. This approach supports the ecological validity of our finding that FPE substantially impacts information recall and perceived importance, which might occur in early pandemic situations in which there is still much uncertainty about a novel vaccine and communication reflects various perspectives.
A limitation of the adopted within-subjects design is that we were unable to assess the actual impact of (non)occurring event descriptions on vaccine evaluations. That is, by asking people to self-report how important they considered the presented descriptions of both occurring and nonoccurring events, we did not have an objective measure of a statement’s weight in their evaluation and could not assess whether nonoccurring events indeed have a smaller impact on actual vaccine evaluations than occurring events. A more objective measure might not ask participants to rate importance, but might manipulate the (non)occurrence of events between participants and then ask them about their vaccine evaluations, to distinguish whether and how described (non)occurrences influence and bias judgment. Although our current set-up did not allow for such a test, it does give insight into which information people find important in reaching a judgment in highly uncertain situations where much new information about occurrences and nonoccurrences is provided. Future research should reveal whether the described (non)occurrence of events indeed not only predicts how information is processed and subjectively weighed, but also whether and how this impacts people’s real-life vaccine evaluations and attitudes.
Conclusion
In the current societal, political, and healthcare landscape, decisions regarding vaccination revolve around values such as personal autonomy, freedom of choice, and informed decision making. In this context where people are stimulated to make vaccination decisions themselves, it is essential that they can adequately process, recall and weigh evidence-based vaccination information. Our study shows that this is not necessarily the case: the mechanism responsible for the impact of vaccination communication on memory and perceived importance in judgment seems fundamentally biased when opposing arguments in the discussion reflect differences in (non)occurring vaccination-related outcomes. At the same time, these findings give concrete and practical pointers on how to improve vaccination communication. The current support for FPE suggests that evidence-based vaccination information is most effectively communicated in terms of occurring events or outcomes (e.g., wellbeing) rather than nonoccurring events or outcomes (e.g., no illness).
Author contributions
Contributed to conception and design: LV, GM, AE
Contributed to acquisition of data: LV
Contributed to analysis and interpretation of data: LV, GM, AE
Drafted and/or revised the article: LV, GM, AE
Approved the submitted version for publication: LV, GM, AE
Acknowledgements
We heartily thank Associate Editor Ullrich Ecker and the two anonymous reviewers for their supporting and insightful suggestions, which have improved this work.
Funding information
We thank the Registered Report Funding Partnership (RRFP) of the Society for the Improvement of Psychological Science (SIPS) and Collabra: Psychology, as well as the Behavioral Science Institute, for funding this work.
Competing interests
The authors report no conflict of interest.
Data accessibility statement
All study materials, the laboratory log, data, syntax, and the stage 1 registered report are publicly available on the Open Science Framework (https://osf.io/x8n4a/).
Contributions
Conceptualization: Lisa Vandeberg (Lead). Methodology: Gijsje Maas (Lead). Project administration: Anita Eerland. Software: Anita Eerland.
Footnotes
Analyzing the reading time data from batch 1 (i.e., 150 participants) based on 1) the raw data without outlier removal, 2) the data in which the preregistered outlier criteria were adopted (i.e., remove RTs > 8 seconds and RTs above or below 2.5 SDs of the Mean), and 3) the data in which the new outlier criteria were adopted (i.e., remove RTs above or below 2 SDs of the Median) showed a similar data pattern.
The ECSS explicitly mentions that they do not ‘approve’ of studies, they rather evaluate whether there are any formal objections.
In the scenario about the blue virus, the text “in the United States” was changed to “in North America and Europe” after the preregistration, because otherwise the scenario might have been more relevant for potential US participants than for potential UK/Canadian participants (the study was open to all native English speakers, and Australia and New Zealand were already mentioned in the introductory sentence).
The vaccine evaluation question was not preregistered or analyzed, but was included to meaningfully introduce the perceived importance measure. Answering the perceived importance item (which asked participants to rate the perceived importance of each headline in evaluating the described vaccine) required participants to execute two steps, i.e., 1) to evaluate the vaccine, and 2) to judge to what extent they used each piece of information in making this evaluation. To improve the clarity of the question and reduce the cognitive load required to answer it, we made the first step explicit and added the vaccine evaluation question.
We had preregistered to start with a randomization check across conditions, not realizing that this procedure is only meaningful if conditions are presented in a between-subjects design. Because a randomization check would be meaningless in our within-subjects design, no such check was performed.
Although the analyses were performed on log transformed reading times, descriptives are presented in milliseconds for ease of interpretation.