In times characterized by abundant misinformation, knowing how to foster trust as a scientist is crucial. Previous studies have found that readers rate texts displaying a scientific style as more credible and show higher agreement with their message, a phenomenon called the “scientificness effect”. However, it is unclear how strongly the relationship between scientific text features and trust is influenced by readers’ subjective scientificness perceptions, and whether such an influence can also be found at the author level. We thus conducted an experimental online study with N = 838 German-speaking layreaders. Participants read two research summaries with randomized levels of author and text scientificness (low vs. high). After each text, they rated the scientificness and the trustworthiness of the author and the text. Mediation analyses provided preliminary evidence for our hypotheses. Under the assumptions of our model, perceived scientificness mediated the scientificness–trustworthiness relationship at both the author and the text level. Limitations regarding our cross-sectional mediation design are discussed. Based on our results, we offer suggestions for future research and perspectives for future theory-building.

When examining the public’s trust in science, two major trends are noteworthy: On the one hand, overall trust in science and scientists appears to be high. For example, 62% of 1037 individuals in the 2022 German science barometer stated that they completely or somewhat trust science and research, and 30% of 2065 participants in a 2021 UK survey declared that their trust in scientists increased during the Covid-19 pandemic, with more than 50% reporting stable levels of trust (Kantar Public, 2023; Wissenschaft im Dialog/Kantar, 2022). On the other hand, mistrust towards science remains an issue and can have direct, adverse consequences. For instance, previous research has linked lower trust in science to a reduced support of climate protection policies and individual climate protection behavior (Cologna & Siegrist, 2020) and to a reduced compliance with prevention measures during the Covid-19 pandemic (Sailer et al., 2022). The independent research organization NORC at the University of Chicago interviewed 3544 American adults during the 2022 General Social Survey and found that 48% reported only “some” trust in science, while 13% even expressed “hardly any” trust in science (Davern et al., 2022). Similarly, about 8% of the science barometer participants answered that they somewhat or completely distrust science.

While the Covid-19 pandemic revitalized the study of trust and strongly influenced the public’s views of science and scientists (Bicchieri et al., 2021; Bromme et al., 2022), this development took place against the backdrop of long-term digitalization and globalization trends. Science is increasingly communicated online, and laypeople frequently use platforms such as X (formerly Twitter), Facebook, YouTube or blogs to inform themselves, for example when seeking health information (Wang et al., 2021). Information is readily available and often just one click away. However, the lack of traditional editorial gatekeeping or quality checks also means that laypeople are more likely to encounter misinformation or fake news (Ecker et al., 2022; Lewandowsky et al., 2017), and these phenomena can spread quickly and reach large audiences (Vosoughi et al., 2018). This can have substantial negative consequences for layreaders above and beyond misrepresenting knowledge: For instance, beliefs in Covid-19 conspiracy theories have been directly linked to reduced compliance with prevention measures such as social distancing or wearing face masks (Hughes et al., 2022; Lin et al., 2023). Furthermore, believing in one conspiracy theory seems to strongly predict beliefs in additional conspiracy theories and a generally skeptical attitude towards science and scientists (Douglas & Sutton, 2023; Georgiou et al., 2020).

This is problematic, especially since laypeople often rely on trustworthiness judgments when processing science-related information. A lack of specialized knowledge makes it difficult for them to directly assess the methodological quality or validity of a study (Bromme & Goldman, 2014). Instead of relying on such “first hand evaluations” (Bromme & Goldman, 2014), laypeople thus turn towards cues such as an author’s academic background, level of expertise, institutional affiliation or conflicts of interest to infer a source’s quality and trustworthiness. These “second hand evaluations” often still seem to enable them to differentiate warranted from unwarranted claims (Martel et al., 2023). Yet, the amount of misinformation online may precisely undermine general trust and interfere with science communicators’ goal of enabling informed decision-making.

Trust judgements have been examined in psychology since as early as the 1950s (Hovland et al., 1953; McGuire, 1996), and a popular definition by Mayer, Davis and Schoorman (1995) emphasizes that trust plays a role in communication contexts in which a trustor requires the aid of a trustee, but cannot directly evaluate the trustee’s actions and is thus vulnerable to risk. When it comes to knowledge verification and justification, the term “epistemic trust” is commonly used (Hendriks et al., 2016; Sperber et al., 2010). This specific type of trust focuses on the validity of knowledge claims and an individual’s willingness to accept information from another person (such as a scientist) as trustworthy, generalizable and relevant (see Schröder-Pfeifer et al., 2018; Wilholt, 2013). Broadly speaking, three processes seem to be relevant whenever a person estimates trustworthiness: Inferences from base rates and prior probabilities, feelings associated with the presented information (e.g. based on message affect and fluency, see Reinhard & Schwarz, 2012; Schwarz et al., 2021), and reliance on information from memory such as claim consistency with prior knowledge (Brashier & Marsh, 2020). Source features in particular have been studied extensively, with factors such as perceived expertise (Clark et al., 2012; Eastin, 2001) or trustworthiness (Ismagilova et al., 2020; Tormala & Clarkson, 2008) at the forefront. Lately, individuals’ subjective perceptions of text scientificness have also shifted into focus.

The Scientificness Effect

Readers tend to consider texts presented in a scientific style as more trustworthy and show stronger agreement with their claims. This “scientificness effect” was, for instance, reported by Thomm and Bromme (2012). In two experiments with 78 and 86 university students, they systematically presented texts either in a scientific or factual discourse style. The former included references, detailed research method descriptions (e.g., methods used to diagnose diabetes or dementia in the first study) and a passive voice. The latter abstained from using references, left out detailed method descriptions and employed an active voice. Students rated texts written in a scientific discourse style as more credible, reported higher confidence in their judgements of claim veracity and a higher tendency to trust the information without consulting an expert. A follow-up study provided further evidence for the effect (Bromme et al., 2015), and König and Jucks (2019, 2020) demonstrated that “unscientific” language (i.e., aggressively or positively valenced language) can lower message credibility. Recently, Jonas et al. (2023) replicated the scientificness effect in the context of short research summaries, differentiating between trustworthiness on an author and text level. A total of 1144 participants read four research summaries of peer-reviewed psychological research, and the summaries’ easiness and scientificness were systematically varied. High-scientificness summaries contained a reference section, used neutral language, explicated underlying research methods and backed results via statistical values. The authors found that high text scientificness positively predicted trust in the study authors’ expertise, integrity and benevolence, as well as general trust in the text.

To summarize, laypeople seem to draw strongly on scientificness when evaluating the trustworthiness of information sources. These findings can be linked to the Elaboration Likelihood Model (ELM, see Kitchen et al., 2014; Petty & Brinol, 2012), especially the resource-efficient and fast peripheral route requiring little prior motivation or knowledge. Taking scientificness features into account may enable lay readers to make snap decisions on whether to trust a source, when the alternative of central-route processing with its high demands on motivation and prior knowledge is too time- or resource-consuming.

However, the understanding of the scientificness effect is somewhat complicated by the fuzziness of its theoretical foundations, especially regarding the processes by which an increase in scientific features amplifies trustworthiness. To illustrate this, consider the following: In any scientific context, there is an occasionally varied, yet often shared consensus on what makes scientific works or communications “scientific” (see Huguet et al., 2018; Madigan et al., 1995). If one were to encourage a discussion between 100 psychology researchers, they would likely at least agree on some core features such as “conciseness”, “references”, “neutral tone” or “arguments based on data”. To a certain extent, these synthesized positions could be considered as “objective scientificness”. Yet, these features may not necessarily match what an individual lay-reader from outside a field may understand as “scientific”. For instance, your next-door neighbor may value a researcher’s self-disclosure of expertise (e.g. hands-on experience with depressive patients when reporting on a clinical study) or their independence from any large-scale funding more highly. These criteria could be labeled “subjective scientificness”. The question then arises: When a research work contains features that are “objectively scientific”, does this translate to lay readers also perceiving higher levels of “subjective scientificness”?

A set of studies by Bohner et al. (2002) illustrates the importance of subjective perceptions. In two experiments, the authors had students read a text arguing for the construction of a traffic tunnel. In their second experiment, they varied the expertise of the source (e.g. a high-school student vs. an award-winning professor) as well as the argument quality (high vs. ambiguous vs. low). Whereas arguments in the “high quality” condition argued based on an 80% reduction in noise and exhaust fumes and outlined that tunnel constructions may have further benefits (green spaces, playgrounds for children), arguments in the “low quality” condition argued based on a reduction of only 4% and presented additional benefits far more conservatively. The “ambiguous quality” condition mixed statements from the two above-mentioned conditions. The relationship between source expertise or argument strength and readers’ attitude towards tunnel construction was significantly mediated by readers’ perceptions – more precisely, by perceived expectancy violation. For example, when a high-expertise source argued with low-quality arguments, this resulted in more negative attitudes compared to all other combinations of source expertise and type of message. Koot et al. (2016) provide additional illustrations for the relevance of readers’ subjective perceptions. They found that the two features of source identity (experts vs. citizen scientists) and consensus (organization agrees vs. disagrees on findings and conclusions) showed an indirect effect on readers’ ability to achieve cognitive closure via their perceptions of source scientificness. Lastly, Thomm and Bromme (2012) as well as Bromme et al. (2015) demonstrated that perceived scientificness mediated the relationship between text type (factual vs. scientific discourse style) and credibility ratings.

It thus seems likely that subjective scientificness perceptions play an important role for the scientificness effect. Going forward, three approaches seem worthwhile: First, examining the relationship between laypeople’s perceptions of text scientificness and trust would allow for a conceptual replication and substantiation of the mediation effect described in previous studies (Bromme et al., 2015; Thomm & Bromme, 2012). Second, extending this mediation approach to perceived author scientificness and trustworthiness could help to elevate the scientificness effect above pure textual features. The impact of author features on trustworthiness has been well documented in persuasion research (for a review, see Pornpitakpan, 2004). As such, it does not seem far-fetched to also expect an impact of subjective scientificness perceptions on trust at an author level (e.g., by manipulating an author’s level of expertise or professional affiliation). And third, an angle going above and beyond previous research may be to examine cross-dimensional scientificness effects. For instance, a text written in a scientific style may influence the perceived scientificness of an author, which could then result in changes in the perceived trustworthiness of not only the text, but also of the author. Examining these three approaches may offer additional insights into the nature of the scientificness effect and could serve as a springboard to identify variables that affect trustworthiness by interacting with perceived scientificness, such as individual epistemic justification beliefs (Ferguson & Bråten, 2013).

To address the above-mentioned questions, we carried out an experimental online study with a German-speaking general population sample. We employed a mixed design with 16 experimental conditions and systematically varied the level of author scientificness (low vs. high), the level of text scientificness (low vs. high), and the topic of the summary texts (manager rumination vs. couples’ leisure activities). Additional details are provided in the “Design” section below. Our objectives were a) to differentiate between features associated with scientificness on an author and text level and their influence on author and text trustworthiness and b) to examine whether the relationship between systematic scientificness manipulations and trustworthiness judgements is mediated by laypeople’s subjective scientificness perceptions. We preregistered1 our study approach and formulated the following hypotheses2:

H1: The level of author scientificness significantly and positively predicts perceived author expertise (H1a), integrity (H1b), and benevolence (H1c). The level of text scientificness significantly and positively predicts perceived text trustworthiness (H1d).

This first hypothesis replicates the scientificness effect and extends its predictions to the level of author scientificness. Based on previous findings (Bromme et al., 2015; Thomm & Bromme, 2012; Zaboski & Therriault, 2020), lay readers should rate an author as more trustworthy if they perceive their credentials, research background and behavior as more scientific. Similarly, they should rate the text as more trustworthy when text features associated with scientificness increase.

H2: The level of author scientificness significantly and positively predicts perceived author scientificness (H2a). The level of text scientificness significantly and positively predicts perceived text scientificness (H2b).

This hypothesis was formulated based on the test steps for mediation analyses outlined by Baron and Kenny (1986). While statistically equivalent to an additional manipulation check, it simultaneously serves as an important prerequisite for testing the mediating effect of perceived scientificness on the relationship between varying levels of author and text scientificness and trustworthiness judgements.

Prior research (Bromme et al., 2015; Thomm & Bromme, 2012) found that lay readers rated a text containing references and detailed research method descriptions as more scientific. We hence assume that a higher level of scientificness in a research summary will have a positive effect on lay readers’ evaluations of text scientificness. Similarly, a higher level of scientificness in an author description will likely predict higher lay reader evaluations of author scientificness.

H3: Lay readers’ perceived author scientificness mediates the relationship between the level of author scientificness and perceived author expertise (H3a), integrity (H3b), and benevolence (H3c). Lay readers’ perceived text scientificness mediates the relationship between the level of text scientificness and text trustworthiness (H3d).

H4: The mediations specified in H3a - H3d remain significant when controlling for participants’ age, sex, education level, prior topic interests, prior topic knowledge, prior topic beliefs, epistemic justification beliefs, need for cognitive closure (NCC), research summary text and perceived text accessibility.

Based on the findings described in our introduction and the logic of mediation analysis (Hayes, 2018; Igartua & Hayes, 2021), we hypothesize that a higher level of author or text scientificness raises laypeople’s subjective appraisal of author or text scientificness, and that this appraisal, in turn, is connected to the trustworthiness increases that constitute the scientificness effect.
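The product-of-coefficients logic behind these mediation hypotheses can be sketched numerically. The following Python snippet is a minimal illustration only (the study itself was analyzed in R with the mediation package; all data, effect sizes and variable names here are hypothetical): it simulates a binary scientificness manipulation X, a mediator M (perceived scientificness) and an outcome Y (trustworthiness), estimates the a- and b-paths by least squares, and bootstraps a percentile confidence interval for the indirect effect a·b.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 800

# Hypothetical data: X = scientificness manipulation (0 = low, 1 = high),
# M = perceived scientificness, Y = trustworthiness; all values simulated.
x = rng.integers(0, 2, n).astype(float)
m = 4.0 + 0.5 * x + rng.normal(0, 1, n)             # a-path: X -> M
y = 2.0 + 0.6 * m + 0.1 * x + rng.normal(0, 1, n)   # b-path plus direct effect c'

def ols_coef(predictors, outcome, which):
    """Least-squares coefficient `which` (0 = intercept) of outcome on predictors."""
    X = np.column_stack([np.ones(len(outcome))] + predictors)
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[which]

def indirect_effect(x, m, y):
    a = ols_coef([x], m, 1)      # effect of X on M
    b = ols_coef([x, m], y, 2)   # effect of M on Y, controlling for X
    return a * b

# Percentile bootstrap CI for the indirect effect a * b
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(indirect_effect(x, m, y), 2), (round(lo, 2), round(hi, 2)))
```

A bootstrap interval excluding zero is the usual evidence criterion for an indirect effect; the true indirect effect in this simulation is 0.5 × 0.6 = 0.30.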

Additionally, we scrutinized three exploratory research questions:

RQ1: Is there an influence of the level of text scientificness on perceived author expertise (RQ1a), integrity (RQ1b) and benevolence (RQ1c)? Is there an influence of the level of author scientificness on perceived text trustworthiness (RQ1d)?

RQ2: Is there an influence of the level of author scientificness on perceived text scientificness (RQ2a)? Is there an influence of the level of text scientificness on perceived author scientificness (RQ2b)?

RQ3: Does lay readers’ perceived author scientificness influence the relationship between the level of text scientificness and perceived author expertise (RQ3a), integrity (RQ3b) and benevolence (RQ3c)? Does lay readers’ perceived text scientificness influence the relationship between the level of author scientificness and perceived text trustworthiness (RQ3d)?

Prior research (Jonas et al., 2023) suggests high correlations (r > .50) between laypeople’s ratings of author and text trustworthiness. Higher text scientificness may therefore have a positive impact on laypeople’s ratings of author trustworthiness. And, similarly, higher author scientificness may predict higher text trustworthiness. Following this thought, a potentially positive relationship between the level of text scientificness and author trustworthiness may be mediated by laypeople’s perceived author scientificness, and a positive relationship between the level of author scientificness and text trustworthiness may be mediated by perceived text scientificness. While similar to Hypotheses H1-H4, these cross-level effects may be harder to detect empirically. We therefore did not formulate any confirmatory hypotheses.

To examine our hypotheses and research questions, we followed an approach with multiple steps. Initially, we conducted a small-scale pilot study. This pilot employed the same experimental design and variables as the later main study. However, the focus in this step was not yet on hypothesis testing, but on carrying out manipulation checks to ensure that our scientificness manipulations influenced perceived author and text scientificness in an independent sample. The pilot study allowed us to detect issues with our experimental material, the programming of our survey and our analysis code before the main study. Following the pilot study, we refined our manipulations of author scientificness (see “Manipulation Check and Revisions”). We then carried out the main study, using the same experimental design and variables as in the pilot, as well as the revised materials. Doing so allowed us to focus on H2’s role as a prerequisite for mediation analyses instead of its role as an additional manipulation check.

Prior to both studies, the study protocol was preregistered at https://doi.org/10.23668/psycharchives.12869. A priori sample size for the pilot study was set at N = 104. Our main goals in this phase were a manipulation check and a potential revision of materials. For the main study, a priori sample size was determined via G*Power 3.1 (Faul et al., 2009). We selected F tests as the test family and “Linear multiple regression: Fixed model, R² increase” as the statistical test. We specified a small effect size of f² = .02 (Cohen, 1988), an alpha error of α = .05, a power of 1 − β = .90, three tested predictors (author or text scientificness, perceived author or text scientificness and their interaction) and 13 total predictors3. The selection of a small effect size was derived from Jonas et al. (2023), as previous examinations of the scientificness effect in short research summaries yielded small effect sizes. Our approach resulted in a recommended sample of N = 713, which was adjusted to 720 to enable an equal distribution of participants across all 16 experimental conditions. Since we expected an awareness check drop-out rate of about 20%, we selected a final sample size of Ntarget = 864 for the main study.
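The power computation can be approximated with the noncentral F distribution from scipy. The sketch below re-implements the R² increase test under the assumption that G*Power parameterizes it with noncentrality λ = f²·N, numerator degrees of freedom equal to the number of tested predictors, and denominator degrees of freedom N − (total predictors) − 1; it is an illustrative check, not the original tool.

```python
from scipy.stats import f as f_dist, ncf

def power_r2_increase(n, f2, alpha, n_tested, n_total):
    """Power of the F test for an R^2 increase from n_tested of n_total predictors."""
    df1 = n_tested
    df2 = n - n_total - 1
    f_crit = f_dist.ppf(1 - alpha, df1, df2)  # central F critical value
    return ncf.sf(f_crit, df1, df2, f2 * n)   # assumed noncentrality: lambda = f^2 * n

def required_n(f2=0.02, alpha=0.05, target_power=0.90, n_tested=3, n_total=13):
    n = n_total + 2  # smallest sample size with df2 >= 1
    while power_r2_increase(n, f2, alpha, n_tested, n_total) < target_power:
        n += 1
    return n

n_required = required_n()
print(n_required)  # the paper reports a recommended N of 713 from G*Power 3.1
```

Rounding the result up to 720 then yields 45 participants per experimental condition (720 / 16).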

Design

A mixed design4 with 16 experimental conditions was employed. Participants’ condition assignment was randomized at the beginning of the study. To test our hypotheses, the following variables were systematically varied: (a) author scientificness (high vs. low), (b) text scientificness (high vs. low) and (c) research summary type (two summary texts, see below). Supplemental Material 2 offers a design overview, and the experimental manipulations are described in the “Research Summaries” section. The presentation order and the gender of the research summary author were randomized.

Sample

We recruited a German-speaking general population sample via the panel provider Bilendi & respondi for the pilot study (N = 123, of whom 109 passed the awareness check) and the main study (N = 841, of whom 838 passed the awareness check). Ten quota conditions were formed based on sex (male, female) crossed with age group (18-29, 30-41, 42-53, 54-65, 66-77). For inclusion in both studies, participants had to be at least 18 years old, proficient in German at a native speaker level and possess at least a “Hauptschulabschluss”5 as an educational degree. The mean age for participants successfully completing the awareness check in the pilot study was Mage = 47.13, SD = 15.76, Min = 18, Max = 76, and 50.46% of participants were female. The mean age in the main study was Mage = 47.97, SD = 15.66, Min = 18, Max = 77, with 50.12% female participants. An a posteriori sensitivity power analysis for the main study revealed that the smallest effect size we could still reasonably detect with an alpha error of α = .05, a minimum power of 1 − β = .80 and a sample of 838 participants amounted to f² = .01, which corresponds to r = .10 and R² = .01.

Materials and Measures

Research Summaries

Both summaries (see Supplemental Material 3) were based on published journal articles (Dobson & Ogolsky, 2022; Matick et al., 2022) and presented as researchers’ attempts to communicate their findings. Summary author gender was randomized and then held constant across the two texts. Both summaries were introduced as genuine, but had actually been written by the authors of the present study and systematically varied the levels of author and text scientificness (high vs. low).

The level of author scientificness was targeted via written vignettes accompanying the summaries. In the “high author scientificness” condition, the authors were presented as professors at public German universities. It was stated that they had already published a large volume of works on the subject and that their articles appeared in influential journals and received substantial attention from scientific peers. Furthermore, they were described as adherent to scientific standards and meticulous in their research approach. In contrast, the “low author scientificness” condition introduced the authors as Bachelor psychology students working in the private sector. It was outlined that they had only published a few articles in minor journals, and that these articles had received little attention from scientific peers. Additionally, while their work approach was described as still meeting scientific standards, it was stated that they were sometimes imprecise regarding finer details.

The level of text scientificness was varied via textual features. Summaries in the “high text scientificness” condition included a reference section, precisely described research methods (e.g. by referring to the exact names of questionnaires, specifying the timing of measurements, etc.), employed statistics such as t- or p-values and presented findings in a neutral tone. In contrast, the “low scientificness” summaries did not include references, provided almost no details on the methodology, descriptively outlined results without referring to statistics and made use of non-neutral language.

Perceived Scientificness

Participants judged author scientificness with the statement “I found the author of the research summary on the topic XYZ that I just read…” on a scale from 1 (very unscientific) to 8 (very scientific). The statement for text scientificness was worded similarly, i.e. “I found the research summary on the topic XYZ that I just read…”, and employed the same scale as above.

Trustworthiness

To assess author trustworthiness, we used the METI (Hendriks et al., 2015), a 14-item questionnaire tracking trustworthiness judgements on the three separate dimensions of expertise (six items), integrity (four items) and benevolence (four items) via semantic differentials from 1 to 7 (sample item for expertise: “With reference to their insights, the researcher whose summary on the topic XYZ I just read seems to be…”, 1 [unintelligent] to 7 [intelligent]). The METI has been repeatedly used in studies of layreaders’ evaluations of public knowledge providers and fulfills the primary test criteria of objectivity, reliability and validity (Hendriks et al., 2017). Its compactness and straightforward scoring scheme make it economical to use and score in the context of online studies with larger samples. In the present study, a confirmatory factor analysis (see Supplemental Material 9) supported the three-dimensional METI structure, and scale reliability was excellent (Cronbach’s α between .91 and .96). Regarding text trustworthiness, participants rated the statement “I found the research summary on the topic XYZ that I just read…” on a scale from 1 (not credible at all) to 8 (very credible).

Covariates

Epistemic Justification Beliefs

Epistemic justification beliefs refer to beliefs about the justification of knowledge and knowing (Ferguson & Bråten, 2013). Ferguson et al. differentiate between justification by authority (i.e., based on expert accounts and recommendations), personal justification (i.e., based on personal experiences and opinions) and justification by multiple sources (i.e., based on synthesizing, comparing and corroborating sources). It is conceivable that readers’ justification beliefs may impact their evaluation of author and text features associated with scientificness. To investigate this, we employed a questionnaire by Klopp & Stark (2023) at the end of the study. It consists of nine items, three per dimension, and asks participants to rate statements such as “One source alone is never enough to decide if something is scientifically correct.” on a Likert-style scale from 1 (entirely false) to 6 (entirely correct). Scale reliability was acceptable to good (Cronbach’s α between .74 and .81).

Need for Cognitive Closure

According to Kruglanski and colleagues (Kruglanski, 2013; Kruglanski & Fishman, 2009), NCC describes an individual’s desire to obtain clear, unambiguous answers to a question or circumstance, as opposed to answers that leave room for ambiguity. NCC may influence readers’ subjective scientificness and trustworthiness evaluations. For instance, readers with a high NCC may be more inclined to trust a summary that uses a colloquial tone and takes a clear-cut, less neutral stance. To control for NCC influences, readers’ individual NCC was assessed at the end of the study via a German NCC scale (Schlink & Walther, 2007). Readers received 16 items (e.g. “I do not like unforeseeable situations”) and answered these on Likert-style scales ranging from 1 (completely disagree) to 6 (completely agree). Cronbach’s α indicated good reliability (α = .80).

Control Variables

In addition to demographics (age, sex, and education), data on participants’ topic interest, topic knowledge and topic beliefs were collected before each summary as control variables. Regarding prior interest, they rated the item “I personally find the topic of XYZ interesting” on a Likert-style scale from 1 (not interesting at all) to 8 (very interesting). Prior knowledge was assessed similarly via the item “Concerning the topic XYZ, I have…” and response options ranging from 1 (no prior knowledge) to 8 (profound prior knowledge). Two items per summary were used to gather data on participants’ subjective topic beliefs. The items described strong attitudes towards the topics (e.g. “Couples are definitely happier when they regularly spend their leisure time together with other people”) and were answered on scales ranging from 1 (strongly disagree) to 10 (strongly agree). One of the items was always inverted. After recoding, we intended to merge the items to form joint scales, yet the scale reliabilities were inadequate (Matick et al.: α = .40, Dobson & Ogolsky: α = −.02). We therefore decided to remove the inverted items. While not ideal, this allowed us to retain at least a single item measure of prior beliefs for each summary as a control variable. Lastly, laypeople’s perceived accessibility of each summary was assessed via a Likert-style item (“I found the summary of topic XYZ that I just read…”) with ratings ranging from 1 (very difficult to read) to 8 (very easy to read). This measure was included to control for possible effects of text easiness on trustworthiness (see Scharrer et al., 2012, 2021).
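The recoding and reliability steps described above can be illustrated with simulated data (all variable names and values here are hypothetical, not the study data). On a 10-point scale, an inverted item is recoded as 11 − score before the items are merged, and Cronbach’s α is then computed from the item and total-score variances:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
attitude = rng.normal(5.5, 1.5, 500)  # simulated latent topic belief
item1 = np.clip(np.round(attitude + rng.normal(0, 1, 500)), 1, 10)
item2_inverted = np.clip(np.round(11 - attitude + rng.normal(0, 1, 500)), 1, 10)

item2 = 11 - item2_inverted  # recode the inverted 10-point item before scaling
print(round(cronbach_alpha(np.column_stack([item1, item2])), 2))
```

Skipping the recoding flips the items’ covariance and pushes α towards zero or below, which mirrors how a poorly functioning inverted item can produce reliabilities like the α = −.02 reported above.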

Awareness Check

Drawing on Curran & Hauser (2019), we included five yes-or-no forced choice items as an awareness check across the study (see Supplemental Material 2). The questions were formulated so that only one answer was considered plausible (e.g., the answer “No” to “I have never used a computer or smartphone” was considered implausible). Three or more implausible answers across the study were preregistered as a cut-off point for participant exclusion. In total, N = 109 participants passed the awareness check in the pilot study (99.10%), and N = 838 participants passed in the main study (99.64%).
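The preregistered exclusion rule can be expressed as a small filter. The keying below is hypothetical (the actual items are in Supplemental Material 2); the logic is simply that three or more implausible answers out of five lead to exclusion:

```python
def passes_awareness_check(answers, plausible, max_implausible=2):
    """Keep a participant unless 3+ of their answers deviate from the plausible keying."""
    implausible = sum(a != p for a, p in zip(answers, plausible))
    return implausible <= max_implausible

# Assumed keying for the five yes/no items (illustrative only)
plausible = ["no", "yes", "no", "yes", "yes"]
print(passes_awareness_check(["no", "yes", "no", "no", "yes"], plausible))   # True
print(passes_awareness_check(["yes", "no", "yes", "no", "yes"], plausible))  # False
```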

Procedure

The study was implemented with the software Unipark, and the experimental procedure was identical between the pilot and the main study. Mean study duration was approximately 18 minutes for the pilot study and 21 minutes for the main study. Respondents participated online and were financially compensated by Bilendi & respondi. They were initially told that the study goal was to examine how well lay readers understand short research summaries. After giving their informed consent, participants provided some demographic data (age, sex, education level, German language proficiency), which was used for checking inclusion criteria. They were then randomly assigned to one of the 16 experimental conditions (see Supplemental Material 1) and read two short research summaries. These were created by the study authors and based on published, peer-reviewed psychological journal articles on the effects of manager rumination on employees’ sleep quality (Matick et al., 2022) and the effects of couple leisure activities on relationship satisfaction and commitment (Dobson & Ogolsky, 2022). Before receiving the summaries, participants rated their prior interest, knowledge, and topic beliefs. The summaries were then presented for a minimum of 90 seconds. Author and text scientificness (high vs. low) were determined by participants’ experimental condition. Immediately after reading, participants provided ratings on the text’s accessibility and trustworthiness as well as on text and author scientificness, and judged author trustworthiness via the Muenster Epistemic Trustworthiness Inventory (METI). Once participants had completed this for both summaries, they filled out questionnaires on their epistemic justification beliefs and their need for cognitive closure (NCC). Finally, participants were debriefed.
They were informed about the actual purpose of the study (i.e., the examination of the scientificness effect on a text and author level), the experimental manipulation, and were given the option to withdraw their consent. Throughout the study, a total of five awareness check questions were administered (see Supplemental Material 2).

Statistical Modeling

Data were analyzed with the statistical software R, version 4.4.1 (R Core Team, 2024) and the packages lme4 (Bates et al., 2015), mediation (Tingley et al., 2014), and r2glmm (Jaeger, 2017). All statistical tests employed a significance criterion of p < .05. All analysis steps are outlined in Supplemental Material 6 (pilot study) and 9 (main study).

Manipulation Check and Revisions

In the pilot study, we used the manipulation check variables to test whether authors and texts in the “high” conditions were perceived as more scientific. Data were analyzed via mixed-model regressions with a random effect for individual participants. Perceived author or text scientificness was specified as the outcome, and the author or text scientificness manipulation and perceived accessibility were the predictors. Compared to a null model, the predictor models yielded a superior data fit for perceived author scientificness (χ²(2) = 14.28, p < .001) and perceived text scientificness (χ²(2) = 13.58, p < .01). The author scientificness manipulation significantly predicted perceived author scientificness (b = .36, SE = .12, p < .01, R² = .03), and the text scientificness manipulation significantly predicted perceived text scientificness (b = .42, SE = .13, p < .01, R² = .04). Based on these results, we assume that our manipulations worked as intended. We nevertheless decided to slightly rework the author vignettes, since their effects were descriptively smaller than those of text scientificness. We added that the author had only recently started out in their position in the “low” condition, made the difference in publication record between conditions more salient (“individual” vs. “many” articles), and rephrased the section describing the authors’ attention to detail. In the main study, the level of author scientificness again significantly predicted perceived author scientificness (b = .30, SE = .05, p < .001, R² = .02), and the level of text scientificness significantly predicted perceived text scientificness (b = .53, SE = .05, p < .001, R² = .07). Table 1 reports Pearson’s correlations between all continuous variables (perceived author and text scientificness, the METI dimensions, and text trustworthiness) separately for the first and second measurement point.
Table 2 reports Spearman’s rank correlations between our manipulations of author and text scientificness and all other mediation variables, again separated by measurement point.
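The structure of these manipulation check models can be sketched with lme4 as follows (an illustration under assumed variable names, not our actual analysis code; see Supplemental Material 6 for the full scripts):

```r
# Sketch of a manipulation check model: perceived scientificness regressed on
# the scientificness manipulation and perceived accessibility, with a random
# intercept per participant. Data frame d and variable names are placeholders.
library(lme4)

null_model <- lmer(perceived_text_sci ~ 1 + (1 | participant),
                   data = d, REML = FALSE)
pred_model <- lmer(perceived_text_sci ~ text_sci_condition + accessibility +
                     (1 | participant), data = d, REML = FALSE)

# Likelihood-ratio test of the predictor model against the null model
# (two added fixed effects, hence df = 2 as reported above)
anova(null_model, pred_model)
```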

Table 1.
Pearson’s Correlations Between All Continuous Mediation Variables Used in the Main Study

                                    1.                  2.                  3.                  4.                  5.
1. Perceived author scientificness  –
2. Perceived text scientificness    .85 [.83, .87]***
                                    .88 [.86, .89]***
3. METI expertise                   .76 [.73, .78]***   .72 [.69, .76]***
                                    .70 [.67, .74]***   .72 [.68, .75]***
4. METI integrity                   .61 [.56, .65]***   .58 [.53, .62]***   .85 [.83, .86]***
                                    .56 [.52, .61]***   .57 [.52, .61]***   .83 [.81, .85]***
5. METI benevolence                 .60 [.56, .64]***   .59 [.54, .63]***   .83 [.80, .85]***   .91 [.90, .92]***
                                    .55 [.50, .60]***   .56 [.51, .60]***   .81 [.78, .83]***   .91 [.90, .92]***
6. Text trustworthiness             .64 [.60, .68]***   .64 [.60, .68]***   .70 [.66, .73]***   .64 [.59, .68]***   .63 [.59, .67]***
                                    .70 [.67, .74]***   .74 [.71, .77]***   .73 [.70, .76]***   .65 [.61, .69]***   .64 [.60, .68]***

Note. N = 838. Pearson’s r coefficients are reported separately for each measurement point: the first entry in each cell refers to the first measurement point, the second entry to the second measurement point. Values in brackets refer to the lower/upper limits of a 95% CI. *** p < .001, ** p < .01, * p < .05

Table 2.
Spearman’s Correlations Involving the Categorical Manipulations of Author Scientificness and Text Scientificness for the Main Study
                                    Author scientificness   Text scientificness
Author scientificness               –
Text scientificness                 .02 [-.05, .08]
                                    -.01 [-.08, .06]
Perceived author scientificness     .17 [.10, .23]***       .18 [.11, .24]***
                                    .16 [.10, .23]***       .20 [.13, .26]***
Perceived text scientificness       .12 [.05, .18]***       .15 [.08, .22]***
                                    .13 [.07, .20]***       .22 [.15, .28]***
METI expertise                      .14 [.07, .21]***       .09 [.02, .15]*
                                    .16 [.09, .22]***       .11 [.04, .17]**
METI integrity                      .08 [.01, .14]*         .00 [-.07, .07]
                                    .06 [-.01, .12]         .05 [-.02, .11]
METI benevolence                    .07 [.01, .14]*         -.01 [-.07, .06]
                                    .05 [-.02, .12]         .04 [-.03, .11]
Text trustworthiness                .10 [.03, .17]**        .01 [-.05, .08]
                                    .09 [.02, .15]*         .11 [.05, .18]***

Note. N = 838. Spearman’s rank coefficients are reported separately for each measurement point: the first entry in each cell refers to the first measurement point, the second entry to the second measurement point. Values in brackets refer to the lower/upper limits of a 95% CI. *** p < .001, ** p < .01, * p < .05

H1: Author Scientificness, Text Scientificness and Trustworthiness

To examine the impact of the level of author and text scientificness on trustworthiness, we employed mixed-model regressions. Author trustworthiness (METI expertise, integrity, benevolence) or text trustworthiness was specified as the outcome, the level of author or text scientificness as the predictor, and a random effect for participants was included. For this hypothesis and all following analyses, we used null model comparisons to assess model fit. Null models were always specified with two components, an intercept (i.e., mean-based) estimate and a random effect for individual participants (see Supplemental Material 9 for details).

Compared to null models, all predictor models showed a superior data fit, χ²(1) = 10.86 – 57.20, ps < .001. The level of author scientificness significantly predicted expertise (b = .31, 95% CI [.23, .39], SE = .04, t(1305.94) = 7.66, p < .001, R² = .02), integrity (b = .15, 95% CI [.08, .22], SE = .04, t(1147.50) = 4.19, p < .001, R² = .01), and benevolence (b = .14, 95% CI [.07, .21], SE = .04, t(1168.77) = 3.90, p < .001, R² = .01). Additionally, the level of text scientificness significantly predicted text trustworthiness (b = .15, 95% CI [.06, .23], SE = .04, t(1422.42) = 3.31, p < .001, R² = .01). Manipulating the levels of author and text scientificness thus replicated the scientificness effect and confirmed H1a-H1d.

H2: Author Scientificness, Text Scientificness and Perceived Scientificness

Next, the impact of the levels of author and text scientificness on perceived scientificness was examined. Once again, mixed models were specified, with perceived author or text scientificness as the outcome, the level of author or text scientificness as the predictor, and a random effect for participants. Compared to null models, both predictor models yielded a superior data fit (χ²(1) = 49.53 – 88.59, ps < .001). The level of author scientificness significantly predicted perceived author scientificness (b = .32, 95% CI [.23, .41], SE = .05, t(1529.20) = 7.10, p < .001, R² = .03). Similarly, the level of text scientificness significantly predicted perceived text scientificness (b = .42, 95% CI [.33, .51], SE = .04, t(1426.46) = 9.63, p < .001, R² = .04). Increasing the level of author or text scientificness indeed led laypeople to rate an author or text as more scientific, thus supporting H2a and H2b.

H3: Mediation of the Relationship between Scientificness and Trustworthiness via Perceived Scientificness

All mediation analyses were conducted via the mediate function (Tingley et al., 2014). A mediator model (modeling the relationship between the independent variable and the mediator) and an outcome model (modeling the impact of the independent variable and the mediator on the outcome) are first specified. They are then jointly entered into the function, and a total effect, an average direct effect (ADE) of the independent variable on the outcome, and an average causal mediation effect (ACME) via the mediator are computed. All effects are provided in Table 3, and more detailed results are available in Supplemental Material 9. The variance inflation factor (VIF) for all models ranged from 1.03 - 1.04, 95% CIs [1.01 - 1.16], and stayed below commonly used thresholds such as 5 or 10 (Marcoulides & Raykov, 2019), suggesting no imminent issues with multicollinearity. When author or text trustworthiness was specified as the outcome, the level of author or text scientificness as the predictor, and perceived author or text scientificness as the mediator, the ACME was significant for all models (b = .14 - .29, 95% CI [.10 - .35], ps < .001). The results tentatively support H3a-H3d and suggest that the relationship between the levels of author or text scientificness and trust could be mediated by lay readers’ perceived scientificness. However, certain limitations pertaining to our cross-sectional design and the absence of an experimental manipulation of the mediator apply. The model parameters and potential causality should thus be interpreted with care. These issues will be taken up again in the discussion section.
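The general structure of these analyses can be sketched as follows (a simplified illustration with hypothetical variable names; for brevity, plain lm models are shown, whereas the reported analyses accounted for the repeated-measures structure):

```r
# Sketch of the two-model mediation setup with the mediation package.
# Variable names (text_sci_condition, perceived_text_sci, text_trust) and the
# data frame d are placeholders, not the study's actual code.
library(mediation)

# Mediator model: manipulation -> perceived scientificness
med_model <- lm(perceived_text_sci ~ text_sci_condition, data = d)

# Outcome model: manipulation + perceived scientificness -> trustworthiness
out_model <- lm(text_trust ~ text_sci_condition + perceived_text_sci, data = d)

# Joint estimation; bootstrapped with 5000 iterations, as in the reported tables
med_fit <- mediate(med_model, out_model,
                   treat = "text_sci_condition",
                   mediator = "perceived_text_sci",
                   boot = TRUE, sims = 5000)
summary(med_fit)  # reports ACME, ADE, total effect, proportion mediated
```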

H4: Mediation Analyses Including Control Variables

We repeated the mediation analyses described above and included participants’ age, sex, educational background, prior interest, prior topic knowledge, prior beliefs, epistemic justification beliefs, NCC, research summary text, and accessibility as control variables. The results can be found in Table 3, and Table 4 offers an overview of the regression coefficients. The VIFs for the predictors in all models ranged from 1.01 - 1.38, 95% CIs [1.00 - 1.47], indicating no issues with multicollinearity. Once again, the ACME reached significance for all models (b = .12 - .29, 95% CIs [.09 - .34], ps < .001). The results suggest a mediating role of perceived author and text scientificness, and the effects still emerged when controlling for additional variables. Thus, the analyses lend support to H4a-H4d. However, as before, limitations apply, and both the model parameters and causal relationships should be interpreted cautiously.

Table 3.
Mediation Analyses Results for H3a-d and H4a-d.
                       ACME                    ADE                       Total Effect             Prop. Mediated
METI expertise
  H3a                  .215*** [.155, .280]    .092** [.035, .150]       .307*** [.224, .390]     .701*** [.558, .870]
  H4a                  .193*** [.141, .250]    .101*** [.043, .160]      .293*** [.218, .370]     .658*** [.516, .820]
METI integrity
  H3b                  .142*** [.102, .180]    .002 [-.061, .070]        .143*** [.070, .220]     .992*** [.671, 1.800]
  H4b                  .120*** [.087, .150]    .002 [-.058, .060]        .121** [.054, .190]      .984** [.636, 2.010]
METI benevolence
  H3c                  .146*** [.104, .190]    -.009 [-.072, .060]       .137*** [.063, .210]     1.060*** [.711, 2.040]
  H4c                  .122*** [.089, .160]    -.007 [-.068, .050]       .115** [.047, .180]      1.059** [.672, 2.350]
Text trustworthiness
  H3d                  .289*** [.229, .350]    -.136*** [-.203, -.070]   .153** [.065, .240]      1.895** [1.298, 3.880]
  H4d                  .286*** [.235, .340]    .001 [-.068, .070]        .286*** [.206, .370]     .998*** [.798, 1.310]

Note. All analyses were carried out with bootstrapped model parameters and 5000 iterations. ACME = average causal mediation effect; ADE = average direct effect; Total Effect = overall model effect; Prop. Mediated = proportion of the total effect accounted for by the ACME. Values in brackets indicate the lower/upper boundaries of 95% confidence intervals. * p < .05, ** p < .01, *** p < .001

Table 4.
Mixed Model Analyses for the Outcome Dimensions of Author Expertise, Integrity, Benevolence and Text Trustworthiness.
Model Parameter                       METI expertise           METI integrity           METI benevolence         Text trustworthiness
                                      EST      SE     R²       EST      SE     R²       EST      SE     R²       EST      SE     R²
Random effect variance - participant  .178                     .299                     .302                     .069
Residual variance                     .245                     .248                     .257                     .385
Intercept                             -.112    .096            -.103    .114            -.167    .115            -.199    .092
Author scientificness - high          .100***  .030   .006     .001     .031   <.001    -.007    .032   <.001
Perceived author scientificness       .615***  .017   .418     .381***  .018   .175     .390***  .018   .179
Text scientificness - high                                                                                       .001     .035   <.001
Perceived text scientificness                                                                                    .591***  .019   .367
Age                                   .003*    .001   .005     .004*    .002   .006     .005**   .002   .008     .002     .001   .002
Sex - male                            -.097*   .039   .005     -.093*   .046   .004     -.060    .047   .002     .092*    .036   .004
School - Middle Maturity              -.020    .062   <.001    -.007    .073   <.001    -.021    .074   <.001    .023     .057   <.001
School - Abitur                       -.007    .067   <.001    .018     .080   <.001    -.002    .081   <.001    .040     .063   <.001
School - Bachelor                     -.060    .084   <.001    -.142    .100   .002     -.065    .101   <.001    .126     .079   .002
School - Master                       -.173    .097   .003     -.210    .115   .003     -.211    .116   .003     -.124    .091   .001
School - other                        -.138    .075   .003     -.147    .089   .003     -.119    .090   .002     .066     .070   .001
Interest                              .010     .018   <.001    .042*    .020   .002     .049*    .020   .003     .042*    .020   .003
Knowledge                             -.022    .018   .001     -.035    .020   .002     -.020    .020   .001     -.023    .020   .001
Beliefs                               .007     .016   <.001    .035*    .017   .002     .012     .017   <.001    .051**   .018   .005
Accessibility                         .090***  .016   .016     .173***  .017   .043     .169***  .018   .041     .195***  .019   .060
Justification by authority            .164***  .021   .049     .232***  .024   .074     .229***  .025   .071     .110***  .020   .021
Personal justification                .033     .021   .002     -.005    .025   <.001    .015     .025   <.001    -.033    .020   .002
Justification by multiple sources     .027     .020   .001     .097***  .024   .015     .107***  .024   .017     .019     .019   .001
NCC                                   .023     .020   .001     .039     .024   .002     .029     .024   .001     .016     .019   <.001
Research Summary - Matick             .040     .024   .001     .052*    .025   .001     .058*    .025   .001     .051     .031   .001
Scientist gender - male               -.044    .038   .001     -.051    .045   .001     -.035    .046   .001     -.026    .036   <.001
Nobs                                  1,676                    1,676                    1,676                    1,676
NID                                   838                      838                      838                      838

Note. Model parameter estimates are based on mixed-model regression analyses, with fixed effects for all predictors and a random effect for participants. EST = estimate (random-effect variances, residual variances, and regression coefficients); SE = standard error; R² = generalized R² based on Nakagawa et al. (2013); Nobs = number of outcome ratings; NID = number of participants considered in the analysis. The intercept represents low author/text scientificness, female sex, the lowest educational attainment level (“Hauptschule”), the research summary based on Dobson & Ogolsky, and a female author. Significance tests are two-tailed. * p < .05, ** p < .01, *** p < .001

RQ1: The Influence of Text Scientificness on Author Trustworthiness and Author Scientificness on Text Trustworthiness

In a first step, we tested the overlap between author and text trustworthiness. This revealed significant and large positive correlations between text trustworthiness and METI expertise, r(836) = .70 - .73, ps < .001, integrity, r(836) = .64 - .65, ps < .001, and benevolence, r(836) = .63 - .64, ps < .001. Next, we again specified mixed models as outlined in the H1 section. However, the level of text scientificness was now used to predict author trustworthiness, and the level of author scientificness was used to predict text trustworthiness. Compared to null models, all models provided a superior data fit, χ²(1) = 8.82 - 51.82, ps < .01. The level of text scientificness significantly predicted author trustworthiness on the dimensions of expertise, integrity, and benevolence, bs = .11 - .30, 95% CIs [.04 - .37], SEs = .04, ts(1132.39 - 1275.63) = 2.98 - 7.32, ps < .01, R²s = .003 - .022. Similarly, the level of author scientificness significantly predicted laypeople’s trust in the text, b = .17, 95% CI [.08, .26], SE = .04, t(1448.06) = 3.86, p < .001, R² = .007. These results suggest that features associated with the level of author scientificness can influence laypeople’s assessments of text trustworthiness, and that features associated with the level of text scientificness can influence assessments of author trustworthiness.

RQ2: The Impact of Author Scientificness on Perceived Text Scientificness and of Text Scientificness on Perceived Author Scientificness

We employed mixed-model regression analyses to examine the effects of the level of author scientificness on perceived text scientificness, and vice versa. Aside from changing the outcomes to investigate these cross-level effects, model specification followed the steps outlined under H2. As before, null model comparisons revealed a superior data fit for the predictor models, χ²(1) = 34.59 - 88.69, ps < .001. The level of author scientificness significantly predicted perceived text scientificness, b = .27, 95% CI [.18, .35], SE = .04, t(1477.51) = 5.92, p < .001, R² = .018. Similarly, the level of text scientificness significantly predicted perceived author scientificness, b = .43, 95% CI [.34, .52], SE = .04, t(1476.23) = 9.66, p < .001, R² = .046. The levels of author and text scientificness thus also affect laypeople’s scientificness ratings on the opposite dimension, which, combined with H2, suggests an interrelatedness of the dimensions of author and text perception.

RQ3: Mediation Models Based on Text Scientificness for Author Trustworthiness and Author Scientificness for Text Trustworthiness

We finally explored whether the relationship between the level of text scientificness and author trustworthiness is mediated by laypeople’s perceived author scientificness, and whether the relationship between the level of author scientificness and text trustworthiness is mediated by perceived text scientificness. Model specifications were based on H3, except for the changes in predictor variables. Once more, a superior data fit for all predictor models was observed, χ²(2) = 507.83 - 1179.90, ps < .001. The VIF for all models ranged between 1.02 - 1.06, 95% CIs [1.00 - 1.26], hence no signs of multicollinearity were observed. When testing for a mediating effect of perceived author scientificness on the relationship between the level of text scientificness and author trustworthiness, the ACME reached significance, bs = .20 - .29, 95% CIs [.15 - .35], ps < .001. The ACME also reached significance when a mediation of the relationship between the level of author scientificness and text trustworthiness via perceived text scientificness was specified, b = .18, 95% CIs [.12 - .24], p < .001. Laypeople’s perceived author scientificness significantly predicted author trustworthiness in these models, bs = .45 - .67, SEs = .02, ps < .001, R²s = .23 - .48, and their perceived text scientificness significantly predicted text trustworthiness, b = .67, SE = .02, p < .001, R² = .46 (see Supplemental Material 9). These analyses allow the preliminary conclusion that lay readers’ subjective author or text scientificness perceptions retain a mediating role, even for predictors on the opposite level. Yet, the limitations outlined under H3 and H4 also apply here. In the discussion, we elaborate on why the model parameters should be interpreted with caution and why assuming a causal relationship might require additional research.

Exploratory Analysis: Expectancy Violations

To further explore the issue of expectancy violations for trust judgements, we carried out additional exploratory tests of mixed models. The mediation models specified for H4a-d served as a baseline for these analyses. In a first step, we simultaneously included perceived author scientificness and perceived text scientificness as predictors in the models. In a second step, we then modeled their interaction effect. Since these analyses go beyond the original scope of our study, only an overview is given here; more detailed results are available in Supplemental Material 9. Compared to the baseline, the models including the interaction between perceived author scientificness and perceived text scientificness only yielded a significantly better fit for the dimensions METI expertise, χ²(1) = 16.42, p < .001, and text trustworthiness, χ²(1) = 9.60, p < .001. For these models, the VIF ranged from 1.02 - 3.97, 95% CIs [1.00 - 4.32]. The interaction between perceived author and text scientificness significantly predicted laypeople’s ratings of expertise, b = -.05, SE = .01, p < .001, R² = .01, and of text trustworthiness, b = -.04, SE = .01, p < .001, R² = .01. Graphical illustrations of the interaction effects were created via the package sjPlot (Lüdecke, 2024). They indicate that expertise and text trustworthiness were rated as particularly low when both author and text were perceived as unscientific, but that a high perceived author scientificness could partially compensate for the negative effects of a low perceived text scientificness - at least with regard to the expertise dimension.
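The exploratory interaction models and their visualization can be sketched as follows (a simplified illustration with hypothetical variable names, not our actual analysis code):

```r
# Sketch of an exploratory interaction model: expertise ratings predicted by
# perceived author and text scientificness and their interaction, with a
# random intercept per participant. Data frame d and variable names are
# placeholders.
library(lme4)
library(sjPlot)

int_model <- lmer(meti_expertise ~ perceived_author_sci * perceived_text_sci +
                    (1 | participant), data = d)

# Plot the interaction: predicted expertise across perceived text
# scientificness, at different levels of perceived author scientificness
plot_model(int_model, type = "int")
```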

Discussion

The goal of this study was to test the mediating effect of laypeople’s perceived scientificness on the relationship between author and text features of scientificness and the corresponding trustworthiness assessments. To the best of our knowledge, mediation examinations in the context of the scientificness effect have so far only focused on the relationship between textual features (i.e., a scientific discourse style) and credibility (Bromme et al., 2015; Thomm & Bromme, 2012). Our study replicates previous findings and extends this perspective by applying mediation analyses to author features of scientificness as well as to cross-dimensional interdependencies between levels of scientificness on the text or author dimension and lay readers’ perceived scientificness on the opposite dimension.

Our analyses confirmed our a priori assumptions and point towards the importance of laypeople’s scientificness perceptions. Generally, there are both theoretical grounds (Bohner et al., 2002; Bromme et al., 2015; Koot et al., 2016; Thomm & Bromme, 2012) and practical grounds (i.e., readers must, in a temporal sense, be confronted with scientificness cues before they can form judgements about a text’s or author’s scientificness or assign trust) to assume that subjective scientificness perceptions mediate the relationship between author or text features and trust. Under the assumptions of this mediation framework, we found evidence that when a text included features such as references, method descriptions, and a neutral tone, laypeople judged it as more scientific. This, in turn, made them assign higher trust to the text. Notably, including perceived text scientificness as a mediator in this model eliminated the positive direct effect of text features on trustworthiness. A possible conclusion is thus that not these features per se, but rather laypeople’s perception of them as scientific, propels trustworthiness gains. To further build on this, consider the following: According to the ELM (Kitchen et al., 2014; Petty & Briñol, 2012), individuals often use the peripheral route of processing, especially when time, resources, or prior knowledge are lacking. Under such uncertain circumstances, heuristics may be employed to make judgements. While such a “fast and frugal” approach can help to arrive at decisions in complex environments, it can also introduce bias and distort judgements (Dale, 2015; Madison et al., 2021; Whelehan et al., 2020). Relying on a quick assessment of scientificness features could be helpful for laypeople, but it also poses a substantial risk for misinformation when scientificness is subjectively judged as high even though the text has shortcomings or contains misinformation.
Future research could thus examine how accurately lay readers can gauge the quality of scientific studies based on their subjective appraisal of text scientificness.

For the mediation model we assume here, the pattern discussed above also held true on the author level. When we described the author as a university professor with extensive academic impact and a meticulous research approach, participants rated them as more scientific and attributed higher expertise, integrity and benevolence to the author. While a direct effect of author scientificness on author expertise remained, the impact of author features on integrity and benevolence was completely accounted for by readers’ perceived scientificness. Laypeople’s perceptions of a scientist as scientific may thus be especially crucial for judgements about morality and interest in societal well-being.

Additionally, our exploratory analyses suggest carry-over effects between author and text scientificness. In line with previous research (Jonas et al., 2023), author and text trustworthiness were highly correlated in the present study. If an author was introduced as experienced, impactful, and meticulous, readers also tended to perceive their research summary as more scientific. This was in turn connected to laypeople trusting the text to a higher degree, which ties in well with the concept of “second-hand evaluations” (Bromme & Goldman, 2014). Similarly, increasing scientificness features in a text resulted in participants perceiving the author as more scientific, which was then connected to more favorable views on author expertise, integrity, and benevolence. To briefly summarize: Under the assumptions made by our mediation model, lay readers’ perceptions of scientificness seem to be the central variable driving the scientificness effect. The effect also occurs when author features (affiliation, impact, work approach) are targeted. Furthermore, one can assume carry-over effects: Increasing the scientificness of an author or text likely affects the other dimension as well. However, before these conclusions can be drawn with confidence, two aspects need to be considered: First, related to the ELM arguments above, it may be prudent to confront readers with material for which high scientificness assessments are unwarranted, such as research summaries based on misinformation. Second, important caveats in the interpretation of our mediation analyses need to be considered.

Strengths and Limitations

The present study has several strengths. Using a preregistration approach allowed us to transparently communicate our hypotheses, research design, and analysis methods, thereby facilitating replication. Additionally, the pilot study enabled us to pretest our materials and manipulations. An experimental design with randomized condition assignment and multiple control variables helped to eliminate potential confounds, and we were able to recruit a large, representative sample of German-speaking lay readers. Additionally, mixed-model regression analyses made it possible to control for individual differences, and a Monte Carlo simulation approach made our mediation analyses more robust.

However, limitations certainly remain. The first and most substantial limitation of our study is that we were only able to apply a cross-sectional, rather than a longitudinal, mediation design. This complicates the interpretation of our results, as previous research has demonstrated that, both for full mediation (Maxwell & Cole, 2007) and partial mediation (Maxwell et al., 2011), estimates of direct and indirect mediation effects based on cross-sectional data can be biased. The main reason is that autoregressive effects of the mediation model variables across time cannot be taken into account, making it impossible to control for the stability of the predictor, mediator, and outcome variables. Of most concern is that a mediation analysis based on a cross-sectional model may identify a significant mediation effect that would fail to reach significance in a longitudinal model (Maxwell et al., 2011). The strength and direction of this bias are hard to gauge from cross-sectional data alone, and bias thus cannot be ruled out in the context of our study. As such, our results should be treated as preliminary, and further longitudinal research is needed. However, two aspects still enable us to argue for the merit of our approach. First, our mediation analyses replicated findings from two previous mediation analyses conducted with independent samples (Bromme et al., 2015; Thomm & Bromme, 2012). Second, our analyses were based on an experimental approach in which our manipulations of author and text scientificness were randomized. Under normal circumstances, it is not permissible to determine an optimal mediation model a posteriori (e.g., by reversing mediation paths and comparing models), because all such models belong to the same equivalence class (Thoemmes, 2015). In the case of an experimental manipulation, however, a strong case can be made for causality. According to Maxwell et al. (2011), randomization ensures that the effects of a predictor on both the mediator and the outcome are not merely correlational, but causal. It is this logic that leads Maxwell and colleagues to conclude that experimental designs are inherently longitudinal, since any posttest assessment occurs after the intervention.

A second limitation concerns the fact that we did not experimentally vary our mediator variables, i.e., perceived author and text scientificness. While our experimental design allows us to draw at least some inferences about causality, it remains unclear whether perceived author scientificness serves as a mediator that influences trust (i.e., m → y) or whether trust judgements are formed first and then influence perceived author or text scientificness (i.e., y → m). In addition, unknown confounding variables may be correlated with both perceived scientificness and trust. As such, it is possible to argue for alternative mediation models. Future studies are hence needed that replicate the effect with an independent sample and, preferably, a longitudinal design. Alternatively, it may be worthwhile to vary the predictor (i.e., author/text scientificness) and the mediator (i.e., perceived author/text scientificness) simultaneously (Imai et al., 2013). For instance, experimentally varying an accuracy prompt that directs readers’ attention to scientific text elements, or a priming paradigm (Dai et al., 2023) that highlights concepts such as “scientist” or “scientific” prior to reading, may help to vary perceived scientificness and truly establish causality.

Besides these major points, additional limitations apply. First, our control variable scales for prior topic beliefs showed inadequate internal consistency, which prevented us from building a reliable scale. The statements we can make on the influence of this control variable are therefore limited. One explanation for the poor internal consistency could be that our inverted items did not adequately capture opposed beliefs (e.g., the belief that couples should primarily spend their free time with others vs. in dyadic interaction). When employing such short belief scales in the future, researchers should therefore consider their wording carefully and potentially abstain from using inverted items. As has been previously discussed, such items might not be ideally suited to address response biases related to acquiescence, inattention or confusion (see Sonderen et al., 2013).

Furthermore, it is possible to make the case that using a scientificness manipulation as an independent variable and scientificness perceptions as a mediator may raise issues of construct validity, especially on the author level. After all, it appears counterintuitive to assume that a researcher introduced as a university professor would be perceived as anything but scientific. This concern can be addressed on both a conceptual and a methodological level. From a conceptual standpoint, one could argue that the stable share of individuals in the German general population reporting low trust in science or ambivalence (cf. Wissenschaft im Dialog/Kantar, 2022) warrants a closer examination, and that such perceptions should not simply be assumed at face value. And from a methodological standpoint, our scientificness manipulations on both the author and text level showed only small significant rank correlations with perceived author or text scientificness, rs (836) = .13 - .22, ps < .001. Given these small correlations, it seems warranted to treat textual or author features of scientificness and layreaders’ perceptions of such features as separate entities.

An additional limitation arises from the fact that all research summaries in the present study were presented in German. While the scientificness effect and the mediating role of perceived scientificness may also emerge in other languages, culture-specific influences such as varying conceptions of scientists and scientific norms may play a role. Influences that could plausibly vary between Germany and other regions, e.g., South America or Africa, include variables such as conservatism or spiritualism (see Rutjens et al., 2022). Additional examinations in different language and culture contexts are thus needed.

Furthermore, it appears worthwhile to probe for demand characteristics in the debriefing section of future studies. If lay readers suspect that an author’s or text’s level of scientificness is specifically targeted by researchers, their initial impulse may be to comply with the researchers’ categorization and automatically rate material in the high-scientificness conditions as more trustworthy. This may especially be an issue in online studies, where participant behavior cannot be directly observed.

As a final point, our study did not include behavioral measures related to trustworthiness. We therefore cannot comment on whether the observed increases in trustworthiness ratings affect how participants engage with or argue about the summaries. Future studies could include such measures, for example by having participants select texts for an argumentative essay and comparing selection rates between experimental groups.

Implications for Future Research

Our results enable us to make multiple suggestions for future studies and follow-up questions. First and foremost, we recommend that future studies use an experimental, longitudinal approach to data collection and mediation analysis. As mentioned above, cross-sectional mediation analyses can lead to biased effect size estimates (Maxwell et al., 2011; Maxwell & Cole, 2007). Our results should only be interpreted as preliminary, and future longitudinal examinations may help to substantiate our findings, as well as to allow statements about the stability of lay readers’ perceived scientificness and trustworthiness over time.

Second, we employed only basic mediation models. Yet, additional variables may also serve as mediators, for instance text easiness. Previous research (Bullock et al., 2019; Scharrer et al., 2012, 2021) has established that layreaders show higher claim agreement, rate arguments as more convincing and rely less on expert accounts when information is presented accessibly, a phenomenon known as the “easiness effect”. In the present study, text easiness also emerged as a significant predictor in all models. Initially, it may seem counterintuitive to assume that high easiness and high scientificness jointly influence trust; after all, one may only be able to push easiness so far without sacrificing a scientific style. However, easiness and scientificness were significantly positively related in the present study, rs (836) = .22 - .26, 95% CI [.16, .32], ps < .001. Additionally, exploratory evidence from a previous study (Jonas et al., 2023) suggests that the effects may be additive. While further considerations regarding easiness were beyond the scope of the present study, future research could test whether text easiness serves as a second, additional mediator between text scientificness features and author or text trustworthiness. This may help to shed further light on how perceived scientificness and easiness jointly influence trustworthiness.
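In such a parallel two-mediator model, the total indirect effect decomposes into specific indirect effects through each mediator. The following Python sketch simulates data under entirely hypothetical path coefficients (the variable names and values are invented for illustration, not the study's estimates) and recovers the two specific indirect effects with ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000

# simulate data under hypothetical path coefficients (illustration only)
x = rng.binomial(1, 0.5, n).astype(float)        # scientificness condition (low/high)
m1 = 0.5 * x + rng.normal(0.0, 1.0, n)           # perceived scientificness
m2 = 0.3 * x + rng.normal(0.0, 1.0, n)           # perceived easiness
y = 0.1 * x + 0.4 * m1 + 0.6 * m2 + rng.normal(0.0, 1.0, n)  # trustworthiness

def ols(outcome, *predictors):
    """Least-squares coefficient estimates (intercept first)."""
    design = np.column_stack([np.ones_like(outcome), *predictors])
    beta, *_ = np.linalg.lstsq(design, outcome, rcond=None)
    return beta

a1 = ols(m1, x)[1]                 # a-path to perceived scientificness
a2 = ols(m2, x)[1]                 # a-path to perceived easiness
_, _, b1, b2 = ols(y, x, m1, m2)   # b-paths, controlling for the direct effect

ind_scientificness = a1 * b1  # specific indirect effect via m1 (true value 0.5 * 0.4)
ind_easiness = a2 * b2        # specific indirect effect via m2 (true value 0.3 * 0.6)
```

Estimating both b-paths in one regression is what separates the mediators: each specific indirect effect is computed holding the other mediator constant.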

Another research possibility concerns individual differences related to scientificness perceptions. Epistemic justification beliefs (Ferguson & Bråten, 2013) come to mind, especially justification by authority and justification by multiple sources. The former had a consistent positive impact on all trustworthiness dimensions in the present study, while the latter especially influenced views on a researcher’s integrity and benevolence. An explanation for this could be that when readers high in justification by authority receive author or text scientificness cues, they may be more inclined to view the author as a scientific authority, and consequently raise their trust assessments. Similarly, readers with a high belief in justification by multiple sources may interpret an author introduced as a university professor as highly moral due to their adherence to scientific norms and conscientious work approach, and as interested in public well-being due to their position at a public institute. This may result in higher integrity and benevolence ratings. The relationship between scientificness manipulations on an author or text level and perceived scientificness may thus be moderated by epistemic justification beliefs. Specifying a more elaborate model based on moderated mediation could clarify which readers are especially likely to increase their trust when authors are introduced as scientific or when texts use a scientific discourse style, with implications for both science communication and interventions against misinformation.
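In such a moderated-mediation model with first-stage moderation (cf. Hayes, 2018), the conditional indirect effect has a simple closed form, and the index of moderated mediation quantifies how the indirect effect changes per unit of the moderator. A minimal sketch with hypothetical coefficients (all values invented for illustration):

```python
def conditional_indirect(a1, a3, b, w):
    """Indirect effect of scientificness (X) on trust (Y) through perceived
    scientificness (M) at moderator value w (e.g., justification by
    authority), when the a-path is moderated: a(w) = a1 + a3 * w."""
    return (a1 + a3 * w) * b

# hypothetical path estimates (illustration only, not the study's values)
a1, a3, b = 0.30, 0.20, 0.50

# index of moderated mediation: change in the indirect effect per unit of w
index_of_moderated_mediation = a3 * b

low_w = conditional_indirect(a1, a3, b, w=-1.0)   # readers low in the belief
high_w = conditional_indirect(a1, a3, b, w=1.0)   # readers high in the belief
```

A nonzero index would indicate that the scientificness-to-trust pathway is stronger for some readers than for others, which is exactly the question a moderated-mediation design would test.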

An additional point to consider is that our understanding of perceived scientificness as a mediator between scientific features and trust may benefit from direct manipulation. In the present study, we did not systematically vary readers’ scientificness perceptions. Experimentally varying perceived scientificness, e.g. by directing layreaders’ attention to scientific text elements via an accuracy prompt or by activating concepts such as “science” or “scientist” via priming (Dai et al., 2023) prior to reading, could help to substantiate perceived scientificness as a mediator and establish causality.

Lastly, future studies may profit from examining the two dimensions of perceived author scientificness and perceived text scientificness simultaneously. In the present study, our focus was on replicating the mediation effect described by Bromme and colleagues (Bromme et al., 2015; Thomm & Bromme, 2012) and on extending the paradigm to the level of author scientificness. Future studies could look into which of the two factors has the most substantial impact on trustworthiness judgements. Our exploratory analyses suggest that both can influence trustworthiness judgements independently, but further research could directly compare the strength of the effects and their boundary conditions.

Conclusion

Our study examined the impact of author and text scientificness variations on trustworthiness as well as the mediating role of laypeople’s scientificness perceptions. Drawing on prior research, we replicated and extended the scientificness effect, demonstrating that readers rated authors and texts as more trustworthy when these displayed higher levels of scientificness. Furthermore, our findings tentatively suggest that subjective perceptions of scientificness are at the core of these effects, and that there is an interplay between variations of scientificness on the author/text level and layreaders’ perception of scientificness on the opposite dimension. While future research is needed, we hope that this study contributes to developing a more theoretically sound basis for the scientificness effect, as well as to heightening awareness among researchers of the importance of scientificness cues when communicating their findings.

Contributed to conception and design: MJ, TR

Contributed to acquisition of data: MJ

Contributed to analysis and interpretation of data: MJ

Drafted and/or revised the article: MJ, TR

Approved the submitted version for publication: MJ, TR

This study was preregistered via the public open science repository PsychArchives and adheres to the repository’s disclosure requirements: https://doi.org/10.23668/psycharchives.12869

This research was funded by internal ZPID funds. The authors received no third-party or research grant funding.

Both MJ and TR are currently employed at the Leibniz Institute for Psychology (ZPID), a German public non-profit research support organization. The authors declare that the research was conducted in the absence of any commercial or financial relationships that could qualify as a conflict of interest.

The experimental stimuli (i.e., the research summaries and author vignettes), participant data and analysis scripts are available via the public open science repository PsychArchives.

The supplemental material for this article can be found online at:

https://doi.org/10.23668/psycharchives.15390

https://doi.org/10.23668/psycharchives.15391

https://doi.org/10.23668/psycharchives.15392

https://doi.org/10.23668/psycharchives.15393

https://doi.org/10.23668/psycharchives.15394

Supplemental Material S1 Overview of the Experimental Conditions [PDF]

Supplemental Material S2 Design Illustration [PDF]

Supplemental Material S3 Original German PLS and English Translations [PDF]

Supplemental Material S4/5/6 Dataset, Codebook and R Markdown File for the Pilot Study [CSV/RMD]

Supplemental Material S7/8/9 Dataset, Codebook and R Markdown File for the Main Study [CSV/RMD]

1. The preregistration is available at: https://doi.org/10.23668/psycharchives.12869. Compared to the original preregistration, RQ1 and RQ3 have been adjusted. The order of RQ1 has been changed for coherency’s sake (RQ1a in the preregistration now corresponds to RQ1d). For RQ3b-d, the mediator was incorrectly labeled as perceived text trustworthiness, which has also been corrected. Lastly, RQ3a-d have been reformulated. Perceived author scientificness is now assumed to mediate between text scientificness and author trustworthiness, and perceived text scientificness between author scientificness and text trustworthiness. This was done to make exploratory testing more theoretically sound.

2. Compared to the preregistration, the names of the independent variables have been slightly reworded to avoid confusion between the scientificness manipulation on an author/text level and layreaders’ subjective scientificness perceptions.

3. The effect size was incorrectly denoted as merely f in the preregistration due to a typing error; this has been corrected here.

4. The design was described as a within-subjects repeated measures design in the preregistration. However, since both between- and within-subjects data were used in the regression models, “mixed design” captures this more accurately.

5. A German educational degree obtained after graduating from a Haupt- or Abendschule, typically after completing grade 9 or 10. The degree qualifies holders for apprenticeships and vocational school, and roughly corresponds to the ISCED 2 level (lower secondary education), see UNESCO Institute for Statistics (2012).

6. We deviated from the preregistered repeated-measures, Bonferroni-corrected t-tests to more adequately deal with our mixed design. Repeated-measures t-tests for conditions in which they were possible are available in Supplemental Materials 6 and 9.

7. Degrees of freedom were computed based on Satterthwaite’s method, see Kuznetsova et al. (2017).

Baron, R. M., & Kenny, D. A. (1986). The moderator–mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. https://doi.org/10.1037/0022-3514.51.6.1173
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1). https://doi.org/10.18637/jss.v067.i01
Bicchieri, C., Fatas, E., Aldama, A., Casas, A., Deshpande, I., Lauro, M., Parilli, C., Spohn, M., Pereira, P., & Wen, R. (2021). In science we (should) trust: Expectations and compliance across nine countries during the COVID-19 pandemic. PLOS ONE, 16(6), e0252892. https://doi.org/10.1371/journal.pone.0252892
Bohner, G., Ruder, M., & Erb, H.-P. (2002). When expertise backfires: Contrast and assimilation effects in persuasion. British Journal of Social Psychology, 41(4), 495–519. https://doi.org/10.1348/014466602321149858
Brashier, N. M., & Marsh, E. J. (2020). Judging Truth. Annual Review of Psychology, 71(1), 499–515. https://doi.org/10.1146/annurev-psych-010419-050807
Bromme, R., & Goldman, S. R. (2014). The Public’s Bounded Understanding of Science. Educational Psychologist, 49(2), 59–69. https://doi.org/10.1080/00461520.2014.921572
Bromme, R., Mede, N. G., Thomm, E., Kremer, B., & Ziegler, R. (2022). An anchor in troubled times: Trust in science before and within the COVID-19 pandemic. PLOS ONE, 17(2), e0262823. https://doi.org/10.1371/journal.pone.0262823
Bromme, R., Scharrer, L., Stadtler, M., Hömberg, J., & Torspecken, R. (2015). Is It Believable When It’s Scientific? How Scientific Discourse Style Influences Laypeople’s Resolution of Conflicts. Journal of Research in Science Teaching, 52(1), 36–57. https://doi.org/10.1002/tea.21172
Bullock, O. M., Colón Amill, D., Shulman, H. C., & Dixon, G. N. (2019). Jargon as a barrier to effective science communication: Evidence from metacognition. Public Understanding of Science, 28(7), 845–853. https://doi.org/10.1177/0963662519865687
Clark, J. K., Wegener, D. T., Habashi, M. M., & Evans, A. T. (2012). Source Expertise and Persuasion: The Effects of Perceived Opposition or Support on Message Scrutiny. Personality and Social Psychology Bulletin, 38(1), 90–100. https://doi.org/10.1177/0146167211420733
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). L. Erlbaum Associates.
Cologna, V., & Siegrist, M. (2020). The role of trust for climate change mitigation and adaptation behaviour: A meta-analysis. Journal of Environmental Psychology, 69, 101428. https://doi.org/10.1016/j.jenvp.2020.101428
Curran, P. G., & Hauser, K. A. (2019). I’m paid biweekly, just not by leprechauns: Evaluating valid-but-incorrect response rates to attention check items. Journal of Research in Personality, 82, 103849. https://doi.org/10.1016/j.jrp.2019.103849
Dai, W., Yang, T., White, B. X., Palmer, R., Sanders, E. K., McDonald, J. A., Leung, M., & Albarracín, D. (2023). Priming behavior: A meta-analysis of the effects of behavioral and nonbehavioral primes on overt behavioral outcomes. Psychological Bulletin, 149(1–2), 67–98. https://doi.org/10.1037/bul0000374
Dale, S. (2015). Heuristics and biases: The science of decision-making. Business Information Review, 32(2), 93–99. https://doi.org/10.1177/0266382115592536
Davern, M., Bautista, R., Freese, J., Morgan, S., & Smith, T. W. (2022). Major declines in the public’s confidence in science in the wake of the pandemic. AP NORC at the University of Chicago. https://apnorc.org/projects/major-declines-in-the-publics-confidence-in-science-in-the-wake-of-the-pandemic/
Dobson, K., & Ogolsky, B. (2022). The role of social context in the association between leisure activities and romantic relationship quality. Journal of Social and Personal Relationships, 39(2), 221–244. https://doi.org/10.1177/02654075211036504
Douglas, K. M., & Sutton, R. M. (2023). What Are Conspiracy Theories? A Definitional Approach to Their Correlates, Consequences, and Communication. Annual Review of Psychology, 74(1), 271–298. https://doi.org/10.1146/annurev-psych-032420-031329
Eastin, M. S. (2001). Credibility Assessments of Online Health Information: The Effects of Source Expertise and Knowledge of Content. Journal of Computer-Mediated Communication, 6(4), JCMC643. https://doi.org/10.1111/j.1083-6101.2001.tb00126.x
Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods, 41(4). https://doi.org/10.3758/BRM.41.4.1149
Ferguson, L. E., & Bråten, I. (2013). Student profiles of knowledge and epistemic beliefs: Changes and relations to multiple-text comprehension. Learning and Instruction, 25, 49–61. https://doi.org/10.1016/j.learninstruc.2012.11.003
Georgiou, N., Delfabbro, P., & Balzan, R. (2020). COVID-19-related conspiracy beliefs and their relationship with perceived stress and pre-existing conspiracy beliefs. Personality and Individual Differences, 166, 110201. https://doi.org/10.1016/j.paid.2020.110201
Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). Guilford Press.
Hendriks, F., Kienhues, D., & Bromme, R. (2015). Measuring Laypeople’s Trust in Experts in a Digital Age: The Muenster Epistemic Trustworthiness Inventory (METI). PLOS ONE, 10(10), e0139309. https://doi.org/10.1371/journal.pone.0139309
Hendriks, F., Kienhues, D., & Bromme, R. (2016). Trust in Science and the Science of Trust. In B. Blöbaum (Ed.), Trust and Communication in a Digitized World (pp. 143–159). Springer International Publishing. https://doi.org/10.1007/978-3-319-28059-2_8
Hendriks, F., Kienhues, D., & Bromme, R. (2017). METI. Muenster Epistemic Trustworthiness Inventory [Verfahrensdokumentation und Fragebogen]. In Leibniz-Institut für Psychologie (ZPID) (Ed.), Open Test Archive. ZPID.
Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. Yale University Press.
Hughes, J. P., Efstratiou, A., Komer, S. R., Baxter, L. A., Vasiljevic, M., & Leite, A. C. (2022). The impact of risk perceptions and belief in conspiracy theories on COVID-19 pandemic-related behaviours. PLOS ONE, 17(2), e0263716. https://doi.org/10.1371/journal.pone.0263716
Huguet, J., Gaya, J. M., Rodríguez-Faba, O., Breda, A., & Palou, J. (2018). El estilo de la comunicación científica [The style of scientific communication]. Actas Urológicas Españolas, 42(9), 551–556. https://doi.org/10.1016/j.acuro.2018.02.013
Igartua, J.-J., & Hayes, A. F. (2021). Mediation, Moderation, and Conditional Process Analysis: Concepts, Computations, and Some Common Confusions. The Spanish Journal of Psychology, 24, e49. https://doi.org/10.1017/SJP.2021.46
Imai, K., Tingley, D., & Yamamoto, T. (2013). Experimental Designs for Identifying Causal Mechanisms. Journal of the Royal Statistical Society Series A: Statistics in Society, 176(1), 5–51. https://doi.org/10.1111/j.1467-985X.2012.01032.x
Ismagilova, E., Slade, E., Rana, N. P., & Dwivedi, Y. K. (2020). The effect of characteristics of source credibility on consumer behaviour: A meta-analysis. Journal of Retailing and Consumer Services, 53, 101736. https://doi.org/10.1016/j.jretconser.2019.01.005
Jaeger, B. (2017). Computes R Squared for Mixed (Multilevel) Models. https://cran.r-project.org/web/packages/r2glmm/r2glmm.pdf
Jonas, M., Kerwer, M., Chasiotis, A., & Rosman, T. (2023). Indicators of trustworthiness in lay-friendly research summaries: Scientificness surpasses easiness. Public Understanding of Science, 096366252311763. https://doi.org/10.1177/09636625231176377
Kantar Public. (2023). Covid-19 and the Public Perception of Genetics. https://genetics.org.uk/wp-content/uploads/2018/06/Copy-of-Public-Perception-of-Genetics.pdf
Kitchen, P. J., Kerr, G., E. Schultz, D., McColl, R., & Pals, H. (2014). The elaboration likelihood model: Review, critique and research agenda. European Journal of Marketing, 48(11/12), 2033–2050. https://doi.org/10.1108/EJM-12-2011-0776
König, L., & Jucks, R. (2019). Hot topics in science communication: Aggressive language decreases trustworthiness and credibility in scientific debates. Public Understanding of Science, 28(4), 401–416. https://doi.org/10.1177/0963662519833903
König, L., & Jucks, R. (2020). Effects of Positive Language and Profession on Trustworthiness and Credibility in Online Health Advice: Experimental Study. Journal of Medical Internet Research, 22(3), e16685. https://doi.org/10.2196/16685
Koot, C., Mors, E. T., Ellemers, N., & Daamen, D. D. L. (2016). Facilitation of attitude formation through communication: How perceived source expertise enhances the ability to achieve cognitive closure about complex environmental topics. Journal of Applied Social Psychology, 46(11), 627–640. https://doi.org/10.1111/jasp.12391
Kruglanski, A. W. (2013). Lay Epistemics and Human Knowledge: Cognitive and Motivational Bases. Springer Science & Business Media.
Kruglanski, A. W., & Fishman, S. (2009). The need for cognitive closure. In M. R. Leary & R. H. Hoyle (Eds.), Handbook of individual differences in social behavior (pp. 343–353). The Guilford Press.
Kuznetsova, A., Brockhoff, P. B., & Christensen, R. H. B. (2017). lmerTest Package: Tests in Linear Mixed Effects Models. Journal of Statistical Software, 82(13), 1–26. https://doi.org/10.18637/jss.v082.i13
Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008
Lin, T., Heemskerk, A., Harris, E. A., & Ebner, N. C. (2023). Risk perception and conspiracy theory endorsement predict compliance with COVID-19 public health measures. British Journal of Psychology, 114(1), 282–293. https://doi.org/10.1111/bjop.12613
Lüdecke, D. (2024). sjPlot: Data Visualization for Statistics in Social Science. R package version 2.8.16. https://CRAN.R-project.org/package=sjPlot
Madigan, R., Johnson, S., & Linton, P. (1995). The language of psychology: APA style as epistemology. American Psychologist, 50(6), 428–436. https://doi.org/10.1037/0003-066X.50.6.428
Madison, A. A., Way, B. M., Beauchaine, T. P., & Kiecolt-Glaser, J. K. (2021). Risk assessment and heuristics: How cognitive shortcuts can fuel the spread of COVID-19. Brain, Behavior, and Immunity, 94, 6–7. https://doi.org/10.1016/j.bbi.2021.02.023
Marcoulides, K. M., & Raykov, T. (2019). Evaluation of Variance Inflation Factors in Regression Models Using Latent Variable Modeling Methods. Educational and Psychological Measurement, 79(5), 874–882. https://doi.org/10.1177/0013164418817803
Martel, C., Allen, J., Pennycook, G., & Rand, D. G. (2023). Crowds Can Effectively Identify Misinformation at Scale. Perspectives on Psychological Science, 17456916231190388. https://doi.org/10.1177/17456916231190388
Matick, E., Kottwitz, M. U., Rigotti, T., & Otto, K. (2022). I can’t get no Sleep: The Role of Leaders’ Health and Leadership Behavior on Employees’ Sleep Quality. European Journal of Work and Organizational Psychology, 31(6), 869–879. https://doi.org/10.1080/1359432X.2022.2077198
Maxwell, S. E., & Cole, D. A. (2007). Bias in cross-sectional analyses of longitudinal mediation. Psychological Methods, 12(1), 23–44. https://doi.org/10.1037/1082-989X.12.1.23
Maxwell, S. E., Cole, D. A., & Mitchell, M. A. (2011). Bias in Cross-Sectional Analyses of Longitudinal Mediation: Partial and Complete Mediation Under an Autoregressive Model. Multivariate Behavioral Research, 46(5), 816–841. https://doi.org/10.1080/00273171.2011.606716
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), Article 3. https://doi.org/10.5465/amr.1995.9508080335
McGuire, W. J. (1996). The Yale communication and attitude-change program in the 1950s. In E. E. Dennis & E. A. Wartella (Eds.), American communication research—The remembered history (pp. 39–59). Lawrence Erlbaum Associates, Inc.
Nakagawa, S., Schielzeth, H., & O’Hara, R. B. (2013). A general and simple method for obtaining R2 from generalized linear mixed-effects models. Methods in Ecology and Evolution, 4(2). https://doi.org/10.1111/j.2041-210x.2012.00261.x
Petty, R. E., & Brinol, P. (2012). The Elaboration Likelihood Model. In P. A. M. Von Lange, A. W. Kruglanski, & E. T. Higgins (Eds.), Handbook of Theories of Social Psychology: Collection: Volumes 1 & 2 (pp. 224–246). SAGE Publications.
Pornpitakpan, C. (2004). The Persuasiveness of Source Credibility: A Critical Review of Five Decades’ Evidence. Journal of Applied Social Psychology, 34(2), 243–281. https://doi.org/10.1111/j.1559-1816.2004.tb02547.x
R Core Team. (2024). R: A language and environment for statistical computing. R Foundation for Statistical Computing. https://www.r-project.org/
Reinhard, M.-A., & Schwarz, N. (2012). The influence of affective states on the process of lie detection. Journal of Experimental Psychology: Applied, 18(4), 377–389. https://doi.org/10.1037/a0030466
Rutjens, B. T., Sengupta, N., Der Lee, R. V., Van Koningsbruggen, G. M., Martens, J. P., Rabelo, A., & Sutton, R. M. (2022). Science Skepticism Across 24 Countries. Social Psychological and Personality Science, 13(1), 102–117. https://doi.org/10.1177/19485506211001329
Sailer, M., Stadler, M., Botes, E., Fischer, F., & Greiff, S. (2022). Science knowledge and trust in medicine affect individuals’ behavior in pandemic crises. European Journal of Psychology of Education, 37(1), 279–292. https://doi.org/10.1007/s10212-021-00529-1
Scharrer, L., Bromme, R., Britt, M. A., & Stadtler, M. (2012). The seduction of easiness: How science depictions influence laypeople’s reliance on their own evaluation of scientific information. Learning and Instruction, 22(3), 231–243. https://doi.org/10.1016/j.learninstruc.2011.11.004
Scharrer, L., Bromme, R., & Stadtler, M. (2021). Information Easiness Affects Non-experts’ Evaluation of Scientific Claims About Which They Hold Prior Beliefs. Frontiers in Psychology, 12, 678313. https://doi.org/10.3389/fpsyg.2021.678313
Schlink, S., & Walther, E. (2007). Kurz und gut: Eine deutsche Kurzskala zur Erfassung des Bedürfnisses nach kognitiver Geschlossenheit [Short and sweet: A German short scale to measure need for cognitive closure]. Zeitschrift Für Sozialpsychologie, 38(3), Article 3. https://doi.org/10.1024/0044-3514.38.3.153
Schröder-Pfeifer, P., Talia, A., Volkert, J., & Taubner, S. (2018). Developing an assessment of epistemic trust: a research protocol. Research in Psychotherapy: Psychopathology, Process and Outcome, 21(3). https://doi.org/10.4081/ripppo.2018.330
Schwarz, N., Jalbert, M., Noah, T., & Zhang, L. (2021). Metacognitive experiences as information: Processing fluency in consumer judgment and decision making. Consumer Psychology Review, 4(1), 4–25. https://doi.org/10.1002/arcp.1067
Sonderen, E. V., Sanderman, R., & Coyne, J. C. (2013). Ineffectiveness of Reverse Wording of Questionnaire Items: Let’s Learn from Cows in the Rain. PLoS ONE, 8(7), e68967. https://doi.org/10.1371/journal.pone.0068967
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic Vigilance. Mind & Language, 25(4), 359–393. https://doi.org/10.1111/j.1468-0017.2010.01394.x
Thoemmes, F. (2015). Reversing Arrows in Mediation Models Does Not Distinguish Plausible Models. Basic and Applied Social Psychology, 37(4), 226–234. https://doi.org/10.1080/01973533.2015.1049351
Thomm, E., & Bromme, R. (2012). It should at least seem scientific! Textual features of “scientificness” and their impact on lay assessments of online information. Science Education, 96(2). https://doi.org/10.1002/sce.20480
Tingley, D., Yamamoto, T., Hirose, K., Keele, L., & Imai, K. (2014). mediation: R Package for Causal Mediation Analysis. https://cran.r-project.org/web/packages/mediation/vignettes/mediation.pdf
Tormala, Z. L., & Clarkson, J. J. (2008). Source Trustworthiness and Information Processing in Multiple Message Situations: A Contextual Analysis. Social Cognition, 26(3), 357–367. https://doi.org/10.1521/soco.2008.26.3.357
UNESCO Institute for Statistics. (2012). International Standard Classification of Education ISCED 2011. UNESCO Institute of Statistics. https://doi.org/10.15220/978-92-9189-123-8-en
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. https://doi.org/10.1126/science.aap9559
Wang, X., Shi, J., & Kong, H. (2021). Online Health Information Seeking: A Review and Meta-Analysis. Health Communication, 36(10), 1163–1175. https://doi.org/10.1080/10410236.2020.1748829
Whelehan, D. F., Conlon, K. C., & Ridgway, P. F. (2020). Medicine and heuristics: Cognitive biases and medical decision-making. Irish Journal of Medical Science (1971 -), 189(4), 1477–1484. https://doi.org/10.1007/s11845-020-02235-1
Wilholt, T. (2013). Epistemic Trust in Science. The British Journal for the Philosophy of Science, 64(2), 233–253. https://doi.org/10.1093/bjps/axs007
Wissenschaft im Dialog/Kantar. (2022). Science Barometer 2022 Brochure. Wissenschaft im Dialog/Kantar. https://www.wissenschaft-im-dialog.de/fileadmin/user_upload/Projekte/Wissenschaftsbarometer/Dokumente_22/Englisch/sciencebarometer2022.pdf
Zaboski, B. A., & Therriault, D. J. (2020). Faking science: Scientificness, credibility, and belief in pseudoscience. Educational Psychology, 40(7), 820–837. https://doi.org/10.1080/01443410.2019.1694646
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material