Intellectual humility involves recognizing one’s own epistemic limitations and is associated with several beneficial outcomes. This work presents a French version of the General Intellectual Humility Scale (Leary et al., 2017) to enable research on this concept in French-speaking populations. We translated the scale and, across five studies (NTotal = 2172), provide evidence of its structure and reliability (through EFA, CFA, internal consistency, test-retest reliability, and measurement invariance across French and English speakers) and of the validity of its scores (convergent, divergent, and predictive). Study 1 showed that the scale related positively to the need for evidence. Study 2 revealed that the scale (a) related positively to openness and need for cognition, (b) related negatively to dogmatism, and (c) had a negligible relationship with social desirability. In Study 3, scores predicted attention paid to the quality of persuasive arguments over and above the need for closure. Study 4 showed that the scale correlated positively with another self-report measure of intellectual humility. A test-retest analysis also indicated adequate reliability for the French version of the scale. Finally, Study 5 tested measurement invariance and revealed that the structure of the scale was similar between French and English speakers and that their latent scores were comparable. Overall, this work supports the (French) General Intellectual Humility Scale as a valid measure of intellectual humility.

According to the Edelman Trust Barometer France 2021, only 18% of respondents report attending to the source of a news story to judge its quality or consulting multiple sources before relaying it. This opens the door to the spread of misinformation and false beliefs, which may have significant societal consequences. For instance, France is one of the countries with the most vaccine skeptics: 33% of French people disagree that vaccines are safe (the highest proportion worldwide) and 19% disagree that vaccines are effective (the second highest proportion worldwide; Wellcome Global Monitor, 2019). This skepticism may limit the effectiveness of national health interventions. It is therefore crucial to identify and investigate the variables related to the endorsement of misinformation and the development of false beliefs. The literature suggests that intellectual humility could be one of these variables. However, to date, there is no validated intellectual humility scale available in French, which limits the scope of research. The aim of this paper is to fill this gap and to provide additional validation evidence for the scale we selected, the General Intellectual Humility Scale (GIHS; Leary et al., 2017). We validated a French translation of this scale by replicating and extending the evidence from the original validation work.

People may vary in their willingness to recognize that their beliefs might be wrong or incomplete. Recognizing ignorance and the fallibility of one’s beliefs is the core of what distinguishes an intellectually humble person from a less humble one (Leary, 2018; Porter, Baldwin, et al., 2022; Porter, Elnakouri, et al., 2022). Various definitions of intellectual humility appear in the literature, ranging from those focusing on its metacognitive features to those also including motivational or interpersonal features (for a review see Porter, Baldwin, et al., 2022). However, the recognition of one’s epistemic limitations (i.e., fallibility and ignorance concerning one’s own knowledge) is the most common feature across the philosophical and psychological research literature. This definition, as a form of (epistemic) metacognition, can serve as a widely accepted general definition of intellectual humility. The term intellectual should thus be understood in the sense of knowledge and beliefs, not in the sense of intelligence.

The literature suggests that acknowledging one’s epistemic limitations may be a genuine virtue in relation to people’s beliefs at the individual, social, and societal levels (Ballantyne, 2023; Leary, 2022; Porter, Elnakouri, et al., 2022). At the individual level, intellectual humility might protect people against common cognitive fallacies and, at least in part, against some of their unjustifiably rigid and biased tendencies. Research indeed shows that higher intellectual humility is associated with greater cognitive flexibility (Zmigrod et al., 2019), less desire to obtain definitive, certain, and unambiguous answers (Leary et al., 2017), greater openness to opposing views (Porter & Schumann, 2018), and greater investment in learning about topics people initially failed to master (Porter et al., 2020). Intellectual humility is also inversely related to the tendency to over-claim one’s knowledge (Alfano et al., 2017; Krumrei-Mancuso et al., 2020; but see Deffler et al., 2016).

At the social level, intellectual humility may promote positive, respectful, and tolerant social interactions. Indeed, being intellectually humble is positively related to prosocial personality traits and outcomes like agreeableness (Bowes et al., 2022; Leary et al., 2017; Porter & Schumann, 2018), empathy, gratitude, and altruism (Krumrei-Mancuso, 2017). Considering one’s knowledge as potentially wrong could lead one to be more open-minded, because opposing views may, in fact, be correct. In line with this idea, people high in intellectual humility manifest greater respect toward views that differ from their own (Porter & Schumann, 2018). They are more likely to befriend someone with opposing views and less likely to derogate the intellectual capabilities or moral character of this opponent (Stanley et al., 2020; see also Bowes et al., 2020).

Finally, at the societal level, intellectual humility may notably promote the efficacy of prevention campaigns. For instance, intellectual humility relates negatively to anti-vaccination attitudes but positively to intentions to vaccinate (for seasonal flu, Senger & Huynh, 2021; for the COVID-19 vaccine, Huynh & Senger, 2021). Intellectual humility may also limit conflicts between opposing political partisans because intellectually humble people show less political myside bias (Bowes et al., 2022). They are more open to information that conflicts with their political views (Porter & Schumann, 2018), are less inclined to perceive the opposing party as immoral and unlikable (i.e., affective polarization; Bowes et al., 2020), and are less likely to accuse a candidate of flip-flopping for utilitarian reasons (Bowes et al., 2022; Leary et al., 2017).

To summarize, the literature suggests that intellectual humility is related to individual, social, and societal variables that could prove essential in addressing many of the challenges facing societies today (e.g., misinformation endorsement, vaccine skepticism). To establish and deepen these relations, appropriate measurement tools for intellectual humility are necessary. Over the past decade, several measures of intellectual humility have been developed, mainly in the form of self-report questionnaires (e.g., Alfano et al., 2017; Krumrei-Mancuso & Rouse, 2016; Leary et al., 2017; Porter & Schumann, 2018; for a review see Porter, Baldwin, et al., 2022; for behavioral measures see Danovitch et al., 2019; Hanel et al., 2023). However, most of these measures have been validated only in English. This is problematic for two reasons. First, if researchers rely only on validated measures, the investigation of intellectual humility is restricted to English-speaking samples. A French measure of the construct is therefore needed to shed light on French-speaking societal phenomena related to people’s beliefs through their link with intellectual humility. Second, if, as is often the case, researchers use non-validated translations, these translations may not capture the same construct. If versions that are supposed to measure the same construct fail to do so (i.e., the jingle fallacy; Flake & Fried, 2020), the variability in study conclusions increases and their validity becomes limited. Thus, ensuring that a French measure of intellectual humility actually measures intellectual humility, does so reliably, and does so in the same way as the original English version is crucial to the robustness and replicability of the intellectual humility literature. In this work, we aimed to translate and validate an existing measure of intellectual humility.

Insofar as different definitions of intellectual humility exist in the literature, several measurement scales (based on these different definitions) have emerged. Among these scales, the GIHS (Leary et al., 2017) has been developed to measure general intellectual humility, defined as “recognizing that a particular personal belief may be fallible, accompanied by an appropriate attentiveness to limitations in the evidentiary basis of that belief and to one’s own limitations in obtaining and evaluating relevant information” (Leary et al., 2017, p. 793). The GIHS consists of six items (e.g., “I accept that my beliefs and attitudes may be wrong”) answered using a 5-point scale ranging from not at all true of me to extremely true of me. All six items load onto a single factor interpreted as the general tendency to be high or low in intellectual humility.

The GIHS offers several advantages for studying intellectual humility. The first advantage is the strong support in the literature for the construct validity of the GIHS, demonstrated by its associations with various psychological concomitants of intellectual humility. For instance, the scale is positively correlated with openness, epistemic curiosity, existential quest, and need for cognition, and negatively correlated with dogmatism, intolerance of ambiguity, and self-righteousness (Leary et al., 2017). Moreover, the higher people score on the GIHS, the more they prefer balanced perspectives that acknowledge both sides of a position, the more positively they evaluate people who change their beliefs, and the better they distinguish strong from weak arguments (Leary et al., 2017). Higher scores on the GIHS also relate to lower endorsement of misinformation, such as beliefs in the paranormal, and to better discrimination between real and fake news headlines (Bowes & Tasimi, 2022), as well as to a greater likelihood of engaging in investigative behaviors when confronted with COVID-19 fake news or political misinformation (Koetke et al., 2022, 2023). The GIHS also correlates positively with both self-reported and actual mastery behaviors (Porter et al., 2020). Importantly, although developed on the basis of a specific metacognitive conception of intellectual humility, the GIHS correlates positively with other intellectual humility scales (Haggard et al., 2018; Porter & Schumann, 2018). The second advantage of the GIHS is that, by focusing on the metacognitive core of intellectual humility, it avoids conflating the measured construct with its behavioral, emotional, motivational, and social consequences (Leary, 2018). Third, due to its short format and unidimensional structure, the scale is easy to use and interpret: it avoids classifying two people as similarly intellectually humble on the basis of different subdimension scores and does not require weighting subscale scores. For all these reasons, we decided to translate and validate the GIHS in French to provide an adequate tool for measuring and studying intellectual humility in the French-speaking context.

The purpose of this work was to translate and validate the GIHS (Leary et al., 2017) in French. In Study 1, we aimed to replicate the factor structure of the French GIHS and assessed its convergent validity by investigating the correlation between the French GIHS and the importance that people attach to the consistency between available evidence and their beliefs (i.e., need for evidence; Garrett & Weeks, 2017). In Study 2, we extended the test of convergent validity by testing some of the most important relationships with related constructs obtained with the original scale (i.e., openness to values, dogmatism, and need for cognition). We also tested the divergent validity of the scale vis-à-vis social desirability. In Study 3, we tested whether the French GIHS predicts the extent to which people distinguish strong from weak evidence over and above the need for closure (Leary et al., 2017). Owing to the overlap of participants between Studies 2 and 3, we were also able to examine the test-retest reliability of the scale. In Study 4, we investigated whether the GIHS correlates with another intellectual humility scale. Finally, in Study 5, we tested the measurement invariance of the scale across French and English speakers. For each study, all data, materials, and analysis code are available on the OSF project: https://osf.io/8q6rx/?view_only=21400eb81c6143bc979327c18c7aff27. We collected and analyzed all data anonymously, with written informed consent from participants, in accordance with the American Psychological Association’s ethical principles. We did not seek explicit ethics approval for this work because it was not required by our university’s guidelines or the applicable national regulations.

We translated the GIHS using a forward-backward method (Brislin, 1970). Five bilingual native French speakers independently translated the English version into French. A committee of researchers, who were either familiar or unfamiliar with the concept of intellectual humility, resolved any discrepancies between the translations to obtain a preliminary version of the French GIHS. This version was then back-translated by three bilingual native English speakers. Finally, French items were adjusted when required, following exchanges between the researchers and the bilingual native English speakers, to ensure that the French items preserved the meaning of the original items. See the Appendix for the original English version and the final French version of the GIHS.

In Study 1, we replicate the unidimensional factor structure of the GIHS with the translated version. We also provide preliminary evidence of its convergent validity. Intellectual humility involves paying attention to evidence concerning one’s beliefs, because these beliefs might be incorrect (Leary, 2018). Accordingly, people who score high on intellectual humility should also place greater value on ensuring that their beliefs are consistent with available evidence, a tendency labeled the need for evidence (Garrett & Weeks, 2017). Therefore, in Study 1, we predicted a positive relationship between the French GIHS and the need for evidence.

Method

Sample Size Determination

We performed a Monte Carlo simulation analysis (“simsem” R package version 0.5-16; Beaujean, 2014; Muthén & Muthén, 2002), with N = 300 (sample size), m = 1000 (number of samples), and seed = 56765, fixing the values of the parameters of the measurement models to the results obtained in the original papers (Garrett & Weeks, 2017; Leary et al., 2017). To the best of our knowledge, the correlation between the GIHS and the need for evidence had never been investigated before, so it was difficult to estimate the anticipated effect size. As the correlations obtained in the original paper (Leary et al., 2017) ranged from r = .15 to r = .49, we deemed a value of r = .25 to be a reasonable anticipated effect size for our simulation analysis. The simulation analysis showed that a sample size of 300 was sufficient to reliably detect the factor structure of the GIHS and the Need for Evidence scale, as well as a correlation between the two scales of at least r = .25, with satisfactory power (> 80%; see the pre-registration in the OSF project). Pre-registration for Study 1 is available from the following link: https://osf.io/7tcx2/?view_only=fed7a40eb9e84858aaaacdd190dd9408.
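
The R sketch below illustrates how such a simulation can be set up with simsem. The population parameter values, object names, and item labels are placeholders for illustration, not the exact values used in the pre-registered analysis (available on the OSF).

```r
library(simsem)

# Population (data-generating) model with placeholder parameter values;
# the pre-registered script fixes these to the estimates reported in
# Leary et al. (2017) and Garrett & Weeks (2017).
pop_model <- '
  ih  =~ 0.6*ih1 + 0.6*ih2 + 0.5*ih3 + 0.6*ih4 + 0.6*ih5 + 0.5*ih6
  nfe =~ 0.6*nfe1 + 0.7*nfe2 + 0.6*nfe3 + 0.6*nfe4
  ih ~~ 0.25*nfe
'
# Analysis model fitted to each simulated sample
fit_model <- '
  ih  =~ ih1 + ih2 + ih3 + ih4 + ih5 + ih6
  nfe =~ nfe1 + nfe2 + nfe3 + nfe4
'
out <- sim(nRep = 1000, model = fit_model, n = 300,
           generate = pop_model, lavaanfun = "cfa", seed = 56765)
summaryParam(out)  # the power column gives the proportion of replications with p < .05
```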

Participants

We recruited 722 participants on social networks (we exceeded our planned sample size due to a surge in participation following the last call for participants). The study was conducted online on Qualtrics, and participants joined voluntarily without remuneration. Neither the (pre-registered) exclusion criterion based on an unrealistically fast completion time nor the criterion based on French fluency led to any exclusions. However, we excluded participants who failed the attention check (i.e., wrote something other than “baguette”, n = 25). We analyzed the data of the remaining 697 participants (nFemale = 342; nMale = 339; nOther = 16; MAge = 35.61; SDAge = 11.73). Half of the sample had a Master’s degree or a more advanced degree (51.51%), while a minority had no degree (0.86%).

Materials

GIHS. We used the French version of the GIHS. The GIHS consists of six items (e.g., “I accept that my beliefs and attitudes may be wrong”) answered using a 5-point scale (i.e., 0 = not at all true of me, 1 = slightly true of me, 2 = moderately true of me, 3 = very true of me, and 4 = extremely true of me; M = 2.84, SD = 0.56, Cronbach’s ⍺ = .74, McDonald’s ω = .75).

Need for Evidence scale. To assess the value an individual places on ensuring that beliefs are consistent with available evidence, we used a French translation of the Need for Evidence scale already used by our research team (M = 1.08, SD = 0.70, Cronbach’s ⍺ = .76, McDonald’s ω = .77; Garrett & Weeks, 2017). The scale consists of four items (e.g., “I need to be able to justify my beliefs with evidence”) rated on a 5-point scale (i.e., 1 = strongly disagree, 2 = disagree, 3 = neither agree nor disagree, 4 = agree, and 5 = strongly agree).

Procedure

After consenting to participate in the study, participants completed the French GIHS and the Need for Evidence scale in a randomized order. At the end of the questionnaire, participants completed an attention check (i.e., they had to answer “baguette” to the question “What is your favorite color?”, as explicitly required in the instructions preceding the question). After that, they indicated their age, gender, whether they were fluent in French, and the highest level of education they had completed.

Results

We began by investigating the factor structure of the French GIHS with an exploratory factor analysis (“psych” R package version 2.3-3; Revelle, 2022), using a minimum residual factoring method and an oblique rotation. A Parallel Analysis (Horn, 1965) and an Empirical Kaiser Criterion extraction (Braeken & van Assen, 2017) suggested extracting two factors and one factor, respectively, but the one-factor solution proved more adequate. Indeed, in the two-factor solution, only one item (i.e., “In the face of conflicting evidence, I am open to changing my opinions”) loaded on the second factor, albeit with an extremely high loading. A factor defined by a single item can be considered weak, unstable, and thus uninterpretable (Costello & Osborne, 2005; Velicer & Fava, 1998). This item is also the only one that does not begin with the pronoun “I” and might load on a separate factor simply because it is phrased differently from the other five items (i.e., method variance). The one-factor solution revealed factor loadings that were all above .50 (see Table 1), replicating the factor structure reported in the original development paper (Leary et al., 2017).

Table 1.
Factor loadings for the French GIHS
Items Factor loadings 
I question my own opinions, positions, and viewpoints because they could be wrong. .591 
I reconsider my opinions when presented with new evidence. .606 
I recognize the value in opinions that are different from my own. .514 
I accept that my beliefs and attitudes may be wrong. .631 
In the face of conflicting evidence, I am open to changing my opinions. .587 
I like finding out new information that differs from what I already think is true. .534 
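
For readers who wish to reproduce this exploratory step, a minimal sketch with the psych package follows. Column names are hypothetical, and the Empirical Kaiser Criterion is implemented in other packages (e.g., EFAtools) rather than in psych.

```r
library(psych)

gihs_items <- df[, paste0("gihs", 1:6)]  # hypothetical column names

# Parallel analysis to suggest the number of factors to retain
fa.parallel(gihs_items, fm = "minres", fa = "fa")

# One-factor solution with minimum residual extraction and oblique rotation
efa_fit <- fa(gihs_items, nfactors = 1, fm = "minres", rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0)
```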

We then tested the correlation between the French GIHS scale and the Need for Evidence scale using structural equation modeling (SEM). We built a model with intellectual humility and need for evidence as latent factors and their respective scale items as corresponding indicators. We conducted the analyses using the “sem” function from the “lavaan” R package (version 0.6-15, Rosseel, 2012). Our data violated the assumption of multivariate normality, but for models with at least five response options per item, maximum likelihood has been shown to accurately estimate factor loadings and robust standard errors (Rhemtulla et al., 2012). Thus, we treated the items as continuous, and we used the robust maximum likelihood estimation method (MLR; Yuan & Bentler, 2000) and robust standard errors in our CFA.

The intellectual humility factor was identified using the marker method (constraining the first loading of the factor to 1). To evaluate the fit of our confirmatory factor analyses, our criteria were: (a) not to rely on the chi-squared indicator, because when using large samples (> 200), it becomes an uninformative fit index, but we still report it in the manuscript (Kenny, 2015); (b) CFI ≥ 0.90; (c) RMSEA ≤ 0.06 with, ideally, a 90% confidence interval where the minimum is close to 0 and the maximum is not more than 0.10; and (d) SRMR ≤ 0.08 (Gana & Broc, 2019). The model, shown in Figure 1, provided an overall good fit to the data (Gana & Broc, 2019; Hu & Bentler, 1999; Schreiber et al., 2006): χ²(34) = 85.048, CFI = .957, SRMR = .049, RMSEA = .052, 90% CI [.038, .066]. All standardized factor loadings were above .49. We did not detect Heywood cases (factor loading > 1 or negative variances). Because modification indices were low, we did not apply any modifications to the model. The estimated model also revealed a positive correlation between intellectual humility and need for evidence, r = .28, 95% CI [.160, .390], p < .001.
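
A minimal lavaan sketch of this two-factor model is shown below; variable names are hypothetical, and lavaan applies the marker method described above (first loading fixed to 1) by default.

```r
library(lavaan)

model <- '
  ih  =~ gihs1 + gihs2 + gihs3 + gihs4 + gihs5 + gihs6   # intellectual humility
  nfe =~ nfe1 + nfe2 + nfe3 + nfe4                        # need for evidence
  ih ~~ nfe                                               # factor correlation
'
fit <- sem(model, data = df, estimator = "MLR")  # robust ML, items treated as continuous

fitMeasures(fit, c("chisq.scaled", "df", "cfi.robust", "srmr",
                   "rmsea.robust", "rmsea.ci.lower.robust", "rmsea.ci.upper.robust"))
standardizedSolution(fit)            # standardized loadings and the ih-nfe correlation
modindices(fit, sort = TRUE)[1:5, ]  # inspect the largest modification indices
```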

Figure 1.
Structural equation model relating intellectual humility (GIHS) and need for evidence (standardized coefficients)

Discussion

Study 1 supports the unidimensional structure of the French version of the GIHS. Moreover, consistent with the idea that intellectually humble people should pay appropriate attention to the limitations in the evidentiary basis of their beliefs, those scoring higher on the French GIHS reported a greater need to ensure that their beliefs are consistent with the available evidence. In other words, Study 1 also provides an initial demonstration of convergent validity for the French GIHS.

In Study 2, we sought to extend the test of the French GIHS’s convergent validity by replicating some of the most important relationships with related constructs observed with the original scale (Leary et al., 2017). More specifically, intellectual humility “often manifests through an openness to other people’s views and by a lack of rigidity and conceit regarding one’s beliefs and opinions” (Leary et al., 2017, p. 793). Thus, intellectual humility is expected to be associated with constructs that reflect open- versus closed-minded thinking. Those high in intellectual humility should be open to the values, opinions, and beliefs of others (i.e., openness to values, a facet of openness to experience reflecting openness to alternative belief systems, such as spiritual or religious beliefs; Costa & McCrae, 1992) and low on dogmatism, defined as an unchangeable and rigid certainty in one’s beliefs (Altemeyer, 1996). Accordingly, and in line with Leary and colleagues (2017), we predicted a positive relationship between the French GIHS and openness to values, but a negative one with dogmatism. Intellectual humility is not mere openness to everything, however. Intellectually humble people are chiefly concerned with their epistemic limitations and are thus supposed to be motivated to think and to seek the truth (Leary, 2018; Porter, Baldwin, et al., 2022; Porter, Elnakouri, et al., 2022). Those high in intellectual humility should therefore score high in need for cognition, the tendency to engage in and enjoy effortful cognitive activities (Cacioppo & Petty, 1982). Thus, as in the original scale validation, we predicted a positive correlation between the French GIHS and the need for cognition. Finally, the literature suggests that, because intellectual humility is a socially desirable trait, its measures may be somewhat correlated with social desirability bias (Porter, Elnakouri, et al., 2022). If that were the case, it would be difficult to know whether a high level of measured intellectual humility reflects genuine intellectual humility or a desirability bias. However, Leary and colleagues (2017) found no correlation between the GIHS and social desirability (r = .03). Here, we also investigated this link to test the divergent validity of the scale. Pre-registration for Study 2 is available from the following link: https://osf.io/chzum/?view_only=f68c643b91fe461fa7ca9c8d0ef9a623.

Method

Sample Size Determination

We performed a Monte Carlo simulation analysis (“simsem” R package; Beaujean, 2014; Muthén & Muthén, 2002), with N = 535 (sample size), m = 1000 (number of samples), and seed = 76418. For the GIHS factor loadings, we fixed the SEM parameter values based on the results obtained in Study 1. Including the factor structures of the Dogmatism, Need for Cognition, Openness to Values, and Social Desirability scales in the model would have increased the number of parameters to estimate, and hence the sample size required to estimate them with sufficient power. Due to resource constraints, we therefore declared each of these scales as a manifest variable rather than modeling their factor structures. For the GIHS-Dogmatism, GIHS-Need for Cognition, and GIHS-Openness to Values correlations, we fixed the parameter values to r = .25 to guard against the possibility that the values obtained by Leary and colleagues (2017; r = -.49, r = .34, and r = .39, respectively) were overestimated. We also fixed the GIHS-Social Desirability correlation to r = .20 (see below). We had no information about the potential size of the remaining correlations (e.g., Dogmatism-Need for Cognition, Openness to Values-Need for Cognition); because these correlations were not our primary interest, we fixed their values at r = .30. To control for potential Type-I error inflation due to multiple testing (i.e., 10 correlation hypotheses), we divided the alpha level by 10, leading to an adjusted alpha of .005. The simulation analysis confirmed that a sample size of 535 is sufficient to reliably detect the factor structure of the GIHS, correlations between the GIHS and Dogmatism, Need for Cognition, and Openness to Values of at least r = .25, a correlation with Social Desirability of at least r = .20, and correlations among these variables of at least r = .30, all with satisfactory power (> 80%) at the adjusted alpha level of .005.

We used the powerTOSTr function of the TOSTER package in R (version 0.4-1, Lakens, 2017) to estimate the sample size needed to test whether the relationship between the GIHS and Social Desirability is non-meaningful with 90% power (using two one-sided tests; see TOST, Lakens, 2017). Because intellectual humility is somewhat correlated with social desirability bias (Porter, Elnakouri, et al., 2022), we set the smallest effect size of interest for considering the correlation to be non-meaningful at |r| = .20. The required sample size to achieve 90% power with equivalence bounds of -.20 and .20 is 267.
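
Assuming the older TOSTER interface cited above (version 0.4-1; function names differ in more recent releases), the required sample size can be obtained roughly as follows.

```r
library(TOSTER)

# Sample size for an equivalence test on a correlation,
# with bounds of +/- .20 and 90% power (older TOSTER interface)
powerTOSTr(alpha = 0.05, statistical_power = 0.90,
           low_eqbound_r = -0.20, high_eqbound_r = 0.20)
```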

Participants

We recruited 554 participants through Prolific to take part in the online study hosted on Qualtrics. Participants were paid £1.50 for this study. We excluded participants who were unrealistically fast (i.e., less than 3.30 minutes, n = 4), who failed the attention check (i.e., wrote something other than “baguette”, n = 5), who were not fluent in French (n = 0), and those reporting that we should not take their data seriously (n = 5). We analyzed the data of 540 participants (nFemale = 275; nMale = 252; nOther = 12; MAge = 31.10; SDAge = 10.21). Half of the sample had a Master’s degree or a more advanced degree (48.89%), while a minority had no degree (2.04%).

Materials

GIHS. We used the French version of the GIHS from Study 1 (M = 2.82, SD = 0.65, Cronbach’s ⍺ = .83, McDonald’s ω = .84).

Openness to Values scale. To assess individuals’ openness to alternative values and belief systems (a facet of openness to experience), we used the Openness to Values subscale from the NEO-PI 3 (Costa & McCrae, 1992; French version of Rolland, 1998). The eight items of this subscale (e.g., “I consider myself broad-minded and tolerant of other people’s lifestyles”) are answered on a 5-point scale ranging from 1 = strongly disagree to 5 = strongly agree, with 3 = neutral as the midpoint (M = 0.69, SD = 0.48, Cronbach’s ⍺ = .60, McDonald’s ω = .61).

Dogmatism scale. We measured the degree of unchangeability and rigidity of the certainty in one’s beliefs—dogmatism—with our French translation of Altemeyer’s (2002) measure. This scale consists of 20 items (e.g., “The things I believe in are so completely true, I could never doubt them.”) answered on a 5-point scale ranging from -2 = strongly disagree to +2 = strongly agree, with 0 = neither agree nor disagree as the midpoint (M = -0.86, SD = 0.50, Cronbach’s ⍺ = .88, McDonald’s ω = .88).

Need for Cognition scale. We measured the need for cognition—the tendency to engage in and enjoy effortful cognitive activities—with the Need for Cognition scale (Cacioppo et al., 1984; French version of Salama-Younes et al., 2014). This French version consists of 11 items to be answered on a 4-point scale with the following labels: 0 = completely wrong, 1 = somewhat wrong, 2 = somewhat true, 3 = completely true (M = 1.83, SD = 0.52, Cronbach’s ⍺ = .88, McDonald’s ω = .88).

Social Desirability scale. To ensure that GIHS scores were not influenced by socially desirable responding, we used the short Form C of the Marlowe-Crowne Social Desirability scale (Reynolds, 1982; French version of Verardi et al., 2010). This 13-item version (e.g., “There have been occasions when I took advantage of someone”) uses a true-false response format (M = 5.86, SD = 1.75, KR-20 = .65).

Procedure

The procedure for Study 2 was the same as for Study 1, except that participants completed the French GIHS, the Dogmatism scale, the Openness to Values scale, the Need for Cognition scale, and the Social Desirability scale. The order of these scales was randomized. At the very end of the study, participants also indicated whether they responded seriously to the scales and whether their data could be taken as reliable.

Results

First, for the Dogmatism, Openness to Values, and Need for Cognition scales, we reversed the scores of items when necessary and then calculated the average score for each scale. For the Social Desirability scale, we computed an aggregate score based on the sum of its items. For each scale, higher scores reflect higher levels of the variable.
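
A brief sketch of this scoring step in R follows; column names, which items are reversed, and the response-scale endpoints are hypothetical and depend on each scale’s key.

```r
# Reverse-score negatively keyed items, then average (or sum) each scale
reverse <- function(x, low = 1, high = 5) low + high - x

dat$open2  <- reverse(dat$open2)                      # example reversed item (hypothetical)
openness   <- rowMeans(dat[, paste0("open", 1:8)])
dogmatism  <- rowMeans(dat[, paste0("dog",  1:20)])
nfc        <- rowMeans(dat[, paste0("nfc",  1:11)])
social_des <- rowSums(dat[, paste0("sd",   1:13)])    # sum score (true = 1, false = 0)
```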

As in Study 1, we built an SEM including intellectual humility as the latent factor and the aggregated scores of dogmatism, openness to values, need for cognition, and social desirability as manifest variables. The intellectual humility factor was identified using the marker method (constraining the first loading of the factor to 1), and items were treated as continuous in line with the MLR estimation method. To evaluate the fit of our CFA, our pre-registered criteria were the same as in Study 1 (i.e., CFI ≥ .90; RMSEA ≤ .06, CI 90% upper bound < .10, and non-significant p-value; SRMR ≤ .08; see Gana & Broc, 2019; Hu & Bentler, 1999; Kline, 2011). The model, shown in Figure 2, provided overall good fit to the data (although some criteria are just above or below the goodness-of-fit thresholds; Gana & Broc, 2019; Hu & Bentler, 1999; Schreiber et al., 2006): χ2(29) = 126.953, CFI = .929, SRMR = .042, RMSEA = .084, 90% CI [.069, .099]. All standardized factor loadings were above .60. The estimated model revealed that intellectual humility related positively to both the need for cognition, r = .27, p < .001, 95% CI [.177, .361] and openness to values, r = .31, p < .001, 95% CI [.214, .399]. Intellectual humility also related negatively to dogmatism, r = -.49, p < .001, 95% CI [-.579, -.394].

Figure 2.
Structural equation model relating intellectual humility (GIHS), dogmatism, openness, need for cognition, and social desirability (standardized coefficients)
Figure 2.
Structural equation model relating intellectual humility (GIHS), dogmatism, openness, need for cognition, and social desirability (standardized coefficients)
Close modal

The SEM analyses also revealed that the GIHS was not significantly related to Social Desirability, r = .05, p = .291, 95% CI [-.043, .141]. The two one-sided tests (TOST) procedure, conducted with the “TOSTr” function of the TOSTER R package (Lakens, 2017; Lakens et al., 2018), indicated that the observed correlation (r = .049) was significantly within the equivalence bounds of r = - .20 and r = .20, p < .001, 90% CI [-.022, .119].
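
A sketch of this equivalence test with the older TOSTER interface, plugging in the observed values reported in the text:

```r
library(TOSTER)

# Is the observed correlation (r = .049, n = 540) statistically within +/- .20?
TOSTr(n = 540, r = 0.049,
      low_eqbound_r = -0.20, high_eqbound_r = 0.20, alpha = 0.05)
```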

Because future research will likely use an aggregated score of the GIHS items, we further explored the relationships between the aggregated GIHS and the aggregated dogmatism, openness, need for cognition, and social desirability scores. We computed four zero-order correlations and report the Pearson’s r coefficients in Table 2. The results of the zero-order correlations are consistent with the SEM results.

Table 2.
Zero-order correlation between the GIHS and the Dogmatism, Openness to Values, Need for Cognition, and Social Desirability scales
Zero-order correlation Pearson r 95% CI p-value 
GIHS - Dogmatism -.43 [-.510, -.358] < .001 
GIHS - Openness to Values .27 [.192, .355] < .001 
GIHS – Need for Cognition .26 [.179, .343] < .001 
GIHS – Social Desirability .05 [-.029, .139] = .200 

Discussion

Study 2 offers further evidence of the French GIHS’s convergent validity. More specifically, we replicated the relationship between intellectual humility and theoretically related constructs obtained with the original scale (Leary et al., 2017). That is, the more intellectually humble people are, the more they engage in and enjoy effortful cognitive activities (i.e., need for cognition), the more they are open to alternative belief systems (i.e., openness to values), and the less they have an unchangeable and rigid certainty in their beliefs (i.e., dogmatism). Although intellectual humility is generally considered to be a desirable trait, the relation between the French GIHS and social desirability bias was negligible, supporting the scale’s divergent validity. This replication work not only supports the validity of the French version of the scale, but also attests to the robustness of the results obtained with the original scale.

In Study 3, we aimed to test the predictive validity of the French GIHS. More specifically, according to Leary and colleagues’ (2017) definition, intellectually humble people have “an appropriate attentiveness to limitations in the evidentiary basis of that belief.” Higher scores on the scale should therefore be related to greater attention to the quality of the evidence on which one’s beliefs are based. In line with this idea, the original GIHS validation paper showed that the higher participants scored, the better they distinguished strong (i.e., evidence-based and supported by dental experts) from weak (i.e., anecdotal and supported by laypeople) arguments for dental flossing (Leary et al., 2017). We thus decided to replicate this finding to validate our French version of the scale. Moreover, intellectually humble people are more tolerant of ambiguity and uncertainty and are less driven by the desire to obtain definitive answers that might prevent them from reconsidering their position (Leary et al., 2017). Accordingly, in line with the original study, we also tested whether the scale predicts attention to the quality of presented evidence over and above the need for cognitive closure, that is, the tendency to desire definitive, certain, and unambiguous answers to questions (Kruglanski & Webster, 1996). Pre-registration for Study 3 is available from the following link: https://osf.io/xnes4/?view_only=5ae73e3b4f23434f88845b45164773f7.

Method

Sample Size Determination

In Leary and colleagues (2017; Study 4), the effect size of the interaction between intellectual humility and argument strength (for participants flossing rarely) that we wanted to replicate was η²p = .019. This interaction, however, was tested in a model that did not control for the need for closure, and its effect size could therefore be overestimated. For this reason, we computed a power analysis with an η²p of .015. This power analysis indicated that a sample size of 518 is required to detect an effect size of η²p = .015 with an alpha of .05 and a power of 80%.
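
One way to approximate such a computation in R is via the pwr package, converting η²p to Cohen’s f². This is an illustration under our assumptions, not necessarily the exact tool used for the pre-registered analysis.

```r
library(pwr)

f2  <- 0.015 / (1 - 0.015)  # convert partial eta-squared to Cohen's f2
res <- pwr.f2.test(u = 1, f2 = f2, sig.level = 0.05, power = 0.80)

# Total N is approximately the denominator df plus the number of model terms plus 1
ceiling(res$v) + 5 + 1
```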

Participants

We recruited 834 participants on Prolific and Crowdpanel. Participants were paid £1.10 for this study. They took part in the online study on Qualtrics. We excluded participants who were unrealistically fast (i.e., < 2.30 min, n = 41), who failed the attention check (i.e., wrote something other than “baguette”, n = 8), who were not fluent in French (n = 4), who indicated that their data were not reliable (n = 7), and those who participated on both Prolific and Crowdpanel (n = 3). We also excluded participants who reported having flossed their teeth twice or more in the last week (n = 251). Because some participants met several exclusion criteria, we analyzed the data of the remaining 537 participants (nFemale = 258; nMale = 263; nOther = 16; MAge = 32.33; SDAge = 11.26; nWeak arguments = 272; nStrong arguments = 265). Half of the sample had a Master’s degree or a more advanced degree (46.37%), while a minority had no degree (0.02%).

Materials

GIHS. We used the French version of the GIHS, as in Studies 1 and 2 (M = 2.87, SD = 0.60, Cronbach’s ⍺ = .79, McDonald’s ω = .80).

Need for Closure scale. We measured the need for closure with the Brief Need for Closure scale (BNFC, Roets & Van Hiel, 2011; our translation). The BNFC consists of 15 items to be answered on a 6-point scale with the following labels: 0 = completely disagree, 1 = moderately disagree, 2 = slightly disagree, 3 = slightly agree, 4 = moderately agree, and 5 = completely agree (M = 3.04, SD = 0.65, Cronbach’s ⍺ = .69, McDonald’s ω = .70).

Essays about flossing. We translated and adapted the two essays about flossing used by Leary and colleagues (2017, Study 4).

Procedure

After consenting to take part in the online study, all participants completed the French GIHS and the BNFC scale in a randomized order. They then indicated how often they had engaged in different behaviors, including dental flossing, during the past week (1 = never, 2 = once, 3 = 2 or 3 times, 4 = 4 or 5 times, 5 = every day or almost every day, and 6 = more than once each day). After that, participants were randomly assigned to the strong- or the weak-arguments condition. They were instructed to read an article carefully (they could not move to the next screen for 60 seconds, to encourage thorough reading) and were told that they would be asked some questions about it later. Participants then rated the quality of the evidence supporting the usefulness of dental flossing presented in the article on three 9-point bipolar scales: Strong-Weak, Convincing-Unconvincing, and Scientific-Unscientific. Finally, participants answered the same follow-up questions as in the previous studies.

Results

We summed the three ratings of evidence quality (i.e., strong, convincing, scientific; Cronbach’s ⍺ = .93, McDonald’s ω = .93) to obtain an index of the perceived quality of the evidence presented in the article, with higher scores reflecting higher perceived quality. With this index as the dependent variable, we ran a linear regression with the following predictors: centered GIHS scores, centered BNFC scores, the quality of the essay (coded -0.5 and 0.5 for the weak and strong conditions, respectively), the GIHS by essay quality interaction, and the BNFC by essay quality interaction (see Yzerbyt et al., 2004). The results revealed that participants who read the article with strong arguments for dental flossing rated the evidence more positively (M = 17.34, SD = 6.25) than participants who read the article with weak arguments (M = 11.24, SD = 6.01), F(1, 531) = 132.807, p < .001, η²p = .200, 90% CI [.152, .248]. More important, and as predicted, GIHS scores moderated this effect: the higher participants’ GIHS scores, the more strongly they differentiated between the two articles, rating the evidence more positively in the strong-argument condition relative to the weak-argument condition, F(1, 531) = 9.945, p = .002, η²p = .018, 90% CI [.004, .042] (Figure 3). No other effect was significant, Fs < 0.401, ps > .527.
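
A sketch of this regression in R, with hypothetical variable names for the aggregated scores computed as described above:

```r
# Mean-center the continuous predictors and contrast-code essay quality
dat$gihs_c <- dat$gihs - mean(dat$gihs)
dat$bnfc_c <- dat$bnfc - mean(dat$bnfc)
dat$essay  <- ifelse(dat$condition == "strong", 0.5, -0.5)

mod <- lm(quality ~ gihs_c * essay + bnfc_c * essay, data = dat)
summary(mod)  # the gihs_c:essay term carries the predicted interaction
# Partial eta-squared per term can be obtained with, e.g., the effectsize package
```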

Figure 3.
Perceived quality of evidence as a function of GIHS scores and as a function of the quality of the essay, and controlling for the need for closure (with 95% confidence interval)

Test-Retest Reliability

The GIHS was developed to capture “the general tendency to be low or high in intellectual humility” (Leary et al., 2017, p. 794). Because GIHS scores are supposed to reflect a general tendency, they should be relatively stable across time. The original authors did not examine the temporal reliability of the scale, but our data offer a suitable context for testing it. Indeed, Studies 2 and 3 were both conducted on Prolific, which enabled us to match (based on Prolific IDs) the data of participants who completed the scale twice, with the two studies separated by an average interval of 106 days (M = 106.49, SD = 12.27). We therefore also examined the temporal (test-retest) reliability of our French version of the GIHS. After applying the exclusion criteria of Studies 2 and 3 (without excluding participants on the basis of flossing frequency), we were able to match the data of 383 participants (nFemale = 199, nMale = 171, nOther = 13; MAge = 32.08, SDAge = 10.61). The test-retest correlation for the GIHS was r = .64, p < .001, 95% CI [.574, .693].
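
A sketch of the matching and test-retest computation (identifier and column names are hypothetical):

```r
retest <- merge(study2[, c("prolific_id", "gihs")],
                study3[, c("prolific_id", "gihs")],
                by = "prolific_id", suffixes = c("_t1", "_t2"))

cor.test(retest$gihs_t1, retest$gihs_t2)  # test-retest correlation across ~3.5 months
```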

Discussion

Study 3 provides evidence of the predictive validity of the French GIHS by showing that higher scores, and thus higher intellectual humility, are related to paying more attention to the evidentiary basis of one’s beliefs. People with higher intellectual humility scores more clearly distinguish strong (i.e., evidence-based and supported by dental experts) from weak (i.e., anecdotal and supported by laypeople) arguments for dental flossing. Importantly, intellectual humility is related to the discrimination of evidence quality even when considering the need for cognitive closure that could lead people to cling to any definitive answer. Because this study replicates the results obtained with the original (English) scale, we also provide cumulative evidence of the scale’s overall validity.

The test-retest reliability of the scale is moderate: it falls below .75, a value commonly considered a standard for reliability (Weiner & Greene, 2008) and one observed for some multidimensional intellectual humility scales (Haggard et al., 2018; Krumrei-Mancuso & Rouse, 2016). Although not anticipated, this result resonates with recent work showing that scores on the GIHS can be contextually manipulated (Koetke et al., 2022). This suggests that the construct the GIHS captures may vary across time and/or contexts, and researchers should keep this moderate reliability in mind when using the scale.

In Study 4, we aimed to further investigate the convergent validity of the French GIHS by testing whether it positively correlates with another intellectual humility scale. For this purpose, we chose the Porter and Schumann (2018) Intellectual Humility Scale (PSIHS) because of its unidimensionality. To test this relationship, we drew on a panel of studies conducted among students at our university. This study was not pre-registered; however, the direction of the hypothesis is obvious, as a prediction in the opposite direction would be hard to justify, and we report our analyses in a fully transparent fashion.

Method

Sample Size Determination

Because this study was conducted as part of a wider data collection, we were not able to determine the number of participants in advance. However, we ran a sensitivity analysis based on the number of analyzed participants (see below). We performed a Monte Carlo simulation analysis (“simsem” R package; Beaujean, 2014; Muthén & Muthén, 2002), with N = 398 (analyzed sample size), m = 1000 (number of samples), and seed = 76418. This simulation analysis confirmed that a sample size of 398 is sufficient to reliably detect the factor structure of both the GIHS and the PSIHS, as well as a correlation between the two measures of intellectual humility of at least r = .22, with an alpha of .05 and a power of 80%.

Participants

We recruited 451 students from our university to take part in the study. Participants were not paid for their participation but received course credits. We excluded participants who did not report having French nationality (n = 47) and participants with missing data on the scales (n = 6), as missing data were not supposed to occur. We analyzed the data of the remaining 398 participants (nFemale = 353; nMale = 37; nOther = 8; MAge = 19.91; SDAge = 2.12).

Materials

GIHS. We used the French version of the GIHS, as in the previous studies (M = 2.72, SD = 0.69, Cronbach’s ⍺ = .78, McDonald’s ω = .79).

PSIHS. We measured intellectual humility with the PSIHS (our French translation; Porter & Schumann, 2018). The PSIHS consists of 9 items: six positively worded (e.g., “I am willing to admit it if I don’t know something”) and three negatively worded (e.g., “I feel uncomfortable when someone points out one of my intellectual shortcomings”). Items were answered on a 7-point scale with the following labels: -3 = strongly disagree, -2 = disagree, -1 = slightly disagree, 0 = neither agree nor disagree, 1 = slightly agree, 2 = agree, and 3 = strongly agree (after reverse-coding the negatively worded items: M = 1.04, SD = 0.81, Cronbach’s ⍺ = .72, McDonald’s ω = .73).

Results and Discussion

We built an SEM including the GIHS and the PSIHS. Both factors were identified using the marker method (constraining the first loading of each factor to 1), and items were treated as continuous in line with the MLR estimation method. Our criteria to evaluate the fit of our CFA were the same as in Studies 1 and 2 (i.e., CFI ≥ .90; RMSEA ≤ .06, 90% CI upper bound < .10, and non-significant p-value; SRMR ≤ .08; see Gana & Broc, 2019; Hu & Bentler, 1999; Kline, 2016).

Because the SEM including both the GIHS and PSIHS measurement models fit the data poorly, χ2(89) = 378.189, CFI = .764, SRMR = .076, RMSEA = .099, 90% CI [.089, .109], we ran an SEM with only the GIHS measurement model and included the PSIHS as a manifest (aggregate) variable. This model provided a better overall fit (although some criteria were just above/below the goodness-of-fit thresholds; Gana & Broc, 2019; Hu & Bentler, 1999; Schreiber et al., 2006): χ2(14) = 52.970, CFI = .934, SRMR = .050, RMSEA = .092, 90% CI [.066, .119]. The estimated model revealed that the French version of the GIHS was positively and strongly related to the PSIHS, r = .62, p < .001, 95% CI [.526, .707]. This suggests that both scales measure a related construct and adds support to the convergent validity of the French version of the GIHS.

In Study 5, we tested the measurement invariance of the GIHS, that is, whether the psychometric properties of the scale are invariant across French and English speakers. More specifically, we investigated (a) whether the French and English versions of the scale measure the same construct, and (b) whether a cross-cultural comparison between French and English latent scores is meaningful. We relied on multiple-group CFA (MGCFA) and the alignment method to assess measurement invariance.

Method

Participants

This study relied on existing datasets from previous studies (NTotal = 2881; nFemale = 1641; nMale = 1185; nOther = 55; MAge = 33.54; SDAge = 13.10). The French-speaking sample was based on the four samples of the previous studies presented in this manuscript (nTotal_French = 1917; nFemale = 1110; nMale = 761; nOther = 46; MAge = 30.45; SDAge = 11.53). The English-speaking sample was based on three existing datasets (nTotal_English = 964; nFemale = 531; nMale = 424; nOther = 9; MAge = 39.68; SDAge = 13.85). The first dataset comes from the original validation paper by Leary and colleagues (2017, Study 1). The second and third are unpublished datasets from two studies conducted by Tagand and Muller (2022) in which the GIHS had been added for exploratory purposes.

Materials

French GIHS Version. We aggregated the GIHS data from the previous studies presented in this manuscript. As a reminder, the French GIHS consists of six items answered using a 5-point scale ranging from not at all true of me to extremely true of me (M = 2.82, SD = 0.62, Cronbach’s ⍺ = .78, McDonald’s ω = .78).

English GIHS Version. We aggregated the GIHS data from the existing English-language studies. The English GIHS also consists of six items answered using the same 5-point scale (M = 2.67, SD = 0.67, Cronbach’s ⍺ = .84, McDonald’s ω = .84).

Results

We investigated measurement invariance through MGCFA at four incremental levels of invariance: (a) configural invariance, which tests whether the structure of the scale (one factor underlying the six items) holds across the French and English groups (by imposing the same pattern of zero loadings); (b) metric invariance (also called weak invariance), which tests whether the construct is similar across groups (by imposing equal factor loadings); (c) scalar invariance (also called strong invariance), which tests whether people’s latent scores on the construct can be compared across groups (by imposing equal intercepts); and (d) residual invariance (also called strict invariance), which tests whether the proportions of item variance due to the factor and due to random error are similar across groups (by imposing equal residuals). To assess configural invariance, we relied on CFA, and for the other levels of measurement invariance (i.e., metric, scalar, and residual invariance), we relied on MGCFA. When metric or scalar invariance was not supported, we proceeded with partial invariance analyses and further assessed the degree of non-invariance through alignment optimization between the French and English groups.

Configural, Metric, Scalar, and Residual Invariance (CFA and MGCFA)

To investigate configural invariance, that is, whether the factor structure of the scale is the same across groups, we first ran two CFAs, one for each group separately. We used the same criteria as in the previous studies (i.e., CFI ≥ .90; RMSEA ≤ .06, 90% CI upper bound < .10, and non-significant p-value; SRMR ≤ .08; see Gana & Broc, 2019; Hu & Bentler, 1999; Kline, 2016). Results are presented in Table 3. The two CFAs suggested a mixed fit to the data, with relatively adequate values for the CFI and SRMR statistics but not for the RMSEA. Using MGCFA on the full dataset, we obtained similar fit statistics for the configural model. These results are similar to those originally reported by Leary et al. (2017). We therefore proceeded with the next steps of measurement invariance testing.

Table 3.
Fits of the measurement invariance models (CFA and MGCFA)
Model χ2 (df) χ2 p-value CFI RMSEA RMSEA 90% CI SRMR
CFA       
French dataset 141.643 (9) < .001 .935 .099 [.085, .114] .040 
English dataset 114.163 (9) < .001 .938 .121 [.102, .142] .042 
MGCFA       
Configural invariance model 256.495 (18) < .001 .938 .107 [.096, .119] .036 
Metric invariance model 280.650 (23) < .001 .934 .098 [.088, .108] .044 
Scalar invariance model 416.306 (28) < .001 .904 .107 [.098, .116] .056 
Partial scalar invariance model a 333.961 (27) < .001 .923 .097 [.088, .106] .048 
Residual invariance model a 399.10 (33) < .001 .905 .098 [.089, .106] .055 
Partial residual invariance model a,b 371.01 (32) < .001 .913 .095 [.087, .104] .056 

a In the partial scalar and residual invariance models, intercept for Item 3 was not constrained to equality over groups. b In the partial residual invariance model, residuals for Item 5 were not constrained to equality over groups.

We tested whether metric and scalar invariance held using MGCFA across the French and English samples. In a sample as large as ours, the chi-squared difference test is overly sensitive, flagging even negligible deviations from a “perfect” model; instead, we relied on changes in CFI (∆CFI) and RMSEA (∆RMSEA) between successive measurement invariance models (configural to metric, metric to scalar, and scalar to residual). Our criteria for measurement invariance were a change in CFI ≤ .01 paired with a change in RMSEA ≤ .015 (Chen, 2007; Cheung & Rensvold, 2002).
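
A sketch of this model sequence in lavaan follows; the group variable and item names are hypothetical.

```r
library(lavaan)

model <- 'ih =~ gihs1 + gihs2 + gihs3 + gihs4 + gihs5 + gihs6'

configural <- cfa(model, data = dat, group = "language", estimator = "MLR")
metric     <- cfa(model, data = dat, group = "language", estimator = "MLR",
                  group.equal = "loadings")
scalar     <- cfa(model, data = dat, group = "language", estimator = "MLR",
                  group.equal = c("loadings", "intercepts"))
residual   <- cfa(model, data = dat, group = "language", estimator = "MLR",
                  group.equal = c("loadings", "intercepts", "residuals"))

# Compare CFI and RMSEA across increasingly constrained models
sapply(list(configural = configural, metric = metric,
            scalar = scalar, residual = residual),
       fitMeasures, fit.measures = c("cfi.robust", "rmsea.robust", "srmr"))
```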

Across the French and English samples, we found that metric invariance held, ∆CFI = .004, ∆RMSEA = .009, but that scalar invariance did not, ∆CFI = .030, ∆RMSEA = -.009. Although our results did not support scalar invariance, for exploratory purposes we further investigated residual invariance, ∆CFI = .017, ∆RMSEA = .002. These results suggest that the construct is the same across French and English speakers (metric invariance) but that a direct comparison of scores between French and English speakers is not meaningful (no scalar invariance).

Partial Measurement Invariance (MGCFA)

Since scalar invariance was not supported, we investigated partial measurement invariance. Among the six items, we were able to reach partial scalar invariance by freeing the intercept of Item 3 (“I recognize the value in opinions that are different from my own”), ∆CFI = .010, ∆RMSEA = .001. However, this did not enable us to reach residual invariance, ∆CFI = .018, ∆RMSEA = -.001. No adjustment to the item residuals allowed us to achieve partial residual invariance; the best results were obtained by freeing the residuals of Item 5, ∆CFI = .011, ∆RMSEA = .002. Thus, by freeing Item 3’s intercept in the partial scalar invariance model, we found that GIHS latent means can be used to compare French and English speakers.
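
Continuing the lavaan sketch above, such a partial scalar model can be specified by leaving the corresponding intercept unconstrained via the group.partial argument (the item name is hypothetical):

```r
partial_scalar <- cfa(model, data = dat, group = "language", estimator = "MLR",
                      group.equal = c("loadings", "intercepts"),
                      group.partial = "gihs3 ~ 1")  # free the Item 3 intercept
```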

Alignment Method

Since scalar invariance held only partially under the traditional MGCFA approach, we further investigated measurement invariance through the alignment method (Asparouhov & Muthén, 2014). Unlike MGCFA, alignment estimates a common configural invariance model and then adjusts the factor loadings and intercepts to make them as similar across groups as possible without deteriorating the model fit (much like the rotation applied in exploratory factor analysis). This method enables a comparison of factor means and factor variances across groups while allowing for approximate measurement invariance. As a by-product, it also quantifies the degree of measurement invariance through R2 (i.e., the proportion of the configural parameter variation across groups that can be explained by variation in the factor means and factor variances): R2 = 100% indicates complete invariance, R2 = 0% indicates complete non-invariance, and a minimum of R2 = 75% is generally recommended (Fischer & Karl, 2019; Han, 2024). We conducted measurement alignment using the “sirt” R package (Robitzsch, 2022). Results revealed a high proportion of invariance after alignment: R2 for loadings = 99.67% and R2 for intercepts = 99.93%.
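
A rough sketch of this alignment step with sirt, assuming the group-wise loadings and intercepts have been extracted from the configural lavaan model above (the extraction code and element names follow lavaan’s output conventions and may need adjusting):

```r
library(sirt)

est    <- lavInspect(configural, what = "est")           # per-group parameter estimates
lambda <- t(sapply(est, function(g) g$lambda[, 1]))      # groups x items matrix of loadings
nu     <- t(sapply(est, function(g) g$nu[, 1]))          # groups x items matrix of intercepts

aligned <- invariance.alignment(lambda = lambda, nu = nu)
aligned$es.invariance  # R2 effect sizes of invariance after alignment
```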

Additionally, we performed an item-level test to examine whether any item showed deviations exceeding the tolerance thresholds for factor loadings (.40) and intercepts (.20; Fischer & Karl, 2019; Han, 2024). None of the factor loadings or intercepts exceeded these thresholds; that is, 0% of the parameters were non-invariant, well below the 25% rule of thumb suggested by Asparouhov and Muthén (2014) for determining the presence of significant deviations. In other words, the alignment method did not detect any problematic factor loading or intercept at the item level in the GIHS.
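With only two groups, one simple way to run this item-level check (continuing the sketch above, and assuming the .40/.20 tolerances) is to compare the absolute between-group differences of the aligned parameters against the thresholds:

```r
# Flag items whose aligned loadings or intercepts differ between the French and
# English groups by more than the tolerance thresholds (.40 and .20, respectively).
loading_dev   <- abs(aligned$lambda.aligned[1, ] - aligned$lambda.aligned[2, ])
intercept_dev <- abs(aligned$nu.aligned[1, ]     - aligned$nu.aligned[2, ])

which(loading_dev   > .40)  # non-invariant loadings (none in our data)
which(intercept_dev > .20)  # non-invariant intercepts (none in our data)
```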

Overall, the alignment analysis tempers our previous MGCFA results, which indicated non-invariance or only partial invariance. It shows that the impact of the non-invariant item (Item 3) can be considered negligible and that the aligned latent means can be meaningfully compared between French and English speakers.

Discussion

To assess measurement invariance, we first relied on the traditional CFA approach (testing configural, metric, scalar, and residual invariance). We found metric invariance and partial scalar invariance between the French and English groups, the latter obtained by freeing the intercept of Item 3, “I recognize the value in opinions that are different from my own.” We did not reach (partial) residual invariance, but invariance of the random error components has been argued to be an unrealistic expectation, and misfit due to equality-constrained residual variances can propagate specification error (Kline, 2016; Little, 2013). We then further assessed the degree of non-invariance through alignment optimization. Alignment absorbed almost all the non-invariance, suggesting that latent means on the French GIHS can be compared with those on the original English version.

Recognizing one’s own epistemic limitations constitutes the core of being intellectually humble and may be related to several individual, social, and societal benefits (Ballantyne, 2023; Leary, 2022; Porter, Elnakouri, et al., 2022). In this work, we propose a French measure of intellectual humility to facilitate research on intellectual humility in French-speaking countries. We translated the GIHS (Leary et al., 2017) into French and, across five studies, we assessed its structural validity (including measurement invariance across French and English speakers), convergent validity, divergent validity, and predictive validity. We also tested its reliability in terms of temporal stability.

As conceived by Leary and colleagues (2017), the GIHS measures the general tendency to “recognize that a particular personal belief may be fallible, accompanied by an appropriate attentiveness to limitations in the evidentiary basis of that belief and to one’s own limitations in obtaining and evaluating relevant information” (p. 793). This implies that higher scores on the scale, and thus on the French version as well, should relate to various concomitants of this general tendency. Across these studies, we showed that higher scores on the French version of the GIHS related to the tendency to be open- versus closed-minded, the tendency to engage in and enjoy effortful cognitive activities (Study 2), and the tendency to focus on the evidentiary basis of one’s beliefs (Studies 1 & 3). We also showed (Study 4) that higher scores on the scale related to higher scores on another scale measuring intellectual humility (i.e., the PSIHS; Porter & Schumann, 2018). Moreover, although intellectual humility is a desirable tendency, we did not obtain evidence of a significant correlation between the French GIHS and social desirability, and this relationship can be considered negligible (Study 2). This limits the risk that a high level of measured intellectual humility reflects a desirability bias rather than genuine intellectual humility. In other words, this work provides initial evidence that the French version of the GIHS is a valid unidimensional measure of intellectual humility and sets the stage for further research. Finally, while we recommend further research to confirm our measurement invariance conclusions, this work suggests that the current French version of the GIHS can be used for cross-national comparisons with English-speaking countries (Study 5).

In a context where the replicability of results is one of the main challenges of research (Open Science Collaboration, 2015; Zwaan et al., 2018), the current work also contributes to building a more robust science. Not only do we replicate and extend the results of the original scale’s validation, attesting to their robustness, but we also provide a valid French measure of intellectual humility that is comparable to the original one, thereby guarding against the jingle fallacy (Flake & Fried, 2020).

Limitations and Implications

Nevertheless, the present work has some limitations. First, concerning the predictive validity of the scale tested in Study 3, we chose to replicate the original validation work by testing its predictive value over and above the need for closure (i.e., the tendency to desire definitive, certain, and unambiguous answers to questions). However, although intellectually humble people tolerate ambiguity and uncertainty better (Leary et al., 2017), the need for closure may not have been the most discriminating control variable in Study 3, in which we investigated whether participants’ GIHS scores predicted their attention to the quality of persuasive arguments for flossing. The need for evidence or the need for cognition would have offered a more stringent test of the scale’s predictive validity. Future research should explore whether the GIHS predicts attentiveness to evidence quality over and above these variables.

Second, the relevance of a self-report measure of intellectual humility may be questioned. In self-reports, people assess their own functioning on the basis of their subjective judgment. Several concerns have thus been raised against self-report measurement: individuals may not always be aware of their own mental processes (Nisbett & Wilson, 1977), and their judgment may be prone to response biases such as social desirability (Paulhus & Vazire, 2007). Another major threat to validity when measuring intellectual humility via self-report is the following paradox: claiming to be intellectually humble can be a sign that one is unlikely to be so (see also Alfano et al., 2017; Davis et al., 2010). While we acknowledge these limitations, there are reasons to believe that the French GIHS remains an efficient tool for capturing a general tendency to be intellectually humble. Indeed, as already mentioned, various definitions of intellectual humility coexist in the literature, but they converge on the idea that intellectual humility involves recognizing one’s own intellectual limitations (Leary, 2018; Porter, Baldwin, et al., 2022), in the sense of knowledge and beliefs. In other words, from a purely theoretical perspective, to be intellectually humble, people need to be aware that their epistemic processes might be fallible, not necessarily aware of these processes themselves. Thus, setting response biases aside, people should be able to self-report their intellectual humility. In the present work, this seems to be the case, since GIHS scores correlated with related but distinct constructs (e.g., need for evidence, openness, need for cognition). Moreover, because intellectual humility is a valued disposition, its measurement could be subject to a social desirability bias in responding. However, as in the original validation (Leary et al., 2017), we did not find evidence of such a bias, and our data even indicated that this relationship was statistically negligible. Concerns about the humility paradox may also be alleviated by the negative correlation observed between the GIHS and dogmatism: those who reported more intellectual humility on the GIHS also reported less rigidity and inflexibility in the certainty of their beliefs on the dogmatism scale, which would not be the case if those claiming intellectual humility were not genuinely humble. Furthermore, although this issue remains understudied, overstating that one recognizes one’s own epistemic limitations might reflect a lack of humility rather than a lack of intellectual humility. To further confirm this validity and explore the potential influence of self-report bias, future work should investigate how French GIHS scores relate to behavioral outcomes (e.g., investigative behaviors, Koetke et al., 2023; behaviors in a debate, Hanel et al., 2023).

Third, we acknowledge two limitations concerning the way we investigated the validity of the French GIHS through its relationships with related constructs. The first is that we chose the related constructs based on the original validation work (Leary et al., 2017) and on the possibilities of comparison with the other intellectual humility measure (e.g., unidimensionality), but for some of these constructs (i.e., need for evidence, dogmatism, need for closure, and the PSIHS) no validated French versions of the scales were available. Although we carefully translated each of these scales and agreed on the final items, we have less assurance that our translations measure the targeted constructs. The second is that, in Study 2, our sample size did not allow us to model constructs other than intellectual humility as latent variables. As a result, we could not account for the measurement error of the scales assessing these constructs, which may limit our conclusions about the estimated relationships. Although this issue does not apply to all the investigated constructs, it remains a limitation to keep in mind.

Fourth, we conducted Studies 2 and 3 on Prolific, where French speakers are far less numerous than English speakers, which led to overlapping samples. On the one hand, this overlap allowed us to investigate the test-retest reliability of the scale. On the other hand, validity evidence is more convincing when gathered from independent samples. Therefore, although we also provided evidence of the scale’s validity with independent samples (e.g., Study 4), such overlap between samples should be avoided when possible.

Finally, the current studies provide evidence of the construct validity of the French GIHS, but we did not reach residual measurement invariance. Nevertheless, taken together, the Study 5 results from MGCFA (notwithstanding the high RMSEA values) and alignment suggest that the French and English versions of the GIHS measure a similar construct and that the latent scores of the French and English samples can be meaningfully compared.

Conclusion

To summarize, although the present work has its limitations, it offers the first evidence of the structural, convergent, divergent, and predictive validity of the French GIHS as a measure of intellectual humility. This French validation, and the replication work it entails, suggests that the GIHS can be used to better understand intellectual humility and its role in various social phenomena. In doing so, we not only pave the way for the study of intellectual humility in French-speaking countries, but also contribute to a broader research dynamic around this concept.

IN: Conceptualization, Investigation, Formal analysis, Writing - Original Draft

CN: Conceptualization, Writing - Review & Editing

OD: Formal analysis, Writing - Review & Editing

MT: Conceptualization, Writing - Review & Editing

GW: Writing - Review & Editing

GG: Conceptualization, Investigation

DM: Conceptualization, Supervision, Writing - Review & Editing

We sincerely thank Mark Leary for providing the materials and data from his own studies.

This research was partly funded by the LIP/PC2S (Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, LIP/PC2S, 38000 Grenoble, France) and supported by a grant allocated by the CerCog program to Dominique Muller in the framework of the Investissements d’avenir programs ANR-15-IDEX-02.

The authors have no relevant financial or non-financial interests to disclose.

All data, materials, and analysis scripts are available on OSF (https://osf.io/8q6rx/?view_only=21400eb81c6143bc979327c18c7aff27). Pre-registration for Study 1 is available here: https://osf.io/7tcx2/?view_only=fed7a40eb9e84858aaaacdd190dd9408, pre-registration for Study 2 is available here: https://osf.io/chzum/?view_only=f68c643b91fe461fa7ca9c8d0ef9a623, and pre-registration for Study 3 is available here: https://osf.io/xnes4/?view_only=5ae73e3b4f23434f88845b45164773f7.

Appendix: Original and French version of the GIHS
Original GIHS (Leary et al., 2017) a French GIHS b 
I question my own opinions, positions, and viewpoints because they could be wrong. Je remets en question mes propres opinions, propositions et points de vue, car ils pourraient être erronés. 
I reconsider my opinions when presented with new evidence. Je reconsidère mes opinions dès lors que j'ai connaissance de nouveaux éléments de preuve. 
I recognize the value in opinions that are different from my own. Je reconnais la valeur des opinions qui sont différentes des miennes. 
I accept that my beliefs and attitudes may be wrong. J’accepte que mes croyances et opinions puissent être erronées. 
In the face of conflicting evidence, I am open to changing my opinions. Face à des preuves qui les contredisent, je suis prêt⸱e à changer mes opinions. 
I like finding out new information that differs from what I already think is true. J'aime découvrir de nouvelles informations qui diffèrent de ce que je pense déjà être vrai. 

Note.

a Participants responded to each item on a 5-point scale: not at all true of me, slightly true of me, moderately true of me, very true of me, and extremely true of me.

b Participants responded to each item on a 5-point scale: pas du tout vrai pour moi, un peu vrai pour moi, modérément vrai pour moi, très vrai pour moi, extrêmement vrai pour moi.

1.

In all studies reported in this manuscript, some participants responded in an overly consistent manner (i.e., SD = 0 for all scales). Excluding these participants from the analyses reduces statistical power and changes neither the significance nor the direction of the effects (see Supplementary Material in the OSF project). Therefore, we included them in the reported analyses.

2.

For exploratory purposes we also added a single-item conspiracy mentality measure in Study 1 (Lantian et al., 2016). This measure was positively, but not significantly, related to the GIHS, F(1, 695) = 0.28, p = .595, η²p = .00, 95% CI [.000, .007].

3.

Because the power analysis for the SEM indicated a larger required sample size than the power analysis for the equivalence test, we determined our sample size based on the SEM power analysis. The final sample size of Study 2 (N = 540) enabled us to adopt a smaller effect size of interest (|r| = .14) when testing, with 90% power, whether the correlation was negligible.

4.

One participant did not provide gender information.

5.

Leary and colleagues (2017) obtained the expected interaction between intellectual humility and the quality of the essay only for people who rarely floss.

6.

Results remained unchanged whether or not we controlled for the need for closure.

Alfano, M., Iurino, K., Stey, P., Robinson, B., Christen, M., Yu, F., & Lapsley, D. (2017). Development and validation of a multi-dimensional measure of intellectual humility. PLoS One, 12(8), e0182950. https://doi.org/10.1371/journal.pone.0182950
Altemeyer, B. (1996). The authoritarian specter. Harvard University Press.
Altemeyer, B. (2002). Dogmatic behavior among students: Testing a new measure of dogmatism. Journal of Social Psychology, 142(1), 713–721. https://doi.org/10.1080/00224540209603931
Asparouhov, T., & Muthén, B. (2014). Multiple-group factor analysis alignment. Structural Equation Modeling, 21(1), 495–508. https://doi.org/10.1080/10705511.2014.919210
Ballantyne, N. (2023). Recent work on intellectual humility: A philosopher’s perspective. Journal of Positive Psychology, 18(2), 200–220. https://doi.org/10.1080/17439760.2021.1940252
Beaujean, A. A. (2014). Sample size determination for regression models using Monte Carlo methods in R. Practical Assessment, Research, and Evaluation, 19(1), 12. https://doi.org/10.7275/d5pv-8v28
Bowes, S. M., Blanchard, M. C., Costello, T. H., Abramowitz, A. I., & Lilienfeld, S. O. (2020). Intellectual humility and between-party animus: Implications for affective polarization in two community samples. Journal of Research in Personality, 88(1), 103992. https://doi.org/10.1016/j.jrp.2020.103992
Bowes, S. M., Costello, T. H., Lee, C., McElroy-Heltzel, S., Davis, D. E., & Lilienfeld, S. O. (2022). Stepping outside the echo chamber: Is intellectual humility associated with less political myside bias? Personality and Social Psychology Bulletin, 48(1), 150–164. https://doi.org/10.1177/014616722199761
Bowes, S. M., & Tasimi, A. (2022). Clarifying the relations between intellectual humility and pseudoscience beliefs, conspiratorial ideation, and susceptibility to fake news. Journal of Research in Personality, 98(1), 104220. https://doi.org/10.1016/j.jrp.2022.104220
Braeken, J., & van Assen, M. A. L. M. (2017). An empirical Kaiser criterion. Psychological Methods, 22(3), 450–466. https://doi.org/10.1037/met0000074
Brislin, R. W. (1970). Back-translation for cross-cultural research. Journal of Cross-Cultural Psychology, 1(3), 185–216. https://doi.org/10.1177/135910457000100301
Cacioppo, J. T., & Petty, R. E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116–131. https://doi.org/10.1037/0022-3514.42
Cacioppo, J. T., Petty, R. E., & Kao, C. F. (1984). The efficient assessment of need for cognition. Journal of Personality Assessment, 48(1), 306–307. https://doi.org/10.1207/s15327752jpa4803_13
Chen, F. F. (2007). Sensitivity of goodness of fit indexes to lack of measurement invariance. Structural Equation Modeling, 14(3), 464–504. https://doi.org/10.1080/10705510701301834
Cheung, G. W., & Rensvold, R. B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255. https://doi.org/10.1207/S15328007SEM0902_5
Costa, P. T., & McCrae, R. R. (1992). Revised NEO Personality Inventory (NEO-PI-R) and NEO Five-Factor Inventory (NEO-FFI) professional manual. Psychological Assessment Resources.
Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research, and Evaluation, 10(7), 1–9. https://doi.org/10.7275/jyj1-4868
Danovitch, J. H., Fisher, M., Schroder, H., Hambrick, D. Z., & Moser, J. (2019). Intelligence and neurophysiological markers of error monitoring relate to children’s intellectual humility. Child Development, 90(3), 924–939. https://doi.org/10.1111/cdev.12960
Davis, D. E., Worthington, E. L., Jr., & Hook, J. N. (2010). Humility: Review of measurement strategies and conceptualization as personality judgment. The Journal of Positive Psychology, 5(4), 243–252. https://doi.org/10.1080/17439761003791672
Deffler, S. A., Leary, M. R., & Hoyle, R. H. (2016). Knowing what you know: Intellectual humility and judgments of recognition memory. Personality and Individual Differences, 96(1), 255–259. https://doi.org/10.1016/j.paid.2016.03.016
Fischer, R., & Karl, J. A. (2019). A primer to (cross-cultural) multi-group invariance testing possibilities in R. Frontiers in Psychology, 10, Article 1507. https://doi.org/10.3389/fpsyg.2019.01507
Flake, J. K., & Fried, E. I. (2020). Measurement schmeasurement: Questionable measurement practices and how to avoid them. Advances in Methods and Practices in Psychological Science, 3(4), 456–465. https://doi.org/10.1177/2515245920952393
Gana, K., & Broc, G. (2019). Structural equation modeling with lavaan. John Wiley & Sons. https://doi.org/10.1002/9781119579038
Garrett, R. K., & Weeks, B. E. (2017). Epistemic beliefs’ role in promoting misperceptions and conspiracist ideation. PloS One, 12(9), e0184733. https://doi.org/10.1371/journal.pone.0184733
Haggard, M., Rowatt, W. C., Leman, J. C., Meagher, B., Moore, C., Fergus, T., Whitcomb, D., Battaly, H., Baehr, J., & Howard-Snyder, D. (2018). Finding middle ground between intellectual arrogance and intellectual servility: Development and assessment of the limitations-owning intellectual humility scale. Personality and Individual Differences, 124(1), 184–193. https://doi.org/10.1016/j.paid.2017.12.014
Han, H. (2024). Using measurement alignment in research on adolescence involving multiple groups: A brief tutorial with R. Journal of Research on Adolescence, 34(1), 235–242. https://doi.org/10.1111/jora.12891
Hanel, P. H., Roy, D., Taylor, S., Franjieh, M., Heffer, C., Tanesini, A., & Maio, G. R. (2023). Using self-affirmation to increase intellectual humility in debate. Royal Society Open Science, 10(2), 220958. https://doi.org/10.1098/rsos.220958
Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. https://doi.org/10.1007/BF02289447
Hu, L. -t., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118
Huynh, H. P., & Senger, A. R. (2021). A little shot of humility: Intellectual humility predicts vaccination attitudes and intention to vaccinate against COVID-19. Journal of Applied Social Psychology, 51(4), 449–460. https://doi.org/10.1111/jasp.12747
Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
Koetke, J., Schumann, K., & Porter, T. (2022). Intellectual humility predicts scrutiny of COVID-19 misinformation. Social Psychological and Personality Science, 13(1), 277–284. https://doi.org/10.1177/19485506209882
Koetke, J., Schumann, K., Porter, T., & Smilo-Morgan, I. (2023). Fallibility salience increases intellectual humility: Implications for people’s willingness to investigate political misinformation. Personality and Social Psychology Bulletin, 49(5), 806–820. https://doi.org/10.1177/01461672221080979
Kruglanski, A. W., & Webster, D. M. (1996). Motivated closing of the mind: “seizing” and “freezing.” Psychological Review, 103(2), 263–283. https://doi.org/10.1037/0033-295X.103.2.263
Krumrei-Mancuso, E. J. (2017). Intellectual humility and prosocial values: Direct and mediated effects. The Journal of Positive Psychology, 12(1), 13–28. https://doi.org/10.1080/17439760.2016.1167938
Krumrei-Mancuso, E. J., Haggard, M. C., LaBouff, J. P., & Rowatt, W. C. (2020). Links between intellectual humility and acquiring knowledge. The Journal of Positive Psychology, 15(2), 155–170. https://doi.org/10.1080/17439760.2019.1579359
Krumrei-Mancuso, E. J., & Rouse, S. V. (2016). The development and validation of the comprehensive intellectual humility scale. Journal of Personality Assessment, 98(2), 209–221. https://doi.org/10.1080/00223891.2015.1068174
Lakens, D. (2017). TOSTER: Two one-sided tests (TOST) equivalence testing (Version 0.3) [Computer software]. https:/​/​CRAN.R-project.org/​package=TOSTER
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence testing for psychological research: A tutorial. Advances in Methods and Practices in Psychological Science, 1(2), 259–269. https://doi.org/10.1177/251524591877096
Lantian, A., Muller, D., Nurra, C., & Douglas, K. M. (2016). Measuring belief in conspiracy theories: Validation of a French and English single-item scale. International Review of Social Psychology, 29(1), 1–14. https://doi.org/10.5334/irsp.8
Leary, M. R. (2018). The psychology of intellectual humility. John Templeton Foundation. https:/​/​www.templeton.org/​wp-content/​uploads/​2018/​11/​Intellectual-Humility-Leary-FullLength-Final.pdf
Leary, M. R. (2022). Intellectual humility as a route to more accurate knowledge, better decisions, and less conflict. American Journal of Health Promotion, 36(8), 1401–1404. https://doi.org/10.1177/08901171221125326
Leary, M. R., Diebels, K. J., Davisson, E. K., Jongman-Sereno, K. P., Isherwood, J. C., Raimi, K. T., Deffler, S. A., & Hoyle, R. H. (2017). Cognitive and interpersonal features of intellectual humility. Personality and Social Psychology Bulletin, 43(6), 793–813. https://doi.org/10.1177/0146167217697695
Little, T. D. (2013). Longitudinal structural equation modeling. The Guilford Press.
Muthén, L. K., & Muthén, B. O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599–620. https://doi.org/10.1207/S15328007SEM0904_8
Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. https://doi.org/10.1037/0033-295X.84.3.231
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Paulhus, D. L., & Vazire, S. (2007). The self-report method. In R. W. Robins, R. C. Fraley, & R. F. Krueger (Eds.), Handbook of Research Methods in Personality Psychology (pp. 224–239). Guilford.
Porter, T., Baldwin, C. R., Warren, M. T., Murray, E. D., Cotton Bronk, K., Forgeard, M. J., … Jayawickreme, E. (2022). Clarifying the content of intellectual humility: A systematic review and integrative framework. Journal of Personality Assessment, 104(5), 573–585. https://doi.org/10.1080/00223891.2021.1975725
Porter, T., Elnakouri, A., Meyers, E. A., Shibayama, T., Jayawickreme, E., & Grossmann, I. (2022). Predictors and consequences of intellectual humility. Nature Reviews Psychology, 1(9), 524–536. https://doi.org/10.1038/s44159-022-00081-9
Porter, T., & Schumann, K. (2018). Intellectual humility and openness to the opposing view. Self and Identity, 17(2), 139–162. https://doi.org/10.1080/15298868.2017.1361861
Porter, T., Schumann, K., Selmeczy, D., & Trzesniewski, K. (2020). Intellectual humility predicts mastery behaviors when learning. Learning and Individual Differences, 80(1), 101888. https://doi.org/10.1016/j.lindif.2020.101888
Revelle, W. (2022). Psych: Procedures for personality and psychological research. Northwestern University.
Reynolds, W. M. (1982). Development of reliable and valid short forms of the Marlowe-Crowne Social Desirability Scale. Journal of Clinical Psychology, 38(1), 119–125. https:/​/​doi.org/​10.1002/​1097-4679(198201)38:1%3C119::AID-JCLP2270380118%3E3.0.CO;2-I
Rhemtulla, M., Brosseau-Liard, P. É., & Savalei, V. (2012). When can categorical variables be treated as continuous? A comparison of robust continuous and categorical SEM estimation methods under suboptimal conditions. Psychological Methods, 17(3), 354–373. https://doi.org/10.1037/a0029315
Robitzsch, A. (2022). sirt: Supplementary Item Response Theory Models. R package version 3.12-66. https:/​/​cran.r-project.org/​package=sirt
Roets, A., & Van Hiel, A. (2011). Item selection and validation of a brief, 15-item version of the need for closure scale. Personality and Individual Differences, 50(1), 90–94. https://doi.org/10.1016/j.paid.2010.09.004
Rolland, J. P. (1998). Manuel de l’inventaire NEO PI-R (adaptation française) [Manual of the neo-pi-r, French adaptation]. ECPA.
Rosseel, Y. (2012). lavaan: An R Package for Structural Equation Modeling. Journal of Statistical Software, 48(2), 1–36. https://doi.org/10.18637/jss.v048.i02
Salama-Younes, M., Guingouain, G., Le Floch, V., & Somat, A. (2014). Besoin de cognition, besoin d’évaluer, besoin de clôture: proposition d’échelles en langue française et approche socio-normative des besoins dits fondamentaux [Need for cognition, need for closing, need to evaluate: Proposal of scales in French and socio-normative approach of fundamental needs]. European Review of Applied Psychology, 64(2), 63–75. https://doi.org/10.1016/j.erap.2014.01.001
Schreiber, J. B., Stage, F. K., King, J., Nora, A., & Barlow, E. A. (2006). Reporting structural equation modeling and confirmatory factor analysis results: A review. The Journal of Educational Research, 99(6), 323–337. https://doi.org/10.3200/JOER.99.6.323-338
Senger, A. R., & Huynh, H. P. (2021). Intellectual humility’s association with vaccine attitudes and intentions. Psychology, Health & Medicine, 26(9), 1053–1062. https://doi.org/10.1080/13548506.2020.1778753
Stanley, M. L., Sinclair, A. H., & Seli, P. (2020). Intellectual humility and perceptions of political opponents. Journal of Personality, 88(6), 1196–1216. https://doi.org/10.1111/jopy.12566
Tagand, M., & Muller, D. (2022). Counter-evaluative conditioning and conspiracy mentality. Unpublished data.
Velicer, W. F., & Fava, J. L. (1998). Effects of variable and subject sampling on factor pattern recovery. Psychological Methods, 3(2), 231–251. https://doi.org/10.1037/1082-989X.3.2.231
Verardi, S., Dahourou, D., Ah-Kion, J., Bhowon, U., Tseung, C. N., Amoussou-Yeye, D., … Rossier, J. (2010). Psychometric properties of the Marlowe-Crowne social desirability scale in eight African countries and Switzerland. Journal of Cross-Cultural Psychology, 41(1), 19–34. https://doi.org/10.1177/0022022109348918
Weiner, I. B., & Greene, R. L. (2008). Handbook of personality assessment. Wiley.
Yuan, K. H., & Bentler, P. M. (2000). Three likelihood-based methods for mean and covariance structure analysis with nonnormal missing data. Sociological Methodology, 30(1), 165–200. https://doi.org/10.1111/0081-1750.00078
Yzerbyt, V. Y., Muller, D., & Judd, C. M. (2004). Adjusting researchers’ approach to adjustment: On the use of covariates when testing interactions. Journal of Experimental Social Psychology, 40(3), 424–431. https://doi.org/10.1016/j.jesp.2003.10.001
Zmigrod, L., Zmigrod, S., Rentfrow, P. J., & Robbins, T. W. (2019). The psychological roots of intellectual humility: The role of intelligence and cognitive flexibility. Personality and Individual Differences, 141(1), 200–208. https://doi.org/10.1016/j.paid.2019.01.016
Zwaan, R. A., Etz, A., Lucas, R. E., & Donnellan, M. B. (2018). Improving social and behavioral science by making replication mainstream: A response to commentaries. Behavioral and Brain Sciences, 41(1), e157. https://doi.org/10.1017/S0140525X18000961
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material