Constraints on Generality statements (COGs; Simons et al., 2017) address the boundary conditions of study findings, informing readers of the participant populations and the methodological and social/cultural/historical contexts within which the reported phenomena are expected to occur. This in turn informs readers of the contexts in which additional studies are needed, facilitating cumulative science. Knowledge of boundary conditions is also crucial for conducting meaningful direct replications of findings (Simons et al., 2018). In our review of 282 articles, we found that only six included a separate COG subsection. The proportion of articles that contained any explicit COG statements anywhere in the discussion section ranged from 5% to 40% across the four journals we examined. This proportion was higher in journals whose editors publicly encouraged inclusion of COGs. We believe continued monitoring of the inclusion of COGs would be beneficial not only in psychology but also in other scientific disciplines.

To improve the contribution of empirical studies to the body of scientific knowledge, Simons, Shoda & Lindsay (2017, 2018) proposed that reports of all empirical studies include a section on Constraints on Generality (COG), discussing the likely boundary conditions of a finding. Originally a concept in mathematics and physics, boundary conditions define “the scope of a model … to which a set of governing equations applies” (Bursten, 2021). In a broader scientific context, boundary conditions have come to refer to “the domain of applicability of the model” within which “the phenomenon in the real world to be explained falls” (Bokulich, 2008, p. 226).

In the social sciences, “boundary conditions” refer to the “‘who, when, and where’ (i.e., under what circumstances will these concepts and relationships work)” of any theoretically predicted behavioral phenomena (Bhattacherjee, 2012, p. 26; Busse et al., 2017; Dubin, 1978; Whetten, 1989). Discovery of boundary conditions is “critically important for understanding a phenomenon and developing an accurate theory about it” (Roediger & McCabe, 2007, p. 29). COG statements prompt authors to think about the boundary conditions of the phenomena their study investigated, such as the participant populations, study designs, or contexts, among other factors, within which the phenomena are expected to occur. “Writing a COG statement prompts authors to consider and articulate their target populations, and reading a COG statement prompts other researchers to evaluate which claims of generality already have empirical support and which do not” (Simons et al., 2017, p. 1128).

Note that COG refers to constraints on generality. Although there is extensive overlap between “generality,” “generalizability,” and “external validity,” we believe it is worth highlighting one important way in which they differ: “generality” is not about the validity of any finding. Of course, if the original findings are unreplicable false positives even in the original context, their validity is indeed suspect. However, if the phenomenon of interest occurs reliably in the original context, but not in other contexts, the generality framework views both the original and new findings as contributing together to a more comprehensive set of information about the “who, when, and where” of the behavioral phenomena (Bhattacherjee, 2012, p. 26; Busse et al., 2017; Dubin, 1978; Whetten, 1989), paving the way to discovering boundary conditions for the phenomenon. In contrast, in the traditional “generalizability” framework, if a phenomenon found in a study does not occur in other contexts or participant populations, the generalizability and external validity of the original finding are called into question even if the finding has been shown to replicate within the original context.

COGs Contribute to Cumulative Scientific Progress

Metascience research has examined how well limitations are acknowledged in the scientific literature (e.g., Ioannidis, 2007). A recent study (Clarke et al., 2023) found that only 61.9% of the 440 articles sampled from Social Psychological and Personality Science published between 2010 and 2020 reported at least one limitation, and that an estimated 52% of articles mentioned potential problems with external validity.

However, boundary conditions are not necessarily limitations, just as stating the particular location and time frame to which a weather forecast applies is not a limitation. In fact, a forecast of typical weather regardless of geographic location and day is hardly useful for most purposes. Just as it would not be scientific to assume the weather is the same across locations, presuming, without empirical evidence, that findings from studies conducted in one context are universal borders on being unscientific. Indeed, the default practice of considering findings to be universal unless otherwise shown is increasingly recognized as problematic (e.g., Brady et al., 2018; Kline et al., 2018; Nielsen et al., 2017; Singh et al., 2023; Visser et al., 2022; Yarkoni, 2020).

In contrast, imagine that the COG statements of a paper alerted readers to potential boundary conditions by stating, for example:

… at the very least, further replications with other populations, cohorts, and testing conditions seem necessary next steps. We also do not wish to overgeneralize … these results were obtained from a population of middle-class preschool children not selected for any self-regulatory difficulties, in a relatively narrow age span [therefore] any diagnostic “window” … may be fragile and narrow in time. […] It is also possible that the effects … interact with the particular characteristics of the subject population, and such interactions will require systematic exploration in future work. (Shoda et al., 1990, p. 985).

Not only do COG statements facilitate more meaningful direct replication attempts by specifying the contexts in which a finding is expected to replicate; they also encourage boundary condition-seeking efforts to examine whether the same phenomenon occurs in other contexts (Greenwald et al., 1986; Simons et al., 2018). For example, studies conducted in new contexts seeking to replicate Shoda et al. (1990), whose COG statement was quoted above (e.g., Kidd et al., 2013; Michaelson et al., 2013), found that trust in the promised outcome is likely a boundary condition, consistent, in hindsight, with earlier suggestions (e.g., Mischel & Staub, 1965). In another example, the “social discounting” phenomenon has been demonstrated in numerous studies with Western participants, who would sacrifice more resources to benefit individuals who are socially closer to them. However, studies in non-Western contexts led to the discovery that the social discounting phenomenon rarely occurs among rural Bangladeshi and Indonesian participants (Tiokhin et al., 2019). This in turn led to the development of hypotheses about the boundary conditions, such as that “social discounting only exists when people treat others based largely on individual feelings” rather than their need, or that it would not occur in cultures in which “giving without recipient need is a frowned-upon signal of superiority” (Tiokhin et al., 2019, p. 11).

More broadly, knowledge about when a phenomenon occurs, and when it does not, can contribute to genuine scientific progress (e.g., Busse et al., 2017; Roediger & McCabe, 2007). For example, Isaac Newton pondered why the moon and other “heavenly bodies” do not fall to the ground, in apparent exception to the Aristotelian law of motion. Awareness of this and other boundary conditions ultimately led to a fundamentally new theory of gravity and motion of objects (Faller et al., 2024; Sheth, 2019). In fact, boundary conditions are a key component of the “condition seeking” program of research (Greenwald et al., 1986), in which the main research question is not “does this occur,” but rather “under what conditions does this occur?” COG statements facilitate these discoveries by nudging a shift away from the current “unless otherwise shown, consider findings to be universal” default to a new default: “unless otherwise shown, consider findings to be specific.” It also challenges the view that only universal findings are valuable.

It is neither feasible nor an effective use of individual scientists’ efforts to document every single COG in their studies. Instead, it is an ongoing task for the entire scientific community to accumulate knowledge about the boundary conditions of a phenomenon. This task is facilitated by authors stating the COGs that are known or seem likely to them, which in turn encourages other researchers and the scientific community to collectively identify more boundary conditions. Simultaneously, the process of articulating COGs can also facilitate discoveries of the reasons why a phenomenon is likely to occur in contexts (broadly defined, including participant populations, social contexts, and laboratory settings) that differ in certain ways from the original study. Articulating these reasons is just as important as articulating the contexts beyond which there is no compelling reason to expect the same phenomenon. Both invite the scientific community to develop theories to predict the contexts on which the phenomenon depends.

Most importantly, perhaps, prompted by the need to make COG statements, authors may explicitly state that they do not know the boundary conditions of their study. Publicly recognizing a lack of knowledge is key to scientific progress. In addition, when new studies find that previous findings do in fact occur in a new context, manuscripts reporting the new findings are sometimes dismissed by editors because “we already know that.” However, if the original study had a COG statement informing readers that it is not known whether the phenomenon occurs in a new context, demonstration of its occurrence in that context (e.g., a new participant population) would be more likely to be considered a genuine contribution to cumulative scientific knowledge, and articles reporting such findings would be more likely to be accepted for publication. This should incentivize conceptual replication efforts, which in turn can lead to discovering important boundary conditions.

COG Statements Can Improve the Replicability of Findings

Inclusion of COG statements is important for addressing the replication crisis in psychology, cancer biology, and many other fields (Klein et al., 2018; Open Science Collaboration, 2015). Explicitly stating COGs addresses this crisis because COG statements define direct replication by specifying the contexts in which attempts at direct replication should be conducted (Simons et al., 2018), which should improve the replicability of published findings. When conceptual replications that vary these conditions do not reproduce the original findings, the information contained in the COG statements gives researchers a priori hypotheses about the contextual factors that may matter. In addition, the replication crisis is in part due to overconfidence in the generality of study findings. When authors fail to specify the populations and contexts in which they believe the behavioral phenomenon of interest occurs, unwarranted overgeneralization of findings is the likely result (Yarkoni, 2020). Readers then should not be blamed for expecting the phenomenon to occur regardless of participant demographics or the social and methodological contexts, and new studies that fail to support these expectations rightly call into question the validity of the overly general inferences based on the original findings. In short, with COGs, both original and replication studies will have a greater ability to contribute valuable information to scientific knowledge: each study will contribute more replicable and more clearly contextualized findings that become part of cumulative science.

Contribution of COGs to a More Globally Relevant Science

Currently, a large majority of studies in psychology are conducted in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies (e.g., Henrich et al., 2010; Rad et al., 2018). It has been shown that findings with WEIRD samples do not always occur in non-WEIRD societies. Even for optical illusions, which one may well suspect are relatively universal, research has shown cross-cultural differences (e.g., in the magnitude of the Mueller-Lyer illusion; Henrich et al., 2010). Of course, the extent of culture-specificity depends on the phenomena being studied. However, findings from WEIRD populations should not be presumed to occur in a non-WEIRD population without empirical evidence. Lack of research in these populations not only is an injustice to those in non-WEIRD societies but can also have adverse outcomes for them. For example, if COGs are not discussed, a public health policy touted as effective based on studies conducted with participants from WEIRD societies could be automatically applied to non-WEIRD populations even though there is little empirical evidence to support its effectiveness beyond WEIRD societies. Explicitly discussing COGs in scientific publications helps alert readers to the need to empirically determine whether the same or similar results occur in the target population. Policymakers and the general public can also benefit from reports of studies that more accurately describe the populations to which the results apply.

The findings of a study may also be dependent on the features of the materials or stimuli used and may well be specific to the unique characteristics of the stimuli (e.g., Clark, 1973; Judd et al., 2012; Wells & Windschitl, 1999). In that case, the findings from one study may not occur in other studies that use a different set of materials. Researchers should include COG statements referring to the materials used in their study and discuss the extent to which they believe the same findings will, or will not, occur in studies using other materials. The findings may also depend on features of the procedures of a study, such as the testing environment, equipment, or the person administering the experiment. Finally, findings from a study may be dependent on the historical and social context due to changes in societal attitudes or the political environment, for example (Gergen, 1973). Therefore, COG statements should address such components of the study design and study contexts.

In sum, COG statements documenting boundary conditions are instrumental in accumulating scientific knowledge, addressing the replication crisis, and making psychology research more relevant to the entire human population. Has including COG statements become a common publication practice in psychology?

The Present Studies

As of October 2024, the article that proposed that COG statements be included in all empirical studies (Simons et al., 2017) has been cited in over 1000 articles. But that does not mean that including COG statements has in fact become a common practice in psychology. To the best of our knowledge, no study has investigated the inclusion of COG statements in psychology publications. The present study examined the prevalence, types, and explicitness of COG statements in four psychology publications from 2018 to 2022.

We addressed the following questions: (1) what proportion of articles published between 2018 and 2022 in psychology journals has a separate section dedicated to COGs; (2) what proportion of articles discusses COGs anywhere in the discussion section; (3) what proportion of articles discusses implicit (in contrast to explicit) COGs; (4) whether the proportion of articles that discuss COGs changed in the years following the publication of Simons et al. (2017); and (5) what types of COGs are most frequently discussed.

In Study 1, we addressed these questions among articles published in the three journals considered “a premier outlet for all psychological research [and] leading disciplinary-specific journals for social psychology and cognitive psychology” by the article that is often credited with calling attention to the low rate of replicability of psychology findings (Open Science Collaboration, 2015).

In Study 2, we also addressed these questions for articles published in Neuropsychology in 2021 and 2022, during which its editor-in-chief publicly emphasized the importance of COGs (e.g., in publication guidelines; Neuropsychology Submission Guidelines, 2021).

Due to scheduling constraints, coding of the articles started without preregistered plans for the studies. However, data, code, and other supplemental materials are available at https://osf.io/hng5p/?view_only=2341ee50ba024a3db5185e14e6663ad2.

Materials and Methods

Journal Selection

We examined the articles published in the following three journals: Psychological Science (PSCI), Journal of Personality and Social Psychology (JPSP), and Journal of Experimental Psychology: Learning, Memory, and Cognition (JEP:LMC).

Coding Articles

We aimed to randomly sample 20 articles per year per journal, resulting in 82 articles from JPSP, 79 from PSCI, and 81 from JEP:LMC (see footnote 1), for a total of 242 coded articles. Articles by one of the present authors were excluded from coding and analyses to avoid any possible conflict of interest.

Three undergraduate Research Assistants (RAs) coded the articles. The RAs were instructed to read the abstract of each article to gain a general overview of the context of the article and what it was examining and then code the General Discussion sections for COGs. This is where Simons et al. (2017) urged a separate section for COGs to be inserted, as it is most important to discuss COGs where an article’s findings are summarized.

The RAs looked for phrases that indicated that the study results may hold true only for certain contexts (e.g., situations or participant populations), including those contexts that are a part of the experimental design. Specifically, they used the following questions to determine whether a sentence, or a part of a sentence, describes a COG: (1) whether, in the absence of the potential COG statement, the reader would have been inclined to assume that the results would occur broadly even in other contexts or participant populations, and (2) whether the potential COG statement would reduce the inclination to make unwarranted assumptions of broad generality. If the answer to both questions was yes, the phrase was considered a COG statement.
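As a minimal sketch, this two-question screen amounts to a conjunction. The function below is our own illustration (the coders applied the questions by judgment, not by code, and the function name is hypothetical):

```python
def is_cog_statement(would_assume_broad: bool, reduces_assumption: bool) -> bool:
    """Two-question screen for identifying a COG statement.

    would_assume_broad: without the phrase, would readers be inclined to
        assume the results occur broadly in other contexts or populations?
    reduces_assumption: does the phrase reduce that inclination?
    A phrase counts as a COG statement only when both answers are yes.
    """
    return would_assume_broad and reduces_assumption
```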

Throughout the coding process, we also differentiated internal validity limitations from COGs. Phrases that call into question the authors’ inferences about the processes (causal or otherwise) underlying the phenomenon of interest in their study are considered to address internal validity. In contrast, phrases referring to the contexts (both immediate and broader, including participant populations) in which the same or similar findings are expected to occur and those in which they are not (i.e., addressing the potential boundary conditions) are considered as COG statements.

Note that the presence of phrases such as “we cannot generalize” or “caution against generalizing” was not automatically considered indicative of a COG statement. For example, “we don’t have good internal validity, so we cannot generalize” would still be considered an internal validity issue, not a COG. We found that authors sometimes used the word “generalize” to refer to uncertainty about all types of inferences based on their study results.

RAs coded 20 articles from one journal, then coded 20 articles from the next journal, and so on until they had coded articles from all three journals we sampled for this study; then they repeated the process. No one journal was coded entirely in the beginning or end of the process.

RAs recorded the result of coding each article using an online data entry form, with an option to list up to 10 COG statements in each article. There were no articles in which more than 10 COGs were found. For each article, after identifying and indicating the existence of one (or more) COG(s) in the coding form, RAs copied and pasted the COG wording, indicated the type of COG it was, and rated the explicitness of the COG on a scale of 1 (very implicit) to 5 (very explicit). If two different sentences in the same paragraph described two different COGs, they were coded as two separate COGs, and, if one sentence contained two different types of COGs, that sentence was also coded with two different types of COGs with separate explicitness ratings. The online data entry form used for coding can be found at https://osf.io/hng5p/?view_only=2341ee50ba024a3db5185e14e6663ad2.

Explicitness of COGs

We used a numerical scale of 1 to 5, with 5 indicating an explicitly and clearly stated COG and 1 indicating an implicitly and indirectly stated COG. “Explicit COGs” are statements that explicitly discussed the participant populations, materials/stimuli/procedure, or contexts in which the findings may occur and those in which they may not occur, as recommended by Simons et al. (2017). Explicit COG statements may contain, but are not limited to, phrases such as “we caution against generalizing…”, “these findings cannot be generalized…”, or “this may not occur under X, Y, or Z situation…” An example of an explicit COG (average RA explicitness rating = 5) is the following: “…our samples did not include older couples, gay and lesbian couples, or, in Study 2, interracial couples, thus limiting generalizability” (Ross et al., 2019, p. 14).

Implicit COGs were defined as statements that may allow readers to infer constraints on generality even though they do not directly caution the reader against assuming that the findings occur under conditions dissimilar to the ones in the reported study or studies. These implicit statements often address what future research should focus on, thereby allowing readers to infer that the present findings may not apply to new situations. Wordings that could be considered a weak, implicit COG include general phrases such as “future research can further explore…” because such wording does not recommend or indicate the necessity of examining the conditions under which the findings are likely to occur (and those under which they are unlikely to occur). An example of an implicit COG (average RA explicitness rating = 1.5) is the following: “…this claim could be further tested by varying the reliability and validity of other linguistic cues…to assess how this affects their online interpretation” (Morett et al., 2021, p. 1524). In contrast, an example of a more explicit COG (average RA explicitness rating = 3) is the following: “Additional research will be needed…to generalize that conclusion and…to extend the current results beyond the particular task and population tested here” (McCarley & Yamani, 2021, p. 1681).

The purpose of coding implicit COGs is that their existence suggests that authors may be aware of the need to test the generality of their findings even though they did not discuss the boundary conditions explicitly (possibly for fear that doing so would lead the editor and reviewers to consider their findings as not important enough to publish). To the extent that they are already aware of such boundary conditions, they may state them more explicitly in the future when, for example, the editorial policy of the journal clearly indicates that discussions of boundary conditions will make it more, rather than less, likely for the manuscript to be accepted for publication. We envision Implicit COGs as a kind of “proto-COG” which could become a “full-blown,” or explicit, COG.

Types of COGs

RAs also categorized the COGs they found in each paper into four types: “Participants”, “Materials/Stimuli/Procedure”, “Social/Cultural/Historical Context”, and “Other.”

Participant COGs are defined as constraints on generality due to the characteristics of the participants studied. These characteristics include age, disability, gender, nationality, race, sexuality, or other information about participants. The impact of cultural context experienced by each person as a part of their life history is also considered a participant COG. An example of an explicit participant COG is as follows: “Although one of the aims of this paper was to examine the effects of support across a major life transition, this sample of couples expecting their first child limits the extent to which we can generalize the findings to other transitions” (Ryon & Gleason, 2018, p. 1049).

Materials/Stimuli/Procedure COGs include the specific materials or stimuli used in the study, the particular experimental design (e.g., repeated-measures, within-subject design), and/or the step-by-step procedure used. An example of an explicit Materials/Stimuli/Procedure COG is as follows: “A limitation of the present studies is that all of the stimuli are witness reports, verbally communicated to a third-party moral judge. This is the typical way in which theories of moral judgment are tested (including previous work on biases in blame), and it does constrain the generalizability of the results” (Monroe & Malle, 2019, p. 223).

Context COGs include historical, social, and cultural context(s) not manipulated by the researcher explicitly or intentionally. We defined context COGs in this way for ease of identification and because we wanted to keep the idea of context solely to extraneous factors influencing the outcome of the study that are not a part of the studies’ focus. Factors such as studies done within the context of COVID-19, the social context of the Civil Rights Movement, before vs. after a given US presidential election, and other social and cultural contexts (e.g., the study was conducted in the Central Valley of California) were all considered context COGs. An example of an explicit context COG is as follows: “Finally, we cannot be certain that school composition effects generalize to today’s time and beyond. The present study addressed selective schools in 1960 and their consequences for individuals’ lives into the present day” (Göllner et al., 2018, p. 1794).

COGs that did not fit the definition of the three types discussed above were coded as “other.” An example of COGs that fall in this category might be called “phenomena” COGs (e.g., stating that the study found that the manipulation of X had an effect on Y1, but one should not assume, without empirically testing, that X will also have an effect on a different outcome variable, Y2).

Note that these categories of COGs were not considered mutually exclusive. Thus, RAs were allowed to indicate more than one COG type category for each COG. For example, the culture in which the study is conducted can be a participant COG because the culture absorbed throughout a person’s life influences the values, world views, and meaning system of the individual, but it can also be a context COG when it provides the current cultural context for the study.

Coder Training

In order to train coders, we identified a set of 100 “practice” articles so that RAs could learn the coding process and provide feedback on our initial set of codes used to quantify the extent to which each article discussed COGs. These “practice” articles were selected from JPSP, one of the journals we used for the study, but they did not include the articles we coded for our study. RAs’ feedback during the training period was incorporated in creating the final coding system used to generate the data reported below. The coding of the articles included in the analyses reported below was conducted after the training period. Examples of COGs were discussed in weekly meetings with RAs during the coding period to maintain the quality of coding, but the codes remained unchanged once the coding of the main set of articles started.

Interrater Reliability

Interrater reliability was examined for the 241 articles coded by all three RAs, out of the total of 282 articles examined in Studies 1 and 2.

RAs examined whether each article had a separate section within the Discussion section dedicated to discussing COGs. Cronbach’s alpha was .92 based on the inter-rater correlations.

RAs also indicated whether each article they read discussed any COGs (including explicit and implicit COGs) of any type. Cronbach’s alpha for this was .78, indicating that the inter-rater reliability was relatively high.

In addition, the RAs rated the explicitness of each COG they identified. For each article, we identified the COG that received the highest explicitness rating from each RA. Articles found to contain no COGs were given a score of 0. Thus, each article received three scores, each ranging from 0 to 5: the highest explicitness rating given by RA #1, the highest given by RA #2, and the highest given by RA #3. Cronbach’s alpha of the highest explicitness ratings, based on the inter-rater correlations (with articles as the unit of analysis) between each pair of the three RAs, was .86.
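Because these alphas are described as being computed from inter-rater correlations with articles as the unit of analysis, the standardized form of Cronbach’s alpha, k·r̄ / (1 + (k − 1)·r̄), applies. The sketch below is our own illustration of that computation, not the study’s analysis code:

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length score vectors."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def standardized_alpha(raters):
    """Cronbach's alpha from the mean pairwise inter-rater correlation:
    alpha = k * rbar / (1 + (k - 1) * rbar), with k raters and
    articles as the unit of analysis."""
    k = len(raters)
    rs = [pearson(raters[i], raters[j])
          for i in range(k) for j in range(i + 1, k)]
    rbar = mean(rs)
    return k * rbar / (1 + (k - 1) * rbar)
```

For example, three raters whose article-level scores are perfectly correlated yield an alpha of 1.0.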

Lastly, we also examined interrater reliability for identifying each type of COG. For Participant COGs, Cronbach’s alpha was .88; for Context COGs it was .65; and for Material/Stimuli/Procedure COGs it was only .41. Thus, the results regarding Material/Stimuli/Procedure COGs should be interpreted with much caution.

Results

The results reported in this section are based on 242 articles from JPSP, PSCI, and JEP: LMC coded by at least two RAs. For all analyses, we aggregated the coding provided by all RAs. For binary (i.e., “COG yes/no”) variables, we used the answer that the majority of the three RAs selected; when only two RAs coded an article and disagreed about whether there were COGs, we randomly selected one RA’s answer (we repeated the analyses and confirmed that the results did not meaningfully vary as a result of this random selection; see footnote 2). For quantitative (e.g., rating) variables, we used the arithmetic average rating.
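These aggregation rules can be sketched as follows. This is a minimal illustration of the rules described above; the function names and the fixed random seed for the tie-breaker are ours, not the study’s:

```python
import random
from statistics import mean

_rng = random.Random(0)  # fixed seed, for the illustrative tie-breaker only

def aggregate_binary(codes):
    """Majority vote over per-RA yes/no codes (True/False).

    With three coders the majority is well defined; when only two coded
    an article and disagreed, one coder's answer is selected at random,
    mirroring the tie-breaking rule described in the text.
    """
    yes = sum(codes)
    no = len(codes) - yes
    if yes != no:
        return yes > no
    return _rng.choice(list(codes))

def aggregate_rating(ratings):
    """Arithmetic mean of per-RA quantitative ratings."""
    return mean(ratings)
```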

Proportion of Articles With a Separate COG Section

The percentage of articles coded in JPSP, PSCI, and JEP:LMC that contained a separate COG section was very low. Of the 242 articles, only 5 articles (2%) had a separate section for COGs, whereas 237 articles did not.

COG Explicitness

If an RA found a COG in an article, it was rated on a scale of 1 to 5 of COG explicitness. When an RA identified multiple COGs in one article, we took the maximum of the COG explicitness ratings given by the RA to the article and considered it as the RA’s COG explicitness score for the article. When an RA did not find any COG in an article, we assigned 0 as the RA’s explicitness score for that article. Finally, the scores were averaged across the RAs, resulting in one COG explicitness score for each article. Figure 1 shows the distribution of COG explicitness scores.
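This scoring rule can be sketched as follows (our own illustration; the RA labels in the usage example are hypothetical):

```python
from statistics import mean

def article_explicitness(per_ra_ratings):
    """Per-article COG explicitness score.

    per_ra_ratings maps each RA to the list of 1-5 explicitness ratings
    that RA gave to the COGs found in the article (empty if none found).
    Each RA contributes the maximum of their ratings, or 0 when they
    found no COG; the article score is the mean across RAs.
    """
    scores = [max(r) if r else 0 for r in per_ra_ratings.values()]
    return mean(scores)
```

For example, an article rated [3, 5] by one RA and [4] by a second, with no COGs found by a third, receives (5 + 4 + 0) / 3 = 3.0.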

Figure 1.
Histogram Showing the Distribution of COG Explicitness Across Articles

When an article mentioned multiple COGs, each COG received its own explicitness rating. Then, for each article, we first identified, for each RA, the highest of the ratings the RA gave to the article, and considered it as the article’s COG explicitness score according to the RA. When an RA found no COGs in an article, we assigned 0 as the article’s COG explicitness score according to the RA. To arrive at one score for each article, which is what was used for the histogram, we averaged the result across the RAs.


Proportion of Articles with No COGs, Implicit COGs, and Explicit COGs

The COG explicitness ratings were used to group articles into three categories. When the majority of the RAs found no COG in an article, that article was categorized as "no COG."2 Of the remaining articles, those with a COG explicitness score of 3 or less were categorized as "implicit COG," and those with a score greater than 3 were categorized as "explicit COG." As shown in Table 1, of the 242 articles from all three journals, 136 (56%) were categorized as "no COG," 73 (30%) as "implicit COG," and 33 (14%) as "explicit COG." A chi-square test clearly rejected the null hypothesis that the true proportion of articles with explicit COGs is 50% (χ²(1) = 134.06, p < 10⁻¹⁵).
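The categorization rule and the form of the goodness-of-fit test can be sketched as follows. This is a hypothetical illustration, not the study's analysis code; an uncorrected statistic computed this way from the Table 1 counts need not match the reported value exactly, since the reported test may have been run in software applying, for example, a continuity correction:

```python
def categorize(no_cog_votes, n_ras, score):
    """Three-way category for one article, per the rule above
    (ties between two RAs were broken randomly in the study)."""
    if no_cog_votes > n_ras / 2:      # majority of RAs found no COG
        return "no COG"
    return "implicit COG" if score <= 3 else "explicit COG"

def chi_square_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# H0: the true proportion of explicit-COG articles is 50%.
n = 242
chi2 = chi_square_gof([33, n - 33], [n / 2, n / 2])   # df = 1
```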

Table 1.
Presence and Explicitness of COGs Across Three Psychology Journals

| Journal | No COG, N (%) | Implicit COG, N (%) | Explicit COG, N (%) | Total coded, N (%) |
| JEP:LMC | 55 (68%) | 22 (27%) | 4 (5%) | 81 (100%) |
| JPSP | 27 (33%) | 32 (39%) | 23 (28%) | 82 (100%) |
| PSCI | 54 (68%) | 19 (24%) | 6 (8%) | 79 (100%) |
| Total | 136 (56%) | 73 (30%) | 33 (14%) | 242 (100%) |

Table 1 shows that the three journals we examined (JPSP, PSCI, and JEP:LMC) differed in their proportions of implicit and explicit COGs. JPSP had a larger proportion of articles with COGs than JEP:LMC and PSCI, and it also had the highest percentages of both implicit and explicit COGs.

Change over the Past Four Years

Figure 2 shows the proportion of articles that reported at least one COG based on responses to a binary COG yes/no question. Each small circle represents an article. Both the X and Y coordinates were “jittered” so that when multiple articles had the same combination of year and presence of COG (1: yes, 0: no), they are visible as small circles at slightly different locations. The results do not suggest an increase over time from 2018 to 2021, and in fact none of the slopes, when tested with logistic regression against the null hypothesis of no change, were statistically significantly different from 0.
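Jittering of this kind is straightforward to implement. The sketch below, with an arbitrary jitter scale and seed rather than the values used for the figures, adds small uniform offsets so coincident points separate visually:

```python
import random

def jitter(values, scale=0.08, seed=0):
    """Offset each value by uniform noise in [-scale, scale] so that
    points sharing the same coordinates plot at slightly different
    locations."""
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

# Year on X, COG presence (1: yes, 0: no) on Y, both jittered.
years = [2018, 2018, 2019, 2019, 2020]
present = [1, 1, 0, 1, 0]
jx, jy = jitter(years), jitter(present, seed=1)
```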

Figure 2.
Presence of COG Statements in Articles Published in Three Leading Psychological Journals as a Function of Publication Year

The proportion of articles in JPSP, Psychological Science, and JEP:LMC in which at least one COG was identified based on the majority response to a binary COG yes/no question. Each small circle represents an article. Both the X and Y coordinates of these circles were “jittered” so that when multiple articles have the same combination of year and presence of COG, they are visible at slightly different locations.


Proportion of Each Type of COG Included in Each Article

Collapsing across explicit and implicit COGs, 49 (21%) of the 242 articles contained a Participant COG, 56 (23%) contained a Materials/Stimuli/Procedure COG, and 30 (13%) contained a Context COG.

In Study 2, we examined an additional leading journal to serve as an example of a “best case scenario,” because the editor of that journal stated in an interview with APA that he was encouraging authors to include COG statements (APA, 2021). He also explicitly called for inclusion of COG statements in an editorial (Yeates, 2022) and encouraged COGs in the journal publication guidelines (Neuropsychology Submission Guidelines, 2021). Thus, we examined articles from Neuropsychology published in 2021 and 2022 after his term as editor-in-chief started.

Methods

The same methods used in Study 1 were used to code 40 articles randomly selected from those published in 2021 and 2022 in Neuropsychology, likely submitted after the editor started explicitly calling for COG statements.

Results

The following results are based on the 40 articles from Neuropsychology, all of which were coded by all three RAs.

Proportion of Articles with a Separate COG Section

Only 1 of the 40 articles we examined (2.5%), published in 2022, included a separate section discussing COGs.

COG Explicitness

Figure 3 shows the distribution of the highest explicitness rating received by each of the articles published in Neuropsychology. As shown in Figure 3, more than half of the articles published in 2021-22 had a COG explicitness rating of 2 or above, whereas the other three journals published in 2018-21 had a majority of articles with a COG explicitness rating under 2 (see Figure 1).

Figure 3.
Histogram Showing the Distribution of COG Explicitness Across Articles Published in Neuropsychology.

When an article mentioned multiple COGs, each COG received its own explicitness rating. Then, for each article, we first identified the highest of the ratings given to it by each RA as the article’s COG explicitness score according to the RA. When an RA found no COGs in an article, we assigned 0 as the article’s COG explicitness score according to the RA. To arrive at one score for each article, which was used in this histogram, we averaged the result across the RAs.


Proportion of Articles with No COGs, Implicit COGs, and Explicit COGs

Table 2 summarizes the results, grouping articles into the same three categories used for Table 1.

Table 2.
Presence and Explicitness of COGs in Neuropsychology

| Journal | No COG, N (%) | Implicit COG, N (%) | Explicit COG, N (%) | Total coded, N (%) |
| Neuropsychology | 12 (30%) | 12 (30%) | 16 (40%) | 40 (100%) |

Change over Time

Next, we examined whether the percentage of articles reporting COGs in Neuropsychology increased over time. Figure 4 plots the proportion of articles in which at least one COG was identified based on the majority response to a binary COG yes/no question. Each small circle represents an article. Both the X and Y coordinates of these circles were "jittered" so that when multiple articles have the same combination of year and presence of COG (1: yes, 0: no), they are visible at slightly different locations.

Figure 4.
Presence of COG Statements in Articles Published in Neuropsychology as a Function of Publication Year

The proportion of articles in Neuropsychology in which at least one COG was identified based on the majority response to a binary COG yes/no question. Each small circle represents an article. Both the X and Y coordinates of these circles were “jittered” so that when multiple articles have the same combination of year and presence of COG (1: yes, 0: no), they are visible at slightly different locations.


Logistic regression indicated that, although the proportion of articles with COGs increased from 2021 to 2022, the odds of COG presence were not significantly related to publication year (β = .92, df = 38, p = .19). Note, however, that given the small sample size (N = 40), statistical power was low. Thus, the failure to reject this null hypothesis should not be interpreted as indicating that the proportion of articles with COGs remained unchanged.
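The general form of such a trend test can be illustrated with a from-scratch one-predictor logistic regression fit by Newton-Raphson, returning the slope and its Wald statistic. This is purely a sketch with made-up data; the study's analysis would normally be run in standard statistical software:

```python
import math

def logistic_trend(years, present, iters=30):
    """Fit logit P(COG present) = b0 + b1 * (year - mean year) by
    Newton-Raphson; return (slope, Wald z = slope / SE(slope))."""
    m = sum(years) / len(years)
    xs = [y - m for y in years]                # center the predictor
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = 0.0                          # score (gradient)
        h00 = h01 = h11 = 0.0                  # observed information
        for xi, yi in zip(xs, present):
            eta = max(-30.0, min(30.0, b0 + b1 * xi))
            p = 1.0 / (1.0 + math.exp(-eta))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det      # Newton step (2x2 solve)
        b1 += (h00 * g1 - h01 * g0) / det
    se1 = math.sqrt(h00 / det)                 # SE of the slope
    return b1, b1 / se1
```

Centering the year keeps the 2x2 system well conditioned without changing the slope, which is why the sketch transforms the predictor before fitting.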

Proportion of Each Type of COG Included in Each Article

Collapsing across explicit and implicit COGs, 23 (58%) of the 40 articles in Neuropsychology contained a Participant COG, 12 (30%) contained a Materials/Stimuli/Procedure COG, and 3 (8%) contained a Context COG.

Overall, of the 282 articles we examined across four journals, only 6 (2%) contained a separate COG section. We also did not find evidence that COG statements were included more frequently over the time span and in the journals we examined. In total, 49 of the 282 articles (17%) across all four journals contained explicit COGs in the General Discussion section, and 134 (48%) contained any COG, including implicit ones that do not directly caution against generalizing the findings.

Participant COGs were a relatively common type of COG in all four journals, demonstrating an awareness of how participant characteristics can affect findings, but even this type was not present in the majority of articles sampled from JPSP, JEP:LMC, and PSCI. As for other types, only 23% of the articles we examined in these journals contained a Materials/Stimuli/Procedure COG, and only 13% contained a Context COG. This suggests a need to promote greater awareness of constraints on generality across stimuli (e.g., Clark, 1973; Judd et al., 2012; Wells & Windschitl, 1999) and contextual factors (e.g., Gergen, 1973) as potential boundary conditions for a finding.

On a more positive note, even though explicit COG statements are still rare, the presence of implicit COG statements suggests that authors may already be aware of COGs but chose to state them only implicitly. This is promising because it suggests that, if strongly encouraged by the scientific community to include explicit COGs, authors may be able to do so, since they already have some awareness of them.

In his interview with the American Psychological Association and in his editorial, Neuropsychology's editor-in-chief emphasized the importance of instituting best practices in reproducibility and open science and urged authors to include COGs (APA, 2021; Yeates, 2022). Moreover, the submission guidelines for Neuropsychology strongly encouraged authors to include a section on constraints on generality in the discussion, preferably titled "Constraint on Generality" (Neuropsychology Submission Guidelines, 2021). Similarly, the editor-in-chief of the Journal of Personality and Social Psychology: Attitudes and Social Cognition (JPSP:ASC) stated in his editorial that "I therefore ask every paper to offer a candid discussion of the extent of generality of the findings under consideration as well as the potential limit thereof" (Kitayama, 2017, p. 359).

Our results showed that, among the journals we examined, Neuropsychology and JPSP contained the greatest proportions of explicit COGs (40% of the articles coded in Neuropsychology in 2021 and 2022, and 28% of the articles coded in JPSP from 2018 to 2021). If more editors emphasize the importance of including explicit COGs in editorial statements, submission guidelines, and other journal communications to authors, more authors could become aware of the importance of discussing COGs. They may also feel less trepidation that explicitly stating that their findings may be specific to the participant populations, research methods, and social and cultural contexts of the study will lead to rejection. Still, given that one of the co-authors of the 2017 COG paper served as editor-in-chief of PSCI, including during 2018 to 2019, the low percentage of articles with COGs in PSCI may indicate that it is difficult for editors alone to change deeply ingrained practices and norms, demonstrating the need for systemic changes in incentive structures involving authors and editors.

It should also be noted that different journals have different word count limits for the discussion section. For example, JPSP allows longer discussion sections than PSCI (Journal of Personality and Social Psychology Submission Guidelines, 2018; Psychological Science Submission Guidelines, 2018). For shorter articles, authors may not have the space to include COGs. This suggests that it may be beneficial to consider revising the section structures and word count allocations, as well as explicitly incentivizing authors to prioritize the inclusion of COGs.

Constraints on Generality (COGs) Regarding the Findings Reported in This Article

Because we sampled articles from only four psychology journals, our results should not be assumed to hold for other journals. For example, considering that the four journals we examined are seen as among the most prestigious in psychology, other journals may include more (or fewer) COG statements. Claims of generality of the findings, whether implied or explicit, substantiated or unsubstantiated, may play a greater role in determining whether an article is accepted by journals perceived to be more prestigious.

Next, because we coded only articles published between 2018 and 2021 for JPSP, JEP:LMC, and PSCI, and between 2021 and 2022 for Neuropsychology, our results should not be assumed to apply to articles published in those journals outside these timeframes. We encourage other researchers to extend our study by examining empirical articles published before the COG proposal (Simons et al., 2017) and after 2022, and to conduct a time-series analysis to gauge the impact of calls for change in publication practices (such as the 2017 article) and to track changes in COG inclusion over more years.

Additionally, the RAs who collected our data were undergraduates at the University of Washington in Seattle, USA, who did not have much prior knowledge about the four journals we asked them to code. Had we recruited RAs from other populations, the results may have been different. Moreover, the RAs were trained and asked to primarily pinpoint COGs; therefore, they were highly alert to the presence of any possible COGs and may have been more sensitive to identifying COGs than typical readers.

Finally, our results are likely specific to the procedure of our study such that studies using other COG classification systems (e.g. with different operationalizations of implicit COG, or of context COG) may reach different conclusions.

Limitations Affecting Internal Validity

Our studies also have several limitations affecting their internal validity. Although it is not clear how this may have biased the results, the articles with which the RAs practiced coding were all from JPSP, which was also one of the journals the present study examined. In addition, Study 2 on Neuropsychology included only articles published in 2021 and 2022 (i.e., after the term of Keith Yeates as editor-in-chief started), while Study 1 examined articles published from 2018 to 2021. This should be taken into account when considering the differences between the journals. The apparently greater increase in articles with COG statements that we observed for Neuropsychology, compared to the other three journals, may in part be due to the inclusion of 2022 data. Also, only about half as many articles were coded from Neuropsychology (40) as from JEP:LMC (81), JPSP (82), and PSCI (79). Our results may have been substantially different had we coded more articles from Neuropsychology.

Additionally, although it is evident that the editor-in-chief of Neuropsychology valued including COG statements (e.g., in his interview, and in the journal submission guidelines starting in May-June of 2021), we do not have direct evidence that his views influenced authors' or associate editors' decisions on COGs.

Implications and Future Directions

By discussing the boundary conditions of the findings of interest, scientific publications can provide a more accurate understanding of those findings and encourage new studies with different populations, methods, or contexts. Discussion of COGs facilitates productive replication efforts and evidence-based application of findings across diverse participant populations and contexts.

Heightened awareness of the role of discovering boundary conditions for facilitating scientific discoveries may in turn encourage editors to accept more manuscripts examining boundary conditions. That would be in contrast to dismissing findings because “they’ve already been shown” even when the new study reports them in a context that is meaningfully different from previous studies. We believe it would be beneficial to further encourage authors and editors to take the next step by shifting the incentive structure so that authors’ professional interest (e.g., having a paper accepted for publication) aligns with the interest of the scientific community (e.g., having a more accurate understanding of the generality of the findings).

Of course, not all information about boundary conditions is useful, such as when the primary purpose of a program of research is to understand a phenomenon only in a specific applied context. However, even if it is not of interest from that perspective, information about the generality of the phenomenon can provide a useful clue for other researchers interested in the processes underlying it. For example, the Cognitive Interview (CI) procedure increases the amount of information eyewitnesses report without inflating the proportion of erroneous details. To the extent that CI researchers are interested only in improving its effectiveness within the US legal system, examining whether the CI is effective in a new language is unlikely to interest them. However, if the effects of the CI are found to depend on cultural or legal contexts, the finding, even if of no direct relevance to the American judicial system, may provide a provocative clue for cognitive psychologists interested in the processes underlying the CI.

We urge peer reviewers in psychology to use their influence on publication decisions to promote better reporting practices by asking for an explicit COG statement in the manuscript, while reassuring authors that acknowledging boundary conditions strengthens, rather than weakens, their article. We also believe it may be helpful to introduce COGs as a concept in undergraduate and/or graduate psychology research methods curricula, as well as in other disciplines. We hope doing so will highlight the value of discovering boundary conditions as a foundation of scientific knowledge.

Lastly, during the coding process, we noted that a number of authors raised a point about their study that could be interpreted as a COG but then immediately argued against it, saying it was not something to be concerned about. While this practice is not itself a problem, we often found that authors' arguments against a potential COG were overconfident. We suspect this practice reflects the belief that constraints on generality, real or imagined, decrease the likelihood of manuscripts being accepted for publication. We suggest a systematic examination of this practice in the future.

Conclusion

Human behavior is complex and variable. There is a growing recognition that “findings previously thought to be universal are actually specific to individuals with certain characteristics as a result of sample biases and constraints” (Campbell & Brauer, 2020, p. 616; Henrich et al., 2010). Thus, presuming, without empirical evidence, the applicability of findings in contexts different from the original studies borders on being unscientific. COG statements can help prevent this by making salient the extent of generality we can infer from empirical studies and by facilitating the discovery of the characteristics of the target population and contexts, broadly defined, that need to be taken into account for successful replications and applications of the findings to real-world problems. More broadly, we hope the present study will contribute to the discussion about the importance of appropriately communicating uncertainties about findings, as well as continued improvements in reporting practices in the service of a cumulative and replicable psychological science.

All three authors contributed to the following:

  • Substantial contributions to conception and design

  • Acquisition of data

  • Analysis and interpretation of data

  • Drafting the article or revising it critically for important intellectual content

  • Final approval of the version to be published.

The authors would like to thank Ida Eikeng, Simrita Gopalan, and Renin Surucu for their work as research assistants coding the scientific articles in this study. We greatly appreciate Steve Lindsay and an anonymous reviewer for providing invaluable feedback.

The work reported in this article was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

This study did not involve testing of human participants.

The data, code, and other supplemental materials are available at https://osf.io/hng5p/?view_only=2341ee50ba024a3db5185e14e6663ad2.

1.

Due to a clerical error, one extra article from JPSP and one fewer article from PSCI were selected.

2.

For 10 of the 242 articles, a tie-breaker was needed because only two RAs coded the article and they disagreed about whether it had a COG statement. Table 1 reports the results obtained by randomly selecting one of their responses. Treating all 10 of these articles as having a COG, versus treating none of them as having one, did not change the gist of the findings. For more detail, see the supplementary document at https://osf.io/mzrct?view_only=2341ee50ba024a3db5185e14e6663ad2.

APA. (2021, November 15). Constraints on Generality Statements - A Conversation with Dr. Keith Yeates [Video]. YouTube. https://www.youtube.com/watch?v=kiZNtqyppbQ&t=214s
Bhattacherjee, A. (2012). Social Science Research: Principles, Methods, and Practices. Global Text Project. https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks
Bokulich, A. (2008). Can Classical Structures Explain Quantum Phenomena? The British Journal for the Philosophy of Science, 59(2), 217–235. https:/​/​doi.org/​10.1093/​bjps/​axn004
Brady, L. M., Fryberg, S. A., & Shoda, Y. (2018). Expanding the interpretive power of psychological science by attending to culture. Proceedings of the National Academy of Sciences - PNAS, 115(45), 11406–11413. https:/​/​doi.org/​10.1073/​pnas.1803526115
Bursten, J. R. S. (2021). The Function of Boundary Conditions in the Physical Sciences. Philosophy of Science, 88, 234–257. https:/​/​doi.org/​10.1086/​711502
Busse, C., Kach, A. P., & Wagner, S. M. (2017). Boundary Conditions: What They Are, How to Explore Them, Why We Need Them, and When to Consider Them. Organizational Research Methods, 20(4), 574–609. https:/​/​doi.org/​10.1177/​1094428116641191
Campbell, M. R., & Brauer, M. (2020). Incorporating Social-Marketing Insights Into Prejudice Research: Advancing Theory and Demonstrating Real-World Applications. Perspectives on Psychological Science, 15(3), 608–629. https:/​/​doi.org/​10.1177/​1745691619896622
Clark, H. H. (1973). The language-as-fixed-effect fallacy: A critique of language statistics in psychological research. Journal of Verbal Learning and Verbal Behavior, 12(4), 335–359. https:/​/​doi.org/​10.1016/​S0022-5371(73)80014-3
Clarke, B., Schiavone, S., & Vazire, S. (2023). What limitations are reported in short articles in social and personality psychology? Journal of Personality and Social Psychology. https:/​/​doi.org/​10.1037/​pspp0000458
Dubin, R. (1978). Theory building (Rev. ed.). Free Press.
Faller, J. E., Cook, A. H., & Nordtvedt, K. L. (2024, July 19). gravity. Encyclopedia Britannica. https:/​/​www.britannica.com/​science/​gravity-physics
Gergen, K. J. (1973). Social psychology as history. Journal of Personality and Social Psychology, 26(2), 309–320. https:/​/​doi.org/​10.1037/​h0034436
Göllner, R., Damian, R. I., Nagengast, B., Roberts, B. W., & Trautwein, U. (2018). It’s Not Only Who You Are but Who You Are With: High School Composition and Individuals’ Attainment Over the Life Course. Psychological Science, 29(11), 1785–1796. https:/​/​doi.org/​10.1177/​0956797618794454
Greenwald, A. G., Pratkanis, A. R., Leippe, M. R., & Baumgardner, M. H. (1986). Under what conditions does theory obstruct research progress? Psychological Review, 93(2), 216–229. https:/​/​doi.org/​10.1037/​0033-295X.93.2.216
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). Most people are not WEIRD. Nature, 466(7302), 29–29. https:/​/​doi.org/​10.1038/​466029a
Ioannidis, J. P. A. (2007). Limitations are not properly acknowledged in the scientific literature. Journal of Clinical Epidemiology, 60(4), 324–329. https:/​/​doi.org/​10.1016/​j.jclinepi.2006.09.011
Journal of Personality and Social Psychology Submission Guidelines. (2018). Journal of Personality and Social Psychology. https:/​/​web.archive.org/​web/​20180718011123/​https:/​/​www.apa.org/​pubs/​journals/​psp?
Judd, C. M., Westfall, J., & Kenny, D. A. (2012). Treating stimuli as a random factor in social psychology: a new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 103(1), 54–69. https:/​/​doi.org/​10.1037/​a0028347
Kidd, C., Paleri, H., & Aslin, R. N. (2013). Rational snacking: Young children’s decision-making on the marshmallow task is moderated by beliefs about environmental reliability. Cognition, 126(1), 109–114. https://doi.org/10.1016/j.cognition.2012.08.004
Kitayama, S. (2017). Journal of Personality and Social Psychology: Attitudes and social cognition [Editorial]. Journal of Personality and Social Psychology, 112(3), 357–360. https:/​/​doi.org/​10.1037/​pspa0000077
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., … Sowden, W. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https:/​/​doi.org/​10.1177/​2515245918810225
Kline, M. A., Shamsudheen, R., & Broesch, T. (2018). Variation is the universal: making cultural evolution work in developmental psychology. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1743), 20170059–20170059. https:/​/​doi.org/​10.1098/​rstb.2017.0059
McCarley, J. S., & Yamani, Y. (2021). Psychometric Curves Reveal Three Mechanisms of Vigilance Decrement. Psychological Science, 32(10), 1675–1683. https:/​/​doi.org/​10.1177/​09567976211007559
Michaelson, L., De la Vega, A., Chatham, C. H., & Munakata, Y. (2013). Delaying gratification depends on social trust. Frontiers in Psychology, 4. https:/​/​doi.org/​10.3389/​fpsyg.2013.00355
Mischel, W., & Staub, E. (1965). Effects of expectancy on working and waiting for larger reward. Journal of Personality and Social Psychology, 2(5), 625–633. https:/​/​doi.org/​10.1037/​h0022677
Monroe, A. E., & Malle, B. F. (2019). People systematically update moral judgments of blame. Journal of Personality and Social Psychology, 116(2), 215–236. https:/​/​doi.org/​10.1037/​pspa0000137
Morett, L. M., Fraundorf, S. H., & McPartland, J. C. (2021). Eye see what you’re saying: Contrastive use of beat gesture and pitch accent affects online interpretation of spoken discourse. Journal of Experimental Psychology: Learning, Memory, and Cognition, 47(9), 1494–1526. https:/​/​doi.org/​10.1037/​xlm0000986
Neuropsychology Submission Guidelines. (2021). Apa.Org; American Psychological Association. https:/​/​web.archive.org/​web/​20210629102956/​https:/​/​www.apa.org/​pubs/​journals/​neu
Nielsen, M., Haun, D., Kärtner, J., & Legare, C. H. (2017). The persistent sampling bias in developmental psychology: A call to action. Journal of Experimental Child Psychology, 162, 31–38. https:/​/​doi.org/​10.1016/​j.jecp.2017.04.017
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716. https:/​/​doi.org/​10.1126/​science.aac4716
Psychological Science Submission Guidelines. (2018). Psychological Science. https:/​/​web.archive.org/​web/​20191120102206/​https:/​/​www.psychologicalscience.org/​publications/​psychological_science/​ps-submissions
Rad, M. S., Martingano, A. J., & Ginges, J. (2018). Toward a psychology of Homo sapiens: Making psychological science more representative of the human population. Proceedings of the National Academy of Sciences, 115(45), 11401–11405. https:/​/​doi.org/​10.1073/​pnas.1721165115
Roediger, H. L., III, & McCabe, D. P. (2007). Evaluating Experimental Research: Critical Issues. In R. J. Sternberg, H. L. Roediger III, & D. F. Halpern (Eds.), Critical thinking in psychology (pp. 15–36). Cambridge University Press.
Ross, J. M., Karney, B. R., Nguyen, T. P., & Bradbury, T. N. (2019). Communication that is maladaptive for middle-class couples is adaptive for socioeconomically disadvantaged couples. Journal of Personality and Social Psychology, 116(4), 582–597. https:/​/​doi.org/​10.1037/​pspi0000158
Ryon, H. S., & Gleason, M. E. J. (2018). Reciprocal support and daily perceived control: Developing a better understanding of daily support transactions across a major life transition. Journal of Personality and Social Psychology, 115(6), 1034–1053. https:/​/​doi.org/​10.1037/​pspi0000141
Sheth, B. (2019, September 12). The story of the famous Newtonian Apple. Medium. https:/​/​medium.com/​swlh/​the-story-of-the-famous-newtonian-apple-15458b03b79a
Shoda, Y., Mischel, W., & Peake, P. K. (1990). Predicting adolescent cognitive and self-regulatory competencies from preschool delay of gratification: Identifying diagnostic conditions. Developmental Psychology, 26(6), 978–986. https://doi.org/10.1037/0012-1649.26.6.978
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2017). Constraints on Generality (COG): A Proposed Addition to All Empirical Papers. Perspectives on Psychological Science, 12(6), 1123–1128. https:/​/​doi.org/​10.1177/​1745691617708630
Simons, D. J., Shoda, Y., & Lindsay, D. S. (2018). Constraints on generality statements are needed to define direct replication. Behavioral and Brain Sciences, 41, E148. https:/​/​doi.org/​10.1017/​S0140525X18000845
Singh, L., Cristia, A., Karasik, L. B., Rajendra, S. J., & Oakes, L. M. (2023). Diversity and representation in infant research: Barriers and bridges toward a globalized science of infant development. Infancy, 28(4), 708–737. https:/​/​doi.org/​10.1111/​infa.12545
Tiokhin, L., Hackman, J., Munira, S., Jesmin, K., & Hruschka, D. (2019). Generalizability is not optional: Insights from a cross-cultural study of social discounting. Royal Society Open Science, 6(2), 181386. https:/​/​doi.org/​10.1098/​rsos.181386
Visser, I., Bergmann, C., Byers-Heinlein, K., Dal Ben, R., Duch, W., Forbes, S., Franchin, L., Frank, M. C., Geraci, A., Hamlin, J. K., Kaldy, Z., Kulke, L., Laverty, C., Lew-Williams, C., Mateu, V., Mayo, J., Moreau, D., Nomikou, I., Schuwerk, T., … Zettersten, M. (2022). Improving the generalizability of infant psychological research: The ManyBabies model. Behavioral and Brain Sciences, 45, e35. https:/​/​doi.org/​10.1017/​S0140525X21000455
Wells, G. L., & Windschitl, P. D. (1999). Stimulus Sampling and Social Psychological Experimentation. Personality and Social Psychology Bulletin, 25(9), 1115–1125. https:/​/​doi.org/​10.1177/​01461672992512005
Whetten, D. A. (1989). What Constitutes a Theoretical Contribution? The Academy of Management Review, 14, 490–495. https:/​/​doi.org/​10.2307/​258554
Yarkoni, T. (2020). The generalizability crisis. Behavioral and Brain Sciences, 45, e1. https:/​/​doi.org/​10.1017/​S0140525X20001685
Yeates, K. O. (2022). Toward a more open and diverse science in Neuropsychology. Neuropsychology, 36(1), 1–3. https:/​/​doi.org/​10.1037/​neu0000790
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material