In this meta-analysis, we investigated whether being in nature and emotional social support are reliable strategies to downregulate stress. We retrieved all the relevant articles that investigated a connection between one of these two strategies and stress. For being in nature, we found 54 effects reported in 16 papers (total N = 1,697, MdnN = 52.5); for emotional social support, we found 18 effects reported in 13 papers (total N = 3,787, MdnN = 186). Although we initially found an effect on stress for both being in nature and emotional social support (Hedges’ g = -.42 and Hedges’ g = -.14, respectively), only the effect for being in nature held after applying our main publication bias correction technique (Hedges’ g = -.60). The emotional social support literature also had a high risk of bias. Although the being-in-nature literature was moderately powered (.72) to detect effects of Cohen’s d = .50 or larger, its risk of bias was considerable and it contained numerous statistical reporting errors.

How can we live in a fast-paced world where an unexpected challenge lurks around every corner? Some of life’s obstacles are low or easy to get around; others may seem insurmountable. These obstacles can trigger a stress response, which can be understood as “a particular relationship between the person and the environment that is appraised by the person as taxing or exceeding his or her resources and endangering his or her well-being” (Lazarus & Folkman, 1984, p. 19). Stress experienced on a daily basis affects the health and well-being of individuals (Bolger et al., 1989).

Stifling the build-up of excessive stress is therefore of paramount importance. In a previous meta-analysis, we synthesized empirical research on two stress regulation strategies, self-administered mindfulness and heart rate variability biofeedback (Sparacio et al., 2022). Because we aim to build a comprehensive database in which different stress regulation strategies are evaluated based on their efficacy, here we add the synthesis of two other strategies: Being in nature and emotional social support. The reasons for choosing these two strategies are similar to those that guided our previous work: The decision was partly based on our interest in scalable, non-invasive, and cheap strategies that could be used by a large number of individuals, and partly arbitrary as to where to start with our approach. To check whether these strategies are effective in reducing stress levels, we conducted a meta-analysis with the following objectives: 1) to assess the evidential value of identified studies in both literatures, 2) for both being in nature and emotional social support, to calculate mean effect sizes for the stress response, for the different components of stress, as well as for the affective consequences of stress, 3) to apply publication bias correction techniques to obtain more realistic estimates of the efficacy of either regulation strategy, and 4) to determine whether personality traits were used as moderators in stress regulation studies.

Through our meta-analysis, we intend to shed light on whether being in nature and emotional social support have stress-reducing effects and, if so, how large those effects are. Our combination of publication bias-correction techniques can provide a less biased estimate of the effects of interest (cf. IJzerman et al., 2022; Sparacio et al., 2022).

Stress is usually defined as a state of strain and tension that occurs when external demands overwhelm us and we lack the resources to deal with them (Lazarus & Folkman, 1984). In our previous meta-analysis, we classified the stress response into three components: Affective, physiological, and cognitive (see Du et al., 2018; Schneiderman et al., 2005; Sparacio et al., 2022; Watson et al., 1988). As we noted there, these components are not truly conceptually separate (Pessoa, 2008; Phelps, 2006), but we apply them as useful categories for application. Because stress can have long-term consequences if not kept under control, we also included an assessment of the affective consequences of stress (such as depression and chronic anxiety). We chose depression and chronic anxiety as relatively arbitrary starting points, due to constraints of time and resources and because they are traditionally the most investigated outcomes for these interventions.

The first strategy we focus on here, being in nature, we restricted to interventions like walking in a natural environment and/or viewing it (Antonelli et al., 2019). According to “stress recovery theory” (Ulrich, 1983), nature provides a restorative influence that helps individuals recover from stress. Ulrich’s (1983) theory relies on psycho-evolutionary theorizing: Humans evolved over a long evolutionary history in natural environments, adapting both psychologically and physiologically to these types of settings. The argument is that when a stressor is encountered, an unthreatening natural environment might evoke feelings of pleasantness, decrease stressful thoughts, and promote physiological restoration (see also Ulrich, 1979).

In the empirical literature, being in nature has been found to have a positive influence on the different components of stress. For the affective component, one study found that participants who walked in a natural setting (as compared to when they walked in a built environment) showed a greater reduction in self-reported stress (Beil & Hanes, 2013). Regarding the physiological component, in one study, participants with coronary artery disease who were randomly allocated to seven days of walking in a park (vs. seven days of walking in an urban environment) had lower cortisol levels and lower heart rates (Grazuleviciene et al., 2015). As for the cognitive component, a 90-minute walk in a natural setting (vs. a 90-minute walk in an urban setting) reduced self-reported levels of rumination (Bratman et al., 2015). Finally, concerning the affective consequences of stress, one study found that a walk in a green area (as compared to a group of non-walkers) reduced symptoms of depression (Marselle et al., 2014).

The other strategy, emotional social support, has probably garnered the most empirical support out of the two (e.g., Cohen, 2004; Lakey & Cronin, 2008). Cohen and Wills (1985) suggested that social support can act as a shield protecting the individual from negative consequences of stress. There are two main models that explain the relationship between stress and close relationships. The first, the stress-buffering hypothesis, states that social support is connected to wellbeing by reducing stress appraisals or weakening the association between stress and negative health outcomes. The second, the main effect hypothesis, posits that social support has a beneficial effect, decreasing the level of distress, regardless of whether people are under stress (Cohen & Wills, 1985). The buffering effect has been thought to be associated with a dampened hypothalamic–pituitary–adrenal (HPA) axis activity and a decrease in the response of the autonomic nervous system (ANS; C. S. Carter, 1998).

One particular theory, “social baseline theory” (e.g., Beckes & Coan, 2011), offers an account that can provide a mechanism for the stress-buffering hypothesis: It suggests that social support and proximity to others reduce the perceived threat of a stressor, so that people can exert less effort in regulating stress (Coan & Sbarra, 2015; Ein-Dor et al., 2015). According to the theory, stress is reduced because individuals can distribute the effort needed to achieve particular goals across other people (e.g., partners, friends, family members, or even strangers), a phenomenon known as “load sharing”. In one study illustrating this phenomenon, people held hands with a partner or a stranger while confronted with the threat of a (mild) electric shock. When people held hands with someone, stress-related brain areas were less activated in response to the threat, and the reduction of stress was greater the more familiar the hand-holding partner (Coan et al., 2006, 2017).

For the current Registered Report meta-analysis, we take a narrower view on social support: We restrict ourselves to emotional social support, defined at a global level as the act of talking, listening, and being empathetic with a distressed individual (Zellars & Perrewé, 2001). Emotional social support can be provided through verbal expressions (talking to or listening to the partner) or via physical contact (e.g., holding a partner’s hand; Coan et al., 2006; Ditzen et al., 2007). For now, we leave out other forms of social support (informational, instrumental, and appraisal), as emotional social support is thought to be associated with well-being and, consequently, lower mortality and lower levels of stress (Reblin & Uchino, 2008).

Regarding the affective component of stress, in one study, participants’ state anxiety decreased when emotional support was provided by a friend (compared to participants who did not receive any kind of support; Bowers & Gesten, 1986). In a study focused on the physiological component, participants assigned to a physical contact condition (as compared to a no-social-support condition) exhibited lower heart rate activation and a lower cortisol response (Ditzen et al., 2007). Regarding the cognitive component, one study found that participants with high levels of emotional social support responded to daily stressors with less rumination (as compared to participants with low levels of emotional social support; Puterman et al., 2010). Finally, as regards the affective consequences of stress, studies have found that low levels of social support predict depression in both non-clinical and clinical populations (Brugha et al., 1987; Revenson et al., 1991).

How can we assess whether there is solid evidence on the efficacy of these strategies? Many fields of science, including psychology, have been confronted with a replication crisis, in which replication studies have failed to find the same results as original studies (Klein et al., 2018; Maxwell et al., 2015; Open Science Collaboration, 2015). Publication bias (the tendency for positive results to have a higher probability of getting published; Rosenthal, 1979; Sutton et al., 2000) and questionable research practices (a term generally used to encompass various problematic practices, such as excluding data on the basis of post-hoc criteria; L. K. John et al., 2012) are often seen as two of the main culprits for low replicability rates.

The psychological literature therefore contains an unknown proportion of unreliable and false positive findings, and this may also characterize the field of stress regulation. For instance, in our previous meta-analysis, we analyzed whether self-administered mindfulness and biofeedback were effective strategies to decrease stress. We detected an effect for both strategies. However, when we applied the same publication bias techniques as we apply here, we no longer found evidence that self-administered mindfulness and biofeedback were successful in reducing stress. Indeed, our analyses suggested that the originally detected effect may have largely been due to publication bias (Sparacio et al., 2022). Thus, a thorough systematic assessment of the empirical evidence contained in the literature is needed (IJzerman et al., 2020).

At present, we have no way of knowing whether the two strategies are reliably effective interventions against stress. There is no current meta-analysis specifically on emotional social support and stress. Some meta-analyses do exist on related topics, but they need improvement.1 For being in nature, only one meta-analysis exists, on a very specific type of being in nature (“forest bathing”; Antonelli et al., 2019), and it did not account for publication bias at all. We tried to improve upon these prior approaches by synthesizing the up-to-date available evidence and by applying state-of-the-art bias correction techniques. In so doing, we followed a workflow similar to our previous meta-analysis on stress regulation (Sparacio et al., 2022).

To ensure methodological rigor and transparency, our materials and analysis code are available on the Open Science Framework (https://osf.io/6wpav/). As our goal is to build a database of data on different stress regulation strategies, we also added the data to PsychOpenCAMA, an existing public repository in which data from other meta-analyses are stored (Burgard et al., 2021). We already submitted data of our first pre-registered meta-analysis (Sparacio et al., 2022) to this platform on 24/09/2021. Our meta-analysis was pre-registered on the OSF (https://osf.io/c25qw). Any changes to the pre-registration were fully disclosed on our OSF page using the template provided by Moreau and Gamble (2020; Appendix A). This research was conducted in line with the CO-RE Lab Lab Philosophy v5 (Goncharova et al., 2022).

Inclusion criteria and search strategy

To frame the eligibility criteria in a structured way, we followed the Participants, Intervention, Comparator, Outcome, and Study design (PICOS) framework (Schardt et al., 2007). We only included studies on adult participants (people aged 18 years or older). For the current meta-analysis, we selected two interventions (being in nature and emotional social support). For designs comparing groups, for being in nature, we included effects based on a comparison to a control group in which participants performed the same activities (e.g., walking or viewing the surroundings) in an urban environment, or to a passive control condition (participants in an untreated comparison group; e.g., waitlist control). For emotional social support, we included effects based on a comparison to an active control condition (in which participants were involved in tasks that were not related to stress regulation) and/or to a passive control condition. If a study included multiple sources of emotional social support, we included the effect based on the source with the closest connection to the participant (e.g., partners over friends, friends over strangers).

If there was more than one comparator in the same study (i.e., both an active and a passive control group), we chose the contrast with the active control group. We used measures of the affective, cognitive, and physiological components of stress taken at post-test in both the experimental group and the control group. For the affective and cognitive components, as well as the affective consequences, we relied on self-report measures. For the physiological component, we relied on physiological biomarkers of the stress response (e.g., heart rate, cortisol levels).

To ensure a reproducible search strategy, we documented 1) the exact search strategy, 2) the dates on which the searches were conducted, and 3) the exact search string. Our search strategy followed the recommendations provided by Maggio et al. (2011). The following databases were searched: ProQuest (an online platform that covers research indexed in APA PsycArticles, APA PsycInfo, and ProQuest Dissertations & Theses Global), PubMed, and Scopus. We searched the titles and abstracts of the articles.

The first author (AS) performed the literature search and excluded articles that did not match the inclusion criteria. Screening by title and abstract was carried out using Rayyan QCRI (Ouzzani et al., 2016), a web and mobile app for systematic reviews and meta-analyses. The first author then manually searched the reference lists of the included studies for relevant citations and unpublished reports. We also used social networks (Facebook groups and Twitter) and mailing lists (Society for Personality and Social Psychology, SPSP; European Association of Social Psychology, EASP; European Society for Cognitive and Affective Neuroscience, ESCAN; Environmental Psychology, ENVPSY) to request unpublished data. To ensure that we did not miss relevant articles, we also searched the references of past meta-analyses related to being in nature and emotional social support, and we included studies from existing meta-analyses that satisfied our inclusion criteria. Finally, we contacted authors who published studies on the topic to inquire whether they had any unpublished research, in-progress manuscripts, or in-press manuscripts (see our templates in Appendices B and C).

Following the inclusion criteria of our meta-analytical approach: 1) We included published articles, preprint articles, working papers, dissertations, and books (we excluded studies that were not published in English), 2) we included any type of study (randomized controlled trials and observational studies) that estimated the effect of (or exposure to) being in nature or emotional social support, 3) we included studies that measured at least one of the three components of the stress response or the affective consequences of stress, 4) for being in nature, we included studies in which participants performed any type of physical activity as long as the same activity was performed in the same way by the corresponding control group in a non-natural setting, and 5) the participants of the study had to be humans. A study was excluded 1) if it was a review (either narrative or systematic), 2) if the sampling frame of the study explicitly involved participants below 18 years of age, 3) if the data necessary to compute our analyses were missing (and not obtainable after having requested them from the authors of the paper), or 4) if other active treatments (e.g., mindfulness) were combined with the stress regulation strategies of interest (being in nature or emotional social support). We then added a sub-exclusion criterion for emotional social support, excluding studies with types of support that were not emotional (i.e., informational or instrumental social support or social support via appraisal). A PRISMA flow chart of the overall literature search and inclusion procedure is shown in Appendices D and E.

Coding and data preparation

Two coders independently coded the data. We cross-checked the coding process for systematic coding errors twice, after the first 10% and after 20% of the data, separately for social support and being in nature. In case of systematic coding discrepancies, the two coders discussed and refined the coding scheme, thereby resolving discrepancies (if this did not lead to convergence, the two coders consulted the second author). We used Cohen’s Kappa as a measure of inter-rater agreement. Following the guidelines of Landis and Koch (1977), we considered an agreement of κ > 0.60 acceptable for metric or multinomial variables. For binary variables (e.g., published), we assessed coding agreement using percentage agreement.

We extracted data for the following variables: Publication year, the number of citations of the paper by Google Scholar at date of extraction, journal name, reported overall N, gender ratio, publication status, reported effect sizes, total N, cell means, standard deviations and Ns, test statistic, degrees of freedom, the type of effect (e.g., bivariate effects, covariate-adjusted effects), whether the effect was considered focal (reported in the abstract), the design of the study, the type of population, the category of stress-regulation strategy (being in nature, emotional social support), the type of control group (no control group, active, passive, being in an urban environment, different source of emotional social support), whether it was on one of the components of stress (affective, cognitive, or physiological) or on the affective consequences of stress, and the instrument employed to assess stress levels. We converted all the relevant effect sizes (ES) to Hedges’ g, a standardized mean difference corrected for small samples (Hedges & Olkin, 1985). To convert the reported effect sizes to Hedges’ g, we primarily used the group posttest means, SDs (or SEs), and Ns. If these were not available, we computed the Hedges’ g effect size from the reported test statistics or converted from other types of reported effect sizes. The computation and conversion of all effect sizes were carried out in R, using formulas laid out in Borenstein et al. (2009; analysis code available at: https://github.com/alessandro992/Registered-report-meta-analysis).
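For illustration, the following is a minimal R sketch of the kind of conversion we applied, assuming posttest means, SDs, and group Ns are available (the numeric values and variable names are purely illustrative; the full conversion code is in the linked repository):

```r
# Minimal sketch: converting group posttest means, SDs, and Ns to Hedges' g
# and its sampling variance, following the Borenstein et al. (2009) formulas.
library(esc)

m1 <- 24.1; sd1 <- 6.3; n1 <- 30   # intervention group (hypothetical values)
m2 <- 27.8; sd2 <- 6.9; n2 <- 31   # control group (hypothetical values)

# By hand: standardized mean difference with small-sample correction
s_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
d  <- (m1 - m2) / s_pooled
J  <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)            # Hedges' correction factor
g  <- J * d
vd <- (n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))
vg <- J^2 * vd                                    # sampling variance of g

# Cross-check with the esc package (also part of our analysis pipeline)
esc_mean_sd(grp1m = m1, grp1sd = sd1, grp1n = n1,
            grp2m = m2, grp2sd = sd2, grp2n = n2, es.type = "g")
```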

To mitigate the effect of undisclosed participant exclusions, we checked whether the sum of group Ns approximately matched the total sample size (N +/-2). If they matched, we used the reported group Ns. If the sum of group Ns did not match the total sample size, we computed group Ns based on the reported degrees of freedom, assuming a balanced design. If only the total sample size was reported, we also assumed a balanced design and divided the total N by the number of conditions. We applied by default a correlation of .50 for within-participants designs.
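A small sketch of this reconciliation rule, with an illustrative helper function that is not part of the published code:

```r
# Illustrative helper: reconcile reported group Ns with the total N and fall
# back to a balanced-design assumption when they do not match (tolerance +/-2).
reconcile_ns <- function(n_groups, n_total, tol = 2) {
  if (!any(is.na(n_groups)) && abs(sum(n_groups) - n_total) <= tol) {
    return(n_groups)                                  # reported group Ns kept
  }
  rep(n_total / length(n_groups), length(n_groups))   # assume balanced design
}

reconcile_ns(c(26, 27), 53)   # sums match: kept as reported
reconcile_ns(c(26, 27), 80)   # sum does not match: 40 per group assumed

# For within-participants designs, the pre-post correlation is rarely
# reported; we assumed r = .50 by default when computing the SMD variance.
r_within <- .50
```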

Analysis strategy

Our analysis strategy closely mirrors the workflow of IJzerman et al. (2022) and Sparacio et al. (2022). Prior to conducting our analyses, we screened for influential outliers using a Baujat plot (Baujat et al., 2002) and influence diagnostics (Viechtbauer & Cheung, 2010). Outliers with an excessive influence on the meta-analytic model (standardized residual > 2.58) were then excluded in a sensitivity analysis. By default, we used a multilevel random-effects model with restricted maximum-likelihood estimation and Satterthwaite’s small-sample adjustment (Pustejovsky & Tipton, 2022).2 We included all the relevant outcomes from each included study. We handled dependencies among effects by using robust variance estimation, assuming correlated and hierarchical effects (Pustejovsky & Tipton, 2022). By relying on robust variance estimation, we could simultaneously account for both types of dependency among the effects: Effects nested within studies and effects based on the same participants. Because data on the sampling correlations among effects tend to be unavailable in the individual studies, we assumed a constant sampling correlation of .5 between effects (see also Kolek et al., 2022; Sparacio et al., 2022). We used a robust HTZ-type Wald test to test the equality of effect sizes across the levels of the studied moderators (Pustejovsky & Tipton, 2022).
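The core of this correlated-and-hierarchical-effects (CHE) working model with robust variance estimation can be sketched in R as follows (column names such as study, es_id, yi, and vi are illustrative; the full model specification is in our analysis code):

```r
library(metafor)
library(clubSandwich)

rho <- .50  # assumed constant sampling correlation among effects within studies

# Block-diagonal variance-covariance matrix for the CHE working model
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = rho)

res <- rma.mv(yi, V,
              random = ~ 1 | study/es_id,   # effects nested within studies
              data = dat, method = "REML")

# Cluster-robust (CR2) tests with Satterthwaite small-sample correction
coef_test(res, vcov = "CR2", cluster = dat$study)
conf_int(res, vcov = "CR2", cluster = dat$study)

# 95% prediction interval for the effect expected in a new study
predict(res)
```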

To estimate the range of effect sizes that can be expected in similar future studies, we calculated 95% prediction intervals. Whenever an analysis included fewer than 10 effects (k < 10), we did not interpret the estimates.3 This threshold is somewhat arbitrary, but a threshold needs to be chosen; small sets of effects have large expected sampling variability, leading to imprecise results (see also IJzerman et al., 2022; Sparacio et al., 2022).

To investigate heterogeneity caused by variations in population characteristics or conceptual aspects of the study designs, we pre-registered a set of subgroup analyses for both being in nature and emotional social support: Proportion of females (versus males) in the sample, type of comparison group, and type of population (student non-clinical, non-student non-clinical, and clinical). For being in nature, we tested the type of exposure as a possible source of heterogeneity (nature walking, nature viewing, mixed). For emotional social support, we conducted two additional subgroup analyses: The type of social support (0 = not specified, 1 = physical, 2 = verbal, 3 = mixed, 4 = other) and the source of social support (0 = not specified, 1 = stranger, 2 = known person; see our coding sheet for more details; https://osf.io/4cjux/). Although we believed a priori that this coding was exhaustive, whenever the coding sheet proved inadequate during the coding process, we refined our coding scheme and documented these changes in Appendix A: Protocols and deviations sheet. This happened for being in nature: For studies in which participants were exposed to natural settings through images, videos, or virtual reality, we introduced the term “virtual seeing.” Finally, we ran two moderation analyses to assess whether studies with a high risk of bias and studies with mathematically inconsistent means or SDs showed inflated effect sizes. Any additional non-pre-registered subgroup analyses were disclosed on our OSF page using the template provided by Moreau and Gamble (2020; Appendix A).
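A subgroup analysis of this kind can be sketched as follows, assuming the same dat and V objects as in the previous sketch and an illustrative moderator column exposure_type (the robust HTZ Wald test referenced above is implemented in clubSandwich):

```r
# Sketch of a pre-registered subgroup (moderator) analysis
dat$exposure_type <- factor(dat$exposure_type)   # illustrative moderator

mod <- rma.mv(yi, V, mods = ~ 0 + exposure_type,  # one mean per subgroup
              random = ~ 1 | study/es_id, data = dat, method = "REML")

# Robust HTZ-type Wald test of equality of the subgroup means
Wald_test(mod,
          constraints = constrain_equal(1:nlevels(dat$exposure_type)),
          vcov = "CR2", cluster = dat$study, test = "HTZ")
```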

The R code also allows the reader to easily change numerous arbitrary values (e.g., the assumed constant sampling correlation, the within-subjects correlation, etc.) to explore the impact on the results. All models were fitted using restricted maximum-likelihood estimation using R packages metafor, version 2.5 (Viechtbauer, 2010) and clubSandwich, version 0.4.2. (Pustejovsky, 2020). The data analysis was carried out in R also using the following packages: esc (Lüdecke, 2017), tidyverse (Wickham et al., 2019), lme4 (Bates et al., 2015), dmetar (Harrer et al., 2021), and psych (Revelle & Condon, 2018).

Correction for publication bias

Null or negative results are typically less likely to be published, leading to a biased sample of conducted studies. Such publication bias tends to lead to an inflation of the observed mean effect sizes and the Type I error rate (E. C. Carter et al., 2019; Hong & Reed, 2021; Ioannidis, 2008). In an effort to adjust the meta-analytic estimates for publication bias, we primarily used a selection modeling approach (McShane et al., 2016).

We employed a 3- or 4-parameter selection model (4PSM; McShane et al., 2016) as the primary inferential and estimation bias-adjustment method. Selection models are a statistically principled family of models that directly model the publication selection process. The 4PSM has two components: A data model with two parameters describing how data are generated in the absence of publication bias (an effect size and a heterogeneity parameter), and a selection model mimicking the publication process, represented by a weight parameter (the likelihood that a study with non-significant results is published compared to a study with significant findings) and a parameter reflecting the likelihood of a result in the opposite direction (McShane et al., 2016). If a given set of results yielded fewer than four focal p-values per interval, the model dropped the fourth parameter to allow for a more stable estimation. To deal with dependencies in the data and avoid arbitrariness in the selection of effects within studies, we applied a permutation-based procedure: We iteratively selected a single focal effect size from each independent study, estimated the model in 5,000 iterations, and then picked the model yielding the median ES estimate (both the interpretation and the inference were based on that median model).4
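The core of this procedure can be sketched with metafor’s step-function selection models (a simplification of our implementation, which additionally drops the fourth parameter when intervals are sparsely populated; column names are illustrative):

```r
library(metafor)
library(dplyr)

set.seed(1)
fit_4psm_once <- function(dat) {
  # sample a single focal effect per independent study
  d1  <- dat %>% group_by(study) %>% slice_sample(n = 1) %>% ungroup()
  fit <- rma(yi, vi, data = d1, method = "ML")
  # step-function selection model; with these cutpoints the intervals mirror
  # the 4PSM: significant/expected, non-significant/expected, opposite direction
  sel <- try(selmodel(fit, type = "stepfun", steps = c(.025, .5, 1),
                      alternative = "less"),   # negative g = stress reduction
             silent = TRUE)
  if (inherits(sel, "try-error")) return(NA_real_)
  as.numeric(sel$beta)
}

estimates <- replicate(5000, fit_4psm_once(dat))
median(estimates, na.rm = TRUE)   # estimate from the median model
```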

To further explore the results of publication bias adjustment, we did the following: First, we assessed the variability in adjusted estimates under different assumptions about the publication selection process using Vevea and Woods’ (2005) step-function models with a priori defined selection weights (instead of estimating them via maximum likelihood). These step-function models allowed us to explore the results while varying the assumed severity of bias, modeling moderate, severe, and extreme selection.
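In metafor, fixing the selection weights a priori can be sketched as follows; the weights below are illustrative placeholders rather than the exact weight sets published by Vevea and Woods (2005), and fit refers to the rma object from the previous sketch:

```r
# Vevea & Woods-style sensitivity analysis: selection weights fixed a priori
sel_moderate <- selmodel(fit, type = "stepfun",
                         steps = c(.025, .5, 1),
                         delta = c(1, .7, .5))   # illustrative "moderate" weights
sel_severe   <- selmodel(fit, type = "stepfun",
                         steps = c(.025, .5, 1),
                         delta = c(1, .4, .2))   # illustrative "severe" weights
```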

Second, we employed a multilevel RVE-based implementation of the PET-PEESE model (see IJzerman et al., 2022; Sparacio et al., 2022), with the same hierarchical structure as the random-effects models. PET-PEESE regresses the effect size on a measure of precision. Because larger studies are less likely to stay unpublished, the model slope is assumed to indicate the presence of small-study effects (which include publication bias). The model intercept can then be interpreted as the average ES for a hypothetical, infinitely precise study (Stanley & Doucouliagos, 2014). To use a measure of precision that is uncorrelated with the effect size, we used √(2/N) and 2/N terms instead of the standard error and variance for PET and PEESE, respectively.5
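A sketch of these two regressions, again reusing the dat and V objects from the main model and an illustrative n_total column:

```r
# Multilevel RVE-based PET-PEESE, with sqrt(2/N) and 2/N as precision terms
dat$pet_term   <- sqrt(2 / dat$n_total)
dat$peese_term <- 2 / dat$n_total

pet   <- rma.mv(yi, V, mods = ~ pet_term,
                random = ~ 1 | study/es_id, data = dat, method = "REML")
peese <- rma.mv(yi, V, mods = ~ peese_term,
                random = ~ 1 | study/es_id, data = dat, method = "REML")

# Robust tests; the intercept is the bias-adjusted estimate for a
# hypothetical, infinitely precise study
coef_test(pet,   vcov = "CR2", cluster = dat$study)
coef_test(peese, vcov = "CR2", cluster = dat$study)
```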

Third, we used a robust Bayesian model-averaging approach (RoBMA) to integrate the selection-modeling and regression-based approaches and let the data determine the contribution of each model according to its relative predictive accuracy for the observed data (Bartoš et al., 2021). This approach effectively avoids the need to choose among competing approaches and to commit to a single set of assumptions about the nature of the true selection process. Substantive interpretations were guided solely by the estimates and inferential results of the 4PSM. The other exploratory bias-adjustment methods served a descriptive purpose: To provide the reader with a more comprehensive view on bias adjustment under quantitatively and qualitatively different assumptions (the Vevea and Woods models and PET-PEESE, respectively) and under a more general, Bayesian model-averaging approach (RoBMA).6
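A minimal sketch of this step with the RoBMA package, again selecting one effect per study to sidestep dependencies (a simplification; the full specification, including priors and settings, is in the supplementary code):

```r
library(RoBMA)
library(dplyr)

set.seed(42)
d1 <- dat %>% group_by(study) %>% slice_sample(n = 1) %>% ungroup()

# Model-averaged estimate across effect, heterogeneity, and bias components
fit_robma <- RoBMA(d = d1$yi, se = sqrt(d1$vi), seed = 42, parallel = TRUE)
summary(fit_robma)
```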

The detailed specification of the employed models can be found in code in the supplementary materials. There, we also report the results for the following bias-adjustment methods: P-uniform* (Van Aert & Van Assen, 2021) and the Weighted Average of the Adequately Powered studies (WAAP-WLS; Stanley & Doucouliagos, 2017). A summary of the workflow employed to account for publication bias can be found in Appendix F.

The quality-of-evidence assessment

As one of the main objectives of every meta-analysis should be to appraise the quality and integrity of the underlying reported evidence, we assessed the risk of bias at the study level, assessed the evidential value by looking for indications of p-hacking, looked for numerical inconsistencies in the reported data, and estimated the average power in the literature to detect various magnitudes of effects.

First, we evaluated study quality using the revised Cochrane risk-of-bias tool for randomized trials (RoB 2; Sterne et al., 2019). This tool assesses the risk of bias in five predetermined domains related to the experimental design and methodology of the study in question (e.g., the randomization process or the measurement of the outcome). Based on the judgment for each individual domain, an overall algorithm-based judgment on the risk of bias was derived (i.e., “high risk of bias”, “some concerns”, or “low risk of bias”). The rater could override the suggested risk-of-bias judgment when justified, but only by downgrading it.

Second, we assessed the evidential value in the set of significant findings using the p-curve method (Simonsohn et al., 2014). A right-skewed distribution of significant p-values indicates evidential value, i.e., that selective reporting is not the sole explanation of the observed findings. Conversely, a left-skewed p-curve points to a substantial prevalence of selective reporting or other forms of questionable research practices. To handle dependencies among p-values derived from the same sample, a permutation-based procedure was employed: We recomputed all focal p-values from the reported descriptive or test statistics, randomly extracted a single effect size from each set of interdependent effects, estimated the p-curve in 200 iterations, and interpreted the model with the median z-score for the right-skew test of the full p-distribution.
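The right-skew test underlying the p-curve (Stouffer’s method on the pp-values of significant results, as in Simonsohn et al., 2014) and the permutation step can be sketched as follows (dat and the p_recomputed column are illustrative):

```r
library(dplyr)

# Stouffer-based right-skew test of the full p-curve: pp = p / .05 for
# significant results; a strongly negative Z indicates right skew
# (i.e., evidential value).
pcurve_z <- function(p) {
  p_sig <- p[p < .05]
  pp    <- p_sig / .05
  sum(qnorm(pp)) / sqrt(length(pp))
}

set.seed(1)
zs <- replicate(200, {
  d1 <- dat %>% group_by(study) %>% slice_sample(n = 1) %>% ungroup()
  pcurve_z(d1$p_recomputed)   # recomputed two-sided focal p-values
})
median(zs)   # interpret the permutation with the median z-score
```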

Third, we checked for numerical inconsistencies in the reported means and SDs using the GRIM (Brown & Heathers, 2016) and GRIMMER (Anaya, 2016) tests, respectively, and checked the reported p-values. For discrete variables (e.g., Likert scales), the decimals of means and SDs follow a granular pattern for each combination of N and number of items, which makes it possible to identify instances where a given mean or SD is mathematically impossible given the reported N (Anaya, 2016; Brown & Heathers, 2016). We also screened all included papers for inconsistencies in the reported p-values using the statcheck package (Epskamp & Nuijten, 2016). The method works as follows: (1) article PDF files are converted to plain text, (2) the text is scanned for statistical results reported in APA style, (3) test statistics and degrees of freedom are extracted to recompute the p-value, and (4) the recomputed p-value is compared to the reported one. We examined in what proportion of primary studies the p-values were inconsistent with the reported test statistics and how many of those inconsistencies led to an inferential decision error.
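To make the GRIM logic concrete, here is a minimal R illustration (values and the helper function are hypothetical, not taken from any included study):

```r
# GRIM logic: with N integer responses per item, only certain means are
# attainable; a reported mean is flagged when no attainable mean rounds to it.
grim_consistent <- function(mean_reported, n, n_items = 1, digits = 2) {
  possible_sums  <- 0:(n * n_items * 100)            # coarse grid of raw sums
  possible_means <- possible_sums / (n * n_items)
  any(round(possible_means, digits) == round(mean_reported, digits))
}

grim_consistent(5.19, n = 28)   # FALSE: no sum of 28 integers yields a mean of 5.19
grim_consistent(5.18, n = 28)   # TRUE: 145 / 28 = 5.1786, which rounds to 5.18

# The p-value screening relied on the statcheck package, e.g.:
# statcheck::checkPDFdir("included_papers/")
```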

Fourth, we computed the mean statistical power of the included studies to detect various hypothetical effect sizes (d = .20, .50, and .70). In the supplementary materials, we also report the median power to detect the bias-corrected estimates based on the 4PSM and PET-PEESE models.
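A sketch of this power computation for two-group comparisons, using the pwr package and a hypothetical set of per-group sample sizes:

```r
library(pwr)

# Hypothetical per-group Ns of three included studies (illustrative only)
studies <- data.frame(n1 = c(15, 30, 24), n2 = c(15, 32, 25))

mean_power <- function(d) {
  mean(mapply(function(n1, n2)
    pwr.t2n.test(n1 = n1, n2 = n2, d = d, sig.level = .05)$power,
    studies$n1, studies$n2))
}

# Average power to detect d = .20, .50, and .70
sapply(c(.20, .50, .70), mean_power)
```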

The final meta-analytic dataset comprised 54 effects reported in 16 papers on being in nature (total N = 1,697, MdnN = 52.5, MAge = 31.70, SDAge = 9.09) and 18 effects reported in 13 papers on emotional social support (total N = 3,787, MdnN = 186, MAge = 48.78, SDAge = 17.73). For being in nature, the included studies were published between 1993 and 2021. Studies on emotional social support were published between 1997 and 2021. Table 1 lists the goals and conclusions of our meta-analysis.

Table 1.
Goals and conclusions of the Registered Report meta-analysis. Effect sizes are reported in Hedges’ g.

Objective 1: To assess the overall empirical evidence of a) being in nature and b) emotional social support.
Conclusion: We found an overall effect for a) being in nature and for b) emotional social support.

Objective 2: To assess the mean effect sizes for a) being in nature and b) emotional social support on the stress response, on the different components, and on the affective consequences of stress.
Conclusion: We found a) a naïve meta-analytic estimate for being in nature on stress (g = -0.42), on the physiological (g = -0.31) and affective (g = -0.49) components, and b) a naïve meta-analytic estimate for emotional social support on stress (g = -0.14), on the physiological (g = -0.26) and affective (g = -0.11) components. We were unable to calculate an estimate for either strategy on the affective consequences of stress as we had too few effects (k = 6 for being in nature and k = 4 for emotional social support, respectively).

Objective 3: To apply publication bias correction techniques to have more realistic estimates of the efficacy of a) being in nature on the stress response and of b) emotional social support on the stress response.
Conclusion: Once we applied the 4PSM, we still found an effect for a) being in nature (g = -0.60), but b) not for emotional social support (g = -0.01).

Objective 4: To determine whether personality traits were used as moderators in a) being in nature and b) emotional social support.
Conclusion: Personality traits were almost never used as moderators in the studies of stress for a) being in nature (n = 0) or b) emotional social support (n = 1).

Do being in nature and emotional social support reduce stress?

Overall, we found an effect for being in nature that held after excluding improbably large effect sizes and also after applying the correction for publication bias. Being in nature also reduced stress for both the affective and the physiological components. For emotional social support, we also found a significant overall meta-analytic estimate, but the effect disappeared after correcting for publication bias. Personality traits were almost never examined. In what follows, we present the results in greater detail separated by strategy (being in nature and emotional social support). We also present our pre-registered subgroup analyses as part of our auxiliary goals.

Does being in nature reduce stress? (Goals 1-3)

First, we investigated whether excluding outliers had a material effect on our main conclusions. For being in nature, there were five excessively large effects above Cohen’s d = 2, reported in three studies.7 One of the studies reported a single effect size of Cohen’s d = 4.82, a second study reported three effects that were all improbably high (-2.64, -2.60, and -2.07), and the third study reported a (highly influential, given its large N) effect size of 2.43.8 We therefore decided to deviate from our pre-registration and excluded these outliers, given that they were so unrealistically large. We then proceeded with our pre-registered overall, bias-corrected, and component-specific effects, as well as our quality assessment of the literature.

Overall effects of being in nature on stress

After excluding outliers, the naive meta-analytic estimate was Hedges’ g = -0.42, 95% CI [-.64, -.21], p < .001, suggesting that this strategy may be effective in reducing stress levels. Forty-four percent of the coded effects were statistically significant. The 95% prediction interval (i.e., the effect size expectation for a newly conducted study) was quite wide, [-1.4, .56]. This was due to high heterogeneity, τ = .45, with I2 = 85.75%, meaning that most of the observed variance was due to true heterogeneity (48.83% due to between- and 36.92% due to within-cluster heterogeneity). A contour-enhanced funnel plot and a forest plot are displayed in Figure 1. Table 2 summarizes the results for the overall effects of being in nature and emotional social support.

Figure 1.
Contour-enhanced funnel plot and forest plot for being in nature after outlier exclusions.
Table 2.
Meta-analysis for being in nature and emotional social support. Values in brackets represent 95% CI.

Strategy | k | g [95% CI] | SE | τ | I2 | 4PSM estimate | 4PSM p-value | PET-PEESE estimate | PET-PEESE p-value
Being in nature | 54 | -.42 [-.64, -.21] | .11 | .45 | 85.75% | -.60 [-1.02, -.18] | <.001 | -.44 [-.87, -.02] | .04
Emotional social support | 18 | -.14 [-.24, -.04] | .05 | 1.62 | 84% | -.01 [-.26, .05] | .195 | -.11 [-.41, .2] | .46

Effects of being in nature on stress after publication-bias adjustment

Our primary publication bias-correction technique, the 4PSM, indicated an effect of being in nature on stress, with Hedges’ g = -0.60, 95% CI [-1.02, -.18], p = .006. According to our predetermined inferential criteria, we thus concluded that being in nature was effective in reducing participants’ stress levels. We then used the Vevea and Woods step-function models with a priori defined selection weights denoting moderate, severe, and extreme selection to examine the variability in the bias-adjusted estimates. The results suggested that the effect of the intervention became smaller as the assumed severity of selection bias increased (and was essentially null under extreme selection), with estimates of -0.31, -0.19, and -0.01 for moderate, severe, and extreme selection, respectively (the estimates were rather imprecise, with ps of .01, .17, and .95, respectively). In other words, the more severe the assumed publication bias, the smaller the estimated effect, which implies that publication bias had a substantial impact on the being-in-nature literature.9

Stress component-specific effects of being in nature

For being in nature, we categorized k = 28 effects as falling into the affective component of stress (Hedges’ g = -0.49) and k = 20 effects as falling into the physiological component (Hedges’ g = -0.31). We did not categorize any effect as falling into the cognitive component. The difference between components was not significant (Wald’s test p = .47). Finally, we classified a set of k = 6 effects as being part of the affective consequences of stress; as this fell below k = 10, we did not analyze this set.

Quality-of-evidence assessment for being in nature and stress

Judging the risk of bias using the RoB 2 tool, 25% of the studies were rated as having a low risk of bias overall, the majority (50%) showed some concerns, and a smaller proportion (25%) were deemed to have a high risk of bias. Figure 2 displays the risk of bias for the included studies overall and for each of the five risk-of-bias domains. We then proceeded with the assessment of evidential value using the p-curve test.

Figure 2.
Overall risk of bias and risk of bias assessment for each of the five dimensions for being in nature.

The full and half p-curve tests (z = -6.68 and z = 5.69, respectively) were significant (both ps < .001), hinting at the presence of evidential value. However, because of the need to iteratively permute only a single focal effect from each study, the median model was based on only k = 4 effects.

After that, we screened the set of included papers with the statcheck package. Of the included papers, only n = 8 (47%) reported results in APA format. Ninety-six percent of the included effects were reported correctly, with only 4% flagged by statcheck as errors. Half of these errors changed the resulting statistical inference (results reported as significant while the recomputed p > .05). Regarding the presence of mathematically inconsistent means and SDs, n = 30 (51%) of the coded effects were derived from group means and SDs. Of those effects, 63% were fully consistent with the reported cell sizes; in the remaining 37%, either the SD or both the mean and the SD were mathematically inconsistent. Excluding these effects led to only a negligible change in the estimated ES (Δg = 0.03).

Furthermore, studies on being in nature were not well powered to detect the full range of hypothetical, theoretically relevant effect sizes. More specifically, the average power of the included studies to detect small effects (Cohen’s d = 0.20) was quite low (.17), but moderate (.72) for effects of Cohen’s d = 0.50 or larger. Overall, we thus conclude that being in nature leads to a reduction of stress.

Is the reduction of stress by being in nature moderated by personality traits? (Goal 4)

We wanted to investigate whether personality traits (e.g., neuroticism) were used in studies on stress regulation for both strategies. For being in nature, none of the included studies assessed personality traits as a potential moderator. We were thus unable to assess whether personality moderated the effect of being in nature on stress.

Being in nature: Pre-registered moderator analyses (Auxiliary Goals)

Next, we will examine several subgroup analyses, which will help us further understand the sources of heterogeneity in the literature. Please note that, due to the limited number of included studies, we were unable to apply publication-bias correction techniques and the results below should thus be interpreted with care.

Investigating heterogeneity of the being-in-nature effect: Characteristics of the population

We did not find that the type of sample was related to the magnitude of the effect (Wald’s test, p = .5). For being in nature, the mean proportion of female participants across the included studies was 54.40%. We found a small, negative relationship between the proportion of women in the sample and the effect of being in nature on stress (B = -.01, p < .001), meaning that the effect of the intervention was stronger in samples with more women; no such gender effect was detected for social support. There was also a significant moderating effect of age (B = -.01, p = .003): In older samples, the intervention yielded a larger reduction in stress, on average. Although sampling is rarely representative, the moderating effects of gender and age could at least partially explain the heterogeneity of our effects.

Characteristics of the being-in-nature intervention

For being in nature, we investigated whether the effect varied as a function of the type of exposure to the natural environment. For this subgroup analysis, we modified our coding scheme by adding a category that could accommodate some studies that were left out under the previous coding scheme.10 The majority of the effects came from studies in which participants walked in a natural environment (k = 23); in a sizable portion (k = 18), participants were in a mixed condition (nature viewing and walking). In the remaining set of effects, participants viewed nature through a virtual medium (k = 7) or were physically present in a green environment (k = 6). The effect sizes of these groups were not significantly different, p = .70.

Does emotional social support reduce stress? (Goals 1-3)

For emotional social support, none of the effects were deemed an outlier. We therefore immediately proceeded with our pre-registered overall, bias-corrected, and component-specific effects, as well as our quality assessment of the literature.

Overall effects of emotional social support on stress

The naive meta-analytic estimate suggested the presence of an effect, and 55% of the effects included in our synthesis were significant. The naive meta-analytic estimate was Hedges’ g = -0.14, 95% CI [-.24, -.04], p < .001, suggesting that emotional social support is effective in reducing stress. The 95% prediction interval was large, with the true effect in a new study expected to lie in the range from -.51 to .23. The heterogeneity for emotional social support was considerable, τ = .16, I2 = 84% (73.25% due to between- and 10.82% due to within-cluster heterogeneity). A contour-enhanced funnel plot and a forest plot are displayed in Figure 3.

Figure 3.
Contour-enhanced funnel plot and forest plot for emotional social support

Effects of emotional social support on stress after publication-bias adjustment

Our primary publication bias-correction technique, the 4PSM, failed to find an effect for emotional social support, Hedges’ g = -0.01, 95% CI [-.26, .05], p = .195. We thus concluded that there is a lack of evidence for the efficacy of emotional social support in reducing stress. Furthermore, the sensitivity analysis employing the Vevea and Woods (2005) step-function models also failed to find an effect of emotional social support, with estimates of -.09 under moderate selection (p = .08), -.04 under severe selection (p = .46), and .02 under extreme selection (p = .69). The adjusted lack of an effect was thus very stable regardless of the assumed functional form of the selection mechanism.11

Stress component-specific effects of emotional social support

For emotional social support, almost all of the effects fell into the affective component (k = 13; Hedges’ g = -0.11), while only four effects fell into the physiological component (so we did not analyze those results). None of the coded effects for this strategy were categorized as being part of the cognitive component, which made any statistical comparison between components impossible. Finally, for emotional social support, a small proportion of the effects (k = 4) was considered part of the affective consequences of stress (again, we did not analyze these results).

Quality-of-evidence assessment for emotional social support on stress

We then evaluated the risk of bias for emotional social support. The majority of the studies (95%) were assessed as having a high risk of bias, due to the fact that most of them were observational and therefore not randomized. We provide an overview of the risk-of-bias assessment for emotional social support in Figure 4.

Figure 4.
Overall risk of bias and risk of bias assessment for each of the five dimensions for emotional social support.

P-values for both the full p-curve (z = -8.35, p < .001) and the half p-curve (z = -7.67, p < .001) tests were significant, suggesting the presence of evidential value in the set of included significant focal effects (the p-curve test was based on only k = 6 independent effects). We then screened the included studies with the statcheck package for inconsistencies between reported test statistics and p-values. Only a minority of the included papers (n = 3, 23% of the total) reported results in standard APA style, and two of those papers contained at least one reporting inconsistency.

Eighty-five percent of the reported results were not flagged as statistical errors by the statcheck screening, while in the remaining 15%, the reported results were statistically inconsistent; for two-thirds of those errors, the reported SDs, means, or both were mathematically inconsistent, indicating low-quality statistical reporting. None of the synthesized means or SDs, however, were inconsistent with the reported sample sizes.

The average power in the set of included studies for emotional social support to detect small effects (d = 0.20) was .49, but was more than adequate (.99) by conventional criteria to detect effects of d = 0.50 or larger. Overall, we conclude that we cannot find evidence in favor of emotional social support reducing stress.

Is the (lack of) reduction of stress through emotional social support moderated by personality traits? (Goal 4)

Despite this lack of support, we did examine whether personality differences could be of relevance. For emotional social support, however, only one study (yielding two effects) examined a personality trait, self-esteem, as a moderator of the relationship between emotional social support and mental health. Thus, given the lack of evidence in the primary literature, it is not possible to know whether the lack of an effect of emotional social support is due to individual differences.

Emotional social support: Pre-registered moderator analyses (Auxiliary Goals)

Next, we will again examine several subgroup analyses, which will help us further understand the sources of heterogeneity in the literature. Please note again that, due to the limited number of included studies, we were unable to apply publication-bias correction techniques and the results below should thus be interpreted with care.

Investigating the lack of the emotional-social-support effect: Characteristics of the population

More than half of the overall sample was female, on average (60.72%). There was no significant effect of the proportion of women (versus men) as a moderator of stress reduction (B = .001, p = .13). We also did not find a moderating effect of the mean age of the sample (B = -.002, p = .31). Next, we investigated whether the efficacy of emotional social support differed by type of population. For this strategy, we had an equal number of effects for each type of population (k = 6 each; student non-clinical, non-student non-clinical, clinical); we again did not find a significant moderating effect (Wald test p = .29). Neither gender, age, nor type of population could thus explain the heterogeneity (and thus the lack) of the effect.

Characteristics of the emotional-social-support intervention

We then investigated whether the types of support differed in their ability to decrease participants’ stress levels. For the majority of the effects (k = 14), the type of emotional social support was not specified. In a small portion of the effects (k = 2), the emotional social support took the form of physical contact, and for one effect each, the support was verbal (k = 1) or mixed (k = 1). As each of these sets contained fewer than 10 effects, we did not analyze these results.

Concerning the source of emotional social support, it was not specified for k = 16 of the effects, while for k = 1, the support came from a known person, and for k = 1 the support came from a stranger.

Overall quality checks of the being-in-nature and emotional-social-support literatures: Study designs and sensitivity analyses

Study design characteristics

We investigated whether the effect sizes of the included studies varied as a function of the characteristics of the experimental design. First, we verified whether effects related to the two strategies were different in relation to the type of control group employed (active vs. passive). For both strategies, the Wald’s test was not significant, p = .12 and .17 for being in nature and emotional social support, respectively, suggesting that the type of comparison group was not associated with the magnitudes of the effect sizes.

Second, we tested whether the effects of being in nature and emotional social support varied as a function of the publication status of the included articles. For being in nature, the majority of the effects were extracted from published studies (k = 51), while a smaller portion came from unpublished studies (k = 8). The difference between these two groups was not significant (p = .23). For emotional social support, 38.9% of the effects (k = 7) came from the gray literature, while 61.1% came from the published literature. The difference between these two groups was significant (Wald’s test, p = .03), indicating that unpublished studies yielded a larger effect (Hedges’ g = -0.32) than published studies (Hedges’ g = -0.09), on average.

Sensitivity analyses

We finally conducted a set of sensitivity analyses related to our main results to assess how methodological factors were related to the magnitude and precision of the reported effects. First, we excluded effects based on inconsistent means and/or SDs. For being in nature, 11 effects were excluded; however, the difference between effects that were inconsistent and those that were not was not significant (Wald’s test, p = .91). For emotional social support, we did not carry out this analysis, as none of the coded effects had inconsistent means or SDs. Finally, we checked how the ES varied in relation to the risk-of-bias assessment.

For being in nature, the ES of studies judged to be at a high risk of bias (k = 4) did not differ from that of studies judged to be at low risk or to raise some concerns (k = 13), as Wald’s test was not significant, p = .85. For social support, we were unable to carry out this sensitivity analysis, as only one effect came from a study with an acceptable risk of bias.

Through a Registered Report meta-analysis, we evaluated the efficacy of two stress regulation strategies: Being in nature and emotional social support. After applying our main publication-bias correction technique (and after excluding outliers with improbably large effects), we only found an effect of being in nature on the reduction of stress. In what follows, we outline our assessment of the quality of both literatures, our interpretation of some (potentially contradictory) results, the limitations of our assessment and of the literatures, key areas of improvement, and some concluding remarks.

Quality assessment of the being-in-nature and emotional social support literatures

Being in nature and stress: Risk of bias. While we conclude that there is an overall effect of being-in-nature interventions on stress reduction, the being-in-nature literature is not without challenges: 50% of the studies were at some risk of bias and 25% were at high risk of bias. The majority of the high-risk-of-bias judgments were due to potential bias arising from the randomization process. Specifically, 4 studies (out of 16) were judged to have a high risk of bias and 3 a medium risk in this domain, suggesting that participants were not properly randomized to avoid the influence of known or unknown prognostic factors on the final results.

Many of these risks seem easily fixable. Most cross-sectional studies examining complex interactions are underpowered and need larger samples. However, even when studies are cross-sectional, causal inferences can be improved, even with modest samples. There are reasonable solutions to such “small N, large P” problems. We recommend using machine-learning methods to 1) reduce potential confounding by including additional predictors and 2) better extract patterns from the data by using random forests to identify the most important predictors of the outcome of interest (cf. Wittmann et al., 2022; for a tutorial, see Szabelska et al., 2022). Once the key predictors are identified, researchers can generate relatively precise predictions to be tested in longitudinal designs or experimental studies to bolster the strength of the causal inference.
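As a minimal sketch of this recommendation (with simulated data and hypothetical variable names, not a reanalysis of any included study), a conditional random forest can be fit with the partykit package and candidate predictors ranked by conditional permutation importance, which is less biased than plain importance measures when predictors are correlated:

```r
library(partykit)

set.seed(3)
n <- 200
dat <- data.frame(
  nature_exposure      = rnorm(n),   # hypothetical predictors
  social_support       = rnorm(n),
  neuroticism          = rnorm(n),
  attachment_avoidance = rnorm(n),
  network_size         = rpois(n, 8)
)
# simulated continuous stress outcome
dat$stress <- 0.4 * dat$neuroticism - 0.3 * dat$nature_exposure + rnorm(n)

cf <- cforest(stress ~ ., data = dat, ntree = 200)

# conditional permutation importance of each predictor
vi <- varimp(cf, conditional = TRUE)
sort(vi, decreasing = TRUE)
```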

Another (medium) source of bias for being in nature arose from the domain “deviation from intended interventions,” where 12 studies were judged to have a medium risk of bias, mostly due to a lack of blinding of participants. In one such study, Lee et al. (2009) found a partial solution: Participants visited forest and urban environments and were surveyed on how comfortable, soothed, and refreshed they felt. While these variables are obvious candidates for being influenced by demand characteristics, the researchers also collected salivary cortisol, diastolic blood pressure, and pulse rate, variables that are all much less likely to be influenced by demand characteristics.

Being in nature and stress: Statistical challenges. Further, while there were statistical problems in the being-in-nature literature (with 4% of results having statcheck errors and 37% being mathematically inconsistent), these problems did not materially change the outcome of our meta-analysis. Nevertheless, greater care should be taken to ensure test statistics are correctly reported, as these numbers are worryingly high. Together with the fact that the literature was sufficiently powered, this provides us with sufficient confidence that being in nature does in fact reduce stress, at least for specific populations. The issues of high risk of bias, publication bias, and low power are not limited to the emotional-social-support and being-in-nature literatures; these shortcomings extend to other stress-regulation strategies as well (see, e.g., Goessl et al., 2017; Sparacio et al., 2022). Given these pervasive problems, we support others’ suggestions of implementing pre-registration or, preferably, Registered Reports that are reviewed before data collection as a means of obtaining more rigorous and reliable evidence on stress-mitigation interventions, as well as in social psychology more generally.
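To illustrate what a mathematically inconsistent result looks like, below is a minimal GRIM-style check (cf. Brown & Heathers, 2016) in R. The function, scale bounds, and example values are hypothetical illustrations, not the procedure we used for coding.

```r
# A reported mean of integer responses is only consistent if some integer sum
# of n responses, divided by n, rounds to the reported value.
grim_consistent <- function(reported_mean, n, scale_min = 1, scale_max = 7,
                            digits = 2) {
  possible_sums  <- (scale_min * n):(scale_max * n)   # achievable integer sums
  possible_means <- round(possible_sums / n, digits)  # means they can produce
  round(reported_mean, digits) %in% possible_means
}

grim_consistent(reported_mean = 3.48, n = 25)  # TRUE: 87 / 25 = 3.48
grim_consistent(reported_mean = 3.47, n = 25)  # FALSE: no integer sum yields 3.47
```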

Registered Reports will provide a much more accurate estimate of the effect sizes in this literature and, therefore, of the exact efficacy of being in nature (Soderberg et al., 2021). They will also allow for a much better comparison to other interventions (such as biofeedback or self-administered mindfulness; Sparacio et al., 2022). Finally, posting the results of stress management techniques in an open repository (e.g., PsychOpen CAMA; Burgard et al., 2021) would allow other researchers to re-analyze the data and verify the validity of the results.

Emotional social support and stress: Risk of bias. The literature on emotional social support fared no better in terms of quality: The literature is at high risk of bias (95% of the studies were at high risk; only one of the studies was a randomized controlled trial). In 12 out of 13 studies, problems arose from the randomization process, as most of the studies were observational, leaving almost all studies with an overall high risk of bias.

Of course, emotional social support is much harder to (ethically) manipulate than being in nature. It is not easy to change the nature of one’s social network or the level of one’s emotional social support. For instance, in the study by Levens et al. (2016), “one hundred and eighty-one freshman undergraduate participants completed questionnaires assessing depressive symptoms, family and instrumental support, and perceived stress reactivity” (p. 342). How, then, would a researcher reasonably assess the causal relationship between emotional social support and stress?

For this, we have two recommendations. First, as with being in nature, it is possible to improve the quality of causal inferences by surveying participants on a host of additional (potentially relevant) variables (such as neuroticism, attachment security, the quality of one’s social network, and so forth), and thereafter applying analytic techniques that are less prone to overfitting and to problems with collinearity (such as conditional random forests; Szabelska et al., 2022). Second, one could attempt to improve the quality of emotional social support from a support-giver by letting dyads participate in relationship-focused therapy (see, for instance, Johnson et al., 2013) and comparing this with active and passive control conditions. Furthermore, future studies on emotional social support should include older samples, which are currently undersampled; age could also potentially moderate the effects of being in nature.

More generally, to better map how emotional social support (if at all) and being in nature affect stress (in experimental or cross-sectional studies), we think that a much better record is needed. We therefore strongly recommend assessing a number of different variables to study how these effects differ across situations and across different people (see Appendices G and H).

Emotional social support and stress: Statistical challenges. Furthermore, the literature seemed to contain a considerable number of problems with statistical reporting: in 15% of the cases, test statistics and p-values were mathematically inconsistent, and in two thirds of those cases the means and standard deviations were inconsistent as well. It thus becomes very difficult to interpret a body of evidence that contains such a large number of errors.12 Overall, however, the literature was sufficiently powered to detect medium-size effects. Improving computational reproducibility is crucial here. We recommend building in transparency from the outset, by utilizing the OSF in one’s workflow and by adopting a set “research workflow” (see, e.g., Silan et al., 2021).
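As an illustration of the kind of check involved, the statcheck package (Epskamp & Nuijten, 2016) can recompute p-values from APA-style reported statistics. The snippet below is a minimal sketch with made-up example sentences, not our coding pipeline.

```r
library(statcheck)

txt <- c(
  "Consistent report: t(58) = 2.10, p = .04.",
  "Inconsistent report: t(58) = 2.10, p = .02."
)
statcheck(txt)  # extracts each test, recomputes the p-value, and flags mismatches
```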

Interpreting seemingly contradictory results

Within our meta-analysis, there were seemingly contradictory results between different analysis methods. For the being-in-nature literature, our main inference engine, the 4PSM, converged with a technique that has recently become more popular, the p-curve. For the emotional-social-support literature, this was not the case. How could this be? A first part of the explanation rests on the number of effects that went into the analysis models: The number of effects that entered the p-curve analysis was small (k = 4 and k = 6 for being in nature and emotional social support, respectively), whereas for the 4PSM it was somewhat larger (k = 16 and k = 13 for being in nature and emotional social support, respectively). The second part of the explanation rests on the fact that the 4PSM has a more acceptable false-positive rate than the p-curve in the presence of significant heterogeneity (E. C. Carter et al., 2019; Hong & Reed, 2021). Overall, we thus stand by our conclusion that there is sufficient evidential value for the effect of being in nature on stress, but not for emotional social support on stress, given our greater confidence in the 4PSM as our inference engine.
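For readers who wish to apply a selection model to their own data, the sketch below shows one way to fit a 4PSM (and the 3PSM fallback) with the weightr package. The simulated data, variable names, and the choice of one-tailed p-value cutpoints at .025 and .50 are illustrative assumptions rather than a reproduction of our permutation-based procedure (see Appendix F).

```r
library(weightr)

set.seed(1)
k <- 16                                   # one focal effect per independent study
v <- runif(k, 0.02, 0.12)                 # simulated sampling variances
g <- rnorm(k, mean = -0.4, sd = sqrt(v))  # simulated Hedges' g values

# 4PSM: cutpoints on one-tailed p-values at .025 and .50
fit_4psm <- weightfunct(effect = g, v = v, steps = c(0.025, 0.50, 1))
fit_4psm  # adjusted mean effect, heterogeneity, and selection weights

# Fallback 3PSM (single cutpoint at .025), mirroring the rule in Appendix F
fit_3psm <- weightfunct(effect = g, v = v, steps = c(0.025, 1))
```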

Moving forward: Learning from related constructs

In the present meta-analysis, we focused exclusively on studies investigating the effects of interventions to reduce stress; we found no effect for emotional social support on stress and some effects for being in nature on stress (but at high risk of bias). We could potentially learn from closely related constructs, such as well-being and mental health, where some highly powered cross-sectional studies have been conducted that could be informative for future experimental studies on emotional social support. In relation to social support and well-being, Golden et al. (2009) found that lacking embedding in a social network and experiencing isolation were associated with lower levels of well-being. As it pertains to being in nature, Soga et al. (2021) found that frequent use of greenspace and having a green window view from within the home were associated with increased life satisfaction and happiness (both factors that contribute to subjective well-being).

Furthermore, while stress may be a contributing factor to mental health, it is again only one aspect. Mental health is itself a multi-faceted construct, including aspects such as anxiety and depression (which is itself already very heterogeneous; see Fried, 2017), both of which were assessed in our meta-analysis. In terms of social support, a cross-sectional study of 461 participants found that people with no depression experienced significantly more social support from friends and parents than people with moderate to severe levels of depression (Alsubaie et al., 2019). In terms of being in nature, viewing a greenspace was associated with lower depression, anxiety, and loneliness (Soga et al., 2021).

While our meta-analysis focuses only on stress and its affective consequences, these highly powered studies are promising for a literature that has a high risk of bias and – for emotional social support – no effect. Based on the cross-sectional studies we cite here, it may well be possible to design a high-powered, pre-registered, experimental multisite study to test the potential effects of emotional social support rigorously (see also Sparacio et al., 2023).

Limitations of our assessment: Constraints on generality (Simons et al., 2017) 

The limitations of our assessment primarily apply to being in nature, as that is where we found an overall effect, even after applying our publication bias correction techniques. While there was an overall effect, there was a slight effect of gender. It is hard to infer whether this will extend to other populations, given that sampling is hardly ever representative; our recommendation is therefore to control for gender in future studies. In addition, the effects of being in nature were quite consistent across healthy non-student and student populations. Effects in this literature also did not vary across the type of exposure to natural environments: being physically present in a green environment, viewing nature through a virtual medium, or a mixed condition. Effect sizes were similar across these different conditions and thus seem to be quite generalizable (we do not know the effects for clinical populations, however).

Further, the average age of participants in the being-in-nature studies was 31.70 years (SD = 9.09), thus capturing a decent segment of the population. Nevertheless, it is unclear whether the effects will hold for minors and for people above 40 (90% of studies examined samples with a mean age below ~40). The effects were studied in a fairly wide variety of countries (3 studies in the United States, 1 in Poland, 1 in Malaysia, 4 in the United Kingdom, 1 in Japan, 1 in Finland, 1 in Germany, 1 in China, 1 in the Netherlands, 1 in South Korea, 1 in Italy, and 1 in Denmark), thus allowing for reasonable generalizability across a few countries. However, all but two of these countries are classified by the OECD as high-income countries, and even the two lower- and middle-income countries in which studies were conducted (China and Malaysia) are upper-middle-income countries. Whether the effects of being in nature on stress are even relevant for people in LMICs is thus unknown. Finally, we had little to no information about personality characteristics and can therefore not make a reasonable assessment of how much the effects of being in nature differ across different people.

How to further improve the evidence base in stress-regulation research

The evidence base surrounding stress regulation would benefit from enhancement in quality, but such a sustained effort requires changes in researchers’ workflows. Based on the CO-RE Lab Lab Philosophy (Goncharova et al., 2022), we recommend a number of steps: First, researchers should decide, before data collection, to what extent their research is exploratory or confirmatory. They can then adopt an exploratory (https://osf.io/96vw4/) or confirmatory (https://osf.io/mzg4q/) research template (both of which are imperfect, as no research is fully exploratory or confirmatory). If the research is mostly confirmatory, they can then decide to pre-register by themselves (minimum) or submit it as a Registered Report (preferable). If the research is exploratory, they can decide to withhold part of the data for cross-validation.

If part of the data is withheld, certain journals are willing to accept this as a Registered Report (see, e.g., Wittmann et al., 2022, for an example of this approach). If the research is confirmatory, researchers should prepare their analysis script before data collection; after data collection, they should post their analysis script and explain any deviations. Whether the research is exploratory or confirmatory, they should post the deidentified data alongside the analysis script, to the extent ethically possible. Before publication, (at least) one researcher should be invited to do a code review (who then becomes a co-author of the project). To aid authors in their efforts to embrace this approach, we have prepared a very rudimentary checklist (Table 3) that may be used as baby steps towards promoting best practices in Open Science. For the underlying reasons for this approach, see https://psyarxiv.com/6jmhe/.

Table 3.
Non-exhaustive checklist to start improving Open Science practices
Section/topic | Item# | Checklist item
Confirmatory vs exploratory | 1 | Choose whether the research project is mostly exploratory or mostly confirmatory.
Pre-registration | 2a | Clearly state whether the study was pre-registered or not. If pre-registered, provide the name of the registry (e.g., PROSPERO/OSF) and registration number (if applicable).
Registered Report | 2b | Clearly state whether the study was conducted as a Registered Report or not. If yes, provide the name of the registry (e.g., OSF), registration number (if applicable), and link to the finalized Stage I Registered Report.
Open data, code, and materials | 3 | Provide open data (to the extent ethically permitted) with reproducible code and open materials, stored in an open repository (e.g., PsychOpen CAMA; Burgard et al., 2021).
Code review | 4 | Ask an independent researcher (preferably outside your lab) to review your code and offer that person authorship.

Our Registered Report investigation, after applying publication bias correction techniques and guided by our inferential criteria, found that being in nature is an effective strategy to reduce stress. For emotional social support, we did not find the intervention to be efficacious. While the results for being in nature are promising, the limited quality of the literature poses a potential threat to the validity of the findings. More rigorous studies on the topic - and thus the adoption of Registered Reports, pre-registration, and data sharing - will lead to less research waste and, ultimately, to better interventions.

Underlying data

Data from this article will be shared via the OSF page (https://osf.io/6wpav/).

Extended data

To ensure methodological rigor and transparency, we have made our data and the analysis script available on the Open Science Framework (https://osf.io/6wpav/).

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0). At least some of the data/evidence that will be used to answer the research question already exists AND is accessible in principle to the authors (e.g., residing in a public dataset or with a colleague). The authors used the data to create a coding scheme BUT the authors certify that they have not yet accessed any part of summary statistics.

We have no competing interests to declare.

Conceptualization: Alessandro Sparacio, Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

Data curation: Alessandro Sparacio and Ivan Ropovik.

Formal analysis: Alessandro Sparacio, Ivan Ropovik, and Hans IJzerman.

Funding acquisition: Hans IJzerman and Gabriela Jiga-Boy.

Investigation: Alessandro Sparacio.

Methodology: Alessandro Sparacio, Ivan Ropovik, and Hans IJzerman.

Project administration: Alessandro Sparacio, Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

Resources: Alessandro Sparacio, Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

Software: Alessandro Sparacio and Ivan Ropovik.

Supervision: Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

Validation: Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

Visualization: Alessandro Sparacio, Adar Cem Lağap and Ivan Ropovik.

Writing - original draft: Alessandro Sparacio, Ivan Ropovik, and Hans IJzerman.

Writing - review & editing: Alessandro Sparacio, Ivan Ropovik, Gabriela Jiga-Boy, and Hans IJzerman.

The preparation of this work was partly funded by a French National Research Agency “Investissements d’avenir” program grant (ANR-15-IDEX-02) and a grant from the MIBODA project, both awarded to Hans IJzerman; PRIMUS/20/HUM/009, APVV-17-0418, and NPO Systemic Risk Institute (LX22NPO5101, funded by European Union – Next Generation EU) grants awarded to Ivan Ropovik. The preparation of this work was also funded by a Swansea University Strategic Partnerships Research Scholarship (SUSPRS) in 2019, awarded to Dr Gabriela Jiga-Boy from the School of Psychology, Swansea University; it covered funding for a joint PhD degree awarded by Swansea University (UK) and Université Grenoble Alpes (France). The funding sources had no role in the study design, collection, analysis, or interpretation of the data, writing the manuscript, or the decision to submit the paper for publication.

Appendix A: Protocols and Deviations

Any changes with respect to the choices established in this pre-registration will be fully disclosed on our OSF page and will be incorporated into this form: https://osf.io/6wpav/.

Appendix B: Call for Unpublished Data

Subject: Call for unpublished data for a meta-analysis: “Stress regulation via being in nature and emotional social support for adults: A pre-registered meta-analysis”

Dear Prof/Dr/Ms/Mr XXXX,

I am Alessandro Sparacio, a PhD student in social psychology at the University of Grenoble-Alpes, and I’m conducting a meta-analysis on stress regulation along with my co-authors Hans IJzerman, Ivan Ropovik, Gabriela Jiga-Boy, and Patrick Forscher.

The pre-registered protocol for this meta-analysis is publicly available on the Open Science Framework (OSF) at [https://osf.io/6wpav/]

Our meta-analysis aims to address whether being in nature and emotional social support have any demonstrated efficacy in reducing stress levels.

As you have published studies relevant to this topic, we are getting in touch to see if you have any unpublished/file-drawer data, or papers in-press, which we may have missed through database searching, and which you would like to have included in the meta-analysis.

Feel free to email either the raw data (from which we will calculate summary scores) or the summary scores themselves. While any raw data emailed to us will of course remain confidential, please know that summary scores included in the meta-analysis will be made publicly available in a dataset on the OSF.

We are hoping to include as many relevant studies as possible, so any additional data is greatly appreciated.

Sincerely (also on behalf of my co-authors),

Alessandro Sparacio

This template was provided by Moreau and Gamble (2020)

Appendix C: Requesting for Specific Data

Subject: Requesting data for a meta-analysis, from your paper: ‘XXXX’

Dear Prof/Dr/Ms/Mr XXXX,

I am Alessandro Sparacio, a PhD student in social psychology at the University of Grenoble-Alpes, and I’m conducting a meta-analysis on stress regulation along with my co-authors Hans IJzerman, Ivan Ropovik, Gabriela Jiga-Boy, and Patrick Forscher.

The pre-registered protocol for this meta-analysis is publicly available on the Open Science Framework (OSF) at [https://osf.io/6wpav/].

We think your study ‘XXXX’ meets inclusion criteria for our meta-analysis. However, the effect size we’re interested in (i.e., the correlation/difference between XXX and XXX) does not seem to be reported in the published paper.

We would be grateful if you could send either the summary scores or the raw data themselves (from which we can calculate the effect size). While any raw data emailed to us will of course remain confidential, please know that summary scores included in the meta-analysis will be made publicly available in a dataset on the OSF.

The latest we will be able to accept your data for inclusion is XXth of XXX, XXXX.

We are hoping to include as many relevant studies as possible, so any additional data is greatly appreciated.

Sincerely (also on behalf of my co-authors),

Alessandro Sparacio

This template was provided by Moreau and Gamble (2020)

Appendix D: Search Criteria

This template was provided by Moreau and Gamble (2020)


Appendix E: Search Strategy

BEING IN NATURE

PUBMED

(“natural space” OR “natural environment*” OR “natural landscape” OR “urban nature” OR “nearby nature” OR “nature view*” OR “outdoor nature” OR “natural space” OR “green area” OR “green environment” OR “nature contact” OR “contact with natur*” OR park OR “urban forest” OR “forest walking” OR “forest” OR “forest environment*” OR “shinrin” OR “forest bathing”) AND (walk* OR sitt* OR watch* OR view* OR stay* OR contact*) AND stress AND (“negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety)

Search date: 12/4/2022 Results: 151 results Notes:

PROQUEST (APA PsycArticles, APA Psycinfo, ProQuest Dissertations & Theses Global)

(“natural space” OR “natural environment*” OR “natural landscape” OR “urban nature” OR “nearby nature” OR “nature view*” OR “outdoor nature” OR “natural space” OR

“nature contact” OR “contact with natur*” OR park OR “urban forest” OR “forest walking” OR “forest environment*” OR “shinrin”

OR “forest bathing”) AND (walk* OR sitt* OR watch* OR view* OR stay*) AND stress AND (“negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety)

Search date: 12/4/2022 Results: 119 Notes:

SCOPUS

TITLE-ABS ((“greenspace*” OR “green space” OR “green landscape*” OR “natural space” OR “natural environment*” OR “natural landscape” OR “urban nature” OR “nearby nature” OR “nature view*” OR “nature viewing” OR “viewing nature” OR “outdoor nature” OR “natural space” OR “nature contact” OR “contact with natur*” OR park OR “urban forest” OR “forest walking” OR “forest environment*” OR “nature therapy” OR “nature experience” OR “forest therapy” OR “shinrin” OR “forest bathing”) AND (walk* OR sitt* OR watch* OR view* OR stay*) AND stress AND (“negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety)

Search date: 12/4/2022 Results: 196 Notes:

EMOTIONAL SOCIAL SUPPORT

PUBMED

( “emotional support” OR “emotional social support”) AND ( encourage* OR help OR assist* OR love OR trust* OR contact or touch) AND stress AND ( “negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety)

Search date: 12/4/2022 Results: 288 Notes:

PROQUEST (APA PsycArticles, APA Psycinfo, ProQuest Dissertations & Theses Global)

( “emotional support” OR “emotional social support”) AND ( encourage* OR help OR assist* OR love OR trust* OR contact or touch) AND stress AND ( “negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety )

Search date: 12/4/2022 Results: 497 Notes:

SCOPUS

TITLE-ABS ( ( “emotional support” OR “emotional social support” ) AND ( encourage* OR help OR assist* OR love OR trust* OR contact OR touch ) AND stress AND ( “negative affect” OR “positive affect” OR emotion* OR cogniti* OR ruminati* OR physiological* OR biomarker* OR depression OR anxiety ) )

Search date: 12/4/2022 Results: 413 Notes:

Appendix F: Correction for Publication Bias

  1. Primary confirmatory analysis: 4-parameter selection model (E. C. Carter et al., 2019; McShane et al., 2016).

    If there were fewer than four focal p-values per interval, the procedure fell back to the 3-parameter selection model. The selection models were implemented using a permutation-based procedure: iteratively selecting a single focal effect size from each independent study, estimating the model in 5,000 iterations, and averaging over the iterations by picking the model with the median ES estimate.

  2. Exploratory analyses

    2.1. Vevea and Woods (2005) step function models with a priori defined selection weights, varying the assumed severity of bias, modeling moderate, severe, and extreme selection.

    2.2. Multi-level RVE-based implementation of the PET-PEESE model (Stanley & Doucouliagos, 2014), employing √(2/N) and 2/N terms instead of the standard error and variance for PET and PEESE, respectively, as measures of precision (see Pustejovsky, 2017); a minimal sketch of such a model is provided at the end of this appendix. Additionally, the R code also allows the interested reader to use the 4PSM as a conditional estimator for PET-PEESE instead of the traditional PET and to explore the effect of such a decision on the resulting inference (for more details, see IJzerman et al., 2022).

    2.3. Robust Bayesian model-averaging approach integrating the selection modeling and regression-based approaches (Bartoš et al., 2021), letting the data determine the contribution of each model according to its relative predictive accuracy for the observed data.

Inferential criteria: Substantive inferences regarding the presence of an effect were guided solely by the estimates and inferential results of the 4-parameter selection model (4PSM).
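For illustration, the following is a minimal sketch of the RVE-based PET-style model described in point 2.2, using metafor and clubSandwich with simulated data and hypothetical variable names; it is not the authors’ exact script (which is available on the OSF).

```r
library(metafor)
library(clubSandwich)

set.seed(4)
dat <- data.frame(
  paper  = rep(1:10, each = 2),     # hypothetical paper IDs
  effect = 1:20,                    # effect IDs nested in papers
  yi     = rnorm(20, -0.4, 0.3),    # simulated Hedges' g values
  vi     = runif(20, 0.02, 0.10),   # simulated sampling variances
  n      = sample(30:200, 20)       # total sample size per effect
)
dat$pet_term   <- sqrt(2 / dat$n)   # precision term for PET
dat$peese_term <- 2 / dat$n         # precision term for PEESE

pet <- rma.mv(yi, vi, mods = ~ pet_term,
              random = ~ 1 | paper/effect, data = dat)

# Cluster-robust (CR2) test of the intercept, i.e., the bias-adjusted estimate
coef_test(pet, vcov = "CR2", cluster = dat$paper)
```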

Appendix G: Study Protocol for Being in Nature

Based on meta-analytic synthesis of the literature, we have developed the following protocol for being in nature and stress.

Minimum Requirements:

  1. Record the length of the intervention.

  2. Record the type of green environment where participants are doing the experiment (e.g., whether it is a park, farmland, an area near the sea, a forest, or an environment that has both natural elements and buildings).

  3. Record any interaction that participants have with the natural environment (e.g., whether participants are only viewing the natural environment or whether they are engaging in any other activities).

  4. If participants are in a virtual natural viewing condition, record the medium via which the intervention is administered (e.g., virtual reality).

  5. In case of a randomized controlled trial, include both a passive and active control group.

  6. Record the time between the intervention and stress measurement.

  7. Let participants fill in the Trait Anxiety scale (Spielberger, 1970), the Experiences in Close Relationships Scale (ECR-R; Fraley et al., 2011), the Social Network Index (Cohen et al., 1997), and the Big Five inventory (O. P. John et al., 1991).

  8. Record participants’ native language, sex, gender, geographical origin, height, weight, and smoking status (if smoker, how many cigarettes).

  9. Record whether the population is from students or not.

  10. Record whether the population is a clinical sample or not.

Ideal Requirements:

Besides self-reported measures of stress, record participants’ stress reactivity with at least one physiological measure (e.g., assessment of catecholamines, assessment of the ANS via skin conductance, cortisol, heart rate, or systolic and diastolic blood pressure; Bally et al., 2003; Berntson et al., 1993).

Appendix H: Study Protocol for Emotional Social Support

Based on meta-analytic synthesis of the literature, we have developed the following protocol for emotional social support and stress.

Minimum Requirements:

  1. Record the type of the emotional social support (e.g., whether it was achieved through verbal expressions or via physical contact)

  2. Record the source that provided the emotional social support (e.g., whether it was the partner, a friend, or a stranger)

  3. In case of a randomized controlled trial, include an active and passive control group.

  4. Record the time between the intervention and stress measurement.

  5. Let participants fill in the Trait Anxiety scale (Spielberger, 1970), the Experiences in Close Relationships Scale (ECR-R; Fraley et al., 2011), the Social Network Index (Cohen et al., 1997), and the Big Five inventory (O. P. John et al., 1991).

  6. Record participants’ native language, sex, gender, geographical origin, height, weight, and smoking status (if smoker, how many cigarettes).

  7. Record whether the population is from students or not.

  8. Record whether the population is a clinical sample or not.

Ideal Requirements:

Besides self-reported measures of stress, record participants’ stress reactivity with at least one physiological measure (e.g., assessment of catecholamines, assessment of the ANS via skin conductance, cortisol, heart rate, or systolic and diastolic blood pressure; Bally et al., 2003; Berntson et al., 1993).

1.

In our Stage 1 Registered Report, we had mistakenly referenced Schwarzer and Leppin (1989) and Harandi, Taghinasab, and Nayeri (2017) as focusing on emotional social support and stress. Schwarzer and Leppin (1989) focused on self-reported general health as the outcome, whereas Harandi et al. (2017) focused on mental health.

2.

We switched to an ordinary two-level random-effects model if (1) the multilevel model failed to converge in the overall model or in any of the subgroups; or (2) the variance components of the model were not well identifiable (specifically, if the log-likelihood did not peak at the variance estimates for both variance components).

3.

The threshold of 10 applies to the total number of effects per type of subgroup analysis, not per category. For instance, if there are 5 studies on physical social support and 15 on verbal social support, we will conduct the relevant subgroup analyses, as the total number of effects is 20. However, if the total number of effects is below 10, we will not run that subgroup analysis.

4.

That is, we picked the median estimate from the parameter distribution and, with it, the corresponding model from which that estimate originated. The goal of this procedure was to preserve the mutual consistency between the estimate, z-value, CIs, and p-value.

5.

As the 4PSM tends to have more favorable error rates than PET under many conditions, the reader can also define the 4PSM as a conditional estimator for PET-PEESE instead of the traditional PET in the R code, to explore the effect of such a decision on the resulting inference.

6.

Apart from reporting the results of these bias adjustments, we examined whether the primary 4/3-PSM estimate fell within the 95% credible interval of the RoBMA estimate (being based on a more general model).

7.

This is a conservative, arbitrary threshold. Richard, Bond, and Stokes-Zoota (2003) found that the empirical distribution of absolute effect sizes in social psychology is well approximated by a left-sided truncated normal distribution with μ = 0 and σ = 0.55. Given this theoretical distribution, ES > 2.0 can be seen as highly deviant, with a cumulative density of just .0003, i.e. representing 0.03% of the distribution.
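For the interested reader, this cumulative density can be verified with a one-line computation in R, assuming absolute effect sizes follow a half-normal distribution with σ = 0.55 (Richard et al., 2003):

```r
# P(|ES| > 2.0) under a half-normal distribution with sigma = 0.55
2 * (1 - pnorm(2.0, mean = 0, sd = 0.55))
#> approx. 0.0003, i.e., about 0.03% of the distribution
```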

8.

The given study also reported another three large effects (1.28, 1.12, and 0.87). Although not improbable on their own, they exerted considerable influence on the models, as they were based on samples with large N. These effects were not excluded in this sensitivity analysis.

9.

In addition, we performed exploratory analyses to determine how the outcome changed when we used other publication bias adjustment methods that we had previously registered. The PET-PEESE did not detect a signal, Hedges’ g = -.55, 95% CI [-1.27, .17], p = .13. The RoBMA model instead suggested the presence of an effect, Hedges’ g = -.42, 95% CI [-.57, -.28].

10.

We added “virtual viewing” to indicate studies in which participants were exposed to natural environments via photos/videos/virtual reality. We documented this change in the Appendix A: Protocols and deviations sheet.

11.

We also ran exploratory analyses to see how the effect varied when applying other publication bias adjustment techniques that we pre-registered. The PET-PEESE failed to find an effect, Hedges’ g = -.11, 95% CI [-.41, .20], p = .46; we reached a similar conclusion with the RoBMA model, Hedges’ g = -.21, 95% CI [-.41, .00].

12.

A reviewer pointed out that previous meta-analyses seemed to suggest positive effects of social support (e.g., Chu et al., 2010; Harandi et al., 2017; Thorsteinsson & James, 1999). However, none of these meta-analyses adjusted for publication bias, nor did they remove statistically faulty studies, nor did they make an assessment of risk of study bias. In addition, all of these meta-analyses focused on social support in general (not on emotional social support specifically), with one focusing on well-being (Chu et al., 2010), another on mental health (Harandi et al., 2017), and another only on physiological measures of stress (Thorsteinsson & James, 1999).

Alsubaie, M. M., Stain, H. J., Webster, L. A. D., & Wadman, R. (2019). The role of sources of social support on depression and quality of life for university students. International Journal of Adolescence and Youth, 24(4), 484–496. https://doi.org/10.1080/02673843.2019.1568887
Anaya, J. (2016). The GRIMMER test: A method for testing the validity of reported measures of variability. PeerJ Preprints. https://doi.org/10.7287/peerj.preprints.2400v1
Antonelli, M., Barbieri, G., & Donelli, D. (2019). Effects of forest bathing (shinrin-yoku) on levels of cortisol as a stress biomarker: A systematic review and meta-analysis. International Journal of Biometeorology, 63(8), 1117–1134. https://doi.org/10.1007/s00484-019-01717-x
Bally, K., Campbell, D., Chesnick, K., & Tranmer, J. E. (2003). Effects of patient‐controlled music therapy during coronary angiography on procedural pain and anxiety distress syndrome. Critical Care Nurse, 23(2), 50–58. https://doi.org/10.4037/ccn2003.23.2.50
Bartoš, F., Maier, M., Wagenmakers, E., Doucouliagos, H., & Stanley, T. D. (2021). No need to choose: Robust Bayesian meta-analysis with competing publication bias adjustment methods. PsyArxiv.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Baujat, B., Mahé, C., Pignon, J.-P., & Hill, C. (2002). A graphical method for exploring heterogeneity in meta-analyses: Application to a meta-analysis of 65 trials. Statistics in Medicine, 21(18), 2641–2652. https://doi.org/10.1002/sim.1221
Beckes, L., & Coan, J. A. (2011). Social baseline theory: The role of social proximity in emotion and economy of action. Social and Personality Psychology Compass, 5(12), 976–988. https://doi.org/10.1111/j.1751-9004.2011.00400.x
Beil, K., & Hanes, D. (2013). The influence of urban natural and built environments on physiological and psychological measures of stress: A pilot study. International Journal of Environmental Research and Public Health, 10(4), 1250–1267. https://doi.org/10.3390/ijerph10041250
Berntson, G. G., Cacioppo, J. T., & Quigley, K. S. (1993). Respiratory sinus arrhythmia: Autonomic origins, physiological mechanisms, and psychophysiological implications. Psychophysiology, 30(2), 183–196. https://doi.org/10.1111/j.1469-8986.1993.tb01731.x
Bolger, N., DeLongis, A., Kessler, R. C., & Schilling, E. A. (1989). Effects of daily stress on negative mood. Journal of Personality and Social Psychology, 57(5), 808–818. https://doi.org/10.1037/0022-3514.57.5.808
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. R. (2010). A basic introduction to fixed-effect and random-effects models for meta-analysis. Research Synthesis Methods, 1(2), 97–111. https://doi.org/10.1002/jrsm.12
Bowers, C. A., & Gesten, E. L. (1986). Social support as a buffer of anxiety: An experimental analogue. American Journal of Community Psychology, 14(4), 447–451. https://doi.org/10.1007/bf00922628
Bratman, G. N., Hamilton, J. P., Hahn, K. S., Daily, G. C., & Gross, J. J. (2015). Nature experience reduces rumination and subgenual prefrontal cortex activation. Proceedings of the National Academy of Sciences, 112(28), 8567–8572. https://doi.org/10.1073/pnas.1510459112
Brown, N. J. L., & Heathers, J. A. J. (2016). The GRIM test. Social Psychological and Personality Science, 8(4), 363–369. https://doi.org/10.1177/1948550616673876
Brugha, T., Bebbington, P. E., MacCarthy, B., Potter, J., Sturt, E., & Wykes, T. (1987). Social networks, social support and the type of depressive illness. Acta Psychiatrica Scandinavica, 76(6), 664–673. https://doi.org/10.1111/j.1600-0447.1987.tb02937.x
Burgard, T., Bosnjak, M., & Studtrucker, R. (2021). Towards cumulative evidence and reproducible meta-analyses. Introduction and demonstration of PsychOpen CAMA. ZPID (Leibniz Institute for Psychology). https://doi.org/10.23668/PSYCHARCHIVES.4809
Carter, C. S. (1998). Neuroendocrine perspectives on social attachment and love. Psychoneuroendocrinology, 23(8), 779–818. https://doi.org/10.1016/s0306-4530(98)00055-9
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. https://doi.org/10.1177/2515245919847196
Chu, P. S., Saucier, D. A., & Hafner, E. (2010). Meta-analysis of the relationships between social support and well-being in children and adolescents. Journal of Social and Clinical Psychology, 29(6), 624–645. https://doi.org/10.1521/jscp.2010.29.6.624
Coan, J. A., Beckes, L., Gonzalez, M. Z., Maresh, E. L., Brown, C. L., & Hasselmo, K. (2017). Relationship status and perceived support in the social regulation of neural responses to threat. Social Cognitive and Affective Neuroscience, 12(10), 1574–1583. https://doi.org/10.1093/scan/nsx091
Coan, J. A., & Sbarra, D. A. (2015). Social Baseline Theory: The social regulation of risk and effort. Current Opinion in Psychology, 1, 87–91. https://doi.org/10.1016/j.copsyc.2014.12.021
Coan, J. A., Schaefer, H. S., & Davidson, R. J. (2006). Lending a hand: Social regulation of the neural response to threat. Psychological Science, 17(12), 1032–1039. https://doi.org/10.1111/j.1467-9280.2006.01832.x
Cohen, S. (2004). Social relationships and health. American Psychologist, 59(8), 676–684. https://doi.org/10.1037/0003-066x.59.8.676
Cohen, S., Doyle, W. J., Skoner, D. P., Rabin, B. S., & Gwaltney, J. M., Jr. (1997). Social ties and susceptibility to the common cold. Journal of the American Medical Association, 277(24), 1940–1944. https://doi.org/10.1001/jama.1997.03540480040036
Cohen, S., & Wills, T. A. (1985). Stress, social support, and the buffering hypothesis. Psychological Bulletin, 98(2), 310–357. https://doi.org/10.1037/0033-2909.98.2.310
Ditzen, B., Neumann, I. D., Bodenmann, G., von Dawans, B., Turner, R. A., Ehlert, U., & Heinrichs, M. (2007). Effects of different kinds of couple interaction on cortisol and heart rate responses to stress in women. Psychoneuroendocrinology, 32(5), 565–574. https://doi.org/10.1016/j.psyneuen.2007.03.011
Du, J., Huang, J., An, Y., & Xu, W. (2018). The relationship between stress and negative emotion: The mediating role of rumination. Clinical Research and Trials, 4(1), 1–5. https://doi.org/10.15761/crt.1000208
Ein-Dor, T., Coan, J. A., Reizer, A., Gross, E. B., Dahan, D., Wegener, M. A., Carel, R., Cloninger, C. R., & Zohar, A. H. (2015). Sugarcoated isolation: Evidence that social avoidance is linked to higher basal glucose levels and higher consumption of glucose. Frontiers in Psychology, 6, 492. https://doi.org/10.3389/fpsyg.2015.00492
Epskamp, S., & Nuijten, M. B. (2016). Statcheck: Extract statistics from articles and recompute p values. http://CRAN.R-project.org/package=statcheck
Fraley, R. C., Heffernan, M. E., Vicary, A. M., & Brumbaugh, C. C. (2011). The experiences in close relationships—Relationship Structures Questionnaire: A method for assessing attachment orientations across relationships. Psychological Assessment, 23(3), 615–625. https://doi.org/10.1037/a0022898
Fried, E. I. (2017). The 52 symptoms of major depression: Lack of content overlap among seven common depression scales. Journal of Affective Disorders, 208, 191–197. https://doi.org/10.1016/j.jad.2016.10.019
Goessl, V. C., Curtiss, J. E., & Hofmann, S. G. (2017). The effect of heart rate variability biofeedback training on stress and anxiety: A meta-analysis. Psychological Medicine, 47(15), 2578–2586. https://doi.org/10.1017/s0033291717001003
Golden, J., Conroy, R. M., Bruce, I., Denihan, A., Greene, E., Kirby, M., & Lawlor, B. A. (2009). Loneliness, social support networks, mood and wellbeing in community-dwelling elderly. International Journal of Geriatric Psychiatry, 24(7), 694–700. https://doi.org/10.1002/gps.2181
Goncharova, M., Silan, M. A., lameh, J. E., Dujols, O., Stoianova, T., Sparacio, A., Adetula, A., & IJzerman, H. (2022). The CO-RE Lab Lab Philosophy. https://psyarxiv.com/6jmhe/
Grazuleviciene, R., Vencloviene, J., Kubilius, R., Grizas, V., Dedele, A., Grazulevicius, T., Ceponiene, I., Tamuleviciute-Prasciene, E., Nieuwenhuijsen, M. J., Jones, M., & Gidlow, C. (2015). The effect of park and urban environments on coronary artery disease patients: A randomized trial. BioMed Research International, 2015, 1–9. https://doi.org/10.1155/2015/403012
Harandi, T. F., Taghinasab, M. M., & Nayeri, T. D. (2017). The correlation of social support with mental health: A meta-analysis. Electronic Physician, 9(9), 5212–5222. https://doi.org/10.19082/5212
Harrer, M., Cuijpers, P., Furukawa, T. A., Ebert, D. D. (2021). Doing meta-analysis with R: A hands-on guide. Chapman Hall/CRC Press. https://doi.org/10.1201/9781003107347
Hedges, L. V., Olkin, I. (1985). Statistical methods for meta-analysis. Academic Press.
Hong, S., Reed, W. R. (2021). Using Monte Carlo experiments to select meta-analytic estimators. Research Synthesis Methods, 12(2), 192–215. https://doi.org/10.1002/jrsm.1467
IJzerman, H., Hadi, R., Coles, N. A., Paris, B., Sarda, E., Fritz, W., Klein, R. A., Ropovik, I. (2022). Social Thermoregulation: A Meta-Analysis. https://doi.org/10.31234/osf.io/fc6yq
IJzerman, H., Lewis, N. A. Jr., Przybylski, A. K., Weinstein, N., DeBruine, L., Ritchie, S. J., Vazire, S., Forscher, P. S., Morey, R. D., Ivory, J. D., Anvari, F. (2020). Use caution when applying behavioural science to policy. Nature Human Behaviour, 4(11), 1092–1094. https://doi.org/10.1038/s41562-020-00990-w
Ioannidis, J. P. A. (2008). Why most discovered true associations are inflated. Epidemiology, 19(5), 640–648. https://doi.org/10.1097/ede.0b013e31818131e7
John, L. K., Loewenstein, G., Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
John, O. P., Donahue, E. M., Kentle, R. L. (1991). Big Five Inventory (BFI) [Data set]. APA PsycTests.
Johnson, S. M., Moser, M. B., Beckes, L., Smith, A., Dalgleish, T., Halchuk, R., Hasselmo, K., Greenman, P. S., Merali, Z., Coan, J. A. (2013). Soothing the threatened brain: Leveraging contact comfort with emotionally focused therapy. PLoS One, 8(11), e79314. https://doi.org/10.1371/journal.pone.0079314
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., Aveyard, M., Axt, J. R., Babalola, M. T., Bahník, Š., Batra, R., Berkics, M., Bernstein, M. J., Berry, D. R., Bialobrzeska, O., Binan, E. D., Bocian, K., Brandt, M. J., Busching, R., … Nosek, B. A. (2018). Many labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225
Kolek, L., Ropovik, I., Sisler, V., van Oostendorp, H., Brom, C. (2022). Video games and attitude change: A Meta-analysis. PsyArxiv. https://doi.org/10.31234/osf.io/8y7jn
Lakey, B., Cronin, A. (2008). Low social support and major depression: Research, theory and methodological issues. In K. S. Dobson D. J. A. Dozois (Eds.), Risk factors in depression (pp. 385–408). Elsevier Academic Press. https://doi.org/10.1016/b978-0-08-045078-0.00017-4
Landis, J. R., Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174. https://doi.org/10.2307/2529310
Lazarus, R. S., Folkman, S. (1984). Stress, appraisal, and coping. Springer.
Lee, J., Park, B.-J., Tsunetsugu, Y., Kagawa, T., Miyazaki, Y. (2009). Restorative effects of viewing real forest landscapes, based on a comparison with urban landscapes. Scandinavian Journal of Forest Research, 24(3), 227–234. https://doi.org/10.1080/02827580902903341
Levens, S. M., Elrahal, F., Sagui, S. J. (2016). The role of family support and perceived stress reactivity in predicting depression in college freshman. Journal of Social and Clinical Psychology, 35(4), 342–355. https://doi.org/10.1521/jscp.2016.35.4.342
Lüdecke, D. (2017). esc: Effect Size Computation for Meta Analysis [R package version 0.3.1]. https://CRAN.R-project.org/package=esc
Maggio, L. A., Tannery, N. H., Kanter, S. L. (2011). Reproducibility of literature search reporting in medical education reviews. Academic Medicine, 86(8), 1049–1054. https://doi.org/10.1097/acm.0b013e31822221e7
Marselle, M., Irvine, K., Warber, S. (2014). Examining group walks in nature and multiple aspects of well-being: A large-scale study. Ecopsychology, 6(3), 134–147.
Maxwell, S. E., Lau, M. Y., Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498. https://doi.org/10.1037/a0039400
McShane, B. B., Böckenholt, U., Hansen, K. T. (2016). Adjusting for publication bias in meta-analysis. Perspectives on Psychological Science, 11(5), 730–749. https://doi.org/10.1177/1745691616662243
Moreau, D., Gamble, B. (2022). Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. Psychological Methods, 27(3), 426–432. https://doi.org/10.1037/met0000351
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 943–952. https://doi.org/10.1126/science.aac4716
Ouzzani, M., Hammady, H., Fedorowicz, Z., Elmagarmid, A. (2016). Rayyan—a web and mobile app for systematic reviews. Systematic Reviews, 5(1), 210–220. https://doi.org/10.1186/s13643-016-0384-4
Pessoa, L. (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9(2), 148–158. https://doi.org/10.1038/nrn2317
Phelps, E. A. (2006). Emotion and cognition: Insights from studies of the human amygdala. Annual Review of Psychology, 57(1), 27–53. https://doi.org/10.1146/annurev.psych.56.091103.070234
Pustejovsky, J. (2017). You wanna PEESE of d’s? https://www.jepusto.com/pet-peese-performance/
Pustejovsky, J. (2020). clubSandwich: Cluster-robust (sandwich) variance estimators with small-sample corrections [R package version 0.4.2]. https://CRAN.R-project.org/package=clubSandwich
Pustejovsky, J., Tipton, E. (2022). Meta-analysis with robust variance estimation: Expanding the range of working models. Prevention Science, 23(3), 425–438. https://doi.org/10.1007/s11121-021-01246-3
Puterman, E., DeLongis, A., Pomaki, G. (2010). Protecting us from ourselves: Social support as a buffer of trait and state rumination. Journal of Social and Clinical Psychology, 29(7), 797–820. https://doi.org/10.1521/jscp.2010.29.7.797
Reblin, M., Uchino, B. N. (2008). Social and emotional support and its implication for health. Current Opinion in Psychiatry, 21(2), 201–205. https://doi.org/10.1097/yco.0b013e3282f3ad89
Revelle, W., Condon, D. M. (2018). Reliability. In P. Irwing, T. Booth, D. Hughes (Eds.), The Wiley Handbook of Psychometric Testing (pp. 709–749). Wiley-Blackwell. https://doi.org/10.1002/9781118489772.ch23
Revenson, T. A., Schiaffino, K. M., Majerovitz, S. D., Gibofsky, A. (1991). Social support as a double-edged sword: The relation of positive and problematic support to depression among rheumatoid arthritis patients. Social Science Medicine, 33(7), 807–813. https://doi.org/10.1016/0277-9536(91)90385-p
Richard, F. D., Bond, C. F., Jr., Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7(4), 331–363. https://doi.org/10.1037/1089-2680.7.4.331
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638
Schardt, C., Adams, M. B., Owens, T., Keitz, S., Fontelo, P. (2007). Utilization of the PICO framework to improve searching PubMed for clinical questions. BMC Medical Informatics and Decision Making, 7(1), 1–6. https://doi.org/10.1186/1472-6947-7-16
Schneiderman, N., Ironson, G., Siegel, S. D. (2005). Stress and health: Psychological, behavioral, and biological determinants. Annual Review of Clinical Psychology, 1(1), 607–628. https://doi.org/10.1146/annurev.clinpsy.1.102803.144141
Schwarzer, R., Leppin, A. (1989). Social support and health: A meta-analysis. Psychology Health, 3(1), 1–15. https://doi.org/10.1080/08870448908400361
Simons, D. J., Shoda, Y., Lindsay, D. S. (2017). Constraints on generality (COG): A proposed addition to all empirical papers. Perspectives on Psychological Science, 12(6), 1123–1128. https://doi.org/10.1177/1745691617708630
Simonsohn, U., Nelson, L. D., Simmons, J. P. (2014). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534–547. https://doi.org/10.1037/a0033242
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., Esterling, K. M., Nosek, B. A. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997. https://doi.org/10.1038/s41562-021-01142-4
Soga, M., Evans, M. J., Tsuchiya, K., Fukano, Y. (2021). A room with a green view: The importance of nearby nature for mental health during the COVID-19 pandemic. Ecological Applications, 31(2), 2248. https://doi.org/10.1002/eap.2248
Sparacio, A., Ropovik, I., Jiga-Boy, G. M., Forscher, P. S., Paris, B., IJzerman, H. (2022). Stress regulation via self-administered mindfulness and biofeedback interventions in adults: A pre-registered meta-analysis. PsyArXiv. https://psyarxiv.com/zpw28/
Stanley, T. D., Doucouliagos, H. (2014). Meta-regression approximations to reduce publication selection bias. Research Synthesis Methods, 5(1), 60–78. https://doi.org/10.1002/jrsm.1095
Stanley, T. D., Doucouliagos, H. (2017). Neither fixed nor random: Weighted least squares meta-regression. Research Synthesis Methods, 8(1), 19–42. https://doi.org/10.1002/jrsm.1211
Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H.-Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., … Higgins, J. P. T. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, l4898. https://doi.org/10.1136/bmj.l4898
Sutton, A. J., Duval, S. J., Tweedie, R. L., Abrams, K. R., Jones, D. R. (2000). Empirical assessment of effect of publication bias on meta-analyses. BMJ, 320(7429), 1574–1577.
Szabelska, A., Pollet, T. V., Dujols, O., Klein, R. A., IJzerman, H. (2022). A tutorial for exploratory research: An eight-step approach. Psyarxiv. https://psyarxiv.com/cy9mz/download/?format=pdf
Thorsteinsson, E. B., James, J. E. (1999). A meta-analysis of the effects of experimental manipulations of social support during laboratory stress. Psychology Health, 14(5), 869–886. https://doi.org/10.1080/08870449908407353
Ulrich, R. S. (1979). Visual landscapes and psychological well-being. Landscape Research, 4(1), 17–23. https://doi.org/10.1080/01426397908705892
Ulrich, R. S. (1983). Aesthetic and affective response to natural environment. In I. Altman J. F. Wohlwill (Eds.), Behavior and the Natural Environment (pp. 85–125). Springer. https://doi.org/10.1007/978-1-4613-3539-9_4
Van Aert, R. C. M., Van Assen, M. A. L. M. (2021). Correcting for publication bias in a meta-analysis with the p-uniform* method. https://osf.io/preprints/metaarxiv/zqjr9/
Vevea, J. L., Woods, C. M. (2005). Publication bias in research synthesis: Sensitivity analysis using a priori weight functions. Psychological Methods, 10(4), 428–443. https://doi.org/10.1037/1082-989x.10.4.428
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://doi.org/10.18637/jss.v036.i03
Viechtbauer, W., Cheung, M. W.-L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1(2), 112–125. https://doi.org/10.1002/jrsm.11
Watson, D., Clark, L. A., Carey, G. (1988). Positive and negative affectivity and their relation to anxiety and depressive disorders. Journal of Abnormal Psychology, 97(3), 346–353. https://doi.org/10.1037/0021-843x.97.3.346
Wickham, H., Averick, M., Bryan, J., Chang, W., McGowan, L., François, R., Grolemund, G., Hayes, A., Henry, L., Hester, J., Kuhn, M., Pedersen, T., Miller, E., Bache, S., Müller, K., Ooms, J., Robinson, D., Seidel, D., Spinu, V., … Yutani, H. (2019). Welcome to the tidyverse. Journal of Open Source Software, 4(43), 1686. https://doi.org/10.21105/joss.01686
Wittmann, A., Braud, M., Dujols, O., Forscher, P., IJzerman, H. (2022). Individual differences in adapting to temperature in French students are only related to attachment avoidance and loneliness. Royal Society Open Science, 9(5), 201068. https://doi.org/10.1098/rsos.201068
Zellars, K. L., Perrewé, P. L. (2001). Affective personality and the content of emotional social support: Coping in organizations. Journal of Applied Psychology, 86(3), 459–467. https://doi.org/10.1037/0021-9010.86.3.459
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
