The present preregistered study tested the efficacy of a strategic resource use intervention among Hungarian first-year economics students in a gateway math course (N = 261). According to the results, the intervention had no impact on test scores, grade point average, or dropout rate, and it did not lead to psychological benefits. Despite following the recommended procedure for adapting the materials to the given university context, we found neither the expected main nor moderated effects among these students. There were some dissimilarities between the original strategic resource use intervention (Chen et al., 2017) and the present one. In the discussion, we detail the potential adaptation-related, implementation-related, and theoretical reasons behind the lack of the expected effects.
The strategic resource use intervention was designed by Chen et al. (2017) to strengthen and extend students' intention to use multiple beneficial resources when preparing for challenging exams. This intervention had previously been tested in small (Chen et al., 2017) and large-scale studies (Chen et al., 2022) and demonstrated favorable effects on course performance. However, little is known about the strategic resource use intervention outside the US context. In the present preregistered Hungarian study, we adapted this intervention to an Eastern European economics class context. We chose a mathematics course for first-year economics students that has historically been one of the most challenging courses for students. The goal of the present work was to take the first steps in examining whether Hungarian economics students benefit from this intervention as their American peers did (Chen et al., 2022).
Prior research suggests that using systematic strategies in various decisions pays off. Comprehensive studies showed that people characterized by a strategic mindset (i.e., who ask themselves about better strategies to cope with challenges or insufficient progress) displayed better academic performance in college; reported better progress toward important professional, educational, or health goals; and also practiced more and performed better on challenging tasks (Chen et al., 2020). One academic implementation of this strategic mindset is the strategic resource use intervention (Chen et al., 2017).
This intervention belongs to the family of wise social psychological interventions in the field of education (Walton & Wilson, 2018; Yeager & Walton, 2011), which aim to capitalize on precise psychological mechanisms and unleash students' learning potential. In its content, the strategic resource use intervention is close to educational self-regulation interventions (e.g., Bembenutty, 2013; Pape et al., 2013; Weinstein & Acee, 2013). According to a recent meta-analysis, the family of self-regulated learning interventions can contribute to enhanced metacognitive strategies, resource management strategies, motivational outcomes, and academic performance (see Theobald, 2021).
More precisely, it is a self-administered intervention that focuses on motivating students' strategic use of their learning resources (Chen et al., 2017), addressing the common issue that many students do not strategically self-regulate their learning (e.g., Zimmerman, 2011; Zimmerman & Martinez-Pons, 1988). The theoretical rationale of this intervention is grounded in the idea that effective use of metacognitive self-regulation (where students actively direct their mental processes toward their learning goals) can increase motivation and improve academic performance (e.g., Pintrich & De Groot, 1990; Richardson et al., 2012). This intervention encourages students to elaborate on various ways to approach their exam preparation and the learning process effectively using the available resources (e.g., practice exam exercises, recorded talks, discussions with classmates, certain textbook chapters). The intervention helps them strategize about when and how to prepare and use particular resources to master the material at the level required by the exams. Goal-directed planning of the learning process using various adequate resources is supposed to help students translate their intentions into actions (Gollwitzer, 1999; Gollwitzer & Brandstätter, 1997) and thus perform better on high-stakes exams. This approach, in the context chosen for adaptation, was informed by a "pre-test" discussion with students from a previous cohort, which revealed that many relied primarily on their notes to prepare for exams, indicating a need for broader and more strategic resource use. The intervention aims to instill better resource use behaviors, potentially leading to improved academic outcomes.
To the best of our knowledge, besides the seminal paper on US students (Chen et al., 2017), only one published large-scale study (Chen et al., 2022) has replicated the effectiveness of this intervention, in a large and diverse US sample; it also found that an application-based, more advanced form of the strategic resource use intervention (i.e., the "Exam Playbook") led to a 0.18 standard deviation benefit on students' exam scores. Two boundary conditions are worth mentioning: first, the earlier students started to use the intervention material, the larger the benefit; students who received the intervention close to their exam benefited less from it. Second, the intervention led to better exam performance if students received it multiple times.
To the best of our knowledge, this intervention has never been tested in a non-US sample, and it has never been replicated by a research team independent of the authors of the original study. In the present preregistered study, using an active control group and after a careful adaptation to the academic context, we aimed to fill this gap and to adapt the strategic resource use intervention to the Hungarian higher education context in an introductory math course for economics students. Following the guidelines of Bryan, Yeager, and O'Brien (2019), before running the study, we contacted the authors of the original studies and asked for the materials of the Chen et al. (2017) work along with their advice concerning the adaptation.
As this is a relatively recent intervention without accumulated knowledge about its boundary conditions, we aimed to examine our preregistered hypotheses as similarly as possible to the original study. Accordingly, we aimed to assess the psychological and academic performance benefits of the intervention. To follow a logical order, we reordered the preregistered hypotheses. Compared to the control condition, we expected that the treatment would lead to better immediate psychological outcomes, including higher self-reported self-efficacy in preparing for the midterm exam (H1) and stronger intentions to use effective learning strategies while preparing for the upcoming midterm (H2). Additionally, in terms of immediate measures of effort, we hypothesized that, in the treatment condition (compared to the control), students would spend more time on practice exercises for the exam (H3) and more time reviewing the solutions to these practice tasks (H4). Regarding academic outcomes, we anticipated that the treatment (compared to the control) would result in higher scores on the upcoming midterm exam (H5), improved course grades (H6), and lower dropout rates (H7). We also expected these outcomes to hold even after controlling for individual differences related to midterm scores, course grades, and dropout rates, such as gender, minority status, first-generation status, lower prior grades, and previous failed math exams. In addition, we expected that the time spent on practice exercises and solution checking (which were part of both the intervention and control materials) would mediate the relationship between the conditions and midterm scores (H8a, H8b) as well as final course grades (H9a, H9b). We also aimed to explore the moderating effects of relevant individual difference variables such as prior grades, gender, minority status, and first-generation status.
We assumed that female, minority, and first-generation students would benefit more from the intervention than their more privileged peers.
Method
Participants and University Context
In the original study by Chen et al. (2017), 171 students participated in the first intervention (Study 1) and 190 students in the second (Study 2). According to the a priori power analysis, these sample sizes were sufficient to detect an effect of at least 2.0 percentage points. To be comparable with these samples and to reach approximately 180 students, we gathered data in a first-year mathematics class of 500 students, where we expected an approximately 40% participation rate. Participants were first-year students of the economics program at a selective and prestigious public university (ranked 3rd out of the 38 Hungarian universities; HVG, 2023): N = 261; Mage = 19.42, SDage = 1.62; 38.7% female, 3.07% non-binary; 90.42% Caucasian (demographic data were missing for 11 students). Only one-fifth (20.69%) of the students did not have a parent with a post-secondary education, which is a low proportion compared to other higher education institutions.
Targeted students were enrolled in an introductory math class where participation in the study was voluntary (and not financially compensated). Participation in the intervention was encouraged in two ways: first, the course leader (second author of the present study) created unique practice exercises with solutions to support the preparation for the midterm. The first author integrated these practice exercises and solutions into the intervention materials so that students could access them only through participation (active consent was also requested to use their data for research purposes). Second, going through the practice exercises and participating in the study was highly recommended by the course leader (second author). The intervention materials were distributed to the students two weeks before the first midterm of the course. Following the preregistered stopping rule (preregistration link: https://osf.io/ehndb), inclusion in the analysis was closed three days before the first midterm.
From all 314 survey response attempts, fifty-three were removed for the following reasons: thirty-four were registered after the preregistered stopping rule, thirteen did not reach any of the conditions, and three students each attempted twice (their six duplicated attempts were removed). Of the remaining 261 randomly allocated students (ntreatment = 130, ncontrol = 131), seventeen did not provide a valid Student ID, which prevented us from linking their responses to their midterm scores and course grades, and eleven did not finish the program. However, students who did not provide correct Student IDs and who did not finish the materials were retained in the analyses (see Figure 1).
Notes. Immediate: refers to immediate psychological outcomes and measures of effort recorded at the end of the program; Scores: refers to the scores from the first midterm exam; Grades: refers to the course grades at the end of the semester.
Procedure and Intervention Content
The strategic resource use intervention content was adapted from the materials of Chen et al. (2017), primarily adjusted for an introductory math course, and integrated unique practice exercises with their solutions to support preparation for the midterm. The intervention was administered in one session lasting 36.52 minutes (outliers excluded based on the interquartile range, IQR). The study was approved by the ethics committee of the first author's university.
The intervention content was presented as an online survey in which participants were asked to share their math-related attitudes and behaviors. Half of the participants were randomly assigned to receive the intervention materials. Exam preparation exercises were incorporated into both the intervention and control content. In the treatment condition, students read a message "that successful high achievers use resources strategically when preparing for exam" (Chen et al., 2017, p. 776) and then completed the first exam preparation exercise. After the preparation exercise (which also served as a prompt to think through the tasks of the upcoming exam), students in the treatment condition indicated on a checklist which of 20 course-relevant resources they intended to use to prepare effectively for the midterm exam. The instructions were identical to the original ones. The resources also overlapped substantially with those presented in the original intervention (i.e., lecture notes, textbook readings, consultation with the instructor, peer discussions, mentoring, practice exam questions, etc.) and, based on suggestions from the course instructor, also included some new elements, such as watching video materials from the previous year and using specific notes and books (see Appendix S1 in the Supplementary Materials: https://osf.io/u8jh6). Subsequently, as in the original intervention, participants answered two open-ended questions. In the first, based on the checklist, they were asked to explain why each selected resource would be useful for exam preparation and how it could help them learn effectively. In this exercise, as in the original intervention, students were encouraged to think carefully about the purpose of each resource, supported by a model example.
In the second open-ended question, again in line with the original work, they were asked to plan their learning process for the exam and demonstrate when, where and how they planned to use their chosen resources and were also encouraged to create a plan that was specific, realistic and useful for their exam preparation to help them effectively turn their strategic intentions into concrete actions. This exercise was also supported by a model example.
In the present work, we implemented an active control group with a structure very similar to the intervention condition (the median duration of the control material was 34.53 minutes, outliers excluded based on the IQR). In this condition, however, students provided a detailed description of how to maintain healthy sleeping habits. Students in the control condition were asked about their sleeping habits and were then reminded that students often sacrifice sleep during intense exam periods, which can affect their performance. They were also asked to give advice to other students on how to practice healthy sleep habits as the academic load increases.
The randomly allocated intervention and control materials were embedded in an exam preparation exercise with pre- and post-measures. Similarly to the original materials (Chen et al., 2017), students were informed that they could obtain up to 20 points on their upcoming midterm exam. They were then asked several questions: their desired grade for the upcoming midterm, their motivation to prepare for the midterm, the amount of effort they felt they had already invested in the math course, and their anticipated location and method of studying. For the latter, they indicated how confident they were about where and how they planned to prepare. In the pre-intervention survey, before the random allocation (to the intervention and control groups), they responded to eight items regarding the perceived effectiveness of their past learning behaviors during the course. In the post-intervention survey, we used the same measure with a modification to reflect the students' intentions about how they would approach studying for the upcoming math exam. Furthermore, we asked students about their perceived self-efficacy to cope with various obstacles during preparation for the exams. Finally, sociodemographic questions were asked.
After providing informed consent, participants were automatically randomly assigned to conditions; therefore, everybody was blind to the conditions. To keep students' sensitive information separate from course performance, raw data with Student IDs were accessed only by the first and last authors of this study. After linking responses to the official academic performance records, these sensitive identifiers were removed from the database. The second and third authors could access the data only after this step. Anonymized data are available on the project's OSF page: https://osf.io/xgtns/. Beyond the randomization of intervention and control materials, the integrated math practice exercises and solutions were identical in the two conditions.
Measures
Pre-intervention self-reflection on learning. This measure was initially adapted by Chen et al. (2017) from the metacognitive self-regulation subscale of Pintrich et al.’s (1991) Motivated Strategies for Learning Questionnaire. The eight-item scale evaluated how much students adapted their learning to the class, adjusted their learning approaches when they were ineffective, and reflected on their learning effectiveness and performance. The last item from the initial adaptation of Chen et al. (2017) was slightly modified to be more realistic for the situation of the participants (the modified version stated “After completing any task related to the class, I reflected on how my effectiveness was connected to the way I approached the task.”). Other items were not modified. Each item was scored on a five-point Likert scale ranging from 1 (almost never) to 5 (most of the time). Although the scale showed acceptable internal consistency (α = .78), confirmatory factor analysis showed poor fit for the one-factor model (CFI = .793, TLI = .710, RMSEA = .126 [90% CI = 0.103 - 0.149]), while exploratory factor analysis suggested a two-factor structure. Three items were removed due to low factor loadings, low communality, or high cross-loading. The final two-factor model demonstrated excellent fit (CFI = .998, TLI = .994, RMSEA = .019 [90% CI = 0.000 - 0.093]). The two reliable subfactors, Monitoring Learning Effectiveness (λ=0.489 to 0.784, ω=0.690) and Dealing with Difficulties (λ=0.557 to 0.768, ω=0.615), were named based on item content. Standardized parameter estimates are reported in Appendix S2, Table S1 in the Supplementary Materials (https://osf.io/u8jh6).
Post-intervention intentions to approach learning. This measure was also based on an adaptation of the metacognitive self-regulation subscale of Pintrich et al.'s (1991) Motivated Strategies for Learning Questionnaire (Chen et al., 2017), as was the pre-intervention measure. However, whereas the pre-intervention scale focused on learning strategies related to past class preparation behaviors, the items on the post-intervention scale focused on students' intentions to use these effective learning approaches while preparing for the upcoming midterm exam. Respondents indicated the extent to which they intended to use these approaches on a five-point Likert scale ranging from 1 (not at all) to 5 (completely). Similarly to the pre-intervention scale, confirmatory factor analysis showed poor fit for the one-factor model (CFI = .687, TLI = .562, RMSEA = .188 [90% CI = 0.167 - 0.210]), while exploratory factor analysis suggested a two-factor structure. Three items were removed due to low factor loadings, low communality, or high cross-loading. The final two-factor model demonstrated excellent fit (CFI = .995, TLI = .988, RMSEA = .037 [90% CI = 0.000 - 0.099]). The two reliable subfactors, Monitoring Learning Effectiveness (λ=0.575 to 0.837, ω=0.788) and Dealing with Difficulties (λ=0.713 to 0.778, ω=0.715), were named based on item content. Standardized parameter estimates are reported in Appendix S2, Table S1 in the Supplementary Materials (https://osf.io/u8jh6).
Post-intervention self-efficacy to prepare for the midterm. An eight-item scale was used to measure participants' post-intervention self-efficacy to use the learning strategies necessary for midterm preparation, such as keeping their attention on learning, dealing with distractions, maintaining interest and perseverance in learning, achieving learning goals in preparation, and working hard on preparatory tasks. Respondents indicated their self-efficacy level using a five-point Likert scale ranging from 1 (not at all) to 5 (completely). The confirmatory factor analysis showed inadequate fit for the one-factor model (CFI = .923, TLI = .893, RMSEA = .086 [90% CI = 0.064 - 0.109]), while exploratory factor analysis suggested a one-factor structure. Two items were removed due to their low communality and high uniqueness values. The final one-factor model demonstrated acceptable fit on two of the three fit indices (CFI = .944, TLI = .907, RMSEA = .097 [90% CI = 0.065 - 0.132]). Parameter estimates (reported in Appendix S2, Table S1 in the Supplementary Materials: https://osf.io/u8jh6) revealed a well-defined factor (λ=0.569 to 0.815, ω=0.832).
Post-intervention academic performance records. The course instructor (second author) provided the students’ official first midterm scores (20-point scale) and final course grades (from October until mid-January) on a five-point scale (1 = worst grade ~E/F in the US grading system, 5 = best grade ~A in the US grading system).
Analytic Strategy
Statistical analyses were performed with R 4.2.2 (R Core Team, 2022). Using OLS multiple regression models, we assessed the difference between the treatment and control groups regarding midterm scores, grade point average, and time spent on practice exercises/solution checking. Logistic regression was used to assess the difference between the treatment and control groups in dropout rates. Differences between the treatment and control groups on the latent factors of self-efficacy in preparing for the midterm and the two subscales of the post-intervention self-reflection on effective learning were examined using structural equation modeling. When assessing the difference in the subscales of post-intervention self-reflection on effective learning between the treatment and control groups, we controlled for the corresponding pre-intervention self-reflection on effective learning factor. The use of structural equation modeling and the inclusion of control variables in the analysis of psychological outcomes are deviations from our preregistered analysis plan. For assessing the effect of the intervention on grade point average, midterm scores, and dropout rates, we first ran a simple model (with condition as the only predictor) and then controlled for gender, minority status, first-generation status, prior grades, and prior failed math exams.
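The two-step model specification described above can be illustrated with a minimal sketch. The study itself ran these models in R 4.2.2; the Python/statsmodels version below mirrors the logic on simulated data, and all variable names (condition, prior_gpa, midterm, dropout) are hypothetical placeholders, not the study's actual variables.

```python
# Illustrative sketch only: simulated data with a null treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 260
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),   # 0 = control, 1 = treatment
    "gender": rng.integers(0, 2, n),
    "first_gen": rng.integers(0, 2, n),
    "prior_gpa": rng.normal(3.5, 0.6, n),
})
# Only prior GPA predicts the midterm score; the treatment has no true effect
df["midterm"] = 4 + 3 * df["prior_gpa"] + rng.normal(0, 2, n)
df["dropout"] = rng.binomial(1, 0.08, n)

# Step 1: simple model with condition as the only predictor
m_simple = smf.ols("midterm ~ condition", data=df).fit()
# Step 2: the same model with the preregistered covariates added
m_adj = smf.ols("midterm ~ condition + gender + first_gen + prior_gpa",
                data=df).fit()
# Dropout is binary, so it is analyzed with logistic regression
m_drop = smf.logit("dropout ~ condition", data=df).fit(disp=0)

print(f"adjusted treatment effect: b = {m_adj.params['condition']:.2f}, "
      f"p = {m_adj.pvalues['condition']:.3f}")
```

With simulated null data, the condition coefficient hovers near zero in both steps, which is the pattern the intention-to-treat analyses below report for the real data.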
Similarly to the original intervention (Chen et al., 2017), we first conducted intention-to-treat analyses (including everybody who reached the random condition assignment) and then treatment-on-the-treated analyses (including only those students who followed the instructions). For inclusion in the treatment-on-the-treated analyses, in line with the preregistered plan, independent coders assessed whether students' responses to the open-ended questions adhered to the instructions. Cohen's kappa for one open-ended question was 1.00, indicating perfect agreement between coders, whereas Cohen's kappa for the other open-ended question was 0.92, reflecting a very high level of agreement. In cases of inconsistencies, the authors resolved discrepancies through discussion and agreement on the appropriate coding. Regarding immediate outcomes, data were analyzed for all students randomly assigned to the treatment vs. control condition. Only students who provided a correct Student ID were included in the analyses of academic records (midterm scores, course grades, dropouts). See the sample characteristics and attrition in Figure 1.
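The inter-coder agreement statistic used above can be computed from the two coders' label vectors; the binary adherence labels below are invented for illustration and do not reproduce the study's coding data.

```python
# Invented binary adherence labels from two hypothetical coders; Cohen's
# kappa corrects the raw agreement rate for agreement expected by chance.
from collections import Counter

coder_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
coder_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
n = len(coder_a)

# Observed agreement: share of responses both coders labeled identically
p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement: product of each coder's marginal proportions
pa, pb = Counter(coder_a), Counter(coder_b)
p_e = sum((pa[k] / n) * (pb[k] / n) for k in set(coder_a) | set(coder_b))

kappa = (p_o - p_e) / (1 - p_e)
print(round(kappa, 2))  # → 0.78
```

Here nine of ten labels match (p_o = 0.90), but chance alone would produce 0.54 agreement given the coders' marginals, so kappa is 0.78 rather than 0.90.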
We also explored the moderating effects of relevant individual difference variables such as prior grades, gender, minority status, and first-generation status. Finally, using structural equation modeling, we tested the mediating role of time spent on the exercises/solutions between the conditions and the midterm scores/grades. The use of structural equation modeling for the mediation analyses was not included in the preregistered analysis plan. The data and analysis code of the present study can be found on the project's OSF page: https://osf.io/xgtns/.
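The exploratory moderation analysis can be sketched as a condition × moderator interaction, probed with simple effects of condition at ±1 SD of the moderator (the approach used for prior GPA in the Results). The study used R; this Python sketch uses simulated data and hypothetical variable names.

```python
# Sketch of a moderation analysis with simple-effects probing.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 260
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),
    "prior_gpa": rng.normal(3.5, 0.6, n),
})
# Simulate a crossover: the treatment helps low-GPA and hurts high-GPA students
df["grade"] = (1 + 0.7 * df["prior_gpa"]
               - 0.6 * df["condition"] * (df["prior_gpa"] - 3.5)
               + rng.normal(0, 1, n))

# Mean-center the moderator so `condition` is the simple effect at the mean
df["gpa_c"] = df["prior_gpa"] - df["prior_gpa"].mean()
m_int = smf.ols("grade ~ condition * gpa_c", data=df).fit()

# Simple effects at +/- 1 SD of the moderator: re-center and refit
sd = df["prior_gpa"].std()
for shift, label in [(-sd, "1 SD below mean"), (sd, "1 SD above mean")]:
    df["gpa_s"] = df["gpa_c"] - shift
    m = smf.ols("grade ~ condition * gpa_s", data=df).fit()
    print(f"{label}: b = {m.params['condition']:.2f}, "
          f"p = {m.pvalues['condition']:.3f}")
```

Re-centering the moderator is a standard trick: after each shift, the coefficient on `condition` directly estimates the treatment effect at that level of the moderator.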
Results
Random assignment. Random assignment to conditions was mainly successful. Except for gender (z = 2.787, p = 0.005), no other variable assessed before the intervention differed significantly by condition (age, first-generation status, pre-intervention self-reported grade point average, and pre-intervention self-reflection on effective learning: all ps > 0.07).
Immediate Psychological Outcomes
Post-intervention self-efficacy to prepare for the midterm. There was no significant difference between conditions in terms of respondents’ self-efficacy to prepare for the midterm after the intervention, b = -0.004, z(247) = -0.064, p = 0.949, d = -0.004.
Post-intervention intentions to approach learning. Counterintuitively, after controlling for the pre-intervention Monitoring Learning Effectiveness subscale of self-reflection on learning, the intervention resulted in a lower level of post-intervention intentions to use the specific effective learning approaches while preparing for the upcoming midterm exam (compared to the control group), b = -0.179, z(248) = -2.14, p = 0.032, d = -0.128. After controlling for the pre-intervention Dealing with Difficulties subscale of self-reflection on learning, there was no significant difference between conditions regarding the post-intervention Dealing with Difficulties subscale (p = 0.645).
Immediate Measures of Effort
Time spent on practice exercises. Results show no significant difference between the treatment and control conditions in time spent on practice exercises (b = -0.79, t(248) = -0.548, p = 0.584, d = -0.07), with the same result in the treatment-on-the-treated analysis (b = 0.56, t(203) = 0.345, p = 0.731, d = 0.05). Students spent approximately seventeen minutes on the practice tasks in both the treatment (M = 16.39, SD = 11.09) and control conditions (M = 17.18, SD = 11.56).
Time spent on solution checking. Results show no significant difference between the treatment and control conditions in time spent checking the detailed and explained solutions (b = -0.69, t(239) = -0.868, p = 0.386, d = -0.11), which was also true in the treatment-on-the-treated analysis (b = -0.91, t(199) = -1.080, p = 0.281, d = -0.16). Students spent approximately three minutes checking the solutions in the treatment condition (M = 2.58, SD = 5.26) and three and a half minutes in the control condition (M = 3.39, SD = 6.42).
Academic Outcomes
First midterm scores. An OLS regression analysis tested the effect of condition on the first midterm exam scores. The results show no significant difference between the treatment and control conditions (b = -0.47, t(242) = -0.795, p = 0.427, d = -0.102), with students scoring an average of 12.72 points (SD = 4.68) in the control group and 12.25 points (SD = 4.50) in the treatment group out of a maximum of 20 points. There were no significant differences between the two conditions in the first midterm scores after controlling for individual differences (ps > 0.40), nor in the treatment-on-the-treated analysis (ps > 0.26).
Math course grades. An OLS regression analysis, reported below, tested the effect of condition on the course grade. Results show no significant difference between the treatment (M = 3.64, SD = 1.24) and control (M = 3.76, SD = 1.18) conditions (b = -0.120, t(242) = -0.775, p = 0.439, d = -0.10). There were no significant differences between the two conditions in the math course grades after controlling for individual differences (ps > 0.38), nor in the treatment-on-the-treated analysis (ps > 0.29).
Dropout. The results of the logistic regression show no significant difference between the treatment (n = 8) and control (n = 3) conditions in dropout numbers (B = 0.495, z(259) = 1.213, p = 0.225). There were no significant differences between the two conditions in math course attrition after controlling for individual differences (ps > 0.20), nor in the treatment-on-the-treated analysis (ps > 0.79).
Mediation analyses. We did not find any significant mediation by time spent on exercises/solutions between condition and the midterm scores/grades (all ps > 0.36).
Moderator analyses. The proportion of minority-status students was 5.6% (n = 14), which did not allow us to conduct a well-powered interaction analysis with minority status. We did not find a significant interaction of condition with gender (p = 0.22) or first-generation status (p = 0.75) with regard to the course grade. However, the interaction between condition and prior self-reported grade point average was significant (b = -0.34, t(238) = -2.165, p = 0.031, d = -0.28), without significant simple effects at one standard deviation below or above the mean of prior grade point average (p below the mean = 0.196, p above the mean = 0.068).
Discussion
The present preregistered study aimed to test the efficacy of a strategic resource use intervention (Chen et al., 2017) among Hungarian first-year economics students in an introductory math course. In a "pre-test" discussion with students from the previous cohort, we found that the majority of them used only their notes, and no other resources, to prepare for the exam; in the current study, the baseline measure of self-reflection on learning strategies also indicated a potential need for this type of intervention (M = 3.14, SD = 0.866 for the Monitoring Learning Effectiveness subscale, and M = 3.28, SD = 0.799 for the Dealing with Difficulties subscale). Nevertheless, applying an adaptation of the original materials to the given university context led to neither a main nor a moderated effect among these students (with only two exceptions). Specifically, compared to the control condition, the intervention had no effect on academic performance (i.e., midterm scores, final course grade, or course dropout rate) or on immediate measures of study effort (i.e., time spent on the midterm practice test and time spent reviewing practice test solutions). Furthermore, among the expected psychological benefits, the intervention had no effect on self-efficacy for midterm preparation or on the Dealing with Difficulties subscale of self-reflection on learning. However, after controlling for the pre-intervention Monitoring Learning Effectiveness subscale of self-reflection on learning, the intervention showed a negative effect on post-intervention intentions to use the specific learning approaches while preparing for the upcoming midterm exam.
Contrary to expectations, time spent on exam practice and solution review (which were part of both the intervention and control materials) did not mediate between conditions and academic performance outcomes (midterm scores and final course grade). In addition, the results do not show a significant interaction of condition with gender or first-generation status on the course grade, while the small proportion (5.6%) of minority-status students did not allow us to conduct a well-powered interaction analysis with this individual difference variable. However, among the relevant individual difference variables, the interaction of condition and self-reported prior GPA showed a significant effect on the final course grade. According to the results, the intervention had a non-significant positive effect on the grades of students with below-average prior GPAs and a non-significant negative effect on the grades of students with above-average prior GPAs.
In sum, the intervention did not lead to the expected academic performance and psychological benefits. The backlash against using specific learning approaches during exam preparation might reflect a form of reactance. It is possible that students were already aware of these resources, and the intervention, which they may have perceived as an attempt to persuade them to use these resources strategically, caused a reluctance to adopt these approaches (Steindl et al., 2015). Additionally, the intervention encouraged participants to select useful exam preparation resources, explain their choices, and plan when, where, and how to use them, using supportive, neutral, and non-controlling language. However, the incentive (access to exam exercises only through participation) may have created a controlling context, making students feel pressured. Controlling contexts, as shown in previous research (e.g., Deci et al., 1999), reduce autonomous motivation, which might also explain the lower intention to use these approaches. In this context, the treatment’s heavy focus on resources and learning strategies, compared to the control’s focus on sleep habits, may have heightened reactance and contributed to worse outcomes in the treatment group. The other unexpected result provided some non-significant hints that the intervention might benefit students with lower academic performance. Although this pattern is not counterintuitive (students with lower academic performance might use less diverse and less appropriate learning strategies), the present sample did not provide sufficient statistical power to confirm these tentative results. Consequently, these unexpected results offer little concrete guidance on how to improve the intervention materials or their implementation.
However, we found it useful to take a closer look at the differences between the present attempt and the original intervention, to learn from these results and identify potential reasons why the present intervention was less successful than we expected.
Although the original materials and protocol were closely followed during their adaptation, the lack of replication of the previous beneficial effects of the intervention may be due to some dissimilarities between the original strategic resource use intervention (Chen et al., 2017) and the present one (e.g., Bryan et al., 2019). Considering potential differences in the implementation context and target audience characteristics (e.g., McPartlan et al., 2020; Walton & Yeager, 2020), we can identify some apparent but likely less significant differences. For instance, the original intervention was tested among undergraduate students in an introductory statistics course, while the current intervention targeted first-year economics students in a math course. Both courses, despite their differences, were prerequisites for academic advancement. It is worth mentioning that the intervention might function differently in various class climates across institutions, and an instructor who lectures on a specific material can create a particular class culture. However, cultural differences can appear not only at this micro level.
Additionally, large-scale cultural differences might be seen as important contextual factors between the two populations. However, based on the available information about the target audiences of both the original and current interventions, we cannot identify salient or meaningful differences that could explain the lack of effects from a cultural standpoint. For example, data from the 2022 Programme for International Student Assessment (PISA) show that Hungarian students, the OECD average, and their American peers demonstrate similar results in areas such as persistence and confidence in self-directed learning. While Hungarian students exhibit intelligence mindset beliefs similar to the OECD average, they have a slightly more fixed mindset than their American peers and report a higher sense of belonging than both the OECD average and US students. Hungarian schools also experience greater shortages of educational materials and staff than schools in the USA and OECD countries on average (OECD, 2022). Additionally, a need for improved strategic resource use for learning was identified in both target audiences. Given these comparisons, from this cross-cultural perspective, it is likely that the intervention was implemented for a comparable target audience with similar “Client Characteristics” (see McPartlan et al., 2020; Weiss et al., 2014).
Nevertheless, we identified additional differences related to measurement, materials, and intervention implementation, which may explain the lack of the expected effect. Details of these differences are provided in Table 1. To facilitate the use of insights from this work in intervention design and to support further replication of the Strategic Resource Use Intervention, we present these details in the following structure: First, we describe the pre-intervention differences, followed by a comparison of the intervention materials, post-intervention measures, and outcome measures. Next, we outline the additional instructional strategies or learning protocols, variations in treatment exposure (timing and dosage), and finally, variations in study design features, such as the control condition and incentives. We aim to be precise and detail-oriented in reporting these differences, which might provide insights and hints about mistakes to avoid in future adaptations of this intervention.
Program Elements | Original Intervention (Chen et al., 2017) | Hungarian adaptation (present study) | Assumed Impact of Changes on Findings |
1. Pre-intervention, intervention and post-intervention measures and materials | |||
1.1. Pre-intervention instructions and measures | Before the pre-exam survey, participants were reminded that their upcoming exam was worth 100 points. | At the beginning of the preparation tasks, all students were reminded that the tasks were worth a total of 20 points. | Minor. |
Students were asked to report their desired grade on the upcoming exam. The original study also measured students’ pre-exam negative affect, such as anxiety, nervousness, fearfulness, and stress about the exam, as well as perceived control over the test outcomes. | They were asked to write down their desired grade on the upcoming exam, but we did not ask them to report their affective states or perceived control over their performance. | Minor. | |
At the beginning of the original survey, students were requested to reply to the following questions: - “How motivated are you to get that grade?” - “How important is it to you to achieve that grade?” - “How confident are you in achieving that grade?” | At the beginning of the Hungarian replication, these questions were slightly modified and adjusted to the context, and we added an extra question regarding efforts: - “How motivated are you to prepare for the first midterm?”, - “How well do you know where you are going to study for the first midterm?” - “How well do you know how you are going to study for the first midterm?” - “So far this semester, how much effort have you put into completing the math course?” | Minor. | |
1.2. Intervention materials | In the first part of the intervention, students in the treatment condition read a message telling them that successful high achievers use resources strategically when preparing for exams. | This part of the intervention was adapted without alteration. | No difference. |
The Strategic Resource Use exercise prompted students to consider the upcoming exam format deliberately. | Both treatment and control conditions included a sample midterm exam (tasks and instructions in the upcoming midterm exam format). Consequently, students did not have to “deliberately consider” its format. | Minor (actively working on the sample exam, rather than just thinking about it, could even have a positive impact). | |
Students were asked to indicate class resources they wanted to use to maximize their learning effectiveness (from a list of 15 available). The comprehensive class resource checklist was developed in collaboration with the course instructor. | This part of the intervention was adapted without meaningful change (with a list of 20 relevant resources that overlapped substantially with the elements used by Chen et al., 2017). | Minor. | |
Students in the treatment condition answered two open-ended questions after completing the checklist. | This part of the intervention was adapted without alteration. | No difference. | |
First, they described why they thought each selected resource would be useful for their exam preparation. They were also asked explicitly to describe how each chosen resource would help them study effectively. A model example was provided to students. | This part of the intervention was adapted without alteration. | No difference. | |
Second, students were asked to describe how they planned to prepare for the upcoming exam. They were told that their plan should be as specific as possible, while also being realistic and useful for their exam preparation. The instructions did not explicitly suggest that they had to link when, where, and how they planned to use each chosen resource; however, the model example provided to students included these elements. | This part of the intervention was adapted without alteration. | No difference. | |
The instructions for the two open-ended questions were supplemented with specific examples to help students with their explanations. | This part of the intervention was adapted without alteration. | No difference. | |
1.3. Post-intervention measures | Self-reflections on how to learn effectively - at the end of the class (immediately after students received their grades) in Study 2, an eight-item Self-Reflection on Learning scale was administered. | The same scale was used, but it was administered before and immediately after intervention in both conditions. Before the intervention, scale items were focused on the Self-Reflection on previous Learning for class, and the same scale (with only slight modification of the items) was used to check manipulation immediately after the intervention. Items of the post-intervention Self-Reflection on Learning scale focused on the intentions for midterm exam preparation. | Minor. |
1.4. Outcome measures: Academic performance and demographic data | The instructor provided them with the outcome academic performance data. | We received midterm scores and grades on the course from the course instructor. | No difference. |
They received the demographic and prior data directly from the educational institution. | We did not receive official prior performance or demographic data; these were self-reported by students in the survey. | Minor. | |
2. Instructional strategies/learning protocols | |||
2.1. Booster 1: Reminders | Weekly reminders of available resources, with suggestions for what to use during that week, were sent to all students (regardless of condition). | Reminders were not used in either condition. | Major 1 (additional instructional strategy/learning protocol).
2.2. Booster 2: Post-exam reflection/resource-evaluation measures | Beyond the pre-exam surveys, all students received post-exam reflection/resource-evaluation measures (one after the first midterm, another after the second). Self-reported effectiveness of resource use: in each post-exam survey, in both conditions, students indicated which resources they had used to study for their exams and how useful they had found each resource. The post-exam survey measures were identical for all students regardless of condition. In the post-exam survey, students were asked to report the degree of control they perceived over their exam performance, the extent to which they had planned their studying ahead of time, and how well they had kept to their plans. | Post-exam measures were not used in this study. | Major 2 (additional instructional strategy/learning protocol).
Timing of the post-exam surveys | Students received the post-exam surveys immediately after obtaining their exam grades and had 2-4 days to respond. | Post-exam surveys were not applied in this study. | The lack of a post-exam survey is assumed to have had a major impact, although its timing is not relevant in this comparison.
3. Treatment exposure | |||
3.1. Treatment dosage (the number of pre-exam surveys) | Students had the opportunity to take the pre-exam surveys twice (once before the first midterm and again before the second midterm exam). | Students had the opportunity to take the survey only once (before the first midterm exam). | Major 3 (repeated exposure significantly improved outcomes in the original work, whereas a single session showed no significant effect).
3.2. Timing of the intervention | The authors sent out the pre-exam intervention/control materials approximately 10 days before exams and closed them ~7 days before the exam. | The pre-exam survey (containing either the treatment or control messages) was administered only once, opened 14 days before the first midterm exam and closed 3 days before the midterm exam date. Most students completed the program exactly 8 days before the first midterm (63% control, 65% treatment). A total of 92% finished between 6 and 9 days before the midterm, and almost all finished at least 6 days before (96% control, 95% treatment). (The survey was administered approximately 2 months before the exam period started.) | Minor.
4. Study Design Features | |||
4.1. Control condition | After being reminded that their upcoming exam was worth 100 points and filling out the pre-intervention measures, students simply received a normal exam reminder. | After being reminded that their upcoming exam was worth 20 points and filling out the pre-intervention measures, students in the active control condition received an alternative task with a structure very similar to that of the intervention condition. In the control condition, students provided a detailed description of how to maintain healthy sleeping habits. They were asked about their sleeping habits and then reminded that students often sacrifice sleep during intense exam periods, which can affect their performance. They were also asked to give advice to other students on how to practice healthy sleep habits as the academic load increases. | Major 4 (the control content on “how to maintain healthy sleeping habits” could have positively influenced the outcome variables, reducing the likelihood of significant differences between the intended control and treatment conditions; unintentionally, both may have functioned as potentially effective treatment conditions).
4.2. Incentives | All students in the class had the opportunity to participate in the program. They were promised extra homework credit points for participation. | All students in the class had the opportunity to participate in the program, which, regardless of condition, included a sample midterm exam (tasks and instructions in the upcoming midterm exam format). Students were informed about and encouraged to participate in the program by the course instructor. | Major 5 (the sample midterm exam tasks may have had an adverse effect by distracting students’ attention from the strategic resource use intervention materials).
This section outlines the key differences between the original intervention and the present one, emphasizing their significance. We consider the following five differences between the original intervention and its Hungarian version to be the most important. In the present case, beyond the intervention itself, students did not receive (1) weekly reminders of the tasks they needed to complete during the week, (2) nor post-exam surveys asking them to reflect on the implementation of strategic resource use (i.e., the resources they used, how useful they found each resource, the control they felt over their exam performance, the extent to which they had planned their studying ahead of time, and how closely they followed their plan), which may provide an additional layer of reinforcement of the intervention’s effects (e.g., Sebesta & Bray Speth, 2017). Furthermore, in the present study, (3) students encountered the intervention materials only once rather than before each of the two midterm exams, (4) students in the control condition received active control material (which included the practice exercises), and (5) students in the treatment condition received the intervention materials together with the practice exercises rather than separately.
Among these differences, the first and second can be categorized as instructional strategies/learning protocols that have been shown to produce positive changes in academic performance (Sebesta & Bray Speth, 2017; Theobald, 2021). For example, weekly to-do lists can enhance academic performance by providing specific tasks that support students in attaining their goals (Gollwitzer & Sheeran, 2006; Locke & Latham, 2002). Additionally, reminders can reduce cognitive load and direct attention toward goals by making them more salient, which may increase the probability that the desired behavior is performed (Gravert, 2022). This booster combines goal-setting, task management, and reminders to help students by scheduling weekly tasks, prompting class preparation, and providing a clear list of assignments, preventing workload from piling up before exams. The other booster (i.e., the post-exam survey) targeted students’ self-reflection and self-assessment of their learning process and the resources they used, which can also increase self-regulated learning (Panadero et al., 2017). This process may assist students in evaluating their learning and identifying potential areas for improvement in relation to their midterm results. It could also encourage more conscious and successful future application of strategic resource use, which may ultimately enhance future performance. We did not capitalize on these two types of boosters because we assumed that the intervention would significantly influence students’ self-regulatory behaviors in preparing for the midterm exam. Furthermore, our intention was to test the effect of the strategic resource use intervention on its own, without other factors influencing the results.
In both the original intervention and the present one, there were two midterm exams. However, while the original intervention was delivered before both the first and second exams, in the present case students received the intervention only once, prior to the first midterm exam, with no repetition before the second exam (therefore, students did not receive the “full” intervention as framed by Chen et al., 2017). While the intent-to-treat analyses in the original paper (Chen et al., 2017) showed that the treatment had a significant positive effect on course grades (including all participants, regardless of treatment dose, i.e., the number of pre-exam surveys taken), the authors also reported a treatment-dosage effect. According to their results, students who participated in the intervention twice achieved significantly higher course grades than those who participated only once. Although they analyzed the treatment-dosage effect only between treatment groups, the open science protocol they followed (i.e., they made their data and analysis code available via OSF; Chen, 2017) allowed us to test whether there was a significant difference between the control condition and those who received only one treatment dose. The results show that the one-session intervention resulted in a non-significant difference (compared to the control group) in their case as well (ps > 0.11). This additional analysis of the original data was conducted after the intervention had been implemented and the results were known, so its findings and insights could only be incorporated into a subsequent implementation.
The fourth difference concerns the active control group. We considered it useful to implement an active control group to balance attrition between the control and intervention groups in terms of completing the materials (differential attrition). In business-as-usual control groups, participants need to make no effort to complete the control materials, whereas participants in the intervention condition do; in this case, some differential completion is inevitable. In the original paper, this might be one of the reasons why the number of students differs between the Methods and Results sections. By using an active control group, we aimed to avoid such imbalances. Nevertheless, because the content of the active control condition encouraged participants to adopt healthier sleep habits to support their learning performance, it may have eliminated potential between-group differences, as the control message may also have had a positive effect on participants’ academic performance (e.g., Curcio et al., 2006; Okano et al., 2019). In further support of this assumption, Creswell and colleagues (2023) recently examined the effect of nighttime sleep on academic performance among first-year college students and found that “every hour of nightly sleep lost was associated with a 0.07 decrease in end-of-term GPA” (p. 5).
The fifth major difference relates to the incentives used to encourage participation in the study. In the original study, participants were compensated with extra credit points, which was not a feasible option in the current study’s context. Instead, students received verbal encouragement from the course instructor and were motivated to participate by the opportunity to take a sample midterm exam and receive explanations of the correct and incorrect answers for each task. This incentive worked well, as it encouraged a relatively large number of students in the math class to participate (47% responded before the preregistered stopping rule). However, the presence of the sample midterm tasks as a “hook” may have had an adverse effect by distracting students’ attention from the strategic resource use intervention materials, which may have resulted in their shallower integration. Furthermore, using these valuable resources (i.e., sample midterm problems) as incentives in both groups may have motivated students in the control condition to seek additional resources as well. The incentive may also have affected participation in both groups by attracting those who (even without the intervention) are more likely to seek out learning resources that can support their academic performance (self-selection). However, this latter concern is mitigated by the results of the baseline measure of strategic use of learning resources, which showed a lack of strong strategic resource use in both groups.
It is likely that the present study did not replicate the positive effects of the strategic resource use intervention on academic performance because of the above-mentioned differences (while it replicated the non-significant results of the single-dose strategic resource use intervention reported in Chen, 2017). The academic benefits found in the original study (Chen et al., 2017) and in its application-based, scaled effectiveness study (Chen et al., 2022), together with the absence of these positive effects in the current one-session study, may suggest that the program’s positive effects require implementing the entire “Exam Playbook,” not just a “pure” Strategic Resource Use intervention involving only “strategizing about how to approach their learning effectively, deliberately choosing the specific resources that would foster their mastery of the learning content, and then planning how they would use these resources to study the class material” (Chen et al., 2017, p. 775). Another possible explanation for the absence of these effects could be the differences between the programs, such as the inclusion of presumably effective content in the active control condition and the potentially distracting incentive, which may have diverted attention away from the intervention content. Nevertheless, according to the results of the current study, a one-time application of the strategic resource use intervention, compared with an active control condition promoting healthy sleep habits, did not appear effective enough (in terms of improving academic performance) to trigger general self-reflection about effective learning or to support the planning and implementation of such plans; students may require more structured guidance to achieve significantly higher academic benefits.
Limitations
In the current study, the same methods and materials were used in the preexam intervention as in the original study, and the differences between the two settings (a freshman statistics course vs. a freshman math course in a country where “strategic resource use” is not as common, and slight differences in the list of resources) can be “believed to be irrelevant for obtaining the evidence about the same finding” (Nosek et al., 2022, p. 722). From this perspective, the present study could be called a direct replication of the preexam strategic resource use intervention. However, if we consider the boosters around the intervention as part of the intervention, along with the potentially influential active control condition, then neither the term direct nor conceptual replication would be accurate. Consequently, despite our best intentions, the current study was not a true replication of the original work (Chen et al., 2017). Future replication studies of the Strategic Resource Use intervention might more closely follow the more recent “Exam Playbook” (Chen et al., 2022), which includes several additional intervention elements, when adapting and testing its effects in a new context.
In addition to these differences, the target sample size, although it closely followed the original work, was too small for a well-powered analysis of the interaction with the individual difference variable of minority status; this interaction could be tested in a significantly larger sample. Furthermore, randomization produced an imbalance in the gender ratio between the two conditions. Although this study was, to our knowledge, the first test of the strategic resource use intervention conducted outside the United States, it used only a Hungarian sample and was not representative of the Hungarian higher education context in any respect. Future replication studies of this intervention should address the boundary conditions highlighted in the discussion of the present work and test the intervention’s efficacy in various cultural and educational contexts.
Future Directions
The current one-session strategic resource use intervention study did not replicate the positive effects on academic performance. However, given the benefits of the strategic resource use intervention on academic performance reported by Chen et al. (2017, 2022) and the absence of these effects in the current study, it is worth considering the differences between these interventions highlighted in the present paper. According to these identified differences, the strategic resource use intervention may need to be applied multiple times and combined with other instructional and learning protocol elements to achieve its positive effects. These differential elements include (1) the weekly reminders of available resources with suggestions for what to use that week, (2) the post-exam measures that support reflection on exam preparation and evaluation of resources and resource use, (3) the multiple doses of the intervention (administered before each exam), (4) the business-as-usual control group, in which participants need to expend no effort to complete the control materials, and (5) the credit points as an incentive for participation. Future studies may achieve the previously published benefits by adapting the full, combined intervention and testing its effects in a new context.
Conclusion
The present study aimed to replicate a one-session adaptation of the strategic resource use intervention by Chen et al. (2017), carefully selecting a target audience to ensure cultural fit, relevance, and adherence to the original protocol. During the adaptation process, only non-substantive modifications were made to the materials. However, the current study did not replicate the previously observed beneficial effects. While some contextual differences likely played a role in this outcome, several minor and some meaningful measurement-, material-, and implementation-related differences were identified that are worth considering in future research to enhance both intervention design and replication efforts.
Deviations from Preregistration
The deviations from preregistration are summarized in the Preregistration Deviations Table (Appendix S2, Table S2 in the Supplementary Materials: https://osf.io/u8jh6), following the template provided by Willroth and Atherton (2024).
Contributions
Contributed to conception and design: GO, JS, OF
Contributed to acquisition of data: OF, JS
Contributed to analysis and interpretation of data: JS
Drafted and/or revised the article: JS, GO
Approved the submitted version for publication: OF, GO, JS
Acknowledgements
We thank Csilla Majoros and Gabriella Svantnerné Sebestyén, who supported our work by coding participants’ texts in the intervention condition, which were used in the ‘treatment on the treated’ analysis.
The analysis plan was preregistered prior to the start of data collection and made publicly available in OSF at https://osf.io/ehndb.
Funding Information
JS was supported by the ÚNKP-22-4 New National Excellence Program of the Ministry for Culture and Innovation from the source of the National Research, Development and Innovation Fund, and GO was supported by Northern French Strategic Dialogue (Phase 1 & 2) and STaRS Grants. The research of OF reported in this paper is part of project no. BME-NVA-02, implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the TKP2021 funding scheme.
Competing Interests
No competing interests to disclose.
Data Accessibility Statement
All materials, data, and analyses scripts can be found on the Open Science Framework at https://osf.io/xgtns/.