Event boundaries are important moments throughout an ongoing activity that influence perception and memory. They allow people to parse continuous activities into meaningful events, encode the temporal sequence of events and bind event information together in episodic memory (DuBrow & Davachi, 2013). Thus, drawing attention to event boundaries may facilitate these important perceptual and encoding processes. In the current study, we used emotionally arousing stimuli to guide attention to event boundaries because this type of stimulus has been shown to influence perception and attention. We evaluated whether accentuating event boundaries with commercials improves memory and whether emotional stimuli further enhance this effect. A total of 97 participants watched a television episode in which we manipulated commercial break locations (boundary, non-boundary, no commercial) and the type of commercial (emotional, neutral) and then completed memory tasks. Overall, placing emotionally arousing commercials at event boundaries increased memory for the temporal order of events, but no other effects of accentuating event boundaries were observed. Thus, drawing attention to event boundaries—via emotionally charged commercials—increases the likelihood that people will perceive the change in events, update their mental model accordingly and better integrate temporal information from the just-encoded event.

At any given moment our perceptual system deals with an overwhelming amount of sensory input and yet we are able to readily experience and act upon our environment. Our perceptual system is able to manage and interpret this input, in part, because it breaks the activity down into individual events. As a simple everyday example, the act of driving to work can be broken down into several smaller sub-events such as opening your car door, starting the car’s ignition, backing out of your garage, driving the route and parking your car in the company’s parking lot. Although this series of events may be experienced as a seamless flow of activity, our perceptual system segments the activity into separate, meaningful units. The point at which one event ends and another event begins is referred to as an event boundary.

Previous work has demonstrated that while watching a video people tend to perceive remarkably similar event boundaries (Newtson, 1976), which are often associated with perceptual changes such as motion, body position and luminance (Cutting et al., 2012; Newtson et al., 1977) as well as the introduction of new characters, a change in goals, spatial location or time (Magliano et al., 2001). Further, perception of these event boundaries remains quite stable over time, such that people identify similar event boundaries when watching the same movie up to one year apart (Speer et al., 2003). While event boundary perception is fairly consistent within and across individuals, some individuals are more likely to identify normative event boundaries, whereas others are more likely to identify idiosyncratic boundaries. Importantly, these individual differences predict later memory performance such that more normative event segmentation is associated with better memory for the activity (Bailey et al., 2013; Flores et al., 2017; Gold et al., 2017; Kurby & Zacks, 2011; Sargent et al., 2013; Zacks et al., 2006). Thus, the perception of event boundaries is important for comprehension and memory (Boltz, 1992; DuBrow & Davachi, 2013; Ezzyat & Davachi, 2011; Schwan et al., 2000; Speer & Zacks, 2005).

Event boundaries serve as anchors by helping people segment complex activity into discrete, meaningful units (Kurby & Zacks, 2008). People recall more actions that occur at event boundaries (Schwan et al., 2000) and recognize pictures taken from event boundaries better than pictures taken from non-boundaries (Newtson & Engquist, 1976). Further, removing scenes that contain event boundaries impairs memory for the activity more than removing scenes from the middle of an ongoing event (Schwan & Garsoffky, 2004). Thus, the perception of boundaries is important for long-term memory.

Event boundary perception is also important for encoding the temporal sequence of an activity into episodic memory. To evaluate this relationship, Davachi and colleagues have evaluated the effect of event boundaries on temporal memory for text (Ezzyat & Davachi, 2011) and images (DuBrow & Davachi, 2013, 2014). Specifically, DuBrow and Davachi (2014) asked participants to encode a sequence of images of everyday objects and celebrity headshots and then tested their temporal memory by asking which of two objects was presented more recently. Critically, half of the image pairs came from the same event (or category, e.g., two celebrities) and half came from different events (e.g., a celebrity and an object), but all pairs had the same number of intervening items between them at encoding. They found that memory for temporal order was better for items from the same event than items from different events. Thus, a shared context is likely to help bind items together sequentially within an event. Similarly, memory for information from a previous event is worse after people encounter a spatial event boundary (i.e., walking through a doorway; Hornsby et al., 2016; Radvansky & Copeland, 2006). Thus, event boundaries signal a context change, or a new event, and trigger the binding of information from the just-encoded event. However, less binding occurs between information from different events (see also, DuBrow & Davachi, 2013, 2014; Silva et al., 2019).

Given that event boundaries seem to help structure events in memory, might drawing attention to them promote this binding process and thereby benefit memory? A few previous studies have addressed this question by manipulating an activity in ways that emphasize important event boundaries. For instance, Boltz (1992) found improved memory when event boundaries were emphasized through the use of commercial breaks. In this study, participants watched a television episode with commercial breaks inserted at event boundaries, at non-boundaries, or not at all. After watching the episode, participants completed several memory tasks. Boltz found that when event boundaries were emphasized, memory improved in comparison to the no commercial (control) condition, whereas when events were interrupted by placing commercials in the middle of an event, memory suffered. Similarly, Gold et al. (2017) edited films to include visual and auditory cues such as the movie slowing down, arrows pointing to important objects, and a bell ringing at event boundaries or non-boundaries. They found that cueing event boundaries improved memory, particularly memory for the cued information, which is consistent with prior work (Boltz, 1992; Schwan et al., 2000). However, inconsistent with Boltz's findings, Gold et al. (2017) found that cueing non-boundaries also improved memory, though to a lesser degree than cueing boundaries.

This research indicates that drawing attention to event boundaries improves overall memory for an event. It is possible that this boundary effect further increases the binding of information within the same event (Ezzyat & Davachi, 2011), thereby improving overall memory. However, these prior interventions did not evaluate temporal memory for order of information within the same event versus across different events.

Another factor that has been shown to affect how individuals attend to and encode information is emotion. It is thought that our perceptual and attentional systems prioritize emotional information because of its importance for survival (e.g., Brosch et al., 2010; New et al., 2007; Seligman, 1971), and indeed, both positive and negative arousing stimuli capture a viewer's attention more readily than neutral stimuli (e.g., Bannerman et al., 2009; Hansen & Hansen, 1988; Ohman et al., 2001). How, then, does emotional processing affect perception of event structure and later memory for a dynamic activity?

In the current study, we evaluated this question by inserting commercials that varied in emotional intensity into a television episode and measuring their effects on memory for the episode. We made use of commercials because the goal of a commercial is to draw the viewer's attention to the target content (Teixeira et al., 2010). Emotion can help advertisers reach this goal because viewers tend to remember commercials with emotional content better (Mehta & Purvis, 2006). However, this previous work has not evaluated the effect that emotional content in commercials has on the neighboring information in a television episode. This leads to two competing hypotheses.

The first hypothesis is that highly arousing, emotional commercials will further draw viewers' attention to event boundaries, promote binding of information from the just-encoded event and improve memory. If this hypothesis were true, then emphasizing event boundaries with emotionally arousing commercial breaks would result in better memory performance compared to neutral commercials or no commercials. The second hypothesis is that emotionally arousing commercial breaks will draw viewers' attention away from the just-encoded event. Prior work has shown that an emotional stimulus disrupts consolidation of the preceding information, resulting in poor memory for that information (Strange et al., 2003; Tulving, 1969). If this hypothesis were true, then emotionally arousing commercial breaks would result in worse memory performance than neutral or no commercial breaks due to an interruption in encoding or consolidation. However, we must note a possible issue with using emotionally arousing commercials: Emotional content that is too arousing can have a detrimental effect on memory (the Yerkes-Dodson law). Further, extremely arousing content may be driven by bottom-up processing (Bannerman et al., 2009; Ohman et al., 2001) and may shift attention away from semantic content towards perceptual details (Brewin et al., 1996; Sussman et al., 2016). Such changes in attention may alter how people segment and later remember information (Sherrill et al., 2019). Thus, in the current study, we were careful to select commercials with optimal arousal levels.

The purpose of the current study is to investigate whether accentuating event boundaries improves memory (i.e., boundary effect; conceptually replicating the effect reported by Boltz, 1992; Gold et al., 2017) and the degree to which emotionally arousing content enhances the boundary effect. Finally, to the extent that boundary accentuation and emotional arousal influence overall memory, we evaluated whether this improvement was a result of enhanced within-event binding. If so, we expected that memory for temporal order would be better for information within the same event as compared to information from different events (DuBrow & Davachi, 2014; Ezzyat & Davachi, 2011). For example, we expected that people would be more accurate at remembering that Axel told Magnus he is suspected of being a spy before Axel and Magnus discuss defecting to the U.S. because these subevents were from the same larger event ("spy discussion" event) compared to remembering that Axel and Magnus discuss defecting to the U.S. before Magnus meets Jack at a café because these subevents were from different larger events ("spy discussion" event and "café" event).

Participants

There were 97 participants (Female = 45, Age M = 19.84, SD = 3.54) from a large Midwest university's undergraduate psychology research pool randomly assigned to one of five commercial conditions: Boundary—Emotional (n = 20); Boundary—Neutral (n = 16); Non-Boundary—Emotional (n = 18); Non-Boundary—Neutral (n = 18); and No Commercial (n = 25). We conducted a power analysis using G*Power version 3.1.9.6 for a one-way ANOVA with a one-tailed hypothesis, power set to .95, alpha of .05 and an effect size of Cohen's f = 1.55 (the most conservative effect size estimate from Boltz, 1992) and found that a total sample size of 30 participants would be sufficient to detect the effects of interest. However, we recruited nearly 100 participants, which was more similar to the sample size reported in Boltz (1992). Participants were naïve to the purpose of the experiment. Study procedures were approved by the university's Institutional Review Board and all participants received class credit for their participation.

Apparatus

The experiment was performed using Mac Minis, each with a 2.5 GHz dual-core Intel Core i5 processor with 3MB L3 cache (Turbo Boost up to 3.1 GHz), 4 GB of 1600 MHz DDR3 SDRAM, a 500 GB 5400-rpm hard drive, and Intel HD Graphics 4000. Stimuli were presented on 19-inch Dell E1914H monitors set to a refresh rate of 75 Hz and a resolution of 1024 x 768. The experiment was conducted using Experiment Builder 2.1.140 for Mac OS X. Participants listened to the experimental audio using Maxwell HP-100 headphones and made responses with Logitech K120 keyboards.

Stimuli

Television Episodes

As in Boltz (1992), the experimental stimuli were episodes four and six from the A Perfect Spy mini-series.1 Episode 4 runs 53:26 and Episode 6 runs 56:12; both episodes have a frame rate of 25 Hz. The episodes originally contained no commercial breaks; we edited them to present commercials at boundary and non-boundary locations, which were identified using Boltz's (1992, pp. 104-105) descriptions in Appendixes A and B. Each episode had 7 large events separated by a total of 6 event boundaries. The mean event length across both episodes was 454 seconds (range 40-870 seconds).

Commercials

Prior to the current study, a pilot study was conducted to select the commercials to be included in the main experiment. A separate group of 33 participants (Female = 17, Age M = 36.03, SD = 9.07) was recruited from Amazon Mechanical Turk and asked to rate the emotional intensity of a series of 47 commercials. These commercials were selected from YouTube and included both negative and positive content. We assessed only emotional intensity/arousal, not valence. In this pilot study, the participants provided informed consent, completed several demographic questions, then watched the commercials and rated each one on a sliding scale from 0 to 7 (0 = no emotional intensity, 7 = extreme emotional intensity), and finally were debriefed.

The mean emotional intensity rating of each commercial was calculated to identify the most and least emotionally intense commercials. The commercials with the highest mean intensity ratings (ranging from 4.45-6.12) were placed into the emotional condition, whereas commercials with the lowest mean intensity ratings (ranging from 0.41-1.38) were placed into the neutral condition. Within each condition, commercials were then randomly grouped together to create 2 to 2.5 minute commercial breaks. There were a total of six emotional commercial breaks and six neutral commercial breaks. Examples of commercials used to create an emotional commercial break were anti-bullying PSAs, anti-drinking-and-driving PSAs, and animal abuse commercials. Examples of neutral commercials were infomercials, lawyer advertisements, and shampoo commercials. The order of commercial breaks was randomized, and the breaks were then inserted at either the boundary or non-boundary locations throughout the television episode.
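For concreteness, the selection step could be implemented as follows. This is a minimal sketch, assuming a long-format data frame `pilot` with hypothetical columns `commercial` and `intensity` (one row per participant-by-commercial rating); these are not the authors' actual object names.

```r
# Minimal sketch of the commercial-selection step; `pilot`, `commercial`,
# and `intensity` are hypothetical names, not the authors' actual objects.
means <- aggregate(intensity ~ commercial, data = pilot, FUN = mean)
means <- means[order(-means$intensity), ]

# Cutoffs mirror the mean-rating ranges reported above.
emotional_pool <- means[means$intensity >= 4.45, ]  # emotional condition
neutral_pool   <- means[means$intensity <= 1.38, ]  # neutral condition
```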

Filler and Memory Tasks

Filler Task

After viewing the entire episode, participants were asked to type as many country names as they could think of into an empty text field for two minutes. The purpose of this filler task was to distract the participants and clear working memory of the just-watched episode content prior to any of the memory tests.

Recall Task

After the filler task, participants were asked to describe the events in the episode in as much detail as possible in the order in which they had occurred. Participants typed their recall responses into an empty text field, and they were given five minutes to complete this task. Recall was scored using Flores et al.'s (2017) normalized recall method, which is strongly related to hand-scored recall (r = 0.77) for shorter videos of everyday activities. For each participant, we divided the total number of recall response words by the total number of clauses in the episode. The number of clauses was determined from the episode summaries in Appendixes A and B of Boltz (1992). Episode 4 contained 81 clauses and Episode 6 contained 74 clauses.
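The score reduces to a simple ratio. A minimal sketch, assuming a data frame `recall` with hypothetical columns `response` (the typed text) and `episode` (coded "4" or "6"):

```r
# Normalized recall score (Flores et al., 2017): words typed / clauses in
# the episode. `recall`, `response`, and `episode` are assumed names.
n_clauses <- c("4" = 81, "6" = 74)  # clause counts from Boltz (1992)

recall$n_words <- lengths(strsplit(trimws(recall$response), "\\s+"))
recall$score   <- recall$n_words / n_clauses[recall$episode]
```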

Figure 1. Example Trials from the Recall, Recognition and Temporal Order Memory Tasks

Recognition Task

The recognition task consisted of 24 three-second video clips (12 from each episode) that were presented in random order. These clips were taken from approximately every four minutes throughout the episode. On each trial, participants saw a ready check screen, then one video clip, followed by a response screen (Figure 1). Participants had to decide whether the clip came from the episode they watched by selecting "Yes, I watched this" (pressing 'x') or from a different episode, which they had not seen, by selecting "No, I did not watch this" (pressing 'm'). For example, participants were presented with a three-second clip of Magnus and Jack walking in a park, a scene from Episode 4. After the clip ended, the response screen appeared, and participants decided whether that scene came from the episode they watched. A participant assigned to the Episode 4 condition would press "x" for a correct response; a participant assigned to the Episode 6 condition would press "m". After making their decision, participants indicated how confident they were in their recognition response on a scale ranging from Not Confident (1) to Very Confident (7).

Temporal Order Task

In this task, participants viewed 12 pairs of three-second video clips, all taken from the episode they watched. Their task was to decide which clip occurred first in the episode (Video Clip 1 or Video Clip 2; Figure 1). The video clip pairs came either from within the same event or from different, but contiguous, events as determined by the event boundaries in Boltz's (1992, pp. 104-105) Appendixes A and B. Each trial took approximately nine seconds, plus the response time: Video Clip 1 was presented, followed by a three-second black screen, and then Video Clip 2. A response screen then asked which clip occurred first in the original episode and remained present until the participant responded by pressing 'x' (Video Clip 1) or 'm' (Video Clip 2). After making their decision, participants indicated how confident they were in their choice on a scale ranging from Not Confident (1) to Very Confident (7).

Six trials consisted of video clip pairs that came from within the same event (i.e., no intervening event boundary between the two clips; same event condition). The other six trials consisted of video clip pairs that came from different, but adjacent, events (i.e., an intervening event boundary between the two clips; different event condition). The clips in each pair occurred approximately two minutes apart to ensure that the difficulty of the same and different event pairs was roughly equivalent. Different event clips were taken ±60 seconds from a boundary, whereas same event clips were taken ±60 seconds from the exact midpoint of that event. Figure 2 shows a schematic depiction of how these temporal order memory items were arranged throughout Episode 6.

Figure 2. Schematic of Episode 6 including Locations of Event Boundaries, Commercial Break Placement and Locations of Temporal Order Memory Stimuli

Design

The design of this experiment was a 2 (Episode: Episode 4 vs. Episode 6) x 5 (Commercial Condition: Boundary—Emotional, Boundary—Neutral, Non-Boundary—Emotional, Non-Boundary—Neutral, and No Commercial) between-subjects factorial design. The Commercial Condition variable was created by merging two variables, Commercial Break Location (Boundary vs. Non-Boundary vs. No Commercial) and Commercial Type (Emotional vs. Neutral). The Emotional and Neutral levels of the Commercial Type variable were nested within the Boundary and Non-Boundary levels of the Commercial Break Location variable; however, these variables were not fully crossed given that there were no commercial breaks in the No Commercial level. Commercial Condition was used as a predictor in all of the memory task analyses reported in the results section. Each participant watched one of two episodes. For some participants, the episode was interrupted by six commercial breaks, which occurred either at boundary or non-boundary locations and consisted of either emotional or neutral commercials. For other participants, the episode played continuously without commercial breaks (see Figure 2 for a schematic of an episode, including boundaries, commercial breaks and temporal memory pairs).

The Commercial Break Location and Episode variables were counterbalanced for a total of ten groups. The temporal order task also included a within-subject variable, Pair Type, with different versus same event conditions. In the different event condition, the two clips were separated by a narrative event boundary (i.e., two clips from two different events), whereas in the same event condition the two clips fell within the same narrative event (i.e., two clips from the same event). These event boundary locations were determined by Boltz's (1992, pp. 104-105) descriptions in Appendixes A and B.

Participants in all Commercial Conditions responded to the same stimuli in the temporal order memory task. For all groups, the different event pairs were always separated by a narrative event boundary and the same event pairs were not. For two groups (Boundary—Emotional & Boundary—Neutral), the different event pairs were also always separated by commercial breaks, whereas the same event pairs were not. For the other two groups (Non-Boundary—Emotional & Non-Boundary—Neutral), the commercial breaks were not placed at narrative event boundaries but instead during the middle of an event; as a result, some of the different and same event pairs were separated by a commercial break and others were not (see Figure 2).

Procedure

The participants read instructions describing that they would watch a television episode and then complete several memory tasks. Then they began watching the selected television episode (4 or 6) from the mini-series A Perfect Spy. After watching the full episode, participants completed the 2-minute filler task, followed by the recall task (5 minutes), recognition task (24 trials), and then the temporal order task (12 trials). Following the completion of the memory tasks, participants then indicated whether they had previously seen the episode or not. Participants were then debriefed and thanked for participating.

All analyses were conducted in R version 4.0.3 (RStudio Team, 2020). Recall performance was modeled with the lm function; both Recognition (Old/New) and Temporal Order (Accuracy) were modeled with the glmer function from the lme4 package (Bates et al., 2015). Significance tests for these models were performed with Type III Wald Chi-square tests using the car package's Anova function (Fox & Weisberg, 2019). Post hoc comparisons were conducted with a Bonferroni correction using the emmeans package (Lenth, 2020). Confidence ratings for the Recognition and Temporal Order tasks were analyzed with cumulative link mixed models (CLMMs) fit with the clmm function from the ordinal package v2019.12-10 (Christensen, 2019). Significance tests for the CLMMs were performed with Type II Wald Chi-square tests using the RVAideMemoire package's Anova.clmm function (Hervé, 2020). Graphs were created using the ggplot2 package (Wickham, 2016). The confidence ratings graphs were also generated using the ggeffects package (Lüdecke, 2018), guided by Barlaz (2020).
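To make this pipeline concrete, the skeleton below shows how the pieces could fit together in R. All object and variable names (e.g., `recall_df`, `score`, `condition`) are placeholders rather than the authors' actual code; the task-specific models are sketched in their respective sections below.

```r
# Packages used across the analyses.
library(lme4)           # glmer: recognition and temporal order models
library(car)            # Anova: Type III Wald chi-square tests
library(emmeans)        # Bonferroni-corrected post hoc comparisons
library(ordinal)        # clmm: confidence-rating models
library(RVAideMemoire)  # Anova.clmm: Type II Wald tests for CLMMs

# Recall: an ordinary linear model on the normalized recall score,
# followed by a Type III test (meaningful here because the predictors
# are effect coded; see the coding sketch below).
m_recall <- lm(score ~ condition, data = recall_df)
Anova(m_recall, type = "III")
```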

The independent variables included in the analyses were effect coded: Commercial Condition levels were Boundary—Emotional (+1, 0, 0, 0), Boundary—Neutral (0, +1, 0, 0), Non-Boundary—Emotional (0, 0, +1, 0), Non-Boundary—Neutral (0, 0, 0, +1), and No Commercial (-1, -1, -1, -1), and the Pair Type levels were Different Event (+1) and Same Event (-1). The Episode variable was not of theoretical interest and did not significantly interact with the Commercial Condition variable in any analysis; thus, all of the reported analyses are collapsed across Episode. The data from one participant were removed from all analyses due to failure to attend to the experimental tasks. All other data removal from cleaning procedures is explained in each of the task sections.
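In R, this deviation (effect) coding can be produced with sum-to-zero contrasts; a minimal sketch with assumed factor and data frame names:

```r
# Effect (sum-to-zero) coding: each non-reference level gets +1 in its own
# column, and the last level (No Commercial) is -1 in every column.
df$condition <- factor(df$condition,
                       levels = c("Boundary-Emotional", "Boundary-Neutral",
                                  "NonBoundary-Emotional", "NonBoundary-Neutral",
                                  "NoCommercial"))
contrasts(df$condition) <- contr.sum(5)

df$pair_type <- factor(df$pair_type, levels = c("Different", "Same"))
contrasts(df$pair_type) <- contr.sum(2)  # Different = +1, Same = -1
```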

Recall Task

Recall performance was scored as the number of recall response words divided by the number of clauses and is plotted by Commercial Condition in Figure 3. A one-way between-subjects Type III F test on the linear model revealed no significant effect of Commercial Condition on recall performance, F(4, 92) = 1.02, p = .400 (Boundary—Emotional: M = 1.79, SE = 0.161; Boundary—Neutral: M = 1.92, SE = 0.180; Non-Boundary—Emotional: M = 1.53, SE = 0.170; Non-Boundary—Neutral: M = 1.88, SE = 0.170; No Commercial: M = 1.94, SE = 0.144).

Figure 3. Recall Performance as a Function of Commercial Condition

Note: The bar graph shows recall performance for each level of Commercial Condition with 95% confidence interval error bars. Recall performance is scored as the total number of recall response words divided by the total number of possible clauses.


Recognition Task

Signal Detection. Responses were removed if a participant's response time on a given recognition trial was less than 150 msec or greater than 10 seconds, which resulted in the removal of 21 trials from the recognition task analyses (0.93% of the data). The signal detection analysis was performed with a mixed effects probit model (Wright & London, 2009). The observed response ("old" or "new") was the dependent variable, and the fixed effects were Item Type (old vs. new), Commercial Condition (Boundary—Emotional, Boundary—Neutral, Non-Boundary—Emotional, Non-Boundary—Neutral, and No Commercial) and their interaction, with Participant and Recognition Item included as random intercepts and a by-Participant random slope for Item Type. Recognition Item was a categorical variable that uniquely identified each of the 24 recognition task stimuli.
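A minimal sketch of such a model in R follows; the column names (`said_old`, `item_type`, `condition`, `participant`, `item`) are assumptions, not the authors' code. Under this parameterization (Wright & London, 2009), with Item Type contrast coded, the Item Type slope maps onto sensitivity (d′) and the intercept onto response bias (c).

```r
# Mixed effects probit model for signal detection; `said_old` is 1 if the
# participant responded "old" and 0 if "new". Names are placeholders.
library(lme4)

m_sdt <- glmer(said_old ~ item_type * condition +
                 (1 + item_type | participant) +  # by-participant slope for Item Type
                 (1 | item),                      # random intercept per clip
               data = recog_df,
               family = binomial(link = "probit"))

car::Anova(m_sdt, type = "III")  # Type III Wald chi-square tests
```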

Table 1 provides the parameter estimates from the recognition task mixed effects probit model. A Type III Wald Chi-square test was performed using the car package's Anova function (Fox & Weisberg, 2019) to test overall bias (c) and sensitivity (d') and to determine whether they differed across Commercial Condition. Participants were sensitive at discriminating between new and old items (d' = 2.82, z = 24.18, p < .001), and no significant bias to respond new or old was detected (c = -0.17, z = -1.55, p = .122). Neither sensitivity (d'), χ2 (4) = 0.92, p = .921 (see Figure 4), nor bias (c), χ2 (4) = 0.365, p = .985, differed significantly by Commercial Condition. These results indicate that participants could discriminate whether a video clip was new or old, showed no detectable overall bias, and did not differ significantly across the Commercial Conditions.

Table 1. Recognition Task – Mixed Effects Probit Model Parameter Estimates
Fixed Effects Estimate SE z value Pr(>|z|) 
Intercept (Bias) -0.165 0.107 -1.55 0.122 
Item Type (Sensitivity) 2.821 0.117 24.18 <.001 
CC (Boundary—Emotional) 0.015 0.086 0.18 0.860 
CC (Boundary—Neutral) -0.050 0.091 -0.55 0.586 
CC (Non-Boundary—Emotional) 0.075 0.090 0.83 0.406 
CC (Non-Boundary—Neutral) -0.035 0.089 -0.39 0.696 
Item Type x CC (Boundary—Emotional) 0.044 0.192 0.23 0.819 
Item Type x CC (Boundary—Neutral) -0.121 0.204 -0.59 0.553 
Item Type x CC (Non-Boundary—Emotional) 0.026 0.201 0.13 0.897 
Item Type x CC (Non-Boundary—Neutral) 0.016 0.198 0.08 0.937 

Note: The intercept of the model is the overall bias (c). The Item Type estimate is the overall sensitivity (d'). Fixed effects without Item Type adjust bias (c) for each level of Commercial Condition. Fixed effects with Item Type adjust sensitivity (d') for each level of Commercial Condition. Commercial Condition = CC. Model was performed with effect coding (CC: Boundary—Emotional = ‘+1,0,0,0’, Boundary—Neutral = ‘0,+1,0,0’, Non-Boundary—Emotional = ‘0,0,+1,0’, Non-Boundary—Neutral = ‘0,0,0,+1’, No Commercial = ‘-1,-1,-1,-1’). Standard Error = SE.

Figure 4. Recognition Task - Sensitivity (d’) as a Function of Commercial Condition

Note: The bar graph shows the recognition task sensitivity (d’) for each Commercial Condition with 95% confidence interval error bars. All CIs are calculated from the overall sensitivity (Item Type) standard error.


Confidence Ratings. The recognition confidence ratings (1-7) were analyzed as ordinal data with a cumulative link mixed model (CLMM). Multiple confidence ratings were collected per participant. Only correct trials were included in this analysis, which led to removing 11.4% of trials. The CLMM included Commercial Condition (Boundary—Emotional, Boundary—Neutral, Non-Boundary—Emotional, Non-Boundary—Neutral, and No Commercial) as a fixed effect and Participant as a random intercept, with a logit link function.
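A minimal sketch of this model, again with assumed object and column names rather than the authors' actual code:

```r
# CLMM for recognition confidence; correct trials only, confidence treated
# as an ordered factor. `recog_df` and its columns are assumed names.
library(ordinal)

recog_correct <- subset(recog_df, correct == 1)
recog_correct$confidence <- factor(recog_correct$confidence,
                                   levels = 1:7, ordered = TRUE)

m_conf <- clmm(confidence ~ condition + (1 | participant),
               data = recog_correct, link = "logit")
summary(m_conf)  # threshold and fixed effect estimates, as in Table 2
```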

Table 2 provides the parameter estimates from the recognition task confidence ratings CLMM. Figure 5 displays the predicted probability of each confidence rating response for the five Commercial Conditions from the recognition task CLMM. To test whether confidence ratings differed significantly by Commercial Condition, the CLMM was analyzed with a Type II Wald Chi-square test (Mangiafico, 2016), which revealed no significant main effect of Commercial Condition, χ2 (4, N = 97) = 4.38, p = .357. Table 3 provides the estimated marginal means, standard errors, and lower and upper confidence limits for each level of Commercial Condition.

Table 2. Recognition Task: Confidence Ratings - Cumulative Link Mixed Model Parameter Estimates
Threshold Coefficient Estimate SE 
 1|2 -5.621 0.321 
 2|3 -4.525 0.208 
 3|4 -3.254 0.145 
 4|5 -2.326 0.124 
 5|6 -1.580 0.115 
 6|7 -0.753 0.110 
Fixed Effects Estimate SE 
 CC (Boundary—Emotional) 0.118 0.211 
 CC (Boundary—Neutral) 0.111 0.231 
 CC (Non-Boundary—Emotional) 0.203 0.224 
 CC (Non-Boundary—Neutral) -0.043 0.218 
Random Effect Variance SD 
 Participant 0.848 0.921 

Note: Cumulative link mixed model (CLMM) parameter estimates for the confidence ratings from the recognition task. The CLMM had a logit link function. (A) Threshold estimates for each confidence rating response option, (B) Fixed effects included in the CLMM, and (C) the random effect included in the CLMM. Commercial Conditions = CC. Model was performed with effect coding (CC: Boundary—Emotional = ‘+1,0,0,0’, Boundary—Neutral = ‘0,+1,0,0’, Non-Boundary—Emotional = ‘0,0,+1,0’, Non-Boundary—Neutral = ‘0,0,0,+1’, No Commercial = ‘-1,-1,-1,-1’). Standard Error = SE. Standard Deviation = SD.

Figure 5. Recognition Task - Predicted Probability of Confidence Rating Response as a Function of Commercial Condition

Note: The stacked bar plot provides the predicted probability of confidence rating response as a function of Commercial Condition.

Table 3. Recognition Task: Confidence Ratings - Estimated Marginal Means, SE, and CIs
Commercial Condition emmean SE asymp.LCL asymp.UCL 
Boundary—Emotional 3.13 0.249 2.64 3.62 
Boundary—Neutral 3.12 0.278 2.58 3.66 
Non-Boundary—Emotional 3.21 0.267 2.69 3.74 
Non-Boundary—Neutral 2.97 0.259 2.46 3.47 
No Commercial 2.62 0.223 2.19 3.06 

Note: Estimated Marginal Mean = emmean. Standard Error = SE. Asymptotic lower confidence limit = asymp.LCL. Asymptotic upper confidence limit = asymp.UCL.

Temporal Order Task

On this task, participants saw two video clips and had to determine which clip occurred earlier in the watched episode. Each participant responded to 12 video clip pairs. These video clips were taken either from the same event or from two different events (for an example, see Figure 2). As in the recognition task, responses were removed if a participant's response time on a given trial was less than 150 msec or greater than 10 seconds, which resulted in the removal of 34 trials from the temporal order task analyses (3% of the data). Further, a participant's data were removed if their cumulative accuracy was below chance (50%), which excluded five participants from the analyses (Boundary—Emotional, n = 1; Boundary—Neutral, n = 1; Non-Boundary—Emotional, n = 2; and No Commercial, n = 1). Therefore, the temporal order task analyses included data from 92 participants, totaling 1,045 observations.
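These exclusions are straightforward to express in R; a sketch with assumed names (`order_df`, `rt` in seconds, `accuracy`, `participant`):

```r
# Trial-level exclusions: drop implausibly fast or slow responses.
order_df <- subset(order_df, rt >= 0.150 & rt <= 10)  # RTs assumed in seconds

# Participant-level exclusions: drop anyone below chance (50%) overall.
acc_by_pp <- tapply(order_df$accuracy, order_df$participant, mean)
keep      <- names(acc_by_pp)[acc_by_pp >= 0.5]
order_df  <- subset(order_df, participant %in% keep)
```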

Accuracy. Performance was analyzed using a mixed effects logistic regression because the dependent variable, accuracy, was binary (0 = incorrect response; 1 = correct response). The fixed effects formed a 2 (Pair Type: Same vs. Different Events) x 5 (Commercial Condition: Boundary—Emotional, Boundary—Neutral, Non-Boundary—Emotional, Non-Boundary—Neutral, and No Commercial) mixed design, with Pair Type within-subject and Commercial Condition between-subjects. Participant was included as a random intercept.
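A minimal sketch of the accuracy model, using the effect-coded factors from the coding sketch above (object names remain placeholders):

```r
# Mixed effects logistic regression for temporal order accuracy.
library(lme4)

m_acc <- glmer(accuracy ~ pair_type * condition + (1 | participant),
               data = order_df, family = binomial(link = "logit"))
car::Anova(m_acc, type = "III")  # Type III Wald chi-square tests
```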

Figure 6 displays the model's predicted accuracy on a given trial for each Pair Type by the five Commercial Conditions in the temporal order task. Table 4 provides the parameter estimates from the temporal order task accuracy model. To test whether accuracy differed significantly by Pair Type, Commercial Condition, or their interaction, the mixed effects logistic model was analyzed with a Type III Wald Chi-square test using the car package's Anova function (Fox & Weisberg, 2019). Table 5 provides the estimated marginal means, standard errors, and lower and upper confidence limits for Pair Type by Commercial Condition. There was a significant main effect of Pair Type, χ2 (1, N = 92) = 6.39, p = .012, indicating that accuracy was significantly higher for video clips within the same event than for those from different events. There was no significant main effect of Commercial Condition, χ2 (4, N = 92) = 1.49, p = .829, nor a significant interaction of Pair Type x Commercial Condition, χ2 (4, N = 92) = 6.79, p = .147. Nevertheless, post hoc comparisons with a Bonferroni correction (5 tests) compared accuracy across the two levels of Pair Type within each Commercial Condition. Accuracy for Same Events (M = 2.00, SE = 0.302) was significantly higher than accuracy for Different Events (M = 0.835, SE = 0.226) in the Boundary—Emotional condition (p = .006); these values are estimated marginal means on the logit scale (see Table 5). No other comparisons differed significantly.
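The post hoc step could look as follows (model and variable names as in the sketches above); emmeans reports these contrasts on the logit scale by default, matching Table 5:

```r
# Simple effects of Pair Type within each Commercial Condition,
# Bonferroni-corrected across the five tests.
library(emmeans)

emmeans(m_acc, pairwise ~ pair_type | condition, adjust = "bonferroni")
```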

Figure 6. Temporal Order - Predicted Accuracy as a Function of Pair Type by Commercial Condition

Note: The bar graph shows the temporal order task predicted accuracy on a given trial for each level of the Pair Type by each level of Commercial Condition with 95% confidence interval error bars.

Table 4. Temporal Order Task – Mixed Effects Logistic Model Parameter Estimates for Accuracy
Fixed Effects Estimate SE z value Pr(>|z|) 
Intercept 1.272 0.090 14.17 <.001 
PT (Different) -0.193 0.077 -2.53 0.012 
CC (Boundary—Emotional) 0.144 0.175 0.82 0.411 
CC (Boundary—Neutral) -0.148 0.176 -0.84 0.401 
CC (Non-Boundary—Emotional) -0.091 0.178 -0.51 0.611 
CC (Non-Boundary—Neutral) 0.084 0.172 0.49 0.627 
PT (Different) x CC (Boundary—Emotional) -0.387 0.158 -2.46 0.014 
PT (Different) x CC (Boundary—Neutral) 0.155 0.155 1.00 0.317 
PT (Different) x CC (Non-Boundary—Emotional) 0.182 0.158 1.15 0.251 
PT (Different) x CC (Non-Boundary—Neutral) 0.077 0.154 0.50 0.618 

Note: Pair Types = PT. Commercial Conditions = CC. Model was performed with effect coding [(PT: Different = +1, Same = -1; CC: Boundary—Emotional = ‘+1,0,0,0’, Boundary—Neutral = ‘0,+1,0,0’, Non-Boundary—Emotional = ‘0,0,+1,0’, Non-Boundary—Neutral = ‘0,0,0,+1’, No Commercial = ‘-1,-1,-1,-1’)]. Standard Error = SE.

Table 5. Temporal Order Task: Accuracy - Estimated Marginal Means, SE, and CIs
Pair Type Commercial Condition emmean SE asymp.LCL asymp.UCL 
Different Event Boundary—Emotional 0.84 0.226 0.39 1.28 
Same Event Boundary—Emotional 2.00 0.302 1.41 2.59 
Different Event Boundary—Neutral 1.09 0.263 0.57 1.60 
Same Event Boundary—Neutral 1.16 0.266 0.64 1.68 
Different Event Non-Boundary—Emotional 1.17 0.272 0.64 1.70 
Same Event Non-Boundary—Emotional 1.19 0.270 0.66 1.72 
Different Event Non-Boundary—Neutral 1.24 0.252 0.75 1.73 
Same Event Non-Boundary—Neutral 1.47 0.267 0.95 2.00 
Different Event No Commercial 1.06 0.218 0.64 1.49 
Same Event No Commercial 1.50 0.238 1.04 1.97 

Note: Estimated Marginal Mean = emmean. Standard Error = SE. Asymptotic lower confidence limit = asymp.LCL. Asymptotic upper confidence limit = asymp.UCL.

Confidence Ratings. As with the recognition task, the confidence ratings (1-7) from the temporal order task were analyzed as ordinal data with a cumulative link mixed model (CLMM). Multiple confidence ratings were collected per participant. Only correct trials were included in this analysis, which led to removing 22.8% of trials. The CLMM had the same fixed and random effects structure as the mixed effects logistic model for the temporal order accuracy data: a 2 (Pair Type: Same vs. Different Events) x 5 (Commercial Condition: Boundary—Emotional, Boundary—Neutral, Non-Boundary—Emotional, Non-Boundary—Neutral, and No Commercial) mixed design, with Pair Type within-subject, Commercial Condition between-subjects, and Participant as a random intercept. The model had a logit link function.
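This is the same CLMM machinery as the recognition confidence sketch, with the Pair Type x Commercial Condition interaction added (names remain placeholders):

```r
# Correct trials only, confidence as an ordered factor.
order_correct <- subset(order_df, accuracy == 1)
order_correct$confidence <- factor(order_correct$confidence,
                                   levels = 1:7, ordered = TRUE)

m_conf_order <- clmm(confidence ~ pair_type * condition + (1 | participant),
                     data = order_correct, link = "logit")
Anova(m_conf_order)  # with RVAideMemoire loaded: Type II Wald tests for clmm
```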

Table 6 provides the parameter estimates from the temporal order task confidence ratings CLMM. Figure 7 displays the predicted probability of each confidence rating response for each of the five Commercial Conditions and for each Pair Type. To test whether confidence ratings differed significantly by Pair Type, Commercial Condition, or their interaction, the temporal order CLMM was analyzed with a Type II Wald Chi-square test (Mangiafico, 2016). Table 7 provides the estimated marginal means, standard errors, and lower and upper confidence limits for Pair Type by Commercial Condition. There was a significant main effect of Pair Type, χ2 (1, N = 92) = 13.77, p < .001, indicating that confidence ratings were significantly higher for video clips within the same event than for those from different events. There was no significant main effect of Commercial Condition, χ2 (4, N = 92) = 3.31, p = .507, nor a significant interaction of Pair Type x Commercial Condition, χ2 (4, N = 92) = 2.24, p = .691.

Table 6. Temporal Order Task: Confidence Ratings - Cumulative Link Mixed Model Parameter Estimates
Threshold Coefficient Estimate SE 
 1|2 -5.038 0.364 
 2|3 -3.925 0.250 
 3|4 -3.100 0.203 
 4|5 -2.150 0.172 
 5|6 -1.380 0.159 
 6|7 -0.363 0.150 
    
Fixed Effects Estimate SE 
 PT (Different) -0.255 0.076 
 CC (Boundary—Emotional) 0.330 0.286 
 CC (Boundary—Neutral) -0.079 0.309 
 CC (Non-Boundary—Emotional) 0.041 0.312 
 CC (Non-Boundary—Neutral) 0.111 0.289 
 PT (Different) x CC (Boundary—Emotional) -0.075 0.152 
 PT (Different) x CC (Boundary—Neutral) 0.097 0.157 
 PT (Different) x CC (Non-Boundary—Emotional) 0.026 0.164 
 PT (Different) x CC (Non-Boundary—Neutral) 0.111 0.149 
    
Random Effect Variance SD 
 Participant 1.340 1.158 

Note: Cumulative link mixed model (CLMM) parameter estimates for the confidence ratings from the temporal order task. The CLMM had a logit link function. (A) Threshold estimates for each confidence rating response option, (B) Fixed effects included in the CLMM, and (C) the random effect included in the CLMM. Pair Types = PT. Commercial Conditions = CC. Model was performed with effect coding [(PT: Different = +1, Same = -1; CC: Boundary—Emotional = ‘+1,0,0,0’, Boundary—Neutral = ‘0,+1,0,0’, Non-Boundary—Emotional = ‘0,0,+1,0’, Non-Boundary—Neutral = ‘0,0,0,+1’, No Commercial = ‘-1,-1,-1,-1’)]. Standard Error = SE. Standard Deviation = SD.

Figure 7. Temporal Order - Predicted Probability of Confidence Rating Response as a Function of Pair Type by Commercial Condition

Note: The stacked bar plot provides the predicted probability of confidence rating response as a function of Pair Type by Commercial Condition.

Table 7. Temporal Order Task: Confidence Ratings - Estimated Marginal Means, SE, and CIs
Pair Type Commercial Condition emmean SE asymp.LCL asymp.UCL 
Different Event Boundary—Emotional 2.66 0.374 1.93 3.39 
Same Event Boundary—Emotional 3.32 0.379 2.58 4.06 
Different Event Boundary—Neutral 2.42 0.404 1.63 3.21 
Same Event Boundary—Neutral 2.74 0.411 1.93 3.55 
Different Event Non-Boundary—Emotional 2.47 0.404 1.68 3.26 
Same Event Non-Boundary—Emotional 2.93 0.424 2.10 3.76 
Different Event Non-Boundary—Neutral 2.63 0.378 1.89 3.37 
Same Event Non-Boundary—Neutral 2.92 0.378 2.17 3.66 
Different Event No Commercial 1.84 0.319 1.22 2.47 
Same Event No Commercial 2.67 0.330 2.02 3.32 

Note: Estimated Marginal Mean = emmean. Standard Error = SE. Asymptotic lower confidence limit = asymp.LCL. Asymptotic upper confidence limit = asymp.UCL.

The main goals of the current experiment were to investigate whether accentuating event boundaries improves memory and whether emotionally arousing content enhances this effect. First, we evaluated the overall effect of accentuating event boundaries on memory accuracy. Surprisingly, we found no effect of commercials on recall or recognition memory performance. That is, commercial breaks placed at event boundary locations throughout an episode did not improve memory compared to a control group, nor did commercials placed at non-boundaries impair memory, unlike the effects reported in Boltz (1992). Further, we found no significant effect of the commercial’s emotional content on recall or recognition performance, regardless of whether these commercials were placed at boundaries or non-boundaries.

Most importantly, though, we did find a significant boundary effect for temporal memory but only when the commercials contained emotionally arousing content (Boundary—Emotional condition). That is, participants were better able to remember the temporal order of two video clips when they came from the same event as compared to when they came from two different, but contiguous events. The observed performance difference between the same and different event conditions cannot be explained by the two video clips being closer or farther apart in time because all video clip pairs were approximately two minutes apart (see Figure 2). Instead, it is possible that the emotional commercials disrupted consolidation of the information at the end of the preceding event (Strange et al., 2003; Tulving, 1969), which led to poor temporal order memory for events coming before and after the emotional commercial break. However, we did not find a similar effect of emotion on memory for temporal order in the Non-Boundary—Emotional condition, indicating that the combination of a narrative event boundary and emotional information produced this effect. Further, it does not seem as if the emotional commercials impaired memory in the Boundary—Emotional condition, but rather improved temporal memory for information within the same events relative to other conditions (see Figure 6).

Thus, a more likely possibility is that the emotional content made the narrative event boundaries more salient. Event boundaries serve as anchors in memory and help to bind subevents within a larger event (DuBrow & Davachi, 2013, 2014; Ezzyat & Davachi, 2011). Emphasizing event boundaries with emotional commercials may have ensured that participants perceived the event boundaries, segmented the activity and sequentially bound information together from the preceding event.

Interestingly, prior work from Davachi and colleagues has observed the boundary effect in temporal memory without the use of boundary cues or emotional content. However, that work used written texts or object-face stimuli in which the events were considerably shorter (e.g., 2 seconds, DuBrow & Davachi, 2016; approximately 4-7 sentences, Ezzyat & Davachi, 2011) than those in the current study (approximately 7½ minutes). Moreover, the prior work used stimuli with more event boundaries and more temporal order memory pairs than we used in the current study. For instance, DuBrow and Davachi (2016) had 80 event boundaries and approximately 192 order memory pairs (16 series × 12 pairs), whereas each of our television episodes had 6 event boundaries and 12 order memory pairs. Thus, it is possible that more trials were needed to detect the same event effect within the conditions that experienced no commercials or neutral commercials at event boundaries. This is a testable hypothesis for future research.

A growing body of literature has shown that emotional stimuli are prioritized in working memory. However, emotional stimuli can either disrupt inter-item binding or enhance short-term processing (see Bennion et al., 2013 for review). Our study sought to evaluate these effects when one is asked to remember a continuous stream of on-going activity. We found that placing emotional commercials at narrative event boundaries (i.e., important changes in the plot of the television episode) may have drawn attention to and enhanced the processing of the just-encoded event. Future work should continue to investigate the specific role of emotion on the perception of ongoing events.

We should note that this study was not a direct replication of Boltz (1992). Instead, our goal was to extend her work by manipulating the content of the commercial breaks (emotional vs. neutral content). We attempted to conduct many aspects of the current study as similarly as possible to Boltz's original study based on the details provided in the manuscript (e.g., the specific television episodes and the types of memory measures). However, our failure to replicate her effects on recall and recognition may be due to a few methodological differences. First, Boltz's (1992) recall task allowed participants to write down what they could remember from the viewed episode for 20 minutes, whereas we allowed only 5 minutes of typed recall; moreover, Boltz's dependent measures were collected with paper-based methods whereas ours were computer-based. Additionally, while prior work has shown that normalized word counts are highly correlated with hand-scored recall (Flores et al., 2017), this correlation is based on the scoring of shorter videos of everyday activities rather than longer, professionally filmed videos like A Perfect Spy. Further, our recognition and temporal order stimuli were presumably different from those used in Boltz (1992) because her specific stimuli were not provided. Therefore, one possible reason we failed to replicate Boltz's findings from the recognition and part of the temporal order tasks is that our stimuli differed from hers.

Limitations

One limitation is that the effect of emotional valence was not evaluated in this study. Both positive and negative emotionally intense commercial breaks were used because both were rated as emotionally intense. However, most commercials contained negative events, such as a deadly car crash due to drinking and driving or the abuse of animals. Future research could distinguish whether memory is differentially affected by the positive versus negative valence of a commercial. Further, commercials appearing in higher rated programs tend to be remembered better than those in lower rated programs (Barclay et al., 1965); thus, the commercials' attentional effects may vary depending on the viewer's interest in A Perfect Spy.

The current study found that accentuating event boundaries with emotionally arousing content increased memory for the sequence of events in the television episode. The link between event boundary perception and memory is exciting and provides a potential avenue for improving memory. Future interventions may be created to improve event boundary perception and thus improve memory. Such interventions could have widespread impact on students learning new material, employees trying to learn new skills, older adults who demonstrate age-related declines in memory, and many other populations that have difficulty learning and remembering new information. However, before investing time in developing these interventions, more research must be done to evaluate the most effective way to guide event boundary encoding, the mechanisms by which such manipulations affect memory and how long the effects persist.

Importantly, the current study provides a potential mechanism by which event segmentation interventions may improve memory for dynamic, everyday activities. Drawing attention to event boundaries—via emotionally charged commercials—increases the likelihood that people will perceive the change in events, update their mental model accordingly and better integrate information from the just-encoded event. Thus, event boundaries trigger the binding of information from the just-encoded event and help to temporally organize the actions in episodic memory.

Contributed to conception and design: HRB, JJP, JSR. Contributed to acquisition of data: JJP, JSR, HRB. Contributed to analysis and interpretation of data: JJP, HRB, JSR. Drafted and/or revised the article: JJP, JSR, HRB. Approved the submitted version for publication: JJP, JSR, HRB.

We thank Dr. Lester Loschky and other members of the Event Cognition Lab at Kansas State University for discussion related to experimental design and stimuli. We also thank Jaydan Bruna, Becca Ryan, and Nick Parker, who helped us with data collection.

No competing interests exist.

All participant data and analysis scripts can be found on the paper’s project page on OSF at https://osf.io/fx925/. Requests for stimuli may be made to the corresponding author.

1.

Boltz (1992, p. 94): “A Perfect Spy miniseries was produced by BBC-TV Productions. The use of all filmed material in this experiment conformed to the specifications of the House Report on piracy and counterfeiting amendments (H.R. 97-495, pp. 8, 9).”

Bailey, H. R., Zacks, J. M., Hambrick, D. Z., Zacks, R. T., Head, D., Kurby, C. A., & Sargent, J. Q. (2013). Medial temporal lobe volume predicts elders’ everyday memory. Psychological Science, 24(7), 1113–1122. https://doi.org/10.1177/0956797612466676
Bannerman, R. L., Milders, M., de Gelder, B., & Sahraie, A. (2009). Orienting to threat: Faster localization of fearful facial expressions and body postures revealed by saccadic eye movements. Proceedings of the Royal Society B: Biological Sciences, 276(1662), 1635–1641. https://doi.org/10.1098/rspb.2008.1744
Barclay, W. D., Doub, R. M., & McMurtrey, L. T. (1965). Recall of TV commercials by time and program slot. Journal of Advertising Research, 5(2), 41–47.
Barlaz, M. (2020). Ordinal logistic regression in R. https://marissabarlaz.github.io/portfolio/ols/
Bates, D., Mächler, M., Bolker, B. M., & Walker, S. C. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. https://doi.org/10.18637/jss.v067.i01
Bennion, K. A., Ford, J. H., Murray, B. D., & Kensinger, E. A. (2013). Oversimplification in the study of emotional memory. Journal of the International Neuropsychological Society, 19(9), 953–961. https://doi.org/10.1017/s1355617713000945
Boltz, M. (1992). Temporal accent structure and the remembering of filmed narratives. Journal of Experimental Psychology: Human Perception and Performance, 18(1), 90–105. https://doi.org/10.1037/0096-1523.18.1.90
Brewin, C. R., Dalgleish, T., & Joseph, S. (1996). A dual representation theory of posttraumatic stress disorder. Psychological Review, 103(4), 670–686. https://doi.org/10.1037/0033-295x.103.4.670
Brosch, T., Pourtois, G., & Sander, D. (2010). The perception and categorisation of emotional stimuli: A review. Cognition & Emotion, 24(3), 377–400. https://doi.org/10.1080/02699930902975754
Christensen, R. H. B. (2019). ordinal: Regression Models for Ordinal Data [R package version 2019.12-10]. https://CRAN.R-project.org/package=ordinal
Cutting, J. E., Brunick, K. L., & Candan, A. (2012). Perceiving event dynamics and parsing Hollywood films. Journal of Experimental Psychology: Human Perception and Performance, 38(6), 1476–1490. https://doi.org/10.1037/a0027737
DuBrow, S., & Davachi, L. (2013). The influence of context boundaries on memory for the sequential order of events. Journal of Experimental Psychology: General, 142(4), 1277–1286. https://doi.org/10.1037/a0034024
DuBrow, S., & Davachi, L. (2014). Temporal memory is shaped by encoding stability and intervening item reactivation. Journal of Neuroscience, 34(42), 13998–14005. https://doi.org/10.1523/jneurosci.2535-14.2014
DuBrow, S., & Davachi, L. (2016). Temporal binding within and across events. Neurobiology of Learning and Memory, 134, 107–114. https://doi.org/10.1016/j.nlm.2016.07.011
Ezzyat, Y., & Davachi, L. (2011). What constitutes an episode in episodic memory? Psychological Science, 22(2), 243–252. https://doi.org/10.1177/0956797610393742
Flores, S., Bailey, H. R., Eisenberg, M. L., & Zacks, J. M. (2017). Event segmentation improves event memory up to one month later. Journal of Experimental Psychology: Learning, Memory, and Cognition, 43(8), 1183–1202. https://doi.org/10.1037/xlm0000367
Fox, J., & Weisberg, S. (2019). An R Companion to Applied Regression (3rd ed.). Sage.
Gold, D. A., Zacks, J. M., & Flores, S. (2017). Effects of cues to event segmentation on subsequent memory. Cognitive Research: Principles and Implications, 2(1), 1. https://doi.org/10.1186/s41235-016-0043-2
Hansen, C. H., & Hansen, R. D. (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917–924. https://doi.org/10.1037/0022-3514.54.6.917
Hervé, M. (2020). RVAideMemoire: Testing and Plotting Procedures for Biostatistics [R package version 0.9-78]. https://CRAN.R-project.org/package=RVAideMemoire
Horner, A. J., Bisby, J. A., Wang, A., Bogus, K., & Burgess, N. (2016). The role of spatial boundaries in shaping long-term event representations. Cognition, 154, 151–164. https://doi.org/10.1016/j.cognition.2016.05.013
Kurby, C. A., & Zacks, J. M. (2008). Segmentation in the perception and memory of events. Trends in Cognitive Sciences, 12(2), 72–79. https://doi.org/10.1016/j.tics.2007.11.004
Kurby, C. A., & Zacks, J. M. (2011). Age differences in the perception of hierarchical structure in events. Memory & Cognition, 39, 75–91. https://doi.org/10.3758/s13421-010-0027-2
Lenth, R. V. (2020). emmeans: Estimated Marginal Means, aka Least-Squares Means [R package version 1.5.3]. https://CRAN.R-project.org/package=emmeans
Lüdecke, D. (2018). ggeffects: Tidy data frames of marginal effects from regression models. Journal of Open Source Software, 3(26), 772. https://doi.org/10.21105/joss.00772
Magliano, J. P., Miller, J., & Zwaan, R. A. (2001). Indexing space and time in film understanding. Applied Cognitive Psychology, 15(5), 533–545. https://doi.org/10.1002/acp.724
Mangiafico, S. S. (2016). Summary and Analysis of Extension Program Evaluation in R [Version 1.18.7]. https://rcompanion.org/handbook/
Mehta, A., & Purvis, S. C. (2006). Reconsidering recall and emotion in advertising. Journal of Advertising Research, 46(1), 49–56. https://doi.org/10.2501/s0021849906060065
New, J., Cosmides, L., & Tooby, J. (2007). Category-specific attention for animals reflects ancestral priorities, not expertise. Proceedings of the National Academy of Sciences of the United States of America, 104(42), 16598–16603. https://doi.org/10.1073/pnas.0703913104
Newtson, D. (1976). Foundations of attribution: The perception of ongoing behavior. New Directions in Attribution Research, 1, 223–248.
Newtson, D., & Engquist, G. (1976). The perceptual organization of ongoing behavior. Journal of Experimental Social Psychology, 12(5), 436–450. https://doi.org/10.1016/0022-1031(76)90076-7
Newtson, D., Engquist, G. A., & Bois, J. (1977). The objective basis of behavior units. Journal of Personality and Social Psychology, 35(12), 847–862. https://doi.org/10.1037/0022-3514.35.12.847
Öhman, A., Flykt, A., & Esteves, F. (2001). Emotion drives attention: Detecting the snake in the grass. Journal of Experimental Psychology: General, 130(3), 466–478. https://doi.org/10.1037/0096-3445.130.3.466
Radvansky, G. A., & Copeland, D. E. (2006). Walking through doorways causes forgetting: Situation models and experienced space. Memory & Cognition, 34(5), 1150–1156. https://doi.org/10.3758/bf03193261
RStudio Team. (2020). RStudio: Integrated Development Environment for R. http://www.rstudio.com/
Sargent, J. Q., Zacks, J. M., Hambrick, D. Z., Zacks, R. T., Kurby, C. A., Bailey, H. R., Eisenberg, M. L., & Beck, T. M. (2013). Event segmentation ability uniquely predicts event memory. Cognition, 129(2), 241–255. https://doi.org/10.1016/j.cognition.2013.07.002
Schwan, S., & Garsoffky, B. (2004). The cognitive representation of filmic event summaries. Applied Cognitive Psychology, 18(1), 37–55. https://doi.org/10.1002/acp.940
Schwan, S., Garsoffky, B., & Hesse, F. W. (2000). Do film cuts facilitate the perceptual and cognitive organization of activity sequences? Memory & Cognition, 28(2), 214–223. https://doi.org/10.3758/bf03213801
Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2(3), 307–320. https://doi.org/10.1016/s0005-7894(71)80064-3
Sherrill, A. M., Kurby, C. A., Lilly, M. M., & Magliano, J. P. (2019). The effects of state anxiety on analog peritraumatic encoding and event memory: Introducing the stressful event segmentation paradigm. Memory, 27(2), 124–136. https://doi.org/10.1080/09658211.2018.1492619
Silva, M., Baldassano, C., & Fuentemilla, L. (2019). Rapid memory reactivation at movie event boundaries promotes episodic encoding. The Journal of Neuroscience, 39(43), 8538–8548. https://doi.org/10.1523/jneurosci.0360-19.2019
Speer, N. K., Swallow, K. M., & Zacks, J. M. (2003). Activation of human motion processing areas during event perception. Cognitive, Affective, & Behavioral Neuroscience, 3(4), 335–345. https://doi.org/10.3758/cabn.3.4.335
Speer, N. K., & Zacks, J. M. (2005). Temporal changes as event boundaries: Processing and memory consequences of narrative time shifts. Journal of Memory and Language, 53(1), 125–140. https://doi.org/10.1016/j.jml.2005.02.009
Strange, B. A., Hurlemann, R., & Dolan, R. J. (2003). An emotion-induced retrograde amnesia in humans is amygdala- and β-adrenergic-dependent. Proceedings of the National Academy of Sciences, 100(23), 13626–13631. https://doi.org/10.1073/pnas.1635116100
Sussman, T. J., Jin, J., & Mohanty, A. (2016). Top-down and bottom-up factors in threat-related perception and attention in anxiety. Biological Psychology, 121, 160–172. https://doi.org/10.1016/j.biopsycho.2016.08.006
Teixeira, T. S., Wedel, M., & Pieters, R. (2010). Moment-to-moment optimal branding in TV commercials: Preventing avoidance by pulsing. Marketing Science, 29(5), 783–804. https://doi.org/10.1287/mksc.1100.0567
Tulving, E. (1969). Retrograde amnesia in free recall. Science, 164(3875), 88–90. https://doi.org/10.1126/science.164.3875.88
Wickham, H. (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York.
Wright, D. B., & London, K. (2009). Multilevel modelling: Beyond the basic applications. British Journal of Mathematical and Statistical Psychology, 62(2), 439–456. https://doi.org/10.1348/000711008x327632
Zacks, J. M., Speer, N. K., Vettel, J. M., & Jacoby, L. L. (2006). Event understanding and memory in healthy aging and dementia of the Alzheimer type. Psychology and Aging, 21(3), 466–482. https://doi.org/10.1037/0882-7974.21.3.466
This is an open access article distributed under the terms of the Creative Commons Attribution License (4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Supplementary Material